Zuckerberg reveals what the glasses to enter the metaverse will be like
Mark Zuckerberg’s commitment to the metaverse is serious. The founder of Facebook is fully dedicated to paving the way so that his great project for the future, the immersive virtual world in which he wants us to spend several hours a day, can materialize as soon as possible. To show that his engineers are not wasting their time, Zuckerberg last week invited a group of media outlets from around the world, including EL PAÍS, to learn about the latest developments his company is working on. Specifically, the executive and his team discussed how they are trying to improve the display and experience of their Oculus headsets, which will let us immerse ourselves in the metaverse and which he hopes will one day pass what he calls “a visual Turing test”.
“Current virtual reality systems can already give you the feeling that you are somewhere else. And it’s hard to describe how significant this is; it’s something you have to experience for yourself. But we still have a long way to go on displays and graphics engines before we get to visual realism,” Zuckerberg said in a pre-recorded address. The complexity Meta’s engineers face stems from a fundamental difference. When looking at a conventional 2D screen, we fix our eyes wherever we want, glance to one side or the other, move closer or farther away, and so on. Despite these natural movements, the resolution of the monitor is always the same.
With 3D images, projected directly into our eyes by special glasses, the situation is different. “You have to be able to render objects and focus the view at different distances. You need a display that can span a much wider viewing angle, and having retina-level resolution across that entire field of view requires far more pixels than any traditional display,” Zuckerberg explained. “You need displays that can approach the brightness and dynamic range of the physical world, and that requires nearly ten times the brightness of what we currently get from HDTVs. You need realistic motion tracking that has low latency, so when you turn your head it looks like it’s correctly positioned in the immersive world you’re in.”
All of this has to fit into a headset that is comfortable to wear, doesn’t get too hot, and has long-lasting batteries (plugged-in devices are out of the question). Reality Labs, the team of engineers focused on these developments, showed some of its prototypes. The most advanced appears to be Holocake 2, which Zuckerberg proudly showed off and which features the thinnest and lightest lenses they have developed to date. They also showed the plans for the Mirror Lake glasses, which integrate Meta’s latest advances. These are still in the conceptualization phase and, as the company noted, there is a long way to go before they can be turned into a consumer product. But the technologies they incorporate and the problems they try to solve will later be seen in the headset models that Meta brings to market.
The four great visual challenges
Douglas Lanman, Director of Systems Research at Reality Labs, explained the four major challenges Meta’s developers face in making the display in VR headsets as realistic as possible. First, the lenses must be able to bring into focus the object being looked at, whether it is near or far. Second, the resolution has to be much higher than on 2D displays to be convincing.

Third, the graphics engine of the headset has to correct the optical distortion that makes shapes appear deformed toward the edges of the field of view. And fourth, it must also balance the brightness and proportion of all the elements in the visual field, which is achieved by making each element emit more or less light.
The technical solution that Reality Labs engineers have arrived at rests on two pillars. The first is the intentional distortion of the image. “It’s about deforming the digital image in real time so that your brain perceives it as it would look in real life, applying the necessary light, focus and resolution corrections,” Lanman explained. The other breakthrough is what they call high dynamic range (HDR), a technique for applying the right amount of lighting to the images at every moment.
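The real-time deformation Lanman describes is broadly similar to the lens pre-distortion step used in existing VR renderers: the image is warped with the inverse of the lens’s distortion so it looks undistorted when viewed through the optics. As a hedged illustration only (the one-term polynomial model and the coefficient below are hypothetical, not Meta’s actual pipeline):

```python
# Illustrative radial (barrel) pre-distortion sketch. The renderer pulls
# image points inward so that the lens, which stretches the periphery,
# makes straight lines look straight again to the eye.
K1 = 0.22  # hypothetical distortion coefficient; real headsets calibrate this per lens

def predistort(x, y):
    """Map a normalized image point (coordinates in -1..1) to its pre-warped position."""
    r2 = x * x + y * y            # squared distance from the optical center
    scale = 1.0 + K1 * r2         # simple one-term polynomial distortion model
    return x / scale, y / scale   # points move inward; the effect grows toward the edge

# Points near the center barely move; points near the edge move the most,
# compensating for the lens stretching the periphery.
print(predistort(0.1, 0.0))
print(predistort(0.9, 0.0))
```

In practice this warp runs on the GPU for every frame, combined per headset with calibrated distortion coefficients; the sketch above only shows the geometric idea.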
The Meta team has built a prototype headset, still enormous, that incorporates this latest development. Once they manage to fit it into the lighter headset versions, they say, the user experience will improve dramatically.
You can follow EL PAÍS TECNOLOGÍA on Facebook and Twitter or sign up here to receive our weekly newsletter.