eye tracking

How Eye Tracking Contributes to XR Analytics and Experience

Near-to-eye displays offer powerful new ways to understand what the wearer is doing – and maybe even thinking. Right now, the use of XR analytics like eye tracking is largely limited to enterprise, including use cases like education and assessment, though eye tracking also enables new input modes and display improvements.

To learn more about the present and potential future of this technology, ARPost spoke with hardware manufacturer Lumus and XR analytics specialists Cognitive3D.

Why Do We Need XR Analytics?

XR analytics can be broken down generally into learning about an experience and learning about the person in that experience.

Learning About Experiences

Learning about an experience is important for people building XR applications – both in terms of building applications that people will want to use, and in building applications that people will be able to use.

“The stakes are much higher in creating high-quality content,” Cognitive3D founder and CEO Tony Bevilacqua said in an interview with ARPost. “That means creating something that’s comfortable, that’s not going to make you sick, that’s going to be accessible and well-received.”

This kind of thing is important for anyone building anything, but it is crucial for people building in XR, according to Bevilacqua. When a gamer experiences a usability problem in a console game or mobile app, they’re likely to blame that specific game or app and move on to a different one. However, XR is still new enough that people aren’t always so understanding.

“A bad experience can create attrition not just for an app, but for the platform itself,” said Bevilacqua. “That headset might go back into the box and stay there.”

Of course, developers are also interested in “success metrics” for their experiences. This is an issue of particular importance for people building XR experiences as part of advertising and marketing campaigns where traditional metrics from web and mobile experiences aren’t as useful.

“We all kind of know that opening an app and how much time people spent, those are very surface-level metrics,” said Bevilacqua. For XR, it’s more important to understand participation – and that means deeper analytical tools.

Learning About People

In other use cases, the people interacting with an experience are the subject that the XR analytics are most interested in. In these situations, Bevilacqua describes “the headset as a vehicle for data collection.” Examples include academic research, assessing skills and competency, and consumer research.

Competency assessment and consumer research might involve digital twins that the individual interacts with in VR. How efficiently can they perform a task? What do they think about a virtual preproduction model of a car? What products draw their eyes in a virtual supermarket?

“We focus more on non-consumer-focused use cases like focus groups,” said Bevilacqua. “We try to build off of the characteristics that make VR unique.”

At least part of the reason for this is that a lot of XR devices still don’t have the hardware required for XR analytics, like eye tracking capabilities.

Building Hardware for Eye Tracking

David Goldman is the Vice President of AR Optics at Lumus. The company is primarily focused on making displays but, as a components manufacturer, they have to make parts that work with their customers’ other requirements – including eye tracking. The company even has a few patents on its own approach to it.

According to Goldman, traditional approaches to eye tracking involve cameras and infrared lights inside of the headset. The invisible light reflects off of the eye and is captured by the camera. Those lights and cameras add some cost but, more importantly, they take up “valuable real estate, from an aesthetic perspective.”

The patented Lumus system uses the waveguide itself as the light source, since waveguides already project light. That light reflects off of the eye, so all that is required is an additional inward-facing camera, which costs far less in both money and space. However, the economics of launching an emerging product category play a role here too.

“When you’re a company trying to introduce a whole new product, you’re trying to shave pennies off of the dollar on everything,” said Goldman. “Looking at the bill of materials, it’s unlikely to make a first generation.”

Still, more and more devices coming to market do include this hardware – including consumer devices. Why? In part because the hardware enables a lot more than just XR analytics.

Enabling New Kinds of Interactions

Eye tracking enables advanced display technologies like foveated rendering, which is one of the big reasons that it’s increasingly being included in consumer VR devices. Foveated rendering is a technique that concentrates graphical fidelity in the small area of the display where your eye is looking at any given moment.

AR devices currently don’t have a field-of-view large enough to benefit from foveated rendering, but Goldman said that Lumus will have a device with a field-of-view over 50 degrees before 2030.

Eye tracking also has promise as an advanced input system. Goldman cited the Apple Vision Pro, which uses a combination of eye tracking and hand-tracking to go completely controller-free. Mixed reality devices like the Apple Vision Pro and Meta Quest 3 also bring up the fact that eye tracking has different implications in AR than it does in VR.
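As a rough sketch of how that kind of gaze-plus-pinch input can work (the data structures and function names here are hypothetical, not Apple’s or Meta’s actual APIs): the gaze ray is tested against interactive elements to find what the user is looking at, and a pinch gesture confirms the selection.

```python
import math
from dataclasses import dataclass

@dataclass
class Element:
    name: str
    center: tuple          # (x, y, z) position in meters
    radius: float          # simple spherical hit volume

def hovered_element(gaze_origin, gaze_dir, elements):
    """Return the element whose hit sphere the gaze ray passes through, nearest first."""
    best, best_dist = None, float("inf")
    for el in elements:
        # Vector from the eye to the element's center
        to_el = [c - o for c, o in zip(el.center, gaze_origin)]
        # Projection of that vector onto the (unit) gaze direction
        t = sum(a * b for a, b in zip(to_el, gaze_dir))
        if t <= 0:
            continue  # element is behind the viewer
        # Closest point on the gaze ray to the element's center
        closest = [o + t * d for o, d in zip(gaze_origin, gaze_dir)]
        if math.dist(closest, el.center) < el.radius and t < best_dist:
            best, best_dist = el, t
    return best

def on_frame(gaze_origin, gaze_dir, pinch_detected, elements):
    target = hovered_element(gaze_origin, gaze_dir, elements)
    if target and pinch_detected:
        print(f"selected: {target.name}")   # gaze chooses, pinch confirms
    return target
```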

“Effectively, you can know exactly where I’m looking and what I’m interested in, so it has its implications for advertisers,” said Goldman. “What’s less nefarious for me and more interesting as a user is contextual search.”

Power and Responsibility

As more advanced XR analytics tools come to more consumer-focused hardware, do we need to be concerned about these tools being turned on casual XR fans? It’s certainly something that we need to be watchful of.

“It’s certainly a sensitive issue,” said Goldman. “It’s certainly a concern for the consumer, so I think every company will have to address this up front.”

Bevilacqua explained that his company has adopted the XR Privacy Framework. Cognitive3D notifies individuals when certain kinds of data might be collected and gives them the option to opt out. However, Bevilacqua believes that the best option is to avoid certain kinds of data collection in the first place.

“It’s important to balance data collection with user privacy. … We have a pretty balanced view on what needs to be collected and what doesn’t,” said Bevilacqua. “For us, eye tracking is something we do not find acceptable in a consumer application.”

Bevilacqua also pointed out that platforms and marketplaces have their own internal guidelines that make it difficult for app developers to collect too much information on their own.

“There is acceptable use policy about what kinds of data exist and what can be used,” said Bevilacqua. “You can’t just go out and collect eye tracking data and use it for ads. That’s not something Meta is going to allow.”

All About Balance

We need XR analytics. They make for better experiences and can even improve the quality of goods and services that we enjoy and rely on in the physical world. Not to mention the benefits that the required hardware brings to consumer applications. While technologies like eye tracking can be scary if used irresponsibly, we seem to be in good hands so far.

Eye-tracking is a Game Changer for XR That Goes Far Beyond Foveated Rendering

Eye-tracking—the ability to quickly and precisely measure the direction a user is looking while inside of a VR headset—is often talked about within the context of foveated rendering, and how it could reduce the performance requirements of XR headsets. And while foveated rendering is an exciting use-case for eye-tracking in AR and VR headsets, eye-tracking stands to bring much more to the table.

Updated – May 2nd, 2023

Eye-tracking has been talked about with regards to XR as a distant technology for many years, but the hardware is finally becoming increasingly available to developers and customers. PSVR 2 and Quest Pro are the most visible examples of headsets with built-in eye-tracking, along with the likes of Varjo Aero, Vive Pro Eye and more.

With this momentum, in just a few years we could see eye-tracking become a standard part of consumer XR headsets. When that happens, there’s a wide range of features the tech can enable to drastically improve the experience.

Foveated Rendering

Let’s first start with the one that many people are already familiar with. Foveated rendering aims to reduce the computational power required for displaying demanding AR and VR scenes. The name comes from the ‘fovea’—a small pit at the center of the human retina which is densely packed with photoreceptors. It’s the fovea which gives us high resolution vision at the center of our field of view; meanwhile our peripheral vision is actually very poor at picking up detail and color, and is better tuned for spotting motion and contrast than seeing detail. You can think of it like a camera which has a large sensor with just a few megapixels, and another smaller sensor in the middle with lots of megapixels.

The region of your vision in which you can see in high detail is actually much smaller than most think—just a few degrees across the center of your view. The difference in resolving power between the fovea and the rest of the retina is so drastic, that without your fovea, you couldn’t make out the text on this page. You can see this easily for yourself: if you keep your eyes focused on this word and try to read just two sentences below, you’ll find it’s almost impossible to make out what the words say, even though you can see something resembling words. The reason that people overestimate the foveal region of their vision seems to be because the brain does a lot of unconscious interpretation and prediction to build a model of how we believe the world to be.

Foveated rendering aims to exploit this quirk of our vision by rendering the virtual scene in high resolution only in the region that the fovea sees, and then drastically cutting down the complexity of the scene in our peripheral vision where the detail can’t be resolved anyway. Doing so allows us to focus most of the processing power where it contributes most to detail, while saving processing resources elsewhere. That may not sound like a huge deal, but as the display resolution and field-of-view of XR headsets increase, the power needed to render complex scenes grows quickly.

Eye-tracking of course comes into play because we need to know where the center of the user’s gaze is at all times, quickly and with high precision, in order to pull off foveated rendering. While it’s difficult to pull this off without the user noticing, it’s possible and has been demonstrated quite effectively on recent headsets like Quest Pro and PSVR 2.
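To make the idea concrete, here is a minimal sketch of gaze-driven resolution scaling, assuming the eye tracker supplies a gaze direction each frame. The region sizes and scale factors are illustrative, not taken from any shipping headset or engine.

```python
import math

def foveated_scale(angle_from_gaze_deg, fovea_deg=5.0, mid_deg=20.0):
    """Return a render-resolution scale factor for a region of the display,
    based on its angular distance from the current gaze point.

    The thresholds here are illustrative, not from any real device."""
    if angle_from_gaze_deg <= fovea_deg:
        return 1.0      # full resolution where the fovea is looking
    if angle_from_gaze_deg <= mid_deg:
        return 0.5      # half resolution in the near periphery
    return 0.25         # quarter resolution in the far periphery

def angular_distance_deg(gaze_dir, region_dir):
    """Angle between the gaze direction and a display region's direction (unit vectors)."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(gaze_dir, region_dir))))
    return math.degrees(math.acos(dot))

# Example: a region about 30 degrees away from the gaze point renders at quarter resolution.
print(foveated_scale(angular_distance_deg((0, 0, 1), (0.5, 0, 0.866))))  # 0.25
```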

Automatic User Detection & Adjustment

In addition to detecting movement, eye-tracking can also be used as a biometric identifier. That makes eye-tracking a great candidate for multiple user profiles across a single headset—when I put on the headset, the system can instantly identify me as a unique user and call up my customized environment, content library, game progress, and settings. When a friend puts on the headset, the system can load their preferences and saved data.
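A minimal sketch of that profile-switching idea, assuming the headset can produce some per-user eye signature (say, an iris or calibration-derived feature vector); the vectors and threshold below are placeholders, not a real biometric pipeline:

```python
def identify_user(eye_signature, profiles, threshold=0.9):
    """Match a freshly captured eye signature against stored user profiles.

    'eye_signature' is assumed to be a normalized feature vector produced by
    whatever biometric model the headset provides; the matching rule is a
    simple cosine similarity with an illustrative threshold."""
    def cosine(a, b):
        return sum(x * y for x, y in zip(a, b))  # vectors assumed pre-normalized

    best_user, best_score = None, threshold
    for user, stored in profiles.items():
        score = cosine(eye_signature, stored)
        if score > best_score:
            best_user, best_score = user, score
    return best_user  # None means: fall back to manual login or a new profile

profiles = {"alex": [0.6, 0.8, 0.0], "sam": [0.0, 0.6, 0.8]}
print(identify_user([0.59, 0.81, 0.02], profiles))  # "alex"
```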

Eye-tracking can also be used to precisely measure IPD (the distance between one’s eyes). Knowing your IPD is important in XR because it’s required to move the lenses and displays into the optimal position for both comfort and visual quality. Unfortunately, many people understandably don’t know their IPD off the top of their head.

With eye-tracking, it would be easy to instantly measure each user’s IPD and then have the headset’s software assist the user in adjusting the headset’s IPD to match, or warn users that their IPD is outside the range supported by the headset.

In more advanced headsets, this process can be invisible and automatic—IPD can be measured invisibly, and the headset can have a motorized IPD adjustment that automatically moves the lenses into the correct position without the user needing to be aware of any of it, like on the Varjo Aero, for example.
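Here is a minimal sketch of that measure-and-adjust flow, assuming the eye tracker reports pupil positions in millimeters; the supported range and the motor command are made up for illustration:

```python
def measure_ipd_mm(left_pupil_mm, right_pupil_mm):
    """IPD is simply the horizontal distance between the two tracked pupil centers."""
    return abs(right_pupil_mm[0] - left_pupil_mm[0])

def apply_ipd(ipd_mm, lens_min_mm=58.0, lens_max_mm=72.0, motorized=True):
    """Hypothetical adjustment logic: clamp to the supported range, warn if outside it."""
    if ipd_mm < lens_min_mm or ipd_mm > lens_max_mm:
        print(f"Warning: measured IPD {ipd_mm:.1f} mm is outside the supported "
              f"{lens_min_mm:.0f}-{lens_max_mm:.0f} mm range.")
    target = max(lens_min_mm, min(lens_max_mm, ipd_mm))
    if motorized:
        print(f"Moving lenses to {target:.1f} mm")   # stand-in for a motor command
    else:
        print(f"Please set the IPD dial to {target:.1f} mm")

apply_ipd(measure_ipd_mm(left_pupil_mm=(-31.5, 0.0), right_pupil_mm=(32.0, 0.0)))
```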

Varifocal Displays

A prototype varifocal headset | Image courtesy NVIDIA

The optical systems used in today’s VR headsets work pretty well but they’re actually rather simple and don’t support an important function of human vision: dynamic focus. This is because the display in XR headsets is always the same distance from our eyes, even when the stereoscopic depth suggests otherwise. This leads to an issue called vergence-accommodation conflict. If you want to learn a bit more in depth, check out our primer below:

Accommodation

Accommodation is the bending of the eye’s lens to focus light from objects at different distances. | Photo courtesy Pearson Scott Foresman

In the real world, to focus on a near object the lens of your eye bends to make the light from the object hit the right spot on your retina, giving you a sharp view of the object. For an object that’s further away, the light is traveling at different angles into your eye and the lens again must bend to ensure the light is focused onto your retina. This is why, if you close one eye and focus on your finger a few inches from your face, the world behind your finger is blurry. Conversely, if you focus on the world behind your finger, your finger becomes blurry. This is called accommodation.

Vergence

Vergence is the inward rotation of each eye to overlap each eye’s view into one aligned image. | Photo courtesy Fred Hsu (CC BY-SA 3.0)

Then there’s vergence, which is when each of your eyes rotates inward to ‘converge’ the separate views from each eye into one overlapping image. For very distant objects, your eyes are nearly parallel, because the distance between them is so small in comparison to the distance of the object (meaning each eye sees a nearly identical portion of the object). For very near objects, your eyes must rotate inward to bring each eye’s perspective into alignment. You can see this too with our little finger trick as above: this time, using both eyes, hold your finger a few inches from your face and look at it. Notice that you see double-images of objects far behind your finger. When you then focus on those objects behind your finger, now you see a double finger image.

The Conflict

With precise enough instruments, you could use either vergence or accommodation to know how far away an object is that a person is looking at. But the thing is, both accommodation and vergence happen in your eye together, automatically. And they don’t just happen at the same time—there’s a direct correlation between vergence and accommodation, such that for any given measurement of vergence, there’s a directly corresponding level of accommodation (and vice versa). Since you were a little baby, your brain and eyes have formed muscle memory to make these two things happen together, without thinking, anytime you look at anything.
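To make that coupling concrete, both quantities can be written as functions of the fixation distance: the vergence angle is roughly 2·atan((IPD/2)/d), and the accommodation demand is 1/d diopters. A short sketch (the IPD value is just an example):

```python
import math

def vergence_deg(distance_m, ipd_m=0.063):
    """Angle between the two eyes' lines of sight when fixating at a given distance."""
    return math.degrees(2 * math.atan((ipd_m / 2) / distance_m))

def accommodation_diopters(distance_m):
    """Focusing demand on the eye's lens, in diopters (1 / distance in meters)."""
    return 1.0 / distance_m

for d in (0.25, 0.5, 1.0, 6.0):
    print(f"{d:>4} m -> vergence {vergence_deg(d):5.2f} deg, "
          f"accommodation {accommodation_diopters(d):4.2f} D")
# Near objects demand both strong convergence and strong accommodation;
# distant objects demand almost none of either. The two always move together.
```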

But when it comes to most of today’s AR and VR headsets, vergence and accommodation are out of sync due to inherent limitations of the optical design.

In a basic AR or VR headset, there’s a display (which is, let’s say, 3″ away from your eye) which shows the virtual scene, and a lens which focuses the light from the display onto your eye (just like the lens in your eye would normally focus the light from the world onto your retina). But since the display is a static distance from your eye, and the lens’ shape is static, the light coming from all objects shown on that display is coming from the same distance. So even if there’s a virtual mountain five miles away and a coffee cup on a table five inches away, the light from both objects enters the eye at the same angle (which means your accommodation—the bending of the lens in your eye—never changes).

That comes in conflict with vergence in such headsets which—because we can show a different image to each eye—is variable. Being able to adjust the image independently for each eye, such that our eyes need to converge on objects at different depths, is essentially what gives today’s AR and VR headsets stereoscopy.

But the most realistic (and arguably, most comfortable) display we could create would eliminate the vergence-accommodation issue and let the two work in sync, just like we’re used to in the real world.

Varifocal displays—those which can dynamically alter their focal depth—are proposed as a solution to this problem. There’s a number of approaches to varifocal displays, perhaps the most simple of which is an optical system where the display is physically moved back and forth from the lens in order to change focal depth on the fly.

Achieving such an actuated varifocal display requires eye-tracking because the system needs to know precisely where in the scene the user is looking. By tracing a path into the virtual scene from each of the user’s eyes, the system can find the point that those paths intersect, establishing the proper focal plane that the user is looking at. This information is then sent to the display to adjust accordingly, setting the focal depth to match the virtual distance from the user’s eye to the object.
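A minimal sketch of that ray-intersection step, assuming the tracker gives an origin and direction per eye (real systems would also need to smooth noisy gaze data and handle near-parallel rays):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def focus_distance_m(left_origin, left_dir, right_origin, right_dir):
    """Estimate the fixation distance from the two gaze rays.

    The rays rarely intersect exactly, so we find the closest points between
    the two lines and use their midpoint as the fixation point."""
    a = dot(left_dir, left_dir)
    b = dot(left_dir, right_dir)
    c = dot(right_dir, right_dir)
    w = tuple(l - r for l, r in zip(left_origin, right_origin))
    d = dot(left_dir, w)
    e = dot(right_dir, w)
    denom = a * c - b * b            # approaches 0 when the eyes are parallel (looking far away)
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = tuple(o + t1 * v for o, v in zip(left_origin, left_dir))
    p2 = tuple(o + t2 * v for o, v in zip(right_origin, right_dir))
    midpoint = tuple((x + y) / 2 for x, y in zip(p1, p2))
    eyes_center = tuple((x + y) / 2 for x, y in zip(left_origin, right_origin))
    return math.dist(midpoint, eyes_center)

# Both eyes converging on a point 1 m straight ahead (64 mm IPD assumed):
print(focus_distance_m((-0.032, 0, 0), (0.032, 0, 1), (0.032, 0, 0), (-0.032, 0, 1)))  # ~1.0
```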

A well implemented varifocal display could not only eliminate the vergence-accommodation conflict, but also allow users to focus on virtual objects much nearer to them than in existing headsets.

And well before we’re putting varifocal displays into XR headsets, eye-tracking could be used for simulated depth-of-field, which could approximate the blurring of objects outside of the focal plane of the user’s eyes.
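And a small follow-on sketch of that depth-of-field idea, using the common approximation that perceived defocus grows with the difference, in diopters, between an object’s depth and the gaze-derived focal plane; the blur scale is arbitrary:

```python
def blur_radius_px(object_distance_m, focus_distance_m, strength_px=6.0):
    """Illustrative depth-of-field blur: objects get blurrier the further their
    depth (in diopters) is from the gaze-derived focal plane."""
    defocus_diopters = abs(1.0 / object_distance_m - 1.0 / focus_distance_m)
    return strength_px * defocus_diopters

print(blur_radius_px(0.3, 1.0))   # near object, far focus -> noticeably blurred (~14 px)
print(blur_radius_px(1.05, 1.0))  # close to the focal plane -> nearly sharp
```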

As of now, there’s no major headset on the market with varifocal capabilities, but there’s a growing body of research and development trying to figure out how to make the capability compact, reliable, and affordable.

Foveated Displays

While foveated rendering aims to better distribute rendering power between the part of our vision where we can see sharply and our low-detail peripheral vision, something similar can be achieved for the actual pixel count.

Rather than just changing the detail of the rendering on certain parts of the display vs. others, foveated displays are those which are physically moved (or in some cases “steered”) to stay in front of the user’s gaze no matter where they look.

Foveated displays open the door to achieving much higher resolution in AR and VR headsets without brute-forcing the problem by trying to cram pixels at higher resolution across our entire field-of-view. Doing so would not only be costly, but would also run into challenging power and size constraints as the number of pixels approaches retinal resolution. Instead, foveated displays would move a smaller, pixel-dense display to wherever the user is looking based on eye-tracking data. This approach could even lead to higher fields-of-view than could otherwise be achieved with a single flat display.
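As a rough sketch of the steering idea (a hypothetical two-axis mechanism with an illustrative mechanical range, not Varjo’s or anyone else’s actual design), the gaze direction is converted into pan and tilt angles that keep the dense display centered on the fovea:

```python
import math

def steer_to_gaze(gaze_dir, max_deg=15.0):
    """Convert a gaze direction (unit vector, +z forward) into pan/tilt angles
    for a hypothetical mechanism that keeps the pixel-dense microdisplay centered
    on the fovea, clamped to an illustrative mechanical range."""
    x, y, z = gaze_dir
    pan = math.degrees(math.atan2(x, z))    # left/right
    tilt = math.degrees(math.atan2(y, z))   # up/down
    clamp = lambda angle: max(-max_deg, min(max_deg, angle))
    return clamp(pan), clamp(tilt)

print(steer_to_gaze((0.17, 0.0, 0.98)))   # roughly (9.8, 0.0)
```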

A rough approximation of how a pixel-dense foveated display looks against a larger, much less pixel-dense display in Varjo’s prototype headset. | Photo by Road to VR, based on images courtesy Varjo

Varjo is one company working on a foveated display system. They use a typical display that covers a wide field of view (but isn’t very pixel dense), and then superimpose a microdisplay that’s much more pixel dense on top of it. The combination of the two means the user gets both a wide field of view for their peripheral vision, and a region of very high resolution for their foveal vision.

Granted, this foveated display is still static (the high resolution area stays in the middle of the display) rather than dynamic, but the company has considered a number of methods for moving the display to ensure the high resolution area is always at the center of your gaze.

PSVR 2 Horror Shooter ‘Switchback’ Shows Off Unique Eye-tracking Uses in New Video

Don’t blink, because PSVR 2’s eye-tracking may get you more than you bargained for in the headset’s upcoming on-rails horror shooter The Dark Pictures: Switchback VR, which aims to toss some extra scares your way when you least suspect it.

PSVR 2 is releasing on February 22nd, and in its 100+ game lineup is a unique horror game from the makers of Until Dawn: Rush of Blood, which tosses you back into another rollercoaster thrill ride that arms you with plenty of guns to fend off whatever goes bump in the night.

Besides bringing high-quality VR to PS5, Sony’s next-gen headset also packs in eye-tracking, which many games are using for easier UI selection and foveated rendering—useful, but not terribly exciting stuff.

Some developers though, including Supermassive Games, are integrating the feature into their core gameplay loop, which in Switchback’s case allows enemies to move around specifically when your eyes are closed.

In a new gameplay video, Supermassive shows off the feature as it plays out beyond the big ‘DON’T BLINK’ doors, revealing a room full of grotesque mannequins which only move when you blink—and they’re entirely focused on attacking you if they can.
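A minimal sketch of how a mechanic like that can be wired to eye-tracking data; the names and update loop are hypothetical, not Supermassive’s actual code:

```python
from dataclasses import dataclass

@dataclass
class Mannequin:
    position: float        # distance from the player along its path, in meters
    speed: float = 1.5     # how fast it advances, in meters per second

def update_mannequins(mannequins, eyes_closed, dt):
    """Advance mannequins toward the player only while the eye tracker reports
    that the player's eyes are closed (a blink or a deliberate squeeze shut)."""
    if not eyes_closed:
        return                      # frozen whenever the player can see them
    for m in mannequins:
        m.position = max(0.0, m.position - m.speed * dt)

# Example frame update: a 150 ms blink lets each mannequin creep roughly 22 cm closer.
horde = [Mannequin(position=5.0), Mannequin(position=3.2)]
update_mannequins(horde, eyes_closed=True, dt=0.15)
print([round(m.position, 2) for m in horde])
```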

Alejandro Arque Gallardo, Game Director at Supermassive, says another mannequin type that works with eye-tracking is also on the way, though he cryptically notes that it will work in “a completely different way.”

We’ve linked to the timestamp (above) where Arque Gallardo discusses Switchback’s eye-tracking mechanic. The full video also delves into haptics, adaptive triggers, spatial audio, and the multiple areas you can encounter in the game.

The Dark Pictures: Switchback VR is launching on March 16th, priced at $40. You can pre-order the game here. In the meantime, make sure to check out our growing list of all confirmed games coming to PSVR 2.
