optics

new-compact-facial-recognition-system-passes-test-on-michelangelo’s-david

New compact facial-recognition system passes test on Michelangelo’s David

A face for the ages —

Flatter, simpler prototype system uses 5-10 times less power than smartphone tech.

A new lens-free and compact system for facial recognition scans a bust of Michelangelo’s David and reconstructs the image using less power than existing 3D surface imaging systems.


W-C Hsu et al., Nano Letters, 2024

Facial recognition is a common feature for unlocking smartphones and gaming systems, among other uses. But the technology currently relies upon bulky projectors and lenses, hindering its broader application. Scientists have now developed a new facial recognition system that employs flatter, simpler optics and requires less energy, according to a recent paper published in the journal Nano Letters. The team tested their prototype system with a 3D replica of Michelangelo’s famous David sculpture and found that it recognized the face as well as existing smartphone facial recognition does.

The current commercial 3D imaging systems in smartphones (like Apple’s iPhone) extract depth information via structured light. A dot projector uses a laser to project a pseudorandom beam pattern onto the face of the person looking at a locked screen. It does so thanks to several other built-in components: a collimator, light guide, and special lenses (known as diffractive optical elements, or DOEs) that break the laser beam apart into an array of some 32,000 infrared dots. The camera can then interpret that projected beam pattern to confirm the person’s identity.
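
To make the geometry concrete, here is a minimal sketch of the triangulation behind this kind of structured-light depth sensing (not code from the paper; the focal length, baseline, and disparities are illustrative assumptions):

```python
import numpy as np

# A projector casts a known dot pattern; a camera offset by a small
# baseline observes it. Each dot's shift (disparity, in pixels) from
# its reference position encodes depth via classic triangulation.

FOCAL_LENGTH_PX = 1400.0  # camera focal length in pixels (assumed)
BASELINE_M = 0.02         # projector-to-camera baseline in meters (assumed)

def depth_from_disparity(disparity_px: np.ndarray) -> np.ndarray:
    """Z = f * b / d: a larger shift means a closer surface point."""
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

# Three dots whose observed positions shifted by 20, 35, and 56 pixels:
print(depth_from_disparity(np.array([20.0, 35.0, 56.0])))
# -> [1.4 0.8 0.5]  (meters)
```

Repeat that calculation for tens of thousands of dots and you get the depth map that the phone compares against the enrolled face.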

Packing in all of those optical components, like lasers, makes commercial dot projectors rather bulky, which makes them harder to integrate into applications such as robotics and augmented reality, as well as the next generation of facial recognition technology. They also consume significant power. So Wen-Chen Hsu, of National Yang Ming Chiao Tung University and the Hon Hai Research Institute in Taiwan, and colleagues turned to ultrathin optical components known as metasurfaces for a potential solution. These metasurfaces can replace bulkier components for modulating light and have proven popular for depth sensors, endoscopes, tomography, and augmented reality systems, among other emerging applications.

Schematic of a new facial recognition system using a camera and metasurface-enhanced dot projector.


W-C Hsu et al., Nano Letters, 2024

Hsu et al. built their own depth-sensing facial recognition system incorporating a metasurface hologram in place of the diffractive optical element. They replaced the standard vertical-cavity surface-emitting laser (VCSEL) with a photonic crystal surface-emitting laser (PCSEL). (The structure of photonic crystals is the mechanism behind the bright iridescent colors in butterfly wings or beetle shells.) The PCSEL can generate its own highly collimated light beam, so there was no need for the bulky light guide or collimation lenses needed in VCSEL-based dot projector systems.

The team tested their new system on a replica bust of David, and it worked as well as existing smartphone facial recognition, based on comparing the infrared dot patterns to online photos of the statue. They found that their system generated nearly one and a half times more infrared dots (some 45,700) than the standard commercial technology, from a device whose surface area is 233 times smaller than a standard dot projector’s. “It is a compact and cost-effective system, that can be integrated into a single chip using the flip-chip process of PCSEL,” the authors wrote. Additionally, “The metasurface enables the generation of customizable and versatile light patterns, expanding the system’s applicability.” It’s more energy-efficient to boot.

Nano Letters, 2024. DOI: 10.1021/acs.nanolett.3c05002

Listing image by W-C Hsu et al., Nano Letters, 2024


novel-camera-system-lets-us-see-the-world-through-eyes-of-birds-and-bees

Novel camera system lets us see the world through eyes of birds and bees

A fresh perspective —

It captures natural animal-view moving images with over 90 percent accuracy.

A new camera system and software package allows researchers and filmmakers to capture animal-view videos. Credit: Vasas et al., 2024.

Who among us hasn’t wondered how animals perceive the world, often quite differently from how humans do? There are various methods by which scientists, photographers, filmmakers, and others attempt to reconstruct, say, the colors that a bee sees as it hunts for a flower ripe for pollinating. Now an interdisciplinary team has developed an innovative camera system that is faster and more flexible in terms of lighting conditions than existing systems, allowing it to capture moving images of animals in their natural settings, according to a new paper published in the journal PLoS Biology.

“We’ve long been fascinated by how animals see the world. Modern techniques in sensory ecology allow us to infer how static scenes might appear to an animal,” said co-author Daniel Hanley, a biologist at George Mason University in Fairfax, Virginia. “However, animals often make crucial decisions on moving targets (e.g., detecting food items, evaluating a potential mate’s display, etc.). Here, we introduce hardware and software tools for ecologists and filmmakers that can capture and display animal-perceived colors in motion.”

Per Hanley and his co-authors, different animal species possess unique sets of photoreceptors that are sensitive to a wide range of wavelengths, from the ultraviolet to the infrared, depending on each animal’s specific ecological needs. Some animals can even detect polarized light. So every species will perceive color a bit differently. Honeybees and birds, for instance, are sensitive to UV light, which isn’t visible to human eyes. “As neither our eyes nor commercial cameras capture such variations in light, wide swaths of visual domains remain unexplored,” the authors wrote. “This makes false color imagery of animal vision powerful and compelling.”

However, the authors contend that current techniques for producing false color imagery can’t quantify the colors animals see while in motion, an important factor since movement is crucial to how different animals communicate and navigate the world around them via color appearance and signal detection. Traditional spectrophotometry, for instance, relies on object-reflected light to estimate how a given animal’s photoreceptors will process that light, but it’s a time-consuming method, and much spatial and temporal information is lost.

Peacock feathers through eyes of four different animals: (a) a peafowl; (b) humans; (c) honeybees; and (d) dogs. Credit: Vasas et al., 2024.

Multispectral photography takes a series of photos across various wavelengths (including UV and infrared) and stacks them into different color channels to derive camera-independent measurements of color. This method trades some accuracy for better spatial information and is well-suited for studying animal signals, for instance, but it only works on still objects, so temporal information is lacking.

That’s a shortcoming because “animals present and perceive signals from complex shapes that cast shadows and generate highlights,” the authors wrote. “These signals vary under continuously changing illumination and vantage points. Information on this interplay among background, illumination, and dynamic signals is scarce. Yet it forms a crucial aspect of the ways colors are used, and therefore perceived, by free-living organisms in natural settings.”
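
For a concrete sense of that classic workflow, here is a minimal sketch of multispectral stacking with reflectance calibration; the data are synthetic and the 50 percent standard is an assumption for illustration, not any study’s actual pipeline:

```python
import numpy as np

# Multispectral stacking, in sketch form: one photo per wavelength band
# is stacked into a cube, then divided by a matching shot of a diffuse
# standard of known reflectance so pixel values become reflectances,
# i.e., camera-independent measurements of color.

bands = ["uv", "blue", "green", "red"]
rng = np.random.default_rng(0)

# One (H, W) exposure per band of the subject...
subject = {b: rng.uniform(0.1, 0.9, size=(4, 4)) for b in bands}
# ...and of a standard with known 50% reflectance, under the same light.
standard = {b: rng.uniform(0.4, 0.6, size=(4, 4)) for b in bands}

# Reflectance = subject / standard * the standard's known reflectance.
cube = np.stack([subject[b] / standard[b] * 0.5 for b in bands], axis=-1)
print(cube.shape)  # (4, 4, 4): height, width, wavelength band
```

Because each band requires its own still exposure, anything that moves between shots smears across the cube, which is exactly the limitation the new system targets.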

So Hanley and his co-authors set out to develop a camera system capable of producing high-precision animal-view videos that capture the full complexity of visual signals as they would be perceived by an animal in a natural setting. They combined existing methods of multispectral photography with new hardware and software designs. The camera records video in four color channels simultaneously (blue, green, red, and UV). Once that data has been processed into “perceptual units,” the result is an accurate video of how a colorful scene would be perceived by various animals, based on what we know about which photoreceptors they possess. The team’s system predicts the perceived colors with 92 percent accuracy. The cameras are commercially available, and the software is open source so that others can freely use and build on it.
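
In spirit, that processing step resembles the following sketch. The honeybee weighting matrix here is made up for illustration; the actual conversion rests on measured spectral sensitivities of each species’ photoreceptors:

```python
import numpy as np

# Map camera channels (blue, green, red, UV) to an animal's
# photoreceptor responses with a per-pixel linear transform, then
# normalize into relative stimulation ("perceptual units").

# A toy 2x2-pixel frame with four channels: B, G, R, UV.
frame_bgru = np.array([
    [[0.2, 0.5, 0.7, 0.1], [0.3, 0.4, 0.6, 0.0]],
    [[0.1, 0.6, 0.2, 0.8], [0.4, 0.4, 0.4, 0.4]],
])

# Hypothetical honeybee receptors (UV, blue, green) as weighted sums of
# the camera channels. Rows: receptors; columns: B, G, R, UV.
bee_weights = np.array([
    [0.05, 0.00, 0.00, 0.95],  # UV receptor: dominated by the UV channel
    [0.85, 0.10, 0.00, 0.05],  # blue receptor
    [0.10, 0.80, 0.10, 0.00],  # green receptor
])

# Per-pixel receptor catches: (H, W, 4) @ (4, 3) -> (H, W, 3).
catches = frame_bgru @ bee_weights.T

# Normalize so each pixel's receptor responses sum to 1, yielding the
# relative stimulation from which false-color video is rendered.
perceptual_units = catches / catches.sum(axis=-1, keepdims=True)
print(perceptual_units.round(3))
```

A bird-view rendering would swap in a different weighting matrix (birds have four cone types, including one sensitive to UV), which is what makes the same footage reusable across species.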

The video at the top of this article depicts the colors perceived by honeybees watching fellow bees foraging and interacting (even fighting) on flowers—an example of the camera system’s ability to capture behavior in a natural setting. Below, Hanley applies UV-blocking sunscreen in the field. His light-toned skin looks roughly the same in human vision and honeybee false color vision “because skin reflectance increases progressively at longer wavelengths,” the authors wrote.


digilens-expands-ecosystem-with-hardware,-software-announcements

DigiLens Expands Ecosystem With Hardware, Software Announcements

DigiLens may not be on every XR user’s mind, but we all owe them a lot. The optical components manufacturer only recently released its first branded wearable, but the organization makes parts for a number of XR companies and products. That’s why it’s so exciting that the company announced a wave of new processes and partnerships over the last few weeks.

SRG+

“Surface relief gratings” are one intricate step in the production of an equally intricate system: the waveguide, the optical component that put DigiLens on the map. The short of it is that waveguides are the translucent screen onto which a feed is cast by an accompanying “light engine” in this particular approach to AR displays.

DigiLens doesn’t make light engines, but the methods it uses to produce lenses can reduce “eye glow,” which is essentially wasted light. The company’s new “SRG+” waveguide process achieves these ends at a lower cost, while also increasing the aspect ratio for an improved field of view on a lighter lens that can be produced more efficiently at a larger scale.

DigiLens announces SRG+

Lens benefits aside, this process improvement also allows for a more efficient light engine. A more efficient light engine translates to less energy consumption and a smaller form factor for the complete device. All of those are good selling points for a head-worn display. Many of those benefits are also true for Micro OLED lenses, a different approach to AR displays.

“I am excited about Digilens’ recent SRG+ developments, which provide a new, low-cost replication technology satisfying such drastic nanostructure requirements,” Dr. Bernard Kress, President of SPIE, the international society for optics and photonics, said in a release. “The AR waveguides field is the tip of the iceberg.”

A New Partner in Mojo Vision

The first major partner to take advantage of this new process is Mojo Vision, a Micro-LED manufacturer that became famous in the industry for pursuing AR contact lenses. While that product has yet to materialize, its pursuit has resulted in Mojo Vision holding records for large displays on small tech. And those displays can get even larger and lighter thanks to SRG+.

“Bringing our technologies together will raise the bar on display performance, and efficiency in the AR/XR industry,” Mojo Vision CEO Nikhil Balram said in a release shared with ARPost. “Partnering with DigiLens brings AR glasses closer to mass-scale consumer electronics.”

This partnership may also help to solve another of AR’s persistent challenges: the sunny problem. AR glasses to date are almost always tinted. That’s because, to see AR elements in high-ambient-light conditions, either the display needs to be exceptionally bright or the view through the lens needs to be artificially darkened. Instead of cranking up the brightness, manufacturers opt for tinted lenses.

“The total form factor of the AR glasses can finally be small and light enough for consumers to wear for long periods of time and bright enough to allow them to see the superimposed digital information — even on a sunny day — without needing to darken the lenses,” DigiLens CEO Chris Pickett said in the release.
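
A rough way to see why tinting has been the easy way out: in a simplified see-through contrast model, the display brightness needed for readability scales with the ambient light the lens lets through. The numbers below are assumptions for illustration, not DigiLens figures:

```python
# Simplified see-through contrast model for AR in bright light.
# The virtual image must stand out against the world seen through the
# lens: contrast = (display + background) / background.

AMBIENT_NITS = 10_000.0  # bright outdoor scene luminance (assumed)
TARGET_CONTRAST = 3.0    # a readability floor for the overlay (assumed)

def required_display_nits(tint_transmission: float) -> float:
    """Display luminance needed to hit the target contrast."""
    background = AMBIENT_NITS * tint_transmission
    return (TARGET_CONTRAST - 1.0) * background

for t in (1.0, 0.3, 0.1):  # clear lens, light tint, sunglasses-dark tint
    print(f"transmission {t:.0%}: {required_display_nits(t):,.0f} nits")
# transmission 100%: 20,000 nits
# transmission 30%: 6,000 nits
# transmission 10%: 2,000 nits
```

Darkening the lens slashes the luminance the light engine must deliver; a more efficient engine attacks the same equation from the other side.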

ARGO Is DigiLens’ Golden Fleece

After years of working backstage for device manufacturers, DigiLens announced ARGO at the beginning of this year, calling it “the first purpose-built stand-alone AR/XR device designed for enterprise and industrial-lite workers.” The glasses use the company’s in-house waveguides and a custom-built Android-based operating system running on Qualcomm’s Snapdragon XR2 chip.

DigiLens ARGO glasses

“This is a big milestone for DigiLens at a very high level. We have always been a component manufacturer,” DigiLens VP and GM of Product, Nima Shams told ARPost at the time. “At the same time, we want to push the market and meet the market and it seems like the market is kind of open and waiting.”

More Opportunities With Qualcomm

Close followers of Qualcomm’s XR operations may recall that the company often saves major news around its XR developer platform Snapdragon Spaces for AWE. The platform launched at AWE in 2021 and became available to the public at AWE last year. This year, among other announcements, Qualcomm announced Spaces compatibility with ARGO.

“We are excited to support the democratization of the XR industry by offering Snapdragon Spaces through DigiLens’ leading all-in-one AR headset,” Qualcomm Senior Director of Product Management XR, Said Bakadir, said in a release shared with ARPost.

“DigiLens’ high-transparency and sunlight-readable optics combined with the universe of leading XR application developers from Snapdragon Spaces are critical in supporting the needs of the expanding enterprise and industrial markets,” said Bakadir.

Snapdragon Spaces bundles developer tools including hand and position tracking, scene understanding and persistent anchors, spatial mapping, and plane detection. So, while we’re likely to see more partnerships with more existing applications, this strengthened relationship with Qualcomm could mean more native apps on ARGO.

Getting Rugged With Taqtile

“Industrial-lite” might be getting a bit heavier as DigiLens partners with Taqtile on a “rugged AR-enabled solution for industrial and defense customers” – presumably a more durable version of the original ARGO running Manifest, Taqtile’s flagship enterprise AR solution. Taqtile recently released a free version of Manifest to make its capabilities more available to potential clients.

“ARGO represents just the type of head-mounted, hands-free device that Manifest customers have been looking for,” Taqtile CTO John Tomizuka said in a release. “We continue to evaluate hardware solutions that will meet the unique needs of our deskless workers, and the combination of Manifest and ARGO has the ability to deliver performance and functionality.”

Getting Smart With Wisear

Wisear is a neural interface company that uses “smart earphones” to allow users to control connected devices with their thoughts rather than with touch, gesture, or even voice controls.

For the average consumer, that might just be really cool. For consumers with neurological disorders, that might be a new way to connect to the world. For enterprise, it solves another problem.

wisear smart earphones

Headworn devices mean frontline workers aren’t holding the device, but if they need their hands to interact with it, that still means taking their hands off the job. Voice controls get around this, but some environments and circumstances make voice controls inconvenient or difficult to use. Neural inputs solve those problems too. And Wisear is bringing those solutions to ARGO.

“DigiLens and Wisear share a common vision of using cutting-edge technology to revolutionize the way frontline workers work,” Pickett said in a release shared with ARPost. “Our ARGO smart glasses, coupled with Wisear’s neural interface-powered earphones, will provide frontline workers with the tools they need to work seamlessly and safely.”

More Tracking Options With Ultraleap

Ultraleap is another components manufacturer. They make input accessories like tracking cameras, controllers, and haptics. A brief shared with ARPost only mentions “a groundbreaking partnership” between the companies “offering a truly immersive and user-friendly experience across diverse applications, from gaming and education to industrial training and healthcare.”

That sounds a lot like it hints at wider availability for ARGO, but don’t get your hopes up yet. This is the announcement about which we know the least. Most of this article has come together from releases shared with ARPost in advance of AWE, which is happening now. So, watch our AWE coverage articles as they come out for more concrete information.

So Much More to Come

Announcements from component manufacturers can be tantalizing. We know that they have huge ramifications for the whole industry, but we know that those ramifications aren’t immediate. We’re closely watching DigiLens and its partners to see when some of these announcements might bear tangible fruit, but keep in mind that this company also has its own full model out now.


another-ces-2023-gem:-next-gen-z-lens-waveguide-technology-by-lumus

Another CES 2023 Gem: Next-Gen Z-Lens Waveguide Technology by Lumus

Lumus has recently launched its Z-Lens AR architecture, which can help with the development of more compact AR glasses in the near future, thanks to efforts that reduced its micro-projector’s size by 50%.

Making its debut at the Consumer Electronics Show (CES) 2023, the new Z-Lens—which builds on the company’s Maximus 2D reflective waveguide technology—can be fitted with prescription lenses.

Lumus’ Waveguide Technology

According to the company, Lumus is currently the only brand that produces waveguides for outdoor use. Its luminance efficiency is 10 times better than that of its competitors. Its design allows for a “true white” background and color uniformity. Moreover, the battery life of its micro-projector is 10 times better than that of other waveguide systems on the market.

The structure of the new Z-Lens gives manufacturers more options regarding where to position the aperture, the opening where the light passes through. Lumus CEO Ari Grobman expressed optimism that this flexibility can lead to the creation of less bulky and more “natural-looking” AR eyewear.

“In order for AR glasses to penetrate the consumer market in a meaningful way, they need to be impressive both functionally and aesthetically,” said Grobman in a press release shared with ARPost. “With Z-Lens, we’re aligning form and function, eliminating barriers of entry for the industry, and paving the way for widespread consumer adoption.”

Z-Lens 2D Image Expansion

In AR glasses, the lenses that use Z-Lens reflective waveguides serve as the “screen” onto which a tiny projector displays the AR image. Lumus’s lenses consist of waveguides, a series of cascading, partially reflective mirrors. These mirrors are responsible for 2D expansion, widening the projected image both horizontally and vertically.

Lumus Z-Lens new waveguide technology

Maximus’ patented waveguides reflect the light from the projector two times before the light bounces into your eye. The mini-projector, which is hidden in the temple of the eyeglass frame, has two components. The first is a microdisplay that produces the virtual image; the second is a collimator, which beams the light waves to the waveguide. The mirrors then reflect the light out of the waveguide to the user’s eyes.

“Our introduction of Maximus 2D reflective waveguide technology two years ago was just the beginning,” said Grobman. “Z-Lens, with all of its improvements unlocks the future of augmented reality that consumers are eagerly waiting for.”

New Z-Lens Standout Features

Lumus’s second-generation Z-Lens boasts a lightweight projector with vibrant 2K-by-2K color resolution and 3K-nit/watt brightness. The latter allows users to enjoy AR viewing in daylight or outdoors. Other AR lenses on the market feature sunglass-type tinting to ensure that users can view virtual images. The absence of dark tints allows others to see the user’s eyes as if they’re wearing regular eyeglasses.

The first prototypes of Z-Lens have a 50-degree field of view (FOV). However, the company’s goal is to reach at least 80 degrees FOV in the future.

Z-Lens waveguide technology - Lumus

Here are the other qualities of the Maximus successor:

  • Eliminates ambient light artifacts or small light glares on the optical display that typically occur in AR eyewear.
  • Offers dynamic focal lens integration, which eases vergence-accommodation conflict (VAC). VAC arises when virtual objects appear to sit at a different distance than the fixed plane the eyes must focus on, which can make images blurry (a short worked example follows this list).
  • Z-Lens architecture allows for direct bonding of optical elements for prescription glasses.
  • Provides more privacy through light leakage control. Third parties can’t view the displays seen by the wearer. Moreover, users don’t draw attention because Z-Lens doesn’t produce any “eye glow.”
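
As promised above, here is a short worked example of the vergence-accommodation mismatch, using only basic geometry; the interpupillary distance and both viewing distances are hypothetical:

```python
import math

# The eyes converge (rotate inward) toward where a virtual object
# appears, but focus (accommodation) is pulled to the display's single
# fixed focal plane. The gap between those two cues is the VAC.

IPD_M = 0.063  # typical interpupillary distance in meters (assumed)

def vergence_deg(distance_m: float) -> float:
    """Angle between the two eyes' lines of sight for a target."""
    return math.degrees(2 * math.atan(IPD_M / (2 * distance_m)))

virtual_object_m = 0.5  # object rendered half a meter away (assumed)
focal_plane_m = 2.0     # fixed focal plane of the display (assumed)

print(f"vergence cue: {vergence_deg(virtual_object_m):.2f} degrees")
print(f"focus cue:    {vergence_deg(focal_plane_m):.2f} degrees")
# vergence cue: 7.21 degrees
# focus cue:    1.80 degrees
```

Dynamic focal lenses shrink that gap by moving the focal plane toward where the content appears to be.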

“The Future Is Looking Up”

Waveguides already have practical applications in the military and medical professions, particularly among air force pilots and spinal surgeons. Lumus believes these wearable displays can someday overtake mobile phone screens and laptop monitors as hands-free communication tools.

“AR glasses are poised to transform our society,” Grobman said. “They feature better ergonomics than smartphones, novel interaction opportunities with various environments and businesses, and a much more seamless experience than handheld devices. The future, quite literally, is looking up.”


digilens-announces-argo-–-its-first-mass-market-product

DigiLens Announces ARGO – Its First Mass Market Product

DigiLens has been making groundbreaking components for a while now. And, last spring, the company released a developer kit – the Design v1. The company has now announced its first made-to-ship product, the ARGO.

A Look at the ARGO

DigiLens is calling ARGO “the future of wearable computing” and “the first purpose-built stand-alone AR/XR device designed for enterprise and industrial-lite workers.” That is to say that the device features a 3D-compatible binocular display, inside-out tracking, and numerous other features that have not widely made their way into the enterprise world in a usable form factor.

ARGO AR glasses by DigiLens

“ARGO will open up the next generation of mobile computing and voice and be the first true AR device to be deployed at mass scale,” DigiLens CEO, Chris Pickett, said in a release shared with ARPost. “By helping people connect and collaborate in the real – not merely virtual – world, ARGO will deliver productivity gains across sectors and improve people’s lives.”

Naturally, ARGO is built around DigiLens’ crystal waveguide technology, resulting in an outdoor-bright display with minimal eye glow and a compact footprint. The glasses also run on a Qualcomm Snapdragon XR2 chip.

Dual tracking cameras enable the device’s spatial computing, while a 48 MP camera captures records of the real world through photography and live or recorded video. An antenna on each temple of the glasses ensures uninterrupted connectivity through Wi-Fi and Bluetooth.

Voice commands can be picked up even in loud environments thanks to five microphones. The glasses also work via gaze control and a simple but durable wheel and push-button input in the frames themselves.

The DigiLens Operating System

The glasses aren’t just a hardware offering. They also come with “DigiOS” – a collection of optimized APIs built around open-source Android 12.

“You can have the best hardware in the world, hardware is still an adoption barrier, but software is where the magic happens,” DigiLens VP and GM of Product, Nima Shams, said in a phone interview with ARPost. “We almost wanted the system to be smarter than the user and present them with information.”

While not all of those aspirations made it into the current iteration of DigiOS, the operating system custom-tailored to a hands-free interface does have some tricks. These include adjusting the brightness of the display so that it can be visible to the user without entirely washing out their surroundings when they need situational awareness.

“This is a big milestone for DigiLens at a very high level. We have always been a component manufacturer,” said Shams. “At the same time, we want to push the market and meet the market and it seems like the market is kind of open and waiting.”

A Brief Look Back

ARPost readers have been getting to know DigiLens over the last four years as a component manufacturer, specifically a maker of display components. Last spring, the company released Design v1. The heavily modular developer kit was not widely available, though, according to Shams, it heavily influenced the ARGO.

“What we learned from Design v1 was that there wasn’t a projector module that we could use,” said Shams. “We designed our own light LED projector. … It was direct feedback from the Design v1.”

A lot of the software cues in the ARGO also came from lessons learned with Design v1. The headset helped pave the way for DigiOS.

DigiLens ARGO AR glasses

“Design v1 was the first time that we built a Qualcomm XR2 system, and ARGO uses the same system,” said Shams.

Of course, the Design v1 was largely a technology showcase and a lot of its highly experimental features were never intended to make it into a mass-market product. For example, the ARGO is not the highly individualized modular device that the Design v1 is.

The Future of DigiLens

DigiLens still is, and will continue to be, a components company first and foremost. Its relationship with enterprise led the company to believe that it is singularly situated to deliver a product that industries need and haven’t yet had an answer for.

“I’ve seen some things from CES coming out of our peers that are very slim and very sexy but they’re viewers,” said Shams. “They don’t have inside-out tracking or binocular outdoor-bright displays.”

With all of this talk about mass adoption and the excitement of the company’s first marketed product, I had to ask Shams whether the company had aspirations for an eventual consumer model.

“Our official answer is ‘no,’” said Shams. “Companies like the Samsungs and the Apples of the world all believe that glasses will replace the smartphone and we want to make sure that DigiLens components are in those glasses.”

In fact, in the first week of January, DigiLens announced a partnership with OMNIVISION to “collaborate on developing new consumer AR/VR/XR product solutions.”

“Since XR involves multiple senses such as touch, vision, hearing, and smell, it has potential use cases in a huge variety of fields, such as healthcare, education, engineering, and more,” Devang Patel, OMNIVISION Marketing Director for the IoT and Emerging Segment said in a release. “That’s why our partnership with DigiLens is so exciting and important.” 

Something We Look Forward to Looking Through

The price and shipping date for ARGO aren’t yet public, but interested companies can reach out to DigiLens directly. We look forward to seeing use cases come out of the industry once the glasses have had time to find their way to the workers of the world.


new-waveguide-tech-from-vividq-and-dispelix-promises-new-era-in-ar

New Waveguide Tech From VividQ and Dispelix Promises New Era in AR

Holograms have been largely deemed impossible. However, “possible” and “impossible” are constantly shifting landscapes in immersive technology. Dispelix and VividQ have reportedly achieved holographic displays through a new waveguide device. And the companies are bringing these displays to consumers.

A Little Background

“Hologram” is a term often used in technology because it’s one that people are familiar with from science fiction. However, science fiction is almost exclusively the realm in which holograms reside. Holograms are three-dimensional images. Not an image that appears three-dimensional, but an image that actually has height, width, and depth.

These days, people are increasingly familiar with augmented reality through “passthrough.” In this method, a VR headset records your surroundings, and you view a live feed of that recording augmented with digital effects. The image is still flat. Through techno-wizardry, the effects may appear to occupy different spaces or have different depths, but they don’t.

AR glasses typically use a combination of waveguide lenses and a tiny projector called a light engine. The light engine projects digital effects onto the waveguide, which the wearer looks through. This means lighter displays that don’t rely on camera resolution for a good user experience.

Most waveguide AR projects still reproduce a flat image. These devices, typically used for virtual screens or screen mirroring from a paired device, often include spatial controls like ray casting but are arguably not “true” augmented reality and are sometimes referred to as “viewers” rather than “AR glasses.”

Some high-end waveguide headsets – almost exclusively used in enterprise and defense – achieve more immersive AR, but the virtual elements are still on a single focal plane. This limits immersion and can contribute to the feelings of sickness felt by some XR users. These devices also have a much larger form factor.

These are the issues addressed by the new technology from Dispelix and VividQ. And their material specifically mentions addressing these issues for consumer use cases like gaming.

Bringing Variable-Depth 3D Content to AR

Working together, VividQ and Dispelix have developed a “waveguide combiner” that is able to “accurately display simultaneous variable-depth 3D content within a user’s environment” in a usable form factor. This reportedly increases user comfort as well as immersion.

“Variable-depth 3D content” means that users can place virtual objects in their environment and interact with them naturally. That is opposed to needing to work around the virtual object rather than with it because the virtual object is displayed on a fixed focal plane.

VividQ 3D waveguide

“A fundamental issue has always been the complexity of displaying 3D images placed in the real world with a decent field of view and with an eyebox that is large enough to accommodate a wide range of IPDs [interpupillary distances], all encased within a lightweight lens,” VividQ CEO, Darran Milne, said in a release shared with ARPost. “We’ve solved that problem.”

VividQ and Dispelix have not only developed this technology but have also formed a commercial partnership to bring it to market and into mass production. The physical device is designed to work with VividQ’s software, which is compatible with major game engines, including Unity and Unreal Engine.

“Wearable AR devices have huge potential all around the world. For applications such as gaming and professional use, where the user needs to be immersed for long periods of time, it is vital that content is true 3D and placed within the user’s environment,” Dispelix CEO and co-founder, Antti Sunnari, said in the release. “We are thrilled to be working with VividQ.”

When Waveguides Feel Like a Mirage

Both companies have been building toward this breakthrough for a long time. Virtually every time that ARPost has covered Dispelix, it has at least touched on a partnership with another company, which is typical for a components manufacturer. New product announcements are comparatively rare and are always the result of lots of hard work.

“The ability to display 3D images through a waveguide is a widely known barrier to [a compelling AR wearable device],” VividQ Head of Research, Alfred Newman, said in an email. “To realize the full capability, we needed to work with a partner capable of developing something that worked with our exact specifications.”

Of course, those who have been following immersive tech for a while will understand that a long time working hard to achieve a breakthrough means that that breakthrough reaching the public will require working hard for a long time. Devices using this groundbreaking technology might not reach shelves for a few more calendar pages. Again, Newman explains:

“We license the technology stack to device manufacturers and support them as they develop their products so the timeframe for launching devices is dependent on their product development. …Typically, new products take about two to three years to develop, manufacture, and launch, so we expect a similar time frame until consumers can pick a device off the shelf.”

Don’t Let the Perfect Be the Enemy of the Good

Waiting for the hardware to improve is a classic mass adoption trope, particularly in the consumer space. If you’re reading that you have to wait two to three years for impactful AR, you may have missed the message.

There are a lot of quality hardware and experience options in the AR space already – many of those already enabled by Dispelix and VividQ. If you want natural, immersive, real 3D waveguides, wait two or three years. If you want to experience AR today, you have options in already-available waveguide AR glasses or via passthrough on VR headsets.
