iFixit says new Arm Surface hardware “puts repair front and center”

how things have changed —

Both devices make it relatively easy to get at the battery and SSD.

Microsoft’s 11th-edition Surface Pro, as exploded by iFixit. Despite adhesive holding in the screen and the fact that you need to remove the heatsink to get at the battery, it’s still much more repairable than past Surfaces or competing tablets.

For a long time, Microsoft’s Surface hardware was difficult-to-impossible to open and repair, and devices as recent as 2019’s Surface Pro 7 still managed a repairability score of just 1 out of 10 on iFixit’s scale. 2017’s original Surface Laptop needed to be physically sliced apart to access its internals, making it essentially impossible to fix the machine without destroying it.

But in recent years, partly due to pressure from shareholders and others, Microsoft has made an earnest effort to improve the repairability of its devices. The company has published detailed repair manuals and videos and has made changes to its hardware designs over the years to make it easier to open them without breaking them and easier to replace parts once you’re inside. Microsoft also sells some first-party parts for repairs, though not every part from every Surface is available, and Microsoft and iFixit have partnered to offer other parts as well.

Now, iFixit has torn apart the most recent Snapdragon X-powered Surface Pro and Surface Laptop devices and has mostly high praise for both devices in its preliminary teardown video. Both devices earn an 8 out of 10 on iFixit’s repairability scale, thanks to Microsoft’s first-party service manuals, the relative ease with which both devices can be opened, and clearly labeled internal components.

Beneath the Surface

To open the Surface Laptop, iFixit says you only need to undo four screws, hidden beneath the laptop’s rubber feet; at that point, the bottom of the machine is only attached by magnets, rather than breakable retention clips. Opening the bottom of the laptop provides easy access to the battery and an M.2 2232 SSD. Labels inside the device indicate which screws need to be removed to replace which parts, and what kind of screwdriver you’ll need to do the job; scannable barcodes also make it easier to find repair manuals and parts on Microsoft’s site. Most other parts are easy to remove and replace once the bottom of the laptop is off.

The Surface Pro’s best repairability feature remains its easily accessible M.2 2232 SSD, present under a pop-off cover on the back of the tablet. From there, things get more difficult—accessing the battery and other components requires removing the screen, which is still held in place with adhesive rather than screws or magnets. This adhesive needs to be removed—iFixit cut it away with a thin plastic tool, and closing the tablet back up securely would likely require new adhesive to be applied. Once inside, the parts and screws are still labeled clearly, but you do need to remove the entire heatsink before you can replace the battery.

iFixit uses slightly different criteria for evaluating the repairability of laptops and tablets since tablets are more tightly integrated devices. So despite the identical repairability scores, the Surface Pro remains slightly more difficult to open and fix than the laptop; iFixit is just comparing it to devices like the iPad Air and Pro rather than other PC laptops, and the Surface Pro still looks better than other tablets by comparison despite the use of adhesive.

The teardown video didn’t detail exactly why iFixit knocked points off of each device’s repairability score, though iFixit took note of the soldered-down non-upgradeable RAM and Wi-Fi/Bluetooth modules. Both devices also use way more screws and clips than something like the Framework Laptop, which could also be a factor.

We’ve been using the new Snapdragon-powered Surface devices for a few days now, and we’ll have more thoughts to share about the hardware and its performance in the coming days.

For the second time in two years, AMD blows up its laptop CPU numbering system

this again —

AMD reverses course on “decoder ring” numbering system for laptop CPUs.

AMD’s Ryzen 9 AI 300 series is a new chip and a new naming scheme. | Image courtesy AMD

Less than two years ago, AMD announced that it was overhauling its numbering scheme for laptop processors. Each digit in its four-digit CPU model numbers picked up a new meaning, which, with the help of a detailed reference sheet, promised to inform buyers of exactly what it was they were buying.

One potential issue, as we pointed out at the time, was that this allowed AMD to bump the first and most important of those four digits every year it decided to re-release a processor, regardless of whether the chip actually included substantive improvements. Thus, a “Ryzen 7730U” from 2023 would look two generations newer than a Ryzen 5800U from 2021, despite being essentially identical.

AMD is partially correcting this today by abandoning the self-described “decoder ring” naming system and resetting it to something more conventional.

For its new Ryzen AI laptop processors, codenamed “Strix Point,” AMD is still using the same broad Ryzen 3/5/7/9 tiers to communicate general performance level, plus a one- or two-letter suffix to denote the power class (U for ultraportables, HX for higher-performance chips, and so on). A new three-digit processor number will inform buyers of the chip’s generation with the first digit and denote the specific SKU with the last two digits.
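
To make the scheme concrete, here’s a minimal sketch of how a name in the new format breaks down under the rules described above. The parsing logic and example are illustrative only, not an official AMD decoder:

```python
# Illustrative decoder for the new Ryzen AI naming scheme described above:
# a Ryzen 3/5/7/9 tier, an optional power-class suffix (U, HX, ...), and a
# three-digit number whose first digit is the generation and whose last two
# digits identify the SKU. Not an official AMD tool.
import re

def decode_ryzen_ai(name: str) -> dict:
    """Parse a name like 'Ryzen AI 9 HX 370' into its parts."""
    m = re.fullmatch(r"Ryzen AI (\d) ?([A-Z]{0,2}) ?(\d{3})", name)
    if not m:
        raise ValueError(f"unrecognized model name: {name!r}")
    tier, suffix, number = m.groups()
    return {
        "tier": int(tier),             # broad performance level (3/5/7/9)
        "suffix": suffix,              # power class, e.g. U or HX
        "generation": int(number[0]),  # 3 = Strix Point, per the article
        "sku": int(number[1:]),        # last two digits pick out the SKU
    }

print(decode_ryzen_ai("Ryzen AI 9 HX 370"))
# {'tier': 9, 'suffix': 'HX', 'generation': 3, 'sku': 70}
```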

AMD is changing how it numbers its laptop CPUs again. | Image courtesy AMD

In other words, the company is essentially hitting the undo button.

Like Intel, AMD is shifting from four-digit numbers to three digits. The Strix Point processor numbers will start with the 300 series, which AMD says is because this is the third generation of Ryzen laptop processors with a neural processing unit (NPU) included. Current 7040-series and 8040-series processors with NPUs are not being renamed retroactively, and AMD plans to stop using the 7000- and 8000-series numbering for processor introductions going forward.

AMD wouldn’t describe exactly how it would approach CPU model numbers for new products that used older architectures but did say that new processors that didn’t meet the 40+ TOPS requirement for Microsoft’s Copilot+ program would simply use the “Ryzen” name instead of the new “Ryzen AI” branding. That would include older architectures with slower NPUs, like the current 7040 and 8040-series chips.

Desktop CPUs are, once again, totally unaffected by this change. Desktop processors’ four-digit model numbers and alphabetic suffixes generally tell you all you need to know about their underlying architecture; the new Ryzen 9000 desktop CPUs and the Zen 5 architecture were also announced today.

It seems like a lot of work to do to end up basically where we started, especially when the people at AMD who make and market the desktop chips have been getting by just fine with older model numbers for newly released products when appropriate. But to be fair to AMD, there just isn’t a great way to do processor model numbers in a simple and consistent way, at least not given current market realities:

  • PC OEMs that seem to demand or expect “new” product from chipmakers every year, even though chip companies tend to take somewhere between one and three years to release significantly updated designs.
  • The fact that casual and low-end users don’t actually benefit a ton from performance enhancements, keeping older chips viable for longer.
  • Different subsections of the market that must be filled with slightly different chips (consider chips with vPro versus similar chips without it).
  • The need to “bin” chips—that is, disable small parts of a given silicon CPU or GPU die and then sell the results as a lower-end product—to recoup manufacturing costs and minimize waste.

Apple may come the closest to what the “ideal” would probably be—one number for the overarching chip generation (M1, M3, etc.), one word like “Pro” or “Max” to communicate the general performance level, and a straightforward description of the number of CPU and GPU cores included, to leave flexibility for binning chips. But as usual, Apple occupies a unique position: it’s the only company putting its own processors into its own systems, and the company usually only updates a product when there’s something new to put in it, rather than reflexively announcing new models every time another CES or back-to-school season or Windows version rolls around.

In reverting to more traditional model numbers, AMD has at least returned to a system that people who follow CPUs will be broadly familiar with. It’s not perfect, and it leaves plenty of room for ambiguity as the product lineup gets more complicated. But compared to Intel’s rebranding of 13th-gen Core chips, the whole “Intel Processor” thing, or Qualcomm’s unfriendly eight-digit model numbers for its Snapdragon X Plus and Elite chips, AMD’s new nomenclature is at least a devil we know.

$899 mini PC puts Snapdragon X Elite into a mini desktop for developers

developers developers developers —

Well-specced box includes the best Snapdragon X Elite, 32GB RAM, 512GB SSD.

The Qualcomm Snapdragon Dev Kit for Windows fits a Snapdragon X Elite and 32GB of RAM into an $899 mini desktop. | Image courtesy Qualcomm

Microsoft and Qualcomm are both making a concerted effort to make Windows-on-Arm happen after years of slow progress and false starts. One thing the companies have done to get software developers on board is to offer mini PC developer kits, which can be connected to a developer’s normal multi-monitor setup and don’t require the same cash outlay as an equivalently specced Surface tablet or laptop.

Qualcomm has announced the Snapdragon Dev Kit for Windows, a small black plastic mini PC with the same internal hardware as the new wave of Copilot+ PCs with Snapdragon X Plus and Snapdragon X Elite processors in them. The box is fairly generously specced, with a slightly faster-than-normal version of the Snapdragon X Elite that can boost up to 4.3 GHz, 32GB of RAM, and a 512GB NVMe SSD.

Unlike the Windows Dev Kit 2023, which appeared to be a repurposed Surface Pro 9 motherboard thrown into a black plastic box, the Snapdragon Dev Kit appears to be purpose-built. It has a single USB-C port on the front and two USB-C ports, an HDMI port, two USB-A ports, a headphone/speaker jack, and an Ethernet port in the back. This isn’t an overwhelming complement of ports, but it’s in line with what Apple offers in the Mac mini.

Perhaps most importantly for developers hoping to play with Microsoft’s new wave of AI-accelerated features and development tools, the Snapdragon Dev Kit includes the same NPU as all the Copilot+ devices announced yesterday. Qualcomm says the NPU is capable of 45 trillion operations per second (TOPS), a bit above the 40 TOPS that Microsoft has defined as the floor for Copilot+ PCs; this requirement means that no current-generation Intel or AMD laptops or desktops qualify for the label. x86 processors with more capable NPUs should arrive sometime this fall.

The back of the box has two more USB-C ports, plus USB-A, HDMI, Ethernet, and audio. There’s a dedicated power jack, so you probably won’t be able to power this from a USB-C charger or monitor. | Image courtesy Qualcomm

The bad news is that this kit will run you $899, $300 more than the Windows Dev Kit 2023 (which was released in 2022). It’s also $680 more than the old Snapdragon 7c-based ECS LIVA QC710, the first Arm developer box that Microsoft offered. Though that model was dramatically under-specced, it does seem like there’s room to offer a cheaper box (maybe with a Snapdragon X Plus and 16GB of RAM) to developers or users who still want to experiment with a Copilot+-capable system but don’t want to drop nearly $1,000 on a desktop.

Given that a Surface Laptop with a Snapdragon X Elite chip and 32GB of RAM will run you at least $2,000, the Snapdragon Dev Kit is still a better deal if you plan to use it primarily as a testbed or a general-purpose desktop. You can sign up to preorder the box now, and it begins shipping on June 18.

Intel’s and Qualcomm’s Huawei export licenses get revoked

More Arm laptops? —

Huawei’s phone division has moved on, but laptops will suffer without Intel.

Huawei’s Intel-powered Matebook X Pro has drawn criticism from US China hawks. | Image courtesy Huawei

The US crackdown on exports to Huawei now includes even stronger restrictions than the company has already faced. The Financial Times reports that Intel and Qualcomm have had their Huawei export licenses revoked, so Huawei will no longer be able to buy chips from either company.

The export ban has been around since 2020 and means that any company wishing to ship parts to Huawei must get approval from the government on a case-by-case basis. Sometimes these come with restrictions, like Qualcomm’s license, which allowed it to ship smartphone chips to Huawei, but not “5G” chips. That led to Qualcomm creating special 4G-only versions of its 5G chips for Huawei, and the company ended up with 4G-only Snapdragon 888 phones in 2021.

Since then, Huawei has been working on its own Arm chips from its chip design division, HiSilicon. In April, the Huawei Pura 70 smartphone launched with an in-house HiSilicon Kirin 9010 SoC made at SMIC, a Chinese chip fab that is also facing export restrictions. With what is probably still a 7 nm manufacturing process, it’s more of a 2020 chip than a 2024 chip, but that’s still fast enough for many use cases.

Assuming HiSilicon can make enough smartphone chips, the loss of Qualcomm chips isn’t a huge deal right now. Qualcomm seemed to know Huawei has moved beyond it, too, saying in a recent SEC filing, “We do not expect to receive product revenues from Huawei beyond the current calendar year.” Huawei is roaring back to life in the Chinese smartphone market, thanks to HiSilicon chips and preferences for locally made goods.

Huawei’s new laptop looks thin, light, and premium. | Image courtesy Huawei

Intel is going to be a bigger problem and was probably the reason for this latest export change. Intel has controversially had a license to ship Huawei laptop chips since 2020, so Huawei’s laptop business hasn’t been hurting much. Just in April, the 2024 Huawei Matebook X Pro launched with Intel’s latest “Meteor Lake” Core Ultra 9 processor. It looks like a top-tier laptop, with a 14-inch, 120 Hz OLED display, fingerprint reader, all the latest Wi-Fi connectivity, Windows 11 (Microsoft also has approval), and an aluminum body. Thanks to the Intel chip, it also has much-hyped “on-board AI processing.”

Shortly after launch, Reuters reported that Republican lawmakers were unhappy about Intel’s involvement with Huawei’s premium laptop, particularly because of its ability to enable nebulous “AI” features. The US recently passed new restrictions on shipping AI chips to China, but those were aimed at serious Nvidia AI server chips like the H200, which powers much of the generative AI industry. The hype around AI also means most consumer gear comes with some kind of “AI” marketing angle nowadays, and apparently that was enough to send lawmakers back to the drawing board.

If it feels like you’ve heard of a thousand Huawei export ban expansions that don’t seem very effective, you’re not alone. That Reuters report quotes Congressman Michael McCaul (R-Texas) expressing the same feeling: “These approvals must stop. Two years ago, I was told licenses to Huawei would stop. Today, it doesn’t seem as though the policy has changed.” The policy has changed, like when new licenses stopped being issued in 2023, but that apparently didn’t involve revoking existing licenses. Profit-first US companies are fighting these bans every step of the way since a Huawei contract can represent millions of dollars. Huawei can also see all of this coming and is doing its best to adjust.

Assuming this latest restriction finally does the trick, with no Intel chips, Huawei’s laptop business will surely suffer once it runs out of its current stockpile. With Arm laptops becoming more and more popular, though, maybe the next step for Huawei’s laptop division is a HiSilicon laptop. Such a laptop would probably be very slow, but it would be better than nothing.

XR Industry Giants Team up to Save Key Developer Tool

Microsoft, Qualcomm and Magic Leap announced a partnership to “guide the evolution” of the Mixed Reality Toolkit (MRTK), a cross-platform AR/VR development framework which has now gone open-source.

MRTK was a Microsoft-driven project that provided a set of components and features used to accelerate cross-platform XR app development in the Unity game engine. The development team behind MRTK was unfortunately disbanded when Microsoft cut both the MRTK and AltspaceVR teams earlier this year in a wide-reaching round of layoffs.

Still, now that MRTK is an open-source project, Microsoft is joining XR industry peers Qualcomm and Magic Leap to form an independent organization on GitHub that aspires to transform the software into a “true multi-platform toolkit that enables greater third-party collaboration.”

“With Magic Leap joining Microsoft as equal stakeholders on the MRTK steering committee, we hope to enrich the current ecosystem and help our developer community create richer, more immersive experiences that span platforms,” Magic Leap says in a blog post. “Additionally, our support for MRTK3 will allow for simple porting of MRTK3 apps from other OpenXR devices to our platform.”

MRTK3 already supports a wide range of platforms, either fully or experimentally, including OpenXR devices like Microsoft HoloLens 2, Meta Quest, Windows Mixed Reality, SteamVR, Oculus Rift (on OpenXR), Lenovo ThinkReality A3, as well as Windows Traditional desktop. The committee says more devices are “coming soon,” one of which will likely be the Magic Leap 2 AR headset.

Meanwhile, Microsoft announced MRTK3 is on track to reach general availability to developers in the second week of September 2023. To learn more, check out Microsoft’s MRTK3 hub, which includes support info, tutorials, and more.

[Industry Direct] Snapdragon Spaces Expands Support for the Next Generation of MR Devices

Industry Direct by the Snapdragon Spaces Team

Industry Direct is our program for sponsors who want to speak directly to the Road to VR newsletter audience. Industry Direct posts are written by sponsors with no involvement from the Road to VR editorial team. Links to these posts appear only in our newsletter and do not intermix with our on-site editorial feed. Industry Direct sponsors help make Road to VR possible.

It’s clear that we’re undergoing another massive shift in how we connect, play, work, and interact with the world around us. More specifically, a shift in the devices we use every day.

Mixed reality (MR) is taking the world by storm, and headworn devices are quickly winning the hearts (and habits) of consumers. Extended Reality (XR) devices are getting lighter, sleeker, and more performant, providing unprecedented immersive experiences. New players in the market mean a boost for the industry and more options for developers looking to create the next generation of AR, VR, and MR apps and experiences.

A New Wave of MR Devices and Experiences

At AWE 2023 (Augmented World Expo), Qualcomm announced that its Snapdragon Spaces XR Developer Platform now supports the new generation of MR devices, including Lenovo’s ThinkReality VRX, Oppo’s new MR Glass Developer Edition, all-in-one AR devices from DigiLens and TCL RayNeo, and other prominent headworn devices expected to be released later this year. Powered by Snapdragon chipsets purposefully built for XR, these devices are a game changer for developers looking to combine computer vision, AI, and 5G capabilities to build immersive and ultra-realistic experiences.

By expanding the perception technology stack from AR to MR, Snapdragon Spaces enables more developers to push the boundaries of reality, all thanks to the video passthrough capabilities combined with features that seamlessly understand environments and users.

With the wide variety of devices available and soon to be available in the market, developers can reap the benefits from working with a platform that is based on OpenXR. Snapdragon Spaces enables developers to easily deploy applications across multiple devices while being part of an open and rapidly growing ecosystem.

Strong Momentum for XR Developers

Developers are in the driver’s seat leading, disrupting and creating this new era of spatial computing.

Hugo Swart, VP and GM of XR, highlighted the incredible traction the Snapdragon Spaces ecosystem is getting: thousands of developers have joined the Snapdragon Spaces community, more than 80 members have joined the Snapdragon Spaces Pathfinder Program, the Metaverse Fund has made three new venture investments, and an inaugural group of 10 companies has joined the Niantic Lightship and Snapdragon Spaces developer initiative.

The platform has been a critical building block for developers across productivity, gaming and entertainment, health, education, training and other verticals to deliver innovative apps based on the world’s most popular development engines: Unity and Unreal.

Get Started with Snapdragon Spaces

The XR market is about to experience a huge influx of content, applications, new devices and increased adoption.

Snapdragon Spaces continues to expand and create an open ecosystem that enables developers to pioneer innovative experiences for the next generation of immersive technology. For developers who want to help build this new era of spatial computing, check out Snapdragon Spaces.

AWE USA 2023 Day One: XR, AI, Metaverse, and More

AWE USA 2023 saw a blossoming industry defending itself from negative press and a perceived rivalry with other emerging technologies. Fortunately, Day One also brought big announcements, great discussions, and a little help from AI itself.

Ori Inbar’s Welcome Address

Historically, AWE has started with an address from founder Ori Inbar. This time, it started with an address from a hologram of Ori Inbar appearing on an ARHT display.

Ori Inbar hologram at AWE USA 2023 Day 1

The hologram waxed on for a few minutes about progress in the industry and XR’s incredible journey. Then the human Ori Inbar appeared and told the audience that everything that the hologram said was written by ChatGPT.

While (the real) Inbar quipped that he uses artificial intelligence to show him how not to talk, he addressed recent media claims that AI is taking attention and funding away from XR. He has a different view.

“it’s ON !!! Ori Inbar just started his opening key note at #AWE2023. Holo-Ori was here thanks to our friends from @arht_tech. @como pic.twitter.com/Do23hjIkST” — AWE (@ARealityEvent), May 31, 2023

“We industry insiders know this is not exactly true … AI is a good thing for XR. AI accelerates XR,” said Inbar. “XR is the interface for AI … our interactions [with AI] will become a lot less about text and prompts and a lot more about spatial context.”

“Metaverse, Shmetaverse” Returns With a Very Special Guest

Inbar has always been bullish on XR. He has been skeptical of the metaverse.

At the end of his welcome address last year, Inbar praised himself for not saying “the M word” a single time. The year before that, he opened the conference with a joke game show called “Metaverse, Shmetaverse.” Attendees this year were curious to see Inbar share the stage with a special guest: Neal Stephenson.

Neal Stephenson at AWE USA 2023 Day 1

Stephenson’s 1992 book, Snow Crash, introduced the world to the word “metaverse” – though Stephenson said that he wasn’t the first one to imagine the concept. He also addressed the common concern that the term for shared virtual spaces came from a dystopian novel.

“The metaverse described in Snow Crash was my best guess about what spatial computing as a mass medium might look like,” said Stephenson. “The metaverse itself is neither dystopian nor utopian.”

Stephenson then commented that the last five years or so have seen the emergence of the core technologies necessary to create the metaverse, though it still suffers from a lack of compelling content. That’s something that his company, Lamina1, hopes to address through a blockchain-based system for rewarding creators.

“There have to be experiences in the metaverse that are worth having,” said Stephenson. “For me, there’s a kind of glaring and frustrating lack of support for the people who make those experiences.”

AWE 2023 Keynotes and Follow-Ups

Both Day One and Day Two of AWE start out with blocks of keynotes on the main stage. On Day One, following Inbar’s welcome address and conversation with Stephenson, we heard from Qualcomm and XREAL (formerly Nreal). Both talks kicked off themes that would be taken up in other sessions throughout the day.

Qualcomm

From the main stage, Qualcomm Vice President and General Manager of XR, Hugo Swart, presented “Accelerating the XR Ecosystem: The Future Is Open.” He commented on the challenge of developing AR headsets, but mentioned the half-dozen or so Qualcomm-enabled headsets released in the last year, including the Lenovo ThinkReality VRX announced Tuesday.

Hugo Swart of Qualcomm at AWE USA 2023 Day 1

Swart was joined on the stage by OPPO Director of XR Technology, Yi Xu, who announced a new Qualcomm-powered MR headset that would become available as a developer edition in the second half of this year.

As exciting as those announcements were, it was a software announcement that really made a stir. It’s a new Snapdragon Spaces tool called “Dual Render Fusion.”

“We have been working very hard to reimagine smartphone XR when used with AR glasses,” said Swart. “The idea is that mobile developers designing apps for 2D expand those apps to world-scale apps without any knowledge of XR.”

Keeping the Conversation Going

Another talk, “XR’s Inflection Point” presented by Qualcomm Director of Product Management Steve Lukas, provided a deeper dive into Dual Render Fusion. The tool allows an experience to use a mobile phone camera and a headworn device’s camera simultaneously. Existing app development tools hadn’t allowed this because (until now) it didn’t make sense.

Steve Lukas at AWE 2023 Day 1

“To increase XR’s adoption curve, we must first flatten its learning curve, and that’s what Qualcomm just did,” said Lukas. “We’re not ready to give up on mobile phones so why don’t we stop talking about how to replace them and start talking about how to leverage them?”

A panel discussion, “Creating a New Reality With Snapdragon Today” moderated by Qualcomm Senior Director of Product Management XR Said Bakadir, brought together Xu, Lenovo General Manager of XR and Metaverse Vishal Shah, and DigiLens Vice President of Sales and Marketing Brian Hamilton. They largely addressed the need to rethink AR content and delivery.

From left to right: Vishal Shah, Brian Hamilton, Yi Xu, and Said Bakadir at AWE USA 2023 Day 1

“When I talk to the developers, they say, ‘Well there’s no hardware.’ When I talk to the hardware guys, they say, ‘There’s no content.’ And we’re kind of stuck in that space,” said Bakadir.

Hamilton and Shah both said, in their own words, that Qualcomm is creating “an all-in-one platform” and “an end-to-end solution” that solves the content/delivery dilemma that Bakadir opened with.

XREAL

In case you blinked and missed it, Nreal is now XREAL. According to a release shared with ARPost, the name change had to do with “disputes regarding the Nreal mark” (probably how similar it was to “Unreal”). But, “the disputes were solved amicably.”

Chi Xu of XREAL at AWE 2023

The only change is the name – the hardware and software are still the hardware and software that we know and love. So, when CEO Chi Xu took the stage to present “Unleashing the Potential of Consumer AR” he just focused on progress.

From one angle, that progress looks like a version of XREAL’s AR operating system for Steam Deck, which Xu said is “coming soon.” From another angle, it looked like the partnership with Sightful which recently resulted in “Spacetop” – the world’s first AR laptop.

XREAL also announced Beam, a controller and compute box that can connect wirelessly or over a wired connection to XREAL glasses specifically for streaming media. Beam also allows comfort and usability settings for the virtual screen that aren’t supported by the company’s current console and app integrations. Xu called it “the best TV innovation since TV.”

AI and XR

A number of panels and talks also picked up on Inbar’s theme of AI and XR. And they all (as far as I saw) unanimously agreed with Inbar’s assessment that there is no actual competition between the two technologies.

The most in-depth discussion on the topic was “The Intersection of AI and XR,” a panel discussion between XR ethicist Kent Bye, Lamina1 CPO Tony Parisi, and HTC Global VP of Corporate Development Alvin Graylin, moderated by WXR Fund Managing Partner Amy LaMeyer.

From left to right: Amy LaMeyer, Tony Parisi, Alvin Graylin, and Kent Bye at AWE 2023 Day 1

“There’s this myth that AI is here so now XR’s dead, but it’s the complete opposite,” said Graylin, who pointed out that most forms of tracking and input, as well as approaches to scene understanding, are driven by AI. “AI has been part of XR for a long time.”

While they all agreed that AI is a part of XR, the group disagreed on the extent to which AI could take over content creation.

“A lot of people think AI is the solution to all of their content creation and authoring needs in XR, but that’s not the whole equation,” said Parisi.

Graylin countered that AI will increasingly be able to replace human developers. Bye in particular was vocal that we should be reluctant, even suspicious, about handing over too much creative power to AI in the first place.

“The differentiating factor is going to be storytelling,” said Bye. “I’m seeing a lot of XR theater that has live actors doing things that AI could never do.”

Web3, WebXR, and the Metaverse

The conversation about the relationship between the metaverse and Web3 is still continuing. With both the metaverse and Web3 focusing on the ideas of openness and interoperability, WebXR has become a common ground between the two. WebXR is also the most accessible from a hardware perspective.

“VR headsets will remain a niche tech like game consoles: some people will have them and use them and swear by them and won’t be able to live without them, but not everyone will have one,” Nokia Head of Trends and Innovation Scouting, Leslie Shannon, said in her talk “What Problem Does the Metaverse Solve?”

Leslie Shannon at AWE 2023 Day 1

“The majority of metaverse experiences are happening on mobile phones,” said Shannon. “Presence is more important than immersion.”

Wonderland Engine CEO Jonathan Hale asked “Will WebXR Replace Native XR?” with The Fitness Resort COO Lydia Berry. Berry commented that the availability of WebXR across devices helps developers make their content accessible as well as discoverable.

Lydia Berry and Jonathan Hale at AWE 2023 Day 1

“The adoption challenges around glasses are there. We’re still in the really early adoption phase,” said Berry. “We need as many headsets out there as possible.”

Hale also added that WebXR is being taken more seriously as a delivery method by hardware manufacturers who were previously mainly interested in pursuing native apps.

“More and more interest is coming from hardware manufacturers every day,” said Hale. “We just announced that we’re working with Qualcomm to bring Wonderland Engine to Snapdragon Spaces.”

Keep Coming Back

AWE Day One was a riot but there’s a lot more where that came from. Day Two kicks off with keynotes by Magic Leap and Niantic, there are more talks, more panels, more AI, and the Expo Floor opens up for demos. We’ll see you tomorrow.

DigiLens Expands Ecosystem With Hardware, Software Announcements

DigiLens may not be on every XR user’s mind, but we all owe them a lot. The optical components manufacturer only recently released its first branded wearable, but the organization makes parts for a number of XR companies and products. That’s why it’s so exciting that the company announced a wave of new processes and partnerships over the last few weeks.

SRG+

“Surface relief gratings” are one complicated step in the production of an already complicated system: the waveguide, the optical component that put DigiLens on the map. The short of it is that, in this particular approach to AR displays, the waveguide is the translucent screen onto which a feed is cast by an accompanying “light engine.”

DigiLens doesn’t make light engines, but the methods that they use to produce lenses can reduce “eye glow” – which is essentially wasted light. The company’s new “SRG+” waveguide process achieves these ends at a lower cost, while also increasing the aspect ratio for an improved field of view on a lighter lens that can be produced more efficiently at a larger scale.

DigiLens announces SRG+

Lens benefits aside, this process improvement also allows for a more efficient light engine. A more efficient light engine translates to less energy consumption and a smaller form factor for the complete device. All of those are good selling points for a head-worn display. Many of those benefits also apply to Micro OLED lenses, a different approach to AR displays.

“I am excited about Digilens’ recent SRG+ developments, which provide a new, low-cost replication technology satisfying such drastic nanostructure requirements,” Dr. Bernard Kress, President of SPIE, the international society for optics and photonics, said in a release. “The AR waveguides field is the tip of the iceberg.”

A New Partner in Mojo Vision

The first major partner to take advantage of this new process is Mojo Vision, a Micro-LED manufacturer that became famous in the industry for pursuing AR contact lenses. While that product has yet to materialize, its pursuit has resulted in Mojo Vision holding records for packing large displays onto small hardware. And thanks to SRG+, those displays can get even larger and lighter.

“Bringing our technologies together will raise the bar on display performance, and efficiency in the AR/XR industry,” Mojo Vision CEO Nikhil Balram said in a release shared with ARPost. “Partnering with DigiLens brings AR glasses closer to mass-scale consumer electronics.”

This partnership may also help to solve another one of AR’s persistent challenges: sunlight. AR glasses to date are almost always tinted. That’s because, to see AR elements in high ambient light conditions, the display either needs to be exceptionally bright or the lenses artificially darkened. Instead of cranking up the brightness, manufacturers opt for tinted lenses.

“The total form factor of the AR glasses can finally be small and light enough for consumers to wear for long periods of time and bright enough to allow them to see the superimposed digital information — even on a sunny day — without needing to darken the lenses,” DigiLens CEO Chris Pickett said in the release.

ARGO Is DigiLens’ Golden Fleece

After years of working backstage for device manufacturers, DigiLens announced ARGO at the beginning of this year, calling it “the first purpose-built stand-alone AR/XR device designed for enterprise and industrial-lite workers.” The glasses use the company’s in-house waveguides and a custom-built Android-based operating system running on Qualcomm’s Snapdragon XR2 chip.

DigiLens ARGO glasses

“This is a big milestone for DigiLens at a very high level. We have always been a component manufacturer,” DigiLens VP and GM of Product, Nima Shams told ARPost at the time. “At the same time, we want to push the market and meet the market and it seems like the market is kind of open and waiting.”

More Opportunities With Qualcomm

Close followers of Qualcomm’s XR operations may recall that the company often saves major news around its XR developer platform Snapdragon Spaces for AWE. The platform launched at AWE in 2021 and became available to the public at AWE last year. This year, among other announcements, Qualcomm announced Spaces compatibility with ARGO.

“We are excited to support the democratization of the XR industry by offering Snapdragon Spaces through DigiLens’ leading all-in-one AR headset,” Qualcomm Senior Director of Product Management XR, Said Bakadir, said in a release shared with ARPost.

“DigiLens’ high-transparency and sunlight-readable optics combined with the universe of leading XR application developers from Snapdragon Spaces are critical in supporting the needs of the expanding enterprise and industrial markets,” said Bakadir.

Snapdragon Spaces bundles developer tools including hand and position tracking, scene understanding and persistent anchors, spatial mapping, and plane detection. So, while we’re likely to see more partnerships with more existing applications, this strengthened relationship with Qualcomm could mean more native apps on ARGO.

Getting Rugged With Taqtile

“Industrial-lite” might be getting a bit heavier as DigiLens partners with Taqtile on a “rugged AR-enabled solution for industrial and defense customers” – presumably a more durable version of the original ARGO running Manifest, Taqtile’s flagship enterprise AR solution. Taqtile recently released a free version of Manifest to make its capabilities more available to potential clients.

“ARGO represents just the type of head-mounted, hands-free device that Manifest customers have been looking for,” Taqtile CTO John Tomizuka said in a release. “We continue to evaluate hardware solutions that will meet the unique needs of our deskless workers, and the combination of Manifest and ARGO has the ability to deliver performance and functionality.”

Getting Smart With Wisear

Wisear is a neural interface company that uses “smart earphones” to allow users to control connected devices with their thoughts rather than with touch, gesture, or even voice controls.

For the average consumer, that might just be really cool. For consumers with neurological disorders, that might be a new way to connect to the world. For enterprise, it solves another problem.

wisear smart earphones

Headworn devices mean frontline workers aren’t holding the device, but if they need their hands to interact with it, that still means taking their hands off of the job. Voice controls get around this but some environments and circumstances make voice controls inconvenient or difficult to use. Neural inputs solve those problems too. And Wisear is bringing those solutions to ARGO.

“DigiLens and Wisear share a common vision of using cutting-edge technology to revolutionize the way frontline workers work,” Pickett said in a release shared with ARPost. “Our ARGO smart glasses, coupled with Wisear’s neural interface-powered earphones, will provide frontline workers with the tools they need to work seamlessly and safely.”

More Tracking Options With Ultraleap

Ultraleap is another components manufacturer. They make input accessories like tracking cameras, controllers, and haptics. A brief shared with ARPost only mentions “a groundbreaking partnership” between the companies “offering a truly immersive and user-friendly experience across diverse applications, from gaming and education to industrial training and healthcare.”

That sounds a lot like it hints at more wide availability for ARGO, but don’t get your hopes up yet. This is the announcement about which we know the least. Most of this article has come together from releases shared with ARPost in advance of AWE, which is happening now. So, watch our AWE coverage articles as they come out for more concrete information.

So Much More to Come

Announcements from component manufacturers can be tantalizing. We know that they have huge ramifications for the whole industry, but we know that those ramifications aren’t immediate. We’re closely watching DigiLens and its partners to see when some of these announcements might bear tangible fruit, but keep in mind that the company also has its own complete device, ARGO, out now.

Eye-tracking is a Game Changer for XR That Goes Far Beyond Foveated Rendering

Eye-tracking—the ability to quickly and precisely measure the direction a user is looking while inside of a VR headset—is often talked about within the context of foveated rendering, and how it could reduce the performance requirements of XR headsets. And while foveated rendering is an exciting use-case for eye-tracking in AR and VR headsets, eye-tracking stands to bring much more to the table.

Updated – May 2nd, 2023

Eye-tracking has been talked about with regards to XR as a distant technology for many years, but the hardware is finally becoming increasingly available to developers and customers. PSVR 2 and Quest Pro are the most visible examples of headsets with built-in eye-tracking, along with the likes of Varjo Aero, Vive Pro Eye and more.

With this momentum, in just a few years we could see eye-tracking become a standard part of consumer XR headsets. When that happens, there’s a wide range of features the tech can enable to drastically improve the experience.

Foveated Rendering

Let’s first start with the one that many people are already familiar with. Foveated rendering aims to reduce the computational power required for displaying demanding AR and VR scenes. The name comes from the ‘fovea’—a small pit at the center of the human retina which is densely packed with photoreceptors. It’s the fovea which gives us high resolution vision at the center of our field of view; meanwhile our peripheral vision is actually very poor at picking up detail and color, and is better tuned for spotting motion and contrast than seeing detail. You can think of it like a camera which has a large sensor with just a few megapixels, and another smaller sensor in the middle with lots of megapixels.

The region of your vision in which you can see in high detail is actually much smaller than most think—just a few degrees across the center of your view. The difference in resolving power between the fovea and the rest of the retina is so drastic, that without your fovea, you couldn’t make out the text on this page. You can see this easily for yourself: if you keep your eyes focused on this word and try to read just two sentences below, you’ll find it’s almost impossible to make out what the words say, even though you can see something resembling words. The reason that people overestimate the foveal region of their vision seems to be because the brain does a lot of unconscious interpretation and prediction to build a model of how we believe the world to be.

Foveated rendering aims to exploit this quirk of our vision by rendering the virtual scene in high resolution only in the region that the fovea sees, and then drastically cut down the complexity of the scene in our peripheral vision where the detail can’t be resolved anyway. Doing so allows us to focus most of the processing power where it contributes most to detail, while saving processing resources elsewhere. That may not sound like a huge deal, but as the display resolution of XR headsets and field-of-view increases, the power needed to render complex scenes grows quickly.

Eye-tracking of course comes into play because we need to know where the center of the user’s gaze is at all times, quickly and with high precision, in order to pull off foveated rendering. While it’s difficult to pull this off without the user noticing, it’s possible and has been demonstrated quite effectively on recent headsets like Quest Pro and PSVR 2.
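
As a rough sketch of that idea (with toy thresholds, not any engine’s actual foveation API), a renderer can scale shading resolution by angular distance from the tracked gaze point, keeping full detail only in the few central degrees discussed above:

```python
import math

def shading_scale(pixel_dir, gaze_dir, inner_deg=5.0, outer_deg=25.0):
    """Toy foveated-rendering falloff. Expects unit direction vectors; keeps
    full resolution within ~5 degrees of the gaze (roughly the foveal region)
    and tapers to quarter resolution in the periphery. Thresholds are
    illustrative, not taken from any shipping renderer."""
    dot = sum(a * b for a, b in zip(pixel_dir, gaze_dir))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    if angle <= inner_deg:
        return 1.0   # full resolution in the foveal region
    if angle >= outer_deg:
        return 0.25  # quarter resolution in the far periphery
    t = (angle - inner_deg) / (outer_deg - inner_deg)
    return 1.0 - 0.75 * t  # linear falloff between the two rings

print(shading_scale((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))  # 1.0 at the gaze center
```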

Automatic User Detection & Adjustment

In addition to detecting movement, eye-tracking can also be used as a biometric identifier. That makes eye-tracking a great candidate for multiple user profiles across a single headset—when I put on the headset, the system can instantly identify me as a unique user and call up my customized environment, content library, game progress, and settings. When a friend puts on the headset, the system can load their preferences and saved data.

Eye-tracking can also be used to precisely measure IPD (the distance between one’s eyes). Knowing your IPD is important in XR because it’s required to move the lenses and displays into the optimal position for both comfort and visual quality. Unfortunately, many people understandably don’t know their IPD off the top of their head.

With eye-tracking, it would be easy to instantly measure each user’s IPD and then have the headset’s software assist the user in adjusting the headset’s IPD to match, or warn users that their IPD is outside the range supported by the headset.

In more advanced headsets, this process can be invisible and automatic—IPD can be measured invisibly, and the headset can have a motorized IPD adjustment that automatically moves the lenses into the correct position without the user needing to be aware of any of it, like on the Varjo Aero, for example.
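
The measurement itself is simple once the tracker reports pupil positions; here’s a minimal sketch, with an invented supported range standing in for whatever a particular headset’s lens mechanism can actually cover:

```python
import math

# Hypothetical lens-adjustment range in millimeters; real headsets vary.
SUPPORTED_IPD_MM = (58.0, 72.0)

def ipd_advice(left_pupil_mm, right_pupil_mm):
    """IPD is just the distance between the tracked pupil positions
    (given here in millimeters, in headset coordinate space)."""
    ipd = math.dist(left_pupil_mm, right_pupil_mm)
    lo, hi = SUPPORTED_IPD_MM
    if not lo <= ipd <= hi:
        return f"IPD {ipd:.1f} mm is outside the supported {lo}-{hi} mm range"
    return f"move lenses to {ipd:.1f} mm"

print(ipd_advice((-31.5, 0.0, 0.0), (31.5, 0.0, 0.0)))  # move lenses to 63.0 mm
```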

Varifocal Displays

A prototype varifocal headset | Image courtesy NVIDIA

The optical systems used in today’s VR headsets work pretty well but they’re actually rather simple and don’t support an important function of human vision: dynamic focus. This is because the display in XR headsets is always the same distance from our eyes, even when the stereoscopic depth suggests otherwise. This leads to an issue called vergence-accommodation conflict. If you want to learn a bit more in depth, check out our primer below:

Accommodation

Accommodation is the bending of the eye’s lens to focus light from objects at different distances. | Photo courtesy Pearson Scott Foresman

In the real world, to focus on a near object the lens of your eye bends to make the light from the object hit the right spot on your retina, giving you a sharp view of the object. For an object that’s further away, the light is traveling at different angles into your eye and the lens again must bend to ensure the light is focused onto your retina. This is why, if you close one eye and focus on your finger a few inches from your face, the world behind your finger is blurry. Conversely, if you focus on the world behind your finger, your finger becomes blurry. This is called accommodation.

Vergence

Vergence is the inward rotation of each eye to overlap each eye’s view into one aligned image. | Photo courtesy Fred Hsu (CC BY-SA 3.0)

Then there’s vergence, which is when each of your eyes rotates inward to ‘converge’ the separate views from each eye into one overlapping image. For very distant objects, your eyes are nearly parallel, because the distance between them is so small in comparison to the distance of the object (meaning each eye sees a nearly identical portion of the object). For very near objects, your eyes must rotate inward to bring each eye’s perspective into alignment. You can see this too with our little finger trick as above: this time, using both eyes, hold your finger a few inches from your face and look at it. Notice that you see double-images of objects far behind your finger. When you then focus on those objects behind your finger, now you see a double finger image.
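
The geometry here is simple enough to put numbers on: for a fixation point straight ahead at distance d, eyes separated by an IPD converge by a total angle of 2·atan(IPD / 2d). A quick sketch (assuming a symmetric, straight-ahead fixation point) shows how fast that angle collapses with distance:

```python
import math

def vergence_angle_deg(distance_m, ipd_m=0.063):
    """Total vergence angle for a fixation point straight ahead:
    theta = 2 * atan((IPD / 2) / d). Assumes symmetric fixation."""
    return math.degrees(2 * math.atan((ipd_m / 2) / distance_m))

for d in (0.1, 0.5, 2.0, 100.0):
    print(f"{d:6.1f} m -> {vergence_angle_deg(d):5.2f} degrees")
# ~35 degrees at 10 cm, ~1.8 degrees at 2 m, and essentially
# parallel (~0.04 degrees) at 100 m
```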

The Conflict

With precise enough instruments, you could use either vergence or accommodation to know how far away an object is that a person is looking at. But the thing is, both accommodation and vergence happen in your eye together, automatically. And they don’t just happen at the same time—there’s a direct correlation between vergence and accommodation, such that for any given measurement of vergence, there’s a directly corresponding level of accommodation (and vice versa). Since you were a little baby, your brain and eyes have formed muscle memory to make these two things happen together, without thinking, anytime you look at anything.

But when it comes to most of today’s AR and VR headsets, vergence and accommodation are out of sync due to inherent limitations of the optical design.

In a basic AR or VR headset, there’s a display (which is, let’s say, 3″ away from your eye) which shows the virtual scene, and a lens which focuses the light from the display onto your eye (just like the lens in your eye would normally focus the light from the world onto your retina). But since the display is a static distance from your eye, and the lens’ shape is static, the light coming from all objects shown on that display is coming from the same distance. So even if there’s a virtual mountain five miles away and a coffee cup on a table five inches away, the light from both objects enters the eye at the same angle (which means your accommodation—the bending of the lens in your eye—never changes).

That comes in conflict with vergence in such headsets which—because we can show a different image to each eye—is variable. Being able to adjust the image independently for each eye, such that our eyes need to converge on objects at different depths, is essentially what gives today’s AR and VR headsets stereoscopy.

But the most realistic (and arguably, most comfortable) display we could create would eliminate the vergence-accommodation issue and let the two work in sync, just like we’re used to in the real world.

Varifocal displays—those which can dynamically alter their focal depth—are proposed as a solution to this problem. There’s a number of approaches to varifocal displays, perhaps the most simple of which is an optical system where the display is physically moved back and forth from the lens in order to change focal depth on the fly.

Achieving such an actuated varifocal display requires eye-tracking because the system needs to know precisely where in the scene the user is looking. By tracing a path into the virtual scene from each of the user’s eyes, the system can find the point that those paths intersect, establishing the proper focal plane that the user is looking at. This information is then sent to the display to adjust accordingly, setting the focal depth to match the virtual distance from the user’s eye to the object.
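
Here’s a minimal sketch of that intersection step, assuming the tracker hands us an origin and a direction for each eye’s gaze ray; since the two rays rarely intersect exactly, it takes the midpoint of their closest approach. Illustrative geometry, not any shipping headset’s algorithm:

```python
import math

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def focal_point(o_left, d_left, o_right, d_right):
    """Midpoint of closest approach between two rays (origin o, direction d)."""
    w = tuple(x - y for x, y in zip(o_left, o_right))
    a, b, c = _dot(d_left, d_left), _dot(d_left, d_right), _dot(d_right, d_right)
    d, e = _dot(d_left, w), _dot(d_right, w)
    denom = a * c - b * b
    if abs(denom) < 1e-9:
        return None  # rays are parallel: the user is looking at infinity
    s = (b * e - c * d) / denom  # parameter along the left-eye ray
    t = (a * e - b * d) / denom  # parameter along the right-eye ray
    p = tuple(o + s * v for o, v in zip(o_left, d_left))
    q = tuple(o + t * v for o, v in zip(o_right, d_right))
    return tuple((x + y) / 2 for x, y in zip(p, q))

# Eyes 63 mm apart (in meters), both rotated inward to fixate ~0.5 m ahead.
ang = math.atan(0.0315 / 0.5)
left_dir = (math.sin(ang), 0.0, math.cos(ang))
right_dir = (-math.sin(ang), 0.0, math.cos(ang))
print(focal_point((-0.0315, 0.0, 0.0), left_dir, (0.0315, 0.0, 0.0), right_dir))
# -> roughly (0.0, 0.0, 0.5), i.e. a focal depth of half a meter
```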

A well implemented varifocal display could not only eliminate the vergence-accommodation conflict, but also allow users to focus on virtual objects much nearer to them than in existing headsets.

And well before we’re putting varifocal displays into XR headsets, eye-tracking could be used for simulated depth-of-field, which could approximate the blurring of objects outside of the focal plane of the user’s eyes.

As of now, there’s no major headset on the market with varifocal capabilities, but there’s a growing body of research and development trying to figure out how to make the capability compact, reliable, and affordable.

Foveated Displays

While foveated rendering aims to better distribute rendering power between the part of our vision where we can see sharply and our low-detail peripheral vision, something similar can be achieved for the actual pixel count.

Rather than just changing the detail of the rendering on certain parts of the display vs. others, foveated displays are those which are physically moved (or in some cases “steered”) to stay in front of the user’s gaze no matter where they look.

Foveated displays open the door to achieving much higher resolution in AR and VR headsets without brute-forcing the problem by trying to cram pixels at higher resolution across our entire field-of-view. Doing so would not only be costly but would also run into challenging power and size constraints as the number of pixels approaches retinal resolution. Instead, foveated displays would move a smaller, pixel-dense display to wherever the user is looking based on eye-tracking data. This approach could even lead to higher fields-of-view than could otherwise be achieved with a single flat display.
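
Mechanically, the steering problem reduces to keeping the microdisplay’s center under the gaze vector. A toy version of that mapping, with invented eye-to-display distance and actuator travel limits, might look like this:

```python
import math

def steer_target_mm(gaze_yaw_deg, gaze_pitch_deg,
                    eye_to_display_mm=40.0, travel_mm=15.0):
    """Toy mapping from gaze angles to the (x, y) offset, in millimeters, at
    which a steered high-resolution microdisplay should sit so its center
    stays under the user's gaze. Distances and limits are illustrative."""
    x = eye_to_display_mm * math.tan(math.radians(gaze_yaw_deg))
    y = eye_to_display_mm * math.tan(math.radians(gaze_pitch_deg))
    # Clamp to the actuator's mechanical range.
    x = max(-travel_mm, min(travel_mm, x))
    y = max(-travel_mm, min(travel_mm, y))
    return x, y

print(steer_target_mm(10.0, -5.0))  # gaze 10 degrees right, 5 degrees down
```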

A rough approximation of how a pixel-dense foveated display looks against a larger, much less pixel-dense display in Varjo’s prototype headset. | Photo by Road to VR, based on images courtesy Varjo

Varjo is one company working on a foveated display system. They use a typical display that covers a wide field of view (but isn’t very pixel dense), and then superimpose a microdisplay that’s much more pixel dense on top of it. The combination of the two means the user gets both a wide field of view for their peripheral vision, and a region of very high resolution for their foveal vision.

Granted, this foveated display is still static (the high resolution area stays in the middle of the display) rather than dynamic, but the company has considered a number of methods for moving the display to ensure the high resolution area is always at the center of your gaze.

Popular Quest 2 PC Streaming Software Adds ‘Super Resolution’ Feature for Enhanced Visuals

Virtual Desktop has collaborated with Qualcomm to integrate the company’s Snapdragon Game Super Resolution, a software enhancement squarely targeted at improving the quality and latency of PC visuals wirelessly streamed to Quest 2 and Pico devices.

Virtual Desktop is a great tool not only because it provides standalone headset users wireless access to their computers, but because its developer, Guy Godin, is constantly adding in new features to tempt users away from using built-in solutions, e.g. Air Link.

That’s a tall order since built-in software like Air Link is typically free and usually pretty great, letting Quest and Pico users connect to their VR-ready PCs to play games like Half-Life: Alyx, but Virtual Desktop goes a few steps further. With its native PC application developed for high-quality wireless streaming, you can do things like cycle through multiple physical monitors and even connect to up to four separate computers—a feature set you probably won’t see on the Air Link change log.

Now Godin has worked with Qualcomm to integrate the company’s Snapdragon Game Super Resolution for built-in upscaling, essentially creating higher resolution images from lower resolution inputs so it can be served up to standalone headsets in higher fidelity. Check out the results below:

Because producing clearer visuals with fewer resources is the name of the game, Qualcomm says in a blog post that its techniques can also reduce wireless bandwidth, system pressure, memory usage, and power requirements.

Godin says in a Reddit post that the new upscaling works with “Potato, Low, Medium quality (up to 120fps) and High (up to 90fps), and it upscales to Ultra resolution under the hood. It can work with SSW enabled as well and doesn’t introduce any additional latency.”

You can get Virtual Desktop on Quest over at the Quest Store, priced at $20. It’s also available on Pico Neo 3 and Pico 4, and will also soon arrive on Vive Focus 3 and XR Elite too, Godin says.

Update (10:30 ET): Guy Godin reached out to Road to VR to clarify that the new Snapdragon Game Super Resolution is available on Quest and Pico, and will soon come to Vive Focus 3 and XR Elite. We’ve included that in the body of the article.

Qualcomm Partners with 7 Major Telecoms to Advance Smartphone-tethered AR Glasses

Qualcomm announced at Mobile World Congress (MWC) today it’s partnering with seven global telecommunication companies in preparation for the next generation of AR glasses which are set to work directly with the user’s smartphone.

Partners include CMCC, Deutsche Telekom, KDDI Corporation, NTT QONOQ, T-Mobile, Telefonica, and Vodafone, which are said to currently be working with Qualcomm on new XR devices, experiences, and developer initiatives, including Qualcomm’s Snapdragon Spaces XR developer platform.

Qualcomm announced Snapdragon Spaces in late 2021, a software toolkit focused on performant, low-power devices that allows developers to create head-worn AR experiences from the ground up or add head-worn AR to existing smartphone apps.

Qualcomm and Japan’s KDDI Corporation also announced a multi-year collaboration which it says will focus on the expansion of XR use cases and creation of a developer program in Japan.

Meanwhile, Qualcomm says OEMs are designing “a new wave of devices for operators and beyond” such as the newly unveiled Xiaomi Wireless AR Glass Discovery Edition, OPPO’s new Mixed Reality device and OnePlus 11 5G smartphone.

At least in Xiaomi’s case, its Wireless AR Glass headset streams data from compatible smartphones. By effectively offloading computation to the smartphone, the company’s 126g headset boasts wireless latency as low as 3ms between the smartphone and the glasses, and full-link latency as low as 50ms, which is comparable to wired solutions.

Samsung Partners with Google & Qualcomm to Release Android-powered XR Device

Samsung’s 2023 Unpacked event was all about the company’s Galaxy S23 hardware, although at the end of its hour-long presentation the South Korean tech giant announced it was working with Qualcomm and Google to develop an XR device.

TM Roh, Samsung’s president and head of mobile experiences, didn’t reveal any more than what was said on stage, namely the existence of the partnership itself. However, speaking to The Washington Post, he said the companies are “getting there” and that the XR device is “not too far away.”

It’s not clear what sort of device it will be, since ‘XR’ essentially covers the entire gamut of immersive headsets, including augmented reality (e.g. HoloLens), virtual reality (e.g. Meta Quest 2), and mixed reality (e.g. Meta Quest Pro). Our best bet though is on a standalone MR headset, which uses passthrough cameras to layer computer-generated visuals on top of the user’s physical space, essentially replicating the experience you might have on a see-through AR display, albeit on a VR device.

MR headsets include Meta Quest Pro, HTC Vive XR Elite, and Apple’s rumored headset which is reportedly set to arrive sometime early this year at around $3,000.

Meta Quest Pro | Image courtesy Meta

As you’d imagine, Qualcomm is tasked with building the XR device’s chipset, while Samsung will manufacture the headset’s hardware. Software will be provided by Google; WaPo reports it will be running on “the unannounced version of the Android operating system meant specifically to power devices such as wearable displays.”

With the exception of Qualcomm, which not only produces XR-specific chipsets but also regularly shows off its own XR reference headsets, both Samsung’s and Google’s commitment to the project is something of a long-awaited homecoming.

Samsung was one of the first truly massive tech companies to develop VR hardware. Starting in 2014, the company partnered with Meta (then Oculus) on the Samsung Gear VR platform, which paired the Galaxy Note 4 phone with a headset shell sporting an optimized inertial measurement unit (IMU). Samsung Gear VR was essentially the first high-quality 3DOF mobile VR experience offered to consumers, marking a stark departure from the sort of VR experiences you could find on Google’s more open, but decidedly lower-quality, Cardboard platform.

Notably, Samsung hasn’t released a VR product since the launch of the PC VR headset Odyssey+. Like seemingly all big tech firms these days, it appears to be working on AR glasses.

Samsung Odyssey+ | Image courtesy Samsung

Google, although reportedly also working on an AR device, similarly shelved its VR ambitions when it discontinued its standalone Daydream platform in 2019, something which at the time was essentially the nail in the company’s Android VR coffin. Google previously worked with Lenovo in 2018 to produce its first and only standalone Daydream VR headset, the Lenovo Mirage Solo, which offered 6DOF room-scale tracking while providing only a single 3DOF clicker-style controller.

Since then, Google has only really been vocal about its experimental system for immersive video chatting, Project Starline, which lets people engage in face-to-face video chats without needing an AR or VR headset.

Typically, we’d say Mobile World Congress 2023 would be the next logical place to share more info about the XR hardware partnership. Samsung, Qualcomm and Google will all be present, so we may just learn more there when the week-long event kicks off in Barcelona, Spain on February 27th.
