AI

The Intersections of Artificial Intelligence and Extended Reality

It seems like just yesterday it was AR this, VR that, metaverse, metaverse, metaverse. Now all anyone can talk about is artificial intelligence. Is that a bad sign for XR? Some people seem to think so. However, people in the XR industry understand that it’s not a competition.

In fact, artificial intelligence has a huge role to play in building and experiencing XR content – and it’s been part of high-level metaverse discussions for a very long time. I’ve never claimed to be a metaverse expert and I’m not about to claim to be an AI expert, so I’ve been talking to the people building these technologies to learn more about how they help each other.

The Types of Artificial Intelligence in Extended Realities

For the sake of this article, there are three main branches of artificial intelligence: computer vision, generative AI, and large language models. The field is more complicated than that, but it’s enough to get us started talking about how AI relates to XR.

Computer Vision

In XR, computer vision helps apps recognize and understand elements in the environment. That understanding lets apps place virtual elements in the environment and sometimes lets those elements react to it. Computer vision is also increasingly being used to streamline the creation of digital twins of physical items or locations.

Niantic is one of XR’s big world-builders using computer vision and scene understanding to realistically augment the world. 8th Wall, an acquisition that runs its own projects while also serving as Niantic’s WebXR division, uses some AI of its own and is compatible with outside AI tools as well, as teams showcased in a recent Innovation Lab hackathon.

“During the sky effects challenge in March, we saw some really interesting integrations of sky effects with generative AI because that was the shiny object at the time,” Caitlin Lacey, Niantic’s Senior Director of Product Marketing, told ARPost in a recent interview. “We saw project after project take that spin and we never really saw that coming.”

The winner used generative AI to create the environment that replaced the sky through a recent tool developed by 8th Wall. While some see artificial intelligence (that “shiny object”) as taking the wind out of immersive tech’s sails, Lacey sees this as an evolution rather than a distraction.

“I don’t think it’s one or the other. I think they complement each other,” said Lacey. “I like to call them the peanut butter and jelly of the internet.”

Generative AI

Generative AI takes a prompt and turns it into some form of media, whether an image, a short video, or even a 3D asset. In VR experiences, generative AI is often used to create “skyboxes” – the flat backdrop image wrapped around the virtual landscape where players have their actual interactions. However, as the technology gets stronger, it is increasingly used to create virtual assets and environments themselves.
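
For a sense of the plumbing involved, here is a minimal sketch of how an app might fetch a generated skybox: post a prompt to a text-to-image service and save the equirectangular panorama that comes back for a game engine to wrap around the scene. The endpoint and parameters are illustrative placeholders, not any particular vendor’s API.

```python
import requests

# Placeholder endpoint -- a stand-in for whichever text-to-image service is used.
API_URL = "https://example.com/v1/generate-panorama"

def generate_skybox(prompt: str, out_path: str = "skybox.png") -> str:
    """Request an equirectangular (2:1) panorama and save it to disk.

    An engine like Unity or three.js can then wrap the image around the
    inside of a sphere or cube so it reads as a distant sky.
    """
    response = requests.post(
        API_URL,
        json={"prompt": prompt, "width": 4096, "height": 2048},  # 2:1 layout
        timeout=120,
    )
    response.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(response.content)  # assumes the service returns raw image bytes
    return out_path

if __name__ == "__main__":
    generate_skybox("a violet alien sky with two moons, equirectangular panorama")
```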

Artificial Intelligence and Professional Content Creation

Talespin makes immersive XR experiences for training soft skills in the workplace. The company has been using artificial intelligence internally for a while now and recently rolled out a full AI-powered authoring tool for its clients and customers.

A release shared with ARPost calls the platform “an orchestrator of several AI technologies behind the scenes.” That includes developing generative AI tools for character and world building, but it also includes work with other kinds of artificial intelligence that we’ll explore later in the article, like large language models (LLMs).

“One of the problems we’ve all had in the XR community is that there’s a very small contingent of people who have the interest and the know-how and the time to create these experiences, so this massive opportunity is funneled into a very narrow pipeline,” Talespin CEO Kyle Jackson told ARPost. “Internally, we’ve seen a 95-97% reduction in time to create [with AI tools].”

Talespin isn’t introducing these tools to put itself out of business. On the contrary, Jackson said that his team is able to be even more involved in helping companies workshop their experiences because they spend less time building those experiences themselves. Jackson further said this is only one example of a shift happening to more and more jobs.

“What should we be doing to make ourselves more valuable as these things shift? … It’s really about metacognition,” said Jackson. “Our place flipped from needing to know the answer to needing to know the question.”

Artificial Intelligence and Individual Creators

DEVAR launched MyWebAR in 2021 as a no-code authoring tool for WebAR experiences. In the spring of 2023, that platform became more powerful with a neural network for AR object creation.

In creating a 3D asset from a prompt, the network determines the necessary polygon count and replicates the texture. The resulting 3D asset can exist in AR experiences and serve as a marker itself for second-layer experiences.

“A designer today is someone who can not just draw, but describe. Today, it’s the same in XR,” DEVAR founder and CEO Anna Belova told ARPost. “Our goal is to make this available to everyone … you just need to open your imagination.”

Blurring the Lines

“From strictly the making a world aspect, AI takes on a lot of the work,” Mirrorscape CEO Grant Anderson told ARPost. “Making all of these models and environments takes a lot of time and money, so AI is a magic bullet.”

Mirrorscape is looking to “bring your tabletop game to life with immersive 3D augmented reality.” Of course, much of the beauty of tabletop games comes from the fact that players are creating their own worlds and characters as they go along. While the roleplaying element has been reproduced by other platforms, Mirrorscape is bringing in the individual creativity through AI.

“We’re all about user-created content, and I think in the end AI is really going to revolutionize that,” said Anderson. “It’s going to blur the lines around what a game publisher is.”

Even for professional builders who might be independent or just starting out, artificial intelligence, whether used to create assets or just for ideation, can help level the playing field. That was a theme of a recent Zapworks workshop, “Can AI Unlock Your Creating Potential? Augmenting Reality With AI Tools.”

“AI is now giving individuals like me and all of you sort of superpowers to compete with collectives,” Zappar executive creative director Andre Assalino said during the workshop. “If I was a one-man band, if I was starting off with my own little design firm or whatever, if it’s just me freelancing, I now will be able to do so much more than I could five years ago.”

NeRFs

Neural Radiance Fields (NeRFs) weren’t included in the introduction because they can be seen as a combination of generative AI and computer vision. A NeRF starts with a special kind of neural network called a multilayer perceptron (MLP). A “neural network” is any artificial intelligence loosely modeled on the human brain, and an MLP is … well, look at it this way:

If you’ve ever taken an engineering course, or even a high school shop class, you’ve been introduced to drafting. Technical drawings represent a 3D structure as a series of 2D images, each showing different angles of the 3D structure. Over time, you can get pretty good at visualizing the complete structure from these flat images. An MLP can do the same thing.

The difference is the output. When a human does this, the output is a thought – a spatial understanding of the object in your mind’s eye. When an MLP does this, the output is a NeRF – a 3D rendering generated from the 2D images.
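
For readers who want to see the shape of that network, here is a toy PyTorch sketch of the mapping at the heart of a NeRF: a small MLP that takes a 3D position and a viewing direction and returns a color and a density. Real implementations are deeper and are trained by volume-rendering camera rays against the input photos; this only shows the core idea.

```python
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    """Minimal NeRF-style MLP: (position, view direction) -> (RGB, density)."""

    def __init__(self, pos_freqs: int = 10, hidden: int = 256):
        super().__init__()
        self.pos_freqs = pos_freqs
        in_dim = 3 + 3 * 2 * pos_freqs  # raw xyz plus sin/cos encodings
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density = nn.Linear(hidden, 1)        # how "solid" space is here
        self.color = nn.Sequential(                # view-dependent color
            nn.Linear(hidden + 3, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def encode(self, xyz: torch.Tensor) -> torch.Tensor:
        # Positional encoding: high-frequency sines/cosines let the MLP
        # represent fine spatial detail.
        feats = [xyz]
        for i in range(self.pos_freqs):
            feats += [torch.sin(2**i * xyz), torch.cos(2**i * xyz)]
        return torch.cat(feats, dim=-1)

    def forward(self, xyz: torch.Tensor, view_dir: torch.Tensor):
        h = self.backbone(self.encode(xyz))
        sigma = torch.relu(self.density(h))        # density must be non-negative
        rgb = self.color(torch.cat([h, view_dir], dim=-1))
        return rgb, sigma

model = TinyNeRF()
rgb, sigma = model(torch.rand(8, 3), torch.rand(8, 3))  # 8 sample points
```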

Early on, this meant feeding countless images into the MLP. However, in the summer of 2022, Apple and the University of British Columbia developed a way to do it with one video. Their approach was specifically interested in generating 3D models of people from video clips for use in AR applications.

Whether a NeRF recreates a human or an object, it’s quickly becoming the fastest and easiest way to make digital twins. Of course, the one downside is that a NeRF can only create digital models of things that already exist in the physical world.

Digital Twins and Simulation

Digital twins can be built with or without artificial intelligence. However, some use cases of digital twins are powered by AI. These include simulations for optimization and disaster readiness. For example, a digital twin of a real campus can be created and then modified on a computer to maximize production or minimize risk in different simulated scenarios.

“You can do things like scan in areas of a refinery, but then create optimized versions of that refinery … and have different simulations of things happening,” MeetKai co-founder and executive chairwoman Weili Dai told ARPost in a recent interview.

A recent suite of authoring tools launched by the company (which started in AI before branching into XR solutions) includes AI-powered tools for creating virtual environments from the physical world. These can be left as exact digital twins, or they can be edited to streamline the production of more fantastic virtual worlds by providing a foundation built in reality.

Large Language Models

Large language models take in language prompts and return language responses. This kind of AI runs largely under the hood so that, ideally, users don’t realize that they’re interacting with AI at all. For example, large language models could be the future of NPC interactions and “non-human agents” that help us navigate vast virtual worlds.

“In these virtual world environments, people are often more comfortable talking to virtual agents,” Inworld AI CEO Ilya Gelfenbeyn told ARPost in a recent interview. “In many cases, they are acting in some service roles and they are preferable [to human agents].”

Inworld AI makes brains that can animate Ready Player Me avatars in virtual worlds. Creators get to decide what the artificial intelligence knows – or what information it can access from the web – and what its personality is like as it walks and talks its way through the virtual landscape.

“You basically are teaching an actor how it is supposed to behave,” Inworld CPO Kylan Gibbs told ARPost.
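
In practice, “teaching the actor” usually starts with a persona prompt pinned to the top of the conversation. The sketch below shows that pattern as a generic chat loop; it is not Inworld’s SDK, and the chat() helper is a canned stand-in for whatever LLM call a project actually uses.

```python
# A generic persona-driven NPC loop -- illustrative only, not Inworld's SDK.
from typing import Dict, List

PERSONA = (
    "You are Mirek, a gruff but kind blacksmith in the town of Ashvale. "
    "You know only your town, your trade, and local rumors. "
    "If asked about anything outside that world, deflect in character."
)

def chat(messages: List[Dict[str, str]]) -> str:
    """Stand-in for a real chat-completion call; returns a canned line so
    the loop runs without credentials. Swap in a real LLM client here."""
    return "Aye, the forge is hot today. What do you need, traveler?"

def npc_conversation() -> None:
    history: List[Dict[str, str]] = [{"role": "system", "content": PERSONA}]
    while True:
        user_line = input("You: ")
        if not user_line:          # an empty line ends the conversation
            break
        history.append({"role": "user", "content": user_line})
        reply = chat(history)
        history.append({"role": "assistant", "content": reply})
        print(f"Mirek: {reply}")

if __name__ == "__main__":
    npc_conversation()
```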

Large language models are also used by developers to speed up back-end processes like generating code.

How XR Gives Back

So far, we’ve talked about ways in which artificial intelligence makes XR experiences better. However, the opposite is also true, with XR helping to strengthen AI for other uses and applications.

Evolving AI

We’ve already seen that some approaches to artificial intelligence are modeled after the human brain. We know that the human brain developed essentially through trial and error as it rose to meet the needs of our early ancestors. So, what if virtual brains had the same opportunity?

Martine Rothblatt, PhD, describes that very opportunity in her excellent book “Virtually Human: The Promise – and the Peril – of Digital Immortality”:

“[Academics] have even programmed elements of autonomy and empathy into computers. They even create artificial software worlds in which they attempt to mimic natural selection. In these artificial worlds, software structures compete for resources, undergo mutations, and evolve. Experimenters are hopeful that consciousness will evolve in their software as it did in biology, with vastly greater speed.”
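
The experiments Rothblatt describes are, at their core, evolutionary algorithms. A toy version fits in a few lines of Python: a population of bit-string “organisms” competes on a fitness measure, the fittest survive and reproduce with mutation, and the population adapts over generations. It is a far cry from evolving consciousness, but it shows the mechanic she is pointing at.

```python
import random

TARGET = [1] * 32          # the environmental "niche" organisms evolve to fill
POP_SIZE, MUTATION_RATE = 50, 0.02

def fitness(genome: list[int]) -> int:
    # Resources go to organisms that best match the environment.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome: list[int]) -> list[int]:
    # Each bit has a small chance of flipping when an organism reproduces.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        print(f"perfect fit reached at generation {generation}")
        break
    survivors = population[: POP_SIZE // 2]          # selection
    offspring = [mutate(random.choice(survivors)) for _ in survivors]
    population = survivors + offspring               # next generation
```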

Feeding AI

As with any emerging technology, people’s expectations of artificial intelligence can grow faster than its actual capabilities. AI learns by having data entered into it. Lots of data.

For some applications, there is a lot of extant data for artificial intelligence to learn from. But, sometimes, the answers that people want from AI don’t exist yet as data from the physical world.

“One sort of major issue of training AI is the lack of data,” Treble Technologies CEO Finnur Pind told ARPost in a recent interview.

Treble Technologies works on creating realistic sound in virtual environments. Training an artificial intelligence to work with sound requires audio files. Historically, these were painstakingly sampled, with different things causing different sounds in different environments.

Usually, during the early design phases, an architect or automotive designer will approach Treble to predict what audio will sound like in a future space. However, Treble can also use its software to generate specific sounds in specific environments to train artificial intelligence without all of the time- and labor-intensive sampling. Pind calls this “synthetic data generation.”
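
The general shape of synthetic data generation for audio can be sketched simply: simulate a room’s impulse response, then convolve clean recordings with it to produce labeled training examples without any field sampling. The NumPy stand-in below uses exponentially decaying noise as the “room”; Treble’s actual wave-based acoustic simulation is far more physically accurate.

```python
import numpy as np

SAMPLE_RATE = 16_000

def synthetic_impulse_response(rt60: float) -> np.ndarray:
    """Crude stand-in for a room simulator: exponentially decaying noise.

    rt60 is the time (seconds) for reverberation to fall by 60 dB; real
    acoustics engines derive it from room geometry and materials.
    """
    n = int(rt60 * SAMPLE_RATE)
    t = np.arange(n) / SAMPLE_RATE
    decay = np.exp(-6.9 * t / rt60)          # ~60 dB amplitude drop over rt60
    return np.random.randn(n) * decay

def reverberate(dry: np.ndarray, rt60: float) -> np.ndarray:
    """Convolve a dry signal with a simulated room response."""
    wet = np.convolve(dry, synthetic_impulse_response(rt60))
    return wet / np.max(np.abs(wet))         # normalize to avoid clipping

# One dry clip becomes several labeled training examples, one per room size.
dry_clip = np.random.randn(SAMPLE_RATE)      # stand-in for a real recording
dataset = {rt60: reverberate(dry_clip, rt60) for rt60 in (0.3, 0.8, 1.5)}
```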

The AI-XR Relationship Is “and” Not “or”

Holding up artificial intelligence as the new technology on the block that somehow takes away from XR is an interesting narrative. However, experts are in agreement that these two emerging technologies reinforce each other – they don’t compete. XR helps AI grow in new and fantastic ways, while AI makes XR tools more powerful and more accessible. There’s room for both.

AWE USA 2023 Day Three: Eyes on Apple

The third and final day of AWE USA 2023 took place on Friday, June 2. The first day of AWE is largely dominated by keynotes. A lot of air on the second day is taken up by the expo floor opening. By the third day, the keynotes are done, the expo floor starts to get packed away, and panel discussions and developer talks rule the day. And Apple ruled a lot of those talks.

Bracing for Impact From Apple

A big shift is coming this week, as Apple is expected to announce its entrance into the XR market. The writing has been on the wall for a long time.

Rumors have probably been circulating for longer than many readers have even been watching XR. ARPost started speculating in 2018 on a 2019 release. Five years of radio silence later, we had reports that the product would be delayed indefinitely.

The rumor mill is back in operation with an expected launch this week (Apple’s WWDC23 starts today) – with many suggesting that Meta’s sudden announcement of the Quest 3 is a harbinger. Whether an Apple entrance is real this time or not, AWE is bracing itself.

Suspicion on Standards

Let’s take a step back and look at a conversation that happened on AWE USA 2023 Day Two, but is very pertinent to the emerging Apple narrative.

The “Building Open Standards for the Metaverse” panel moderated by Moor Insights and Strategy Senior Analyst Anshel Sag brought together XR Safety Initiative (XRSI) founder and CEO Kavya Pearlman, XRSI Advisor Elizabeth Rothman, and Khronos Group President Neil Trevett.

Apple’s tendency to operate outside of standards was discussed. Even prior to their entrance into the market, this has caused problems for XR app developers – Apple devices even have a different way of sensing depth than Android devices. XR glasses tend to come out first or only on Android in part because of Android’s more open ecosystem.

“Apple currently holds so much power that they could say ‘This is the way we’re going to go,’ and the Metaverse Standards Forum could stand up and say ‘No,’” said Pearlman, expressing concern over the accessibility of “the next generation of the internet.”

Trevett took a different view, saying that standards should present the best option, not the only option. While standards are more useful the more groups use them, competition is helpful and shows diversity in the industry. And diversity in the industry is what sets Apple apart.

“If Apple does announce something, they’ll do a lot of education … it will progress how people use the tech whether they use open standards or not,” said Trevett. “If you don’t have a competitor on the proprietary end of the spectrum, that’s when you should start to worry because it means that no one cares enough about what you’re doing.”

Hope for New Displays

On Day Three, KGOn Tech LLC’s resident optics expert Karl Guttag presented an early morning developer session on “Optical Versus Passthrough Mixed Reality.” Guttag has been justifiably critical of Meta Quest Pro’s passthrough in particular. Even for optical XR, he expressed skepticism about a screen replacement, which is what the Apple headset is largely rumored to be.

Karl Guttag

“One of our biggest issues in the market is expectations vs. reality,” said Guttag. “What is hard in optical AR is easy in passthrough and vice versa. I see very little overlap in applications … there is also very little overlap in device requirements.”

A New Generation of Interaction

“The Quest 3 has finally been announced, which is great for everyone in the industry,” 3lbXR and 3lb Games CEO Robin Moulder said in her talk “Expand Your Reach: Ditch the Controllers and Jump into Mixed Reality.” “Next week is going to be a whole new level when Apple announces something – hopefully.”

Robin Moulder

Moulder presented the next round of headsets as the first of a generation that will hopefully be user-friendly enough to increase adoption and deployment, bringing more users and creators into the XR ecosystem.

“By the time we have the Apple headset and the new Quest 3, everybody is going to be freaking out about how great hand tracking is and moving into this new world of possibilities,” said Moulder.

More on AI

AI isn’t distracting anyone from XR and Apple isn’t distracting anyone from AI. Apple appearing as a conference theme doesn’t mean that anyone was done talking about AI. If you’re sick of reading about AI, at least read the first section below.

Lucid Realities: A Glimpse Into the Current State of Generative AI

After two full days of people talking about how AI is a magical world generator that’s going to take the task of content creation off of the shoulders of builders, Microsoft Research Engineer Jasmine Roberts set the record straight.

Jasmine Roberts

“We’ve passed through this techno-optimist state into dystopia and neither of those are good,” said Roberts. “When people think that [AI] can replace writers, it’s not really meant to do that. You still need human supervisors.”

AI not being able to do everything that a lot of people think it can isn’t the end of the world. A lot of the things that people want AI to do are already possible through other, less glamorous tools.

“A lot of what people want from generative AI, they can actually get from procedural generation,” said Roberts. “There are some situations where you need bespoke assets so generative AI wouldn’t really cut it.”

Roberts isn’t against AI – her presentation was simply illustrating that it doesn’t work the way that some industry outsiders are being led to believe. That isn’t the same as saying that it doesn’t work. In fact, she brought a demo of an upcoming AI-powered Clippy. (You remember Clippy, right?)

Augmented Ecologies

Roberts talked about the limitations of AI. The “Augmented Ecologies” panel, moderated by AWE co-founder Tish Shute, saw Three Dog Labs founder Sean White, Morpheus XR CTO Anselm Hook, and Croquet founder and CTO David A. Smith talking about what happens if AI becomes the new dominant life form on planet Earth.

From left to right: Tish Shute, Sean White, Anselm Hook, and David Smith

“We’re kind of moving to a probabilistic model, it’s less deterministic, which is much more in line with ecological models,” said White.

This talk presented the scenario in which developers are no longer the ones running the show. AI takes on a life of its own, and that life is more capable than ours.

“In an ecology, we’re not necessarily at the center, we’re part of the system,” said Hook. “We’re not necessarily able to dominate the technologies that are out there anymore.”

This might scare you, but it doesn’t scare Smith. Smith described a future in which AI becomes humanity’s legacy, able to live in environments that humans never can, like the reaches of space.

“The metaverse and AI are going to redefine what it means to be human,” said Smith. “Ecosystems are not healthy if they are not evolving.”

“No Longer the Apex”

On the morning of Day Two, the Virtual World Society and the VR/AR Association hosted a very special breakfast. Some of the most influential leaders in the immersive technology space were invited. The goal was to discuss the health and future of the XR industry.

The findings will be presented in a report, but some of the concepts were also presented at “Spatial Computing for All” – a fireside chat with Virtual World Society founder Tom Furness and HTC China President Alvin Graylin, moderated by technology consultant Linda Ricci.

The major takeaway was that industry insiders aren’t particularly worried about the next few years. After that, the way we work might start to change, and that might change the ways we think about ourselves and value our identities in a changing society.

AWE Is Changing Too

During the show wrap-up, Ori Inbar had some big news. “AWE is leveling up to LA.” This was the fourteenth AWE. Every AWE, except for one year when the entire conference was virtual because of the COVID-19 pandemic, has been in Santa Clara. But, the conference has grown so much that it’s time to move.

AWE 2024 in LA

“I think we realized this year that we were kind of busting at the seams,” said Inbar. “We need a lot more space.”

The conference, which will take place June 18-20, will be in Long Beach, with “super, super early bird tickets” available for the next few weeks.

Yes, There’s Still More

Most of the Auggie Awards and the winners of Inbar’s climate challenge were announced during a ceremony on the evening of Day Two. During the event wrap-up, the final three Auggies were awarded. We didn’t forget, we just didn’t have room for them in our coverage.

So, there is one final piece of AWE coverage just on the Auggies. Keep an eye out. Spoiler alert: Apple wasn’t nominated in any of the categories.

AWE USA 2023 Day One: XR, AI, Metaverse, and More

AWE USA 2023 saw a blossoming industry defending itself from negative press and a perceived rivalry with other emerging technologies. Fortunately, Day One also brought big announcements, great discussions, and a little help from AI itself.

Ori Inbar’s Welcome Address

Historically, AWE has started with an address from founder Ori Inbar. This time, it started with an address from a hologram of Ori Inbar appearing on an ARHT display.

Ori Inbar hologram

The hologram waxed on for a few minutes about progress in the industry and XR’s incredible journey. Then the human Ori Inbar appeared and told the audience that everything that the hologram said was written by ChatGPT.

While (the real) Inbar quipped that he uses artificial intelligence to show him how not to talk, he addressed recent media claims that AI is taking attention and funding away from XR. He has a different view.

it’s ON !!! Ori Inbar just started his opening keynote at #AWE2023. Holo-Ori was here thanks to our friends from @arht_tech. @como pic.twitter.com/Do23hjIkST — AWE (@ARealityEvent), May 31, 2023

“We industry insiders know this is not exactly true … AI is a good thing for XR. AI accelerates XR,” said Inbar. “XR is the interface for AI … our interactions [with AI] will become a lot less about text and prompts and a lot more about spatial context.”

“Metaverse, Shmetaverse” Returns With a Very Special Guest

Inbar has always been bullish on XR. He has been skeptical of the metaverse.

At the end of his welcome address last year, Inbar praised himself for not saying “the M word” a single time. The year before that, he opened the conference with a joke game show called “Metaverse, Shmetaverse.” Attendees this year were curious to see Inbar share the stage with a special guest: Neal Stephenson.

Neal Stephenson

Stephenson’s 1992 book, Snow Crash, introduced the world to the word “metaverse” – though Stephenson said that he wasn’t the first one to imagine the concept. He also addressed the common concern that the term for shared virtual spaces came from a dystopian novel.

“The metaverse described in Snow Crash was my best guess about what spatial computing as a mass medium might look like,” said Stephenson. “The metaverse itself is neither dystopian nor utopian.”

Stephenson then commented that the last five years or so have seen the emergence of the core technologies necessary to create the metaverse, though it still suffers from a lack of compelling content. That’s something that his company, Lamina1, hopes to address through a blockchain-based system for rewarding creators.

“There have to be experiences in the metaverse that are worth having,” said Stephenson. “For me, there’s a kind of glaring and frustrating lack of support for the people who make those experiences.”

AWE 2023 Keynotes and Follow-Ups

Both Day One and Day Two of AWE start out with blocks of keynotes on the main stage. On Day One, following Inbar’s welcome address and conversation with Stephenson, we heard from Qualcomm and XREAL (formerly Nreal). Both talks kicked off themes that would be taken up in other sessions throughout the day.

Qualcomm

From the main stage, Qualcomm Vice President and General Manager of XR, Hugo Swart, presented “Accelerating the XR Ecosystem: The Future Is Open.” He commented on the challenge of developing AR headsets, but mentioned the half-dozen or so Qualcomm-enabled headsets released in the last year, including the Lenovo ThinkReality VRX announced Tuesday.

Hugo Swart

Swart was joined on the stage by OPPO Director of XR Technology, Yi Xu, who announced a new Qualcomm-powered MR headset that would become available as a developer edition in the second half of this year.

As exciting as those announcements were, it was a software announcement that really made a stir. It’s a new Snapdragon Spaces tool called “Dual Render Fusion.”

“We have been working very hard to reimagine smartphone XR when used with AR glasses,” said Swart. “The idea is that mobile developers designing apps for 2D expand those apps to world-scale apps without any knowledge of XR.”

Keeping the Conversation Going

Another talk, “XR’s Inflection Point” presented by Qualcomm Director of Product Management Steve Lukas, provided a deeper dive into Dual Render Fusion. The tool allows an experience to use a mobile phone camera and a headworn device’s camera simultaneously. Existing app development tools hadn’t allowed this because (until now) it didn’t make sense.

Steve Lukas

“To increase XR’s adoption curve, we must first flatten its learning curve, and that’s what Qualcomm just did,” said Lukas. “We’re not ready to give up on mobile phones so why don’t we stop talking about how to replace them and start talking about how to leverage them?”

A panel discussion, “Creating a New Reality With Snapdragon Today” moderated by Qualcomm Senior Director of Product Management XR Said Bakadir, brought together Xu, Lenovo General Manager of XR and Metaverse Vishal Shah, and DigiLens Vice President of Sales and Marketing Brian Hamilton. They largely addressed the need to rethink AR content and delivery.

From left to right: Vishal Shah, Brian Hamilton, Yi Xu, and Said Bakadir

“When I talk to the developers, they say, ‘Well there’s no hardware.’ When I talk to the hardware guys, they say, ‘There’s no content.’ And we’re kind of stuck in that space,” said Bakadir.

Hamilton and Shah both said, in their own words, that Qualcomm is creating “an all-in-one platform” and “an end-to-end solution” that solves the content/delivery dilemma that Bakadir opened with.

XREAL

In case you blinked and missed it, Nreal is now XREAL. According to a release shared with ARPost, the name change had to do with “disputes regarding the Nreal mark” (probably how similar it was to “Unreal”). But, “the disputes were solved amicably.”

Chi Xu

The only change is the name – the hardware and software are still the hardware and software that we know and love. So, when CEO Chi Xu took the stage to present “Unleashing the Potential of Consumer AR” he just focused on progress.

From one angle, that progress looks like a version of XREAL’s AR operating system for Steam Deck, which Xu said is “coming soon.” From another angle, it looks like the partnership with Sightful, which recently resulted in “Spacetop” – the world’s first AR laptop.

XREAL also announced Beam, a controller and compute box that can connect wirelessly or via hard connection to XREAL glasses specifically for streaming media. Beam also allows comfort and usability settings for the virtual screen that aren’t supported by the company’s current console and app integrations. Xu called it “the best TV innovation since TV.”

AI and XR

A number of panels and talks also picked up on Inbar’s theme of AI and XR. And they all (as far as I saw) agreed with Inbar’s assessment that there is no actual competition between the two technologies.

The most in-depth discussion on the topic was “The Intersection of AI and XR,” a panel discussion between XR ethicist Kent Bye, Lamina1 CPO Tony Parisi, and HTC Global VP of Corporate Development Alvin Graylin, moderated by WXR Fund Managing Partner Amy LaMeyer.

From left to right: Amy LaMeyer, Tony Parisi, Alvin Graylin, Kent Bye

“There’s this myth that AI is here so now XR’s dead, but it’s the complete opposite,” said Graylin, who pointed out that most forms of tracking and input, as well as approaches to scene understanding, are driven by AI. “AI has been part of XR for a long time.”

While they all agreed that AI is a part of XR, the group disagreed on the extent to which AI could take over content creation.

“A lot of people think AI is the solution to all of their content creation and authoring needs in XR, but that’s not the whole equation,” said Parisi.

Graylin countered that AI will increasingly be able to replace human developers. Bye in particular was vocal that we should be reluctant, even suspicious, about handing over too much creative power to AI in the first place.

“The differentiating factor is going to be storytelling,” said Bye. “I’m seeing a lot of XR theater that has live actors doing things that AI could never do.”

Web3, WebXR, and the Metaverse

The conversation about the relationship between the metaverse and Web3 is still ongoing. With both the metaverse and Web3 focusing on the ideas of openness and interoperability, WebXR has become a common ground between the two. WebXR is also the most accessible from a hardware perspective.

“VR headsets will remain a niche tech like game consoles: some people will have them and use them and swear by them and won’t be able to live without them, but not everyone will have one,” Nokia Head of Trends and Innovation Scouting, Leslie Shannon, said in her talk “What Problem Does the Metaverse Solve?”

Leslie Shannon

“The majority of metaverse experiences are happening on mobile phones,” said Shannon. “Presence is more important than immersion.”

Wonderland Engine CEO Jonathan Hale asked “Will WebXR Replace Native XR?” with The Fitness Resort COO Lydia Berry. Berry commented that the availability of WebXR across devices helps developers make their content accessible as well as discoverable.

Lydia Berry and Jonathan Hale

“The adoption challenges around glasses are there. We’re still in the really early adoption phase,” said Berry. “We need as many headsets out there as possible.”

Hale also added that WebXR is being taken more seriously as a delivery method by hardware manufacturers who were previously mainly interested in pursuing native apps.

“More and more interest is coming from hardware manufacturers every day,” said Hale. “We just announced that we’re working with Qualcomm to bring Wonderland Engine to Snapdragon Spaces.”

Keep Coming Back

AWE Day One was a riot, but there’s a lot more where that came from. Day Two kicks off with keynotes by Magic Leap and Niantic; there are more talks, more panels, and more AI, and the Expo Floor opens up for demos. We’ll see you tomorrow.

Strivr Enhances Immersive Learning With Generative AI, Equips VR Training Platform With Mental Health and Well-Being Experiences

Strivr, a virtual reality training solutions startup, was founded as a VR training platform for professional sports leagues such as the NBA, NHL, and NFL. Today, Strivr has made its way to the job training scene with an innovative approach to employee training, leveraging generative AI (GenAI) to transform learning experiences.

More Companies Lean Toward Immersive Learning

Today’s business landscape is rapidly evolving. As such, Fortune 500 companies and other businesses in the corporate sector are starting to turn to more innovative employee training and development solutions. To serve the changing demands of top companies, Strivr secured $16 million in funding back in 2018 to expand its VR training platform.

Research shows that learning through VR environments can significantly enhance knowledge retention, making it a groundbreaking development in employee training.

Unlike traditional training methods, a VR training platform immerses employees in lifelike scenarios, providing unparalleled engagement and experiential learning. However, this technology isn’t a new concept at all. Companies have been incorporating VR into their training solutions for several years, but we’ve only recently seen more industries adopting this technology rapidly.

The Impact of Generative AI on VR Training Platforms

Walmart, the largest retailer in the world, partnered with Strivr to bring VR to its training facilities. Employees can now practice on virtual sales floors repeatedly until they perfect their skills. In 2019, nearly 1.4 million Walmart associates underwent VR training to prepare for the holiday rush, which placed them in a simulated, chaotic Black Friday scenario.

As a result, associates reported a 30% increase in employee satisfaction, 70% higher test scores, and 10 to 15% higher knowledge retention rates. Because of the VR training’s success, Walmart expanded the program to all of its stores nationwide.

Derek Belch, founder and CEO of Strivr, says that the demand for faster development of high-quality, scalable VR experiences that generate impactful results is “at an all-time high.”

VR training platform Strivr

As Strivr’s customers are among the most prominent companies globally, they are directly experiencing the impact of immersive learning on employee engagement, retention, and performance. “They want more, and we’re listening,” said Belch in a press release shared with ARPost.

So, to enhance its VR training platform, Strivr is embracing generative AI to develop storylines, boost animation and asset creation, and optimize visual and content-driven features.

GenAI will also aid HR and L&D leaders in critical decision-making by deriving insights from immersive user data.

Strivr’s VR Training Platform Addresses Employee Mental Health

Strivr has partnered with Reulay and Healium in hosting its first in-headset mental health and well-being applications on the VR training platform. This will allow its customers to incorporate mental health “breaks” into their training curricula and address the rising levels of employee burnout, depression, and anxiety.

Belch announced that Strivr has also partnered with one of the world’s leading financial institutions to make meditation activities available in their workplace.

Meditation is indeed helpful for employees; the Journal of the American Medical Association recently published a study showing that meditation can help reduce anxiety as effectively as drug therapies. Mindfulness practices, meanwhile, have been demonstrated to increase employee productivity, focus, and collaboration.

How VR Transforms Professional Training

With Strivr’s VR Training platform offering enhanced experiential learning and mental well-being, one might wonder how VR technology will influence employee training moving forward.

Belch describes Strivr’s VR training platform as a “beautifully free space” to practice. Employees can develop or improve their skills in a realistic scenario that simulates actual workplace challenges in a way that typical workshops and classrooms cannot. Moreover, training employees through a VR platform cuts the travel costs associated with conventional training facilities.

VR training platform Strivr

VR training platforms also contribute to a more inclusive and diverse workplace. Employees belonging to minority groups can, for instance, rehearse and tailor their responses in simulated scenarios where a superior or customer is prejudiced toward them. When these situations are addressed during training, companies can better prepare their employees for these challenges and protect them.

What’s Next for VR Training Platforms?

According to Belch, Strivr’s enhanced VR training platform is only the beginning of how VR will continue to impact the employee experience.

So far, VR training platforms have been improving employee onboarding, knowledge retention, and performance. They allow employees to practice and acquire critical skills in a safe, virtual environment, helping them gain more confidence and efficiency while training. Additionally, diversity and inclusion are promoted, thanks to VR’s ability to simulate scenarios where employees can tailor their behaviors during difficult situations.

And, of course, VR training has rightfully gained recognition for helping teach retail workers essential customer service skills. By interacting with virtual customers in a lifelike environment, Walmart’s employees have significantly boosted their skills, and the mega-retailer has rolled out an immersive training solution to all of its nearly 4,700 stores across America.

In 2022, Accenture invested in Strivr and Talespin to revolutionize immersive learning and enterprise VR. This is a good sign of confidence in the industry and its massive potential for growth.

As we keep an eye on the latest scoop about VR technology, we can expect more groundbreaking developments in the industry and for VR platforms to increase their presence in the employee training realm.

Talespin Releases AI-powered, Web-Accessible No-Code Creator Platform

To prepare professionals for tomorrow’s workplace, you need to be able to leverage tomorrow’s technology. Talespin was already doing this with their immersive AI-powered VR simulation and training modules.

Now, they’re taking it a step further by turning a web-based no-code creator tool over to users. To learn more, we reconnected with Talespin CEO Kyle Jackson to talk about the future of his company and the future of work.

The Road So Far

Talespin has existed as an idea for about ten years. That includes a few years before they started turning out experiences in 2015. In 2019, the company started leveraging AI technology for more nuanced storytelling and more believable virtual characters.

CoPilot Designer 3.0 Talespin

CoPilot Designer, the company’s content creation platform, was released in 2021. Since then, it’s gone through big and small updates.

That brings us to the release of CoPilot Designer 3.0 – probably the biggest single change that’s come to the platform so far. This third major version of the tool is accessible on the web rather than as a downloaded app. We’ve already seen what the designer can do, as Talespin has been using it internally, including in its recent intricate story world in partnership with Pearson.

“Our North Star was how do you get the ability to create content into the hands of people who have the knowledge,” Jackson told ARPost this March. “The no-code platform was built in service of that but we decided we had to eat our own dogfood.”

In addition to being completely no-code, CoPilot Designer 3.0 has more AI tools than ever. It also features direct publishing to Quest 2, PC VR headsets, and Mac devices via streaming with support for Lenovo ThinkReality headsets and the Quest Pro coming soon.

Understanding AI in the Designer

The AI that powers CoPilot Designer 3.0 comes in two flavors – the tools that help the creator build the experience, and the tools that help the learner become immersed in the experience.

More generative 3D tools (tools that help the creator build environments and characters) are coming soon. The tools really developing in this iteration of CoPilot Designer are large language models (LLMs) and neural voices.

Talespin CoPilot Designer 3.0

Jackson described LLMs as the context of the content and neural voices as the expression of the content. After all, the average Talespin module could exist as a text-only interaction. But, an experience meant to teach soft skills is a lot more impactful when the situations and characters feel real. That means that the content can’t just be good, it has to be delivered in a moving way.

The Future of Work – and Talespin

While AI develops, Jackson said that the thing that he’s waiting for the most isn’t a new capability of AI. It’s trust.

“Right now, I would say that there’s not much trust in enterprise for this stuff, so we’re working very diligently,” Jackson told ARPost. “Learning and marketing have been two areas that are more flexible … I think that’s going to be where we really see this stuff break out first.”

Right now, that diligence includes maintaining the human component and limiting AI involvement where necessary. Where AI might help creators apply learning material, that learning material is still originally authored by human experts. One day AI might help to write the content too, but that isn’t happening yet.

“If our goal is achieved where we’re actually developing learning on the fly,” said Jackson, “we need to be sure that what it’s producing is good.”

Much of the inspiration behind Talespin in the first place was that as more manual jobs get automated, necessary workplace skills will pivot to soft skills. In short, humans won’t be replaced by machines, but the work that humans do will change.

As his own company relies more on AI for content generation, Jackson has already seen this prediction coming true for his team. As they’ve exponentially decreased the time that it takes for them to create content, they’re more able to work with customers and partners as opposed to largely serving as a platform to create and host content that companies made themselves.

Talespin CoPilot Designer 3.0 - XR Content Creation Time Graph

Solving the Content Problem

To some degree, Talespin being a pioneer in the AI space is a necessary evolution of the company’s having been an XR pioneer. Some aspects of XR’s frontier struggles are already a thing of the past, but others have a lot to gain from leaning on other emerging technologies.

“At least on the enterprise side, there’s really no one doubting the validity of this technology anymore … Now it’s just a question of how we get that content more distributed,” said Jackson. “It feels like there’s a confluence of major events that are driving us along.”

This ‘Skyrim VR’ Mod Shows How AI Can Take VR Immersion to the Next Level

ChatGPT isn’t perfect, but the large language model (LLM) underlying the popular AI chatbot lets it do a lot of things you might not expect, like giving all of Tamriel’s NPC inhabitants the ability to hold natural conversations and answer questions about the iconic fantasy world. Uncanny, yes. But it’s a prescient look at how games might one day use AI to reach new heights in immersion.

YouTuber ‘Art from the Machine’ released a video showing off how they modded the much beloved VR version of The Elder Scrolls V: Skyrim.

The mod, which isn’t available yet, ostensibly lets you hold conversations with NPCs via ChatGPT and xVASynth, an AI tool for generating voice acting lines using voices from video games.

Check out the results in the most recent update below:

The latest version of the project introduces Skyrim scripting for the first time, which the developer says allows for lip syncing of voices and NPC awareness of in-game events. While still a little rigid, it feels like a pretty big step towards climbing out of the uncanny valley.

Here’s how ‘Art from the Machine’ describes the project in a recent Reddit post showcasing their work:

A few weeks ago I posted a video demonstrating a Python script I am working on which lets you talk to NPCs in Skyrim via ChatGPT and xVASynth. Since then I have been working to integrate this Python script with Skyrim’s own modding tools and I have reached a few exciting milestones:

NPCs are now aware of their current location and time of day. This opens up lots of possibilities for ChatGPT to react to the game world dynamically instead of waiting to be given context by the player. As an example, I no longer have issues with shopkeepers trying to barter with me in the Bannered Mare after work hours. NPCs are also aware of the items picked up by the player during conversation. This means that if you loot a chest, harvest an animal pelt, or pick a flower, NPCs will be able to comment on these actions.

NPCs are now lip synced with xVASynth. This is obviously much more natural than the floaty proof-of-concept voices I had before. I have also made some quality of life improvements such as getting response times down to ~15 seconds and adding a spell to start conversations.

When everything is in place, it is an incredibly surreal experience to be able to sit down and talk to these characters in VR. Nothing takes me out of the experience more than hearing the same repeated voice lines, and with this no two responses are ever the same. There is still a lot of work to go, but even in its current state I couldn’t go back to playing without this.

You might notice the actual voice prompting the NPCs is fairly robotic too, although ‘Art from the Machine’ says they’re using speech-to-text to talk to the ChatGPT 3.5-driven system. The voice heard in the video is generated from xVASynth, and then plugged in during video editing to replace what they call their “radio-unfriendly voice.”
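
Stripped of the Skyrim plumbing, the pipeline described here (speech-to-text in, ChatGPT in the middle, xVASynth out) reduces to a loop like the sketch below. The transcribe and synthesize helpers are placeholders for the mod’s actual wiring; only the chat-completion call follows a real API shape, that of the 2023-era openai Python client.

```python
import openai  # 2023-era client; newer versions use a different call shape

openai.api_key = "sk-..."  # the API-key handling the modder calls an open issue

SYSTEM = ("You are Hulda, keeper of the Bannered Mare in Whiterun. "
          "It is evening. Stay in character and keep replies short.")

def transcribe(audio_path: str) -> str:
    """Placeholder for the mod's speech-to-text step."""
    raise NotImplementedError

def synthesize(text: str) -> None:
    """Placeholder for xVASynth voice generation and lip-synced playback."""
    raise NotImplementedError

def npc_reply(player_line: str, history: list) -> str:
    history.append({"role": "user", "content": player_line})
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": SYSTEM}] + history,
    )
    reply = resp["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply

# One turn of the loop: spoken line in, synthesized reply out.
# history = []
# synthesize(npc_reply(transcribe("mic_capture.wav"), history))
```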

And when can you download and play for yourself? Well, the developer says publishing their project is still a bit of a sticky issue.

“I haven’t really thought about how to publish this, so I think I’ll have to dig into other ChatGPT projects to see how others have tackled the API key issue. I am hoping that it’s possible to alternatively connect to a locally-run LLM model for anyone who isn’t keen on paying the API fees.”

Serving up more natural NPC responses is also an area that needs to be addressed, the developer says.

For now I have it set up so that NPCs say “let me think” to indicate that I have been heard and the response is in the process of being generated, but you’re right this can be expanded to choose from a few different filler lines instead of repeating the same one every time.

And while the video is noticeably sped up after prompts, this mostly comes down to the voice generation software xVASynth, which admittedly slows the response pipeline down since it’s being run locally. ChatGPT itself doesn’t affect performance, the developer says.

This isn’t the first project we’ve seen using chatbots to enrich user interactions. Lee Vermeulen, a long-time VR pioneer and the developer behind Modbox, released a video in 2021 showing off one of his first tests using OpenAI’s GPT-3 and voice acting software Replica. In Vermeulen’s video, he talks about how he set parameters for each NPC, giving them the body of knowledge they should have, all of which guides the sort of responses they’ll give.

Check out Vermeulen’s video below, the very same that inspired ‘Art from the Machine’ to start working on the Skyrim VR mod:

As you’d imagine, this is really only the tip of the iceberg for AI-driven NPC interactions. Being able to naturally talk to NPCs, even if a little stuttery and not exactly at human level, may be preferable to wading through a ton of 2D text menus, or going through slow and ungainly tutorials. It also offers up the chance to bond more with your trusty AI companion, like Skyrim’s Lydia or Fallout 4’s Nick Valentine, who instead of offering up canned dialogue might actually, you know, help you out every once in a while.

And that’s really only the surface-level stuff that a mod like ‘Art from the Machine’ might deliver to existing games that aren’t built with AI-driven NPCs. Imagining a game that is actually predicated on your ability to ask the right questions and do your own detective work—well, that’s a role-playing game we’ve never experienced before, either in VR or otherwise.

MeetKai Launches New Building Tools

MeetKai has been around since 2018 but some of its first publicly enjoyable content hit the streets a few months ago. Now, the company is releasing a suite of software solutions and developer tools to help the rest of us build the metaverse.

From Innovation to Product

ARPost met MeetKai in July 2022, when the company was launching a limited engagement in Times Square. Since then, the company has been working with the Los Angeles Chargers.

“The purpose of the Times Square activation and campaign was really to test things out in the browser,” CEO and co-founder James Kaplan said in a video call. “With 3D spaces, there’s a question of whether the user views it as a game, or as something else.”

MeetKai Metaverse Editor – Los Angeles Chargers

Those insights have informed their subsequent outward-facing work with the Chargers, but the company has also been working on some more behind-the-scenes products that were just released at CES.

“We’re moving from an innovation technology company to a product company,” co-founder and Executive Chairwoman, Weili Dai, said in the call. “Technology innovation is great, but show me the value for the end user. That’s where MeetKai is.”

Build the Metaverse With MeetKai

At CES, MeetKai announced three new product offerings: MeetKai Cloud AI, MeetKai Reality, and MeetKai Metaverse Editor. The first of those offerings is more in line with the company’s history as a conversational AI service provider. The other two are tools for creating digital twins and for building and editing virtual spaces, respectively.

“The biggest request that we get from people is that they want to build their own stuff, they don’t just want to see the stuff that we made,” said Kaplan. “So, we’ve been trying to say ‘how do we let people build things?’ even when they’re not engineers or artists.”

Users can employ the new tools individually to create internal or outward-facing projects. For example, a user could choose to create an exact digital twin of a physical environment with MeetKai Reality or build an entirely new virtual space with MeetKai Metaverse Editor.

However, some of the most interesting projects come when the tools are used together. One example is an agricultural organization with early access to the products, which used the two tools together to create a digital twin of real areas on its premises and then used the Editor for simulation and training use cases.

“AI as an Enabling Tool”

The formula for creating usable but robust tools was to combine conventional building tools like scanning and game engines with some help from artificial intelligence. In that way, these products look a lot less like a deviation from the company’s history and look a lot more like what the company has been doing all along.

MeetKai Cloud AI – Avatar sample

“We see AI as an enabling tool. That was our premise from the beginning,” said Kaplan. “If you start a project and then add AI, it’s always going to be worse than if you say, ‘What kinds of AI do we have or what kinds of AI can we build?’ and see what kind of products can follow that.”

So, the first hurdle is building the tools, and the second is making them usable. Most companies in the space either build tools that remain forever overly complex, or make tools that work but have limited potential because they were designed for one specific use or for use within one specific environment.

“The core technology is AI and the capability needs to be presented in the most friendly way, and that’s what we do,” said Weili. “The AI capability, the technology, the innovation has to be leading.”

The company’s approach to software isn’t the only way they stand out. They also have a somewhat conservative approach when it comes to the hardware that they build for.

“I think 2025 is going to be the year that a lot of this hardware is going to start to level up. … Once the hardware is available, you have to let people build from day one,” said Kaplan. “Right now a lot of what’s coming out, even from these big companies, looks really silly because they’re assuming that the hardware isn’t going to improve.”

A More Mature Vision of the Metaverse

This duo has a lot to say about the competition. But, fortunately for the rest of us, it isn’t all bad. As they’ve made their way around CES, they’ve made one more observation that might be a nice closing note for this article. It has to do with how companies are approaching “the M-word.”

“Last CES, we saw a lot of things about the metaverse and I think that this year we’re really excited because a lot of the really bad ideas about the metaverse have collapsed,” said Kaplan. “Now, the focus is what brings value to the user as opposed to what brings value to some opaque idea of a conceptual user.”

Kaplan sees our augmented reality future as a mountain, but one that doesn’t just go straight up. We reach apparent summits only to encounter steep valleys between us and the next summit. Where most companies climb one peak at a time, Kaplan and Weili are trying to plan a road across the whole mountain chain, which means designing “in parallel.”

“The moment hardware is ready, we’re going to leapfrog … we prepare MeetKai for the long run,” said Weili. “We have partners working with us. This isn’t just a technology demonstration.”

How MeetKai Climbs the Mountain

This team’s journey along that mountain road might be more apparent than we realize. After all, when we last talked to them and “metaverse” was the word on everyone’s lips, they appeared with a ready-made solution. Now as AI developer tools are the hot thing, here they come with a ready-made solution. Wherever we go next, it’s likely MeetKai will have been there first.

VR Robots: Enhancing Robot Functions With VR Technology

VR robots are slowly moving into the mainstream with applications that go beyond the usual manufacturing processes. Robots have been in use for years in industrial settings where they perform automated repetitive tasks. But their practical use has been quite limited. Today, however, we see some of them in the consumer sector delivering robotic solutions that require customization.

Augmented by other technologies such as AR, VR, and AI, robots show improved efficiency and safety in accomplishing more complex processes. With VR, humans can supervise the robots remotely to enhance their performance. VR technology provides human operators with a more immersive environment. This enables them to interact with robots better and view the actual surroundings of the robots in real time. Consequently, this opens vast opportunities for practical uses that enhance our lives.

Real-Life Use Cases of VR Robots

1. TX SCARA: Automated Restocking of Refrigerated Shelves

Developed by Telexistence, TX SCARA is powered by three main technologies—robotics, artificial intelligence, and virtual reality. This robot specializes in restocking refrigerated shelves in stores. It relies on GORDON, its AI system, to know when and where to place products. When issues arise due to external factors or system miscalculation, Telexistence employees use VR headsets to control the robot remotely and address the problem.

TX SCARA is present in 300 FamilyMart stores in Japan. Plans to expand its use in convenience stores in the United States are already underway. With TX SCARA capable of working 24/7 at a pace of up to 1,000 bottles or cans per day, it can replace up to three hours of human work each day in a single store.

2. Reachy: A Robot That Shows Emotions

Reachy gives VR robots a human side. An expressive humanoid platform, Reachy mimics human expressions and body language. It conveys human emotions through its antennas and motions.

Reachy

Users operate Reachy remotely using VR equipment that shows the environment surrounding the robot. They can move Reachy’s head, arms, and hands to manipulate objects and interact with people around the robot. They can also control Reachy’s mobile base to move around and explore its environment.

Since it can be programmed with Python and ROS to perform almost any task, its use cases are virtually limitless. It has applications across various sectors, such as research (to explore new frontiers in robotics), healthcare (to replace mechanical tasks), retail (to enhance customer experiences), education (to make learning more immersive), and many others. Reachy is also fully customizable, with many different configurations, modules, and hardware options available.
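
Pollen Robotics ships a Python SDK for Reachy, and a simple scripted motion looks roughly like the sketch below, which assumes the SDK’s documented pattern of connecting by IP and writing joint goal positions. Treat the exact attribute names as approximate rather than authoritative.

```python
# Rough sketch against Pollen Robotics' reachy-sdk; attribute names follow
# its documented interface but should be treated as approximate.
import time

from reachy_sdk import ReachySDK

reachy = ReachySDK(host="192.168.0.42")  # the robot's IP on the local network

reachy.turn_on("r_arm")  # enable torque on the right arm's motors

# A simple wave: swing the elbow between two goal positions a few times.
for _ in range(3):
    reachy.r_arm.r_elbow_pitch.goal_position = -100  # degrees
    time.sleep(0.7)
    reachy.r_arm.r_elbow_pitch.goal_position = -60
    time.sleep(0.7)

reachy.turn_off_smoothly("r_arm")  # relax the arm gently when done
```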

3. Robotic VR: Haptic Technology for Medical Care

A team of researchers co-led by the City University of Hong Kong has developed an advanced robotic VR system that has great potential for use in healthcare. Robotic VR, an innovative human-machine interface (HMI), can be used to perform medical procedures. This includes conducting swab tests and caring for patients with infectious diseases.

Doctors, nurses, and other health practitioners control the VR robot using a VR headset and flexible electronic skin that enables them to experience tactile sensations while interacting remotely with patients. This allows them to control and adjust the robot’s motion and strength as they collect bio-samples or provide nursing care. Robotic VR can help minimize the risk of infection and prevent contagion.

4. Skippy: Your Neighborhood Delivery Robot

Skippy elevates deliveries to a whole new level. Human operators, called Skipsters, control these VR robots remotely. They use VR headsets to supervise the robots as they move about the neighborhood. When you order food or groceries from a partner establishment, Skippy picks it up and delivers it to your doorstep. Powered by AI and controlled by Skipsters, the cute robot rolls through pedestrian paths while avoiding foot traffic and obstacles.

Skippy

You can now have Skippy deliver your food orders from a handful of restaurants in Minneapolis and Jacksonville. With its maker, Carbon Origins, planning to expand the fleet this year, it won’t be long until you spot a Skippy around your city.

Watch Out for More VR-Enabled Robots

Virtual reality is an enabling technology in robotics. By merging these two technologies, we’re bound to see more practical uses of VR-enabled robots in the consumer market. As the technologies become more advanced and the hardware required becomes more affordable, we can expect to see more VR robots that we can interact with as we go through our daily lives.

Developments in VR interface and robotics technology will eventually pave the way for advancements in the usability of VR robots in real-world applications.
