

Highlighting the Top 3 XR Trends of 2023 [Insights From This Year’s AWE USA]

The 2023 edition of AWE USA not only lived up to its reputation but also reached new heights, reportedly marking its largest event to date. From cutting-edge hardware to new, groundbreaking technology and software solutions, this year had it all.

3 Trends That Will Shape the Future of XR

Let’s dive in and explore the main three trends that stood out and are bound to shape the narrative for the future of XR.

Main Focus on AR

There was a lot of discussion this year about artificial intelligence and how it will enable XR rather than replace it. Just like with the metaverse last year, AI became a new hot topic, but in terms of hardware, the spotlight was clearly on AR.

There were, of course, some notable VR-first devices presented: Lenovo announced their new ThinkReality VRX headset, which is now available for purchase ($1,299). I had a chance to give it a try and was impressed with its large sweet spot, visual clarity, and a high degree of comfort. The headset includes a cooling system that takes the heat away from your face and makes the inside feel almost air-conditioned.

ThinkReality VRX

HTC presented their modular HTC Vive XR Elite ($1,099), for which they won a “Best Headworn Device” award. It can be worn either like a traditional headset with a head strap or like glasses, with an external power source replacing the battery in the back. In detached form, the Vive XR Elite weighs only 270 grams.

These devices were more the exception than the rule, however, and paled in comparison to the number of AR devices showcased this year. Just on the main floor, we had Vuzix promoting their Ultralite turnkey AR solution, Sightful with a screenless Spacetop AR laptop, XREAL presenting XREAL Air glasses, and Magic Leap returning with Magic Leap 2. Right next to those were CREAL, with their unique light field display, and Tilt Five. In the lobby, Zappar was demonstrating its $75 Cardboard-inspired device.

And that’s just the hardware; the list doesn’t include smartphone-based solutions like Snapchat’s SnapAR and Snap Lenses or Ffface.me digital clothing. Many software providers were experimenting with AR as well. Whether it was enterprise and training applications or entertainment like the laser-tag-inspired Laser Limbo, the focus on augmented reality was prevalent.

Laser-tag-inspired Laser Limbo

Subjectively, I found the XREAL and Tilt Five glasses to be the most promising choices in terms of their usefulness and affordability. Tilt Five ($359) offers six degrees of freedom and a wide 110° field of view, plus a whole range of tabletop applications and games. It also comes with a tracked controller.

Tilt Five

The XREAL Air ($488 with the XREAL Beam) might only have three degrees of freedom and a smaller FOV of 46°, but it makes up for that with versatility. It weighs only 79 grams and is compatible with phones, consoles, and laptops; almost any device with a screen can be beamed into the glasses. For those looking to start experimenting with AR, both offer a good and inexpensive entry point.

The Renaissance of Haptics

It was hard to ignore the sheer volume of haptic-related products at AWE. There was a surge of novel startups and original concepts along with many industry veterans returning to show off their latest progress.

I did not expect haptics to have such a strong showing and was positively taken aback. Bhaptics was busy presenting their new TactGlove, and Contact CI came out with a new product called Maestro. The most established player in the space, HaptX, was there as well.

Among newer entrants, SenseGlove was celebrating their €3.25M Series A funding with a newly updated Nova 2 haptic glove. Weart demoed their TouchDIVER glove capable of not only feedback but also temperature variations, while OWO showed off their latest haptic vest that uses electrical impulses to simulate sensations. Fluid Reality stole the show with its electroosmotic device that uses an electric field to create feedback.

Fluid Reality

There were too many to list, but even this short rundown underscores how noticeable haptics were this year. Most of these products target industrial and business markets, with the notable exceptions being the OWO vest ($499) and Bhaptics (also $499). Both devices have their strengths and weaknesses, though I have to give extra points to OWO for taking a bold, unique approach and allowing users to configure the vest so that it can simulate discomfort as well as other unpleasant feedback. This can result in a significantly more visceral experience and a heightened feeling of presence that’s hard to replicate using other methods.

OWO Haptic Vest

Seeing all the new and creative ways to model and recreate tactile data left me impressed with what’s to come, but at the same time, underwhelmed with the more conventional approaches.

Full resistance feedback, which restricts your movement, felt detached and did not properly mirror what I was seeing inside the headset. That was the case for both SenseGlove Nova and the high-end HaptX.

Their feedback, while indeed powerful, felt very mechanical and arbitrary. Two paradigms are at play here: one tries to nail the fidelity while approximating the sensation, while the other tries to provide the exact, realistic sensation at the cost of fidelity.

New Optics Solutions Are Coming

There were a number of booths dealing with optics and display solutions this year. It’s possible the latest push into AR helped supercharge this progress in optics. Many booths had some kind of developer kit or proof-of-concept ready. Visitors would come and literally peer into the future through these stationary prototypes.

One example was Ant Reality demonstrating their mixed waveguide solution called Crossfire. While the displays (ranging in field of view from 56° to 120°) were impressive, what made them unique was their ability to do both AR and VR. At a press of a button, the surroundings would go completely dark, turning the augmented overlay into an immersive experience. Magic Leap 2 is known for offering what is called segmented dimming, but in the case of the Crossfire, the glasses would become completely opaque despite the AWE show floor being exceptionally bright.

Ant Display demonstrating their prototypes

Another breakthrough, never before achieved, was a light field display incorporated into an AR headset, courtesy of CREAL. Light field displays promise to solve a lot of issues, the most common one being correct focal depth. Harnessing the direction of light can produce outstanding results, but shrinking light field tech to fit into a glasses form factor still proves tricky. CREAL’s headset is an important, pioneering step in this field.

CREAL’s LFD headset

Another interesting innovation came from a company called Hypervision. Their claim to fame is an ultra-wide display capable of achieving the full 240° field of view of human vision. To make this happen, Hypervision used not one, not two, but four pancake lenses. Vertically, the screen covers 95°, so it doesn’t quite match the human eye top to bottom, but horizontally there’s full peripheral vision. While the stitching between the screens was slightly perceptible, the ability to achieve human FOV in such a small form factor is a massive step forward.

Hypervision

Overall, this means that the future generations of XR devices will have access to a wide variety of new, next-gen optics and display solutions, most of which are not even known to the general public. Display tech doesn’t follow Moore’s Law so it’s always difficult to make any specific predictions, but there’s clearly no stagnation in the field and some of the breakthroughs we saw this year are truly exciting.

Closing Thoughts

These are just some of the main trends and shifts we saw this year. There was a notable increase in 3D spatial display panels, such as Leia Lume Pad 2, Sony’s Spatial Display, Looking Glass, and a human-sized holographic box by ARHT.

This forms part of a larger trend of broadening the definition of spatial computing, which is sometimes expanded to include other real-world tools and technologies like visualizations, projection mapping, and 3D screens.

What also caught my eye was a noticeable reduction in locomotion solutions. Gone are the days of omnidirectional treadmills and big simulator seats. The only two exceptions were the unconventional EXIT SUIT, which suspends the wearer slightly above the ground, allowing them to run in the air, sit, fly, and perform a range of other motions (for which the team won this year’s “AWEsome” award), and the Freeaim shoes, which act like rollers, pushing the wearer backward as they walk.

This was the last AWE hosted in Santa Clara. From next year on, the event is moving to the Long Beach Convention Center. This shift to a new, bigger venue highlights the constant growth of the XR space and that’s one trend that speaks for itself.

Guest Post


About the Guest Author(s)

Mat Pawluczuk

Mat Pawluczuk is an XR / VR writer and content creator.


Looking Forward to AWE Asia 2023

If you get all of your AWE coverage from ARPost, you might be under the impression that the event is only in California – but it wouldn’t be much of a “World Expo” then, would it? In addition to frequent all-online events, AWE consists of three in-person events each year: AWE USA, AWE Europe, and AWE Asia.

AWE Asia, this year taking place in Singapore, is fast approaching, with the agenda now finalized. Attendees can look forward to hearing from over 60 speakers in over 60 sessions including keynotes, talks, and panels over the course of the two-day conference. Let’s take a look at some of the most exciting sessions.

AWE Asia Keynotes and Addresses

Day One starts off with an opening ceremony by AWE co-founder Ori Inbar, joined on-stage by AWE Asia President and Vice President, Gavin Newton-Tanzer and Ryan Hu. This session is followed by back-to-back keynotes by HTC Global Vice President of Corporate Development Alvin Graylin and University of South Australia professor Dr. Mark Billinghurst.

Day Two also starts off with keynotes. First, “Metaverse as the Next Biggest Thing: Challenges, Roadmaps, and Standardization” by IEEE president Dr. Yu Yuan. This is followed by “ifland: A Case Study on Telco Collaboration in Building a Global Metaverse Platform” presented by SK Telecom Vice President Ikhwan Cho and Deutsche Telekom Senior Director of XR and the Metaverse Terry Schussler.

Day Two then closes with remarks and awards from Inbar, Newton-Tanzer, and AWE Asia COO and Content Director David Weeks.

The keynotes and addresses are great because they often feature some of a conference’s biggest announcements and most anticipated speakers. They’re also great because nothing is scheduled at the same time as a keynote. From here, we’ll have to start making some tough calls.

Day One Sessions

Following the AWE Asia welcome address and keynotes on Day One, the crowd is sure to split. Remain near the main stage to hear NVIDIA’s Vanessa Ching discuss “Developers, Platforms, and AI.” Venture off to a substage to hear Joe Millward and Kyle Jackson of Talespin talk about “Scaling XR Content for the Enterprise With Generative AI.”

Next up, Niantic Senior VP of Engineering Brian McClendon explains how “Niantic is Powering AR, Everywhere, All at Once.” Having seen this talk at AWE USA, I can tell you it’s worth seeing, but I can also point out that you could watch the recording online and stretch your day a little further.

Another tough decision follows. Will it be “How AI Will Enhance the Metaverse and Education” with Meta Head of Global Education Partnerships Leticia Jauregui and Zoe Immersive CEO and co-founder Emilie Joly? Or will it be “Beyond Loudness: Spatial Chat and the Future of Virtual World Communication” with Dolby Laboratories Developer Advocate Angelik Laboy?

Day One’s Marathon on the Main Stage

The afternoon of Day One has a lineup of promising presentations on the main stage. To start, Immersal Chief Marketing Officer Päivi Laakso-Kuivalainen and Graviton Interactive co-founder and Managing Director Declan Dwyer talk “Revolutionizing Fan Engagement: Augmented Reality in Stadiums Powered by Visual Positioning Systems and Spatial Computing.”

This is followed by Linux Foundation General Manager Royal O’Brien talking about “Inspiring Game Development Through Open Source.” Then, keep your seat to hear Trigger XR founder and CEO Jason Yim talk about retail, advertising, and e-commerce. A little later on the same stage, Mindverse.AI co-founder and COO Kisson Lin talks about the Web3 creator economy.

Day Two Main Stage Sessions

One can’t-miss session on Day Two comes from Dispelix APAC VP of Sales and Partnerships Andy Lin, presenting “PERFECTING COMFORT – Vision Behind Dispelix Waveguide Combiners for Near-to-Eye XR Displays.”

Some of the last regular sessions on the main stage before the AWE Asia closing address look promising as well.

First, Infocomm Assistant Director of Innovation Joanne Teh, Deloitte Center for the Edge Southeast Asia Leader Michelle Khoo, Serl.io co-founder and CEO Terence Loo, and SMRT Corporation Learning Technologies Lead Benjamin Chen have a panel discussion about “The Future of Immersive Experiences: Navigating the World of XR.”

Immediately following the panel discussion, Google’s Toshihiro Ohnuma takes the stage to discuss “Connecting Both Worlds – Google Maps and AR Core.”

In between those sessions, the substages look pretty promising.

Major Side-Stage Attractions

After Lin’s talk, head over to Substage 1 for a series of promising talks. These start with Maxar Technologies Business Development Manager Andrew Steele presenting “Experience the Digital Twin Built for Connecting Your XR Content With the Real World.” The world-scale digital twin won the Auggie for Best Use of AI at the awards ceremony in Santa Clara this spring.

Up next on the same stage, Anything World co-founder and Creative Director Sebastian Hofer explains “How AI Is Powering a Golden Age in Games Development.”

A quick break between sessions and then back to learn about “ThinkReality Solutions Powering the Enterprise Metaverse” with Lenovo Emerging Technologies Lead Martand Srivastava and Qualcomm’s Kai Ping Tee.

Lots to Take In

AWE Asia being two days instead of three certainly doesn’t solve the classic AWE problem of there being just too much amazing content to take it all in. At least, not live anyway.

To attend AWE Asia yourself, get tickets here, and use our code AW323SEB25 for 30% off the standard ticket and PAR23VSEB for 35% off the VIP ticket.


“The Future of Business Travel” Report by Booking.com Gives Metaverse Predictions

The metaverse can be summed up as the augmented world. So, naturally, it has implications for travel. How and when people travel may both seriously change as spatial communication and digital twins make some kinds of travel less likely, while AR and automation reimagine the travel that we do engage in. A report by Booking.com for Business, titled “The Future of Business Travel,” explores the next 30 years of travel.

AR and Space Hotels

The report begins with “A Timeline of Future Business Travel Predictions.” To the potential dismay of augmented reality enthusiasts, the report puts AR in 2027 – the same year as “space hotels”. The report acknowledges existing AR use cases including augmenting areas with contextual information. However, the authors are waiting for something better.

“Right now, AR is limited, lacking a wide field-of-view and having resolution, battery, and 3D sensing issues,” reads the report. “It’s thought that by 2027 people will have access to unconstrained, immersive AR experiences and the associated advantages for travel professionals.”

Why 2027? The paper doesn’t explicitly mention powerful AR wearables, but the time frame and their insistence on “unconstrained” experiences suggest that this is what the authors are waiting for. We already have consumer AR glasses, with limited FoV, but these are almost exclusively “viewers” for virtual screens that can’t offer the real-time contextual information people want.

In a recent interview with ARPost, Lumus VP of Optics David Goldman placed a consumer AR device based on Z-Lens around 2025, with 2027 seeing models with a 50-degree FoV eventually getting as wide as 70 or 80 degrees. That sounds like it’s getting more in line with people’s expectations for AR travel.

More Interest in VR?

Augmented travel is one thing, but virtual travel is another. Virtual reality has higher immersion due to a heads-up interface, greater graphical fidelity, and wider field of view. Further, VR hardware is becoming increasingly accessible, affordable, and popular with consumers.

The report also included a collection of the most-searched business travel trends, which included virtual travel in the top three. A ranking of the most talked about travel trends in the media also includes “hotel metaverse” at number three and “hotel virtual events” at number eight.

The authors attribute this to virtual travel “reducing the necessary number of business trips and giving corporate travelers the chance to explore the world with VR and metaverse experience.” Specific use cases anticipated in the report include immersive tours prior to booking, virtual conferences and events, virtual site visits to digital twins, and immersive in-flight entertainment.

More to the Metaverse

Immersive technology is first in our minds and hearts here at ARPost, but the metaverse is about more than just display technologies. The report also includes predictions related to other emerging technologies including artificial intelligence and blockchain.

For example, the authors predict blockchain technology becoming standard in hotels the year before they anticipate AR kicking off. And, around the beginning of the next decade, the authors predict “guest comfort and energy efficiency will be managed and optimized by AI in most hotels.”

Other predictions, including hotel-specific crypto-driven rewards programs and robot assistants, can be found in the full report.

A Lot to Look Forward To

All predictions should be taken with a healthy dose of salt – and that’s particularly true of predictions based on when to expect a given development. Disclaimers aside, Booking.com has presented a very interesting look at trends regarding what people want out of the metaverse when it comes to travel.


Official AmazeVR Concerts App Launches With an Exclusive Zara Larsson Concert

Do you remember missing an amazing concert by your favorite artist because you could not travel to another country or continent to attend it? This is no longer a problem. Thanks to AmazeVR, anyone can experience live shows using their newly-launched VR Concerts app.

Drawing on their previous experience working with artists like Megan Thee Stallion and Ceraadi, the company is celebrating the launch of their AmazeVR Concerts app with “Zara Larsson VR Concert”, the one-of-a-kind show by Swedish pop star Zara Larsson. Now, anyone can install the AmazeVR Concerts app and attend any concert available on the platform from the comfort of their home.

Virtual Events – the Future of Entertainment

The global health crisis we experienced made us rethink all types of interactions, from healthcare appointments and business meetings to concerts and theater shows. The VR concerts app developed by AmazeVR is one of the latest additions to immersive and interactive tools for entertainment.

This is a huge step forward both for artists and audiences. For artists, VR shows allow them to interact with more fans and monetize their work in new ways. For music fans, the barriers represented by long distances and finances for traveling suddenly disappear.

Zara Larsson Excited to Collaborate with AmazeVR

Known for hits such as “Lush Life”, “Ain’t My Fault”, and “End of Time”, Swedish pop star Zara Larsson exuded enthusiasm about collaborating with AmazeVR for the launch of the AmazeVR Concerts app.

“I’ve always believed that live music has the power to unite and transcend boundaries. As an artist, finding new ways to connect with my fans and deliver a truly immersive and unforgettable experience is super important to me,” she said in a press release shared with ARPost. “I’m thrilled to be working with AmazeVR to break through the fourth wall, and directly into the homes of fans around the world.”

Bringing Artists and Fans Together in the Virtual World

For AmazeVR, their VR Concerts app, available on Meta Quest 2 (App Lab) and SteamVR, is the crowning achievement of years of developing and improving immersive solutions for the entertainment industry. Creating the first VR concerts and measuring the public response to them showed the company that it was on the right path.

“At AmazeVR, we are ushering in a new wave of innovation for music experiences, by providing artists with extraordinary and unparalleled avenues to be up close and personal with their fans,” said AmazeVR co-CEO and co-founder Steve Lee. “It is an honor to be launching the AmazeVR app alongside such an incredible artist like Zara. Her creativity has come together to create a showstopping performance and we can’t wait for her fans to enjoy the experience.”

A Busy Schedule for the Newly Launched AmazeVR Concerts App

The virtual reality concert experience app is set to attract fans of all types of music, including pop-rock, hip-hop, K-pop, rap, and more. Right now, the app is downloadable for free and offers one free song per artist. For the exclusive Zara Larsson VR concert, fans can purchase access for one year at an exclusive launch price of $6.99.


European Council Publishes Web 4.0 Strategy

The European Commission is already setting out to tackle Web 4.0. There’s quite a bit to unpack here, including the EC approach, the 4-point plan that they recently published, and – of course – what they mean by Web 4.0.

What Is Web 4.0?

It’s not a typo and you’re not asleep at the wheel. While most of us haven’t gotten the hang of Web 3.0 yet, Europe is already setting the table for Web 4.0. Don’t worry, this is just a new terminology for something that’s already on your radar.

“Beyond the currently developing third generation of the internet, Web 3.0, whose main features are openness, decentralization, and user empowerment, the next generation, Web 4.0, will allow an integration between digital and real objects and environments and enhanced interactions between humans and machines,” reads the EC’s report.

So, essentially, “Web 4.0” is the metaverse. But, why not just call it that?

Webs and the Metaverse

The metaverse discussion at least started out as being largely a conversation within the world of immersive technology, with discussions of Web3 largely being topics within the blockchain and crypto spaces. (“Web3” and “Web 3.0” aren’t exactly the same concept, but both largely revolve around decentralization, so they’re more-or-less interchangeable for most levels of discussion.)

As voices from the cryptocurrency and blockchain communities promised that these technologies would be the future of a cross-platform, self-owned online future, Web3 and the metaverse were increasingly mentioned in the same breath with both being apparently convergent visions of the future.

The explosion of interest in the metaverse proved short-lived largely because – while the pieces are certainly falling into place – one connected metaverse hasn’t been fully realized. While there are more-or-less realized metaverse spaces or use cases, the all-encompassing digital layer of reality isn’t here yet. Web3, while struggling with adoption, is largely functional today.

While some may groan at the introduction of yet another idealistic tech concept, “Web 4.0” does offer some clarity at least with regard to what the EC is talking about. First, it respects that the metaverse is still a thing of the (near?) future. Second, it ties in the themes of openness and decentralization that were lacking in many metaverse discussions.

Finally, it ties in “interactions between humans and machines.” While some technologists have long included this aspect in their discussions of the metaverse, recent developments in AI have led to increased interest in this field even since blockchain and the metaverse had their moments in the media over the last few years.

Bracing for Web 4.0

While it’s easy to feel like much of the world is still catching up with the previous generation of the internet, how is Europe planning to get ahead of the next generation of the internet? A lot of it has to do with knowing where current experts are and creating pathways for future builders.

To make that happen, the report outlines four “Key Strategy Pillars”:

  1. Empowering people and reinforcing skills to foster awareness, access to trustworthy information, and building a talent pool of virtual world specialists.
  2. Supporting a European Web 4.0 industrial ecosystem to scale up excellence and address fragmentation.
  3. Supporting local progress and virtual public services to leverage the opportunities virtual worlds can offer.
  4. Shaping global standards for open and interoperable virtual worlds and Web 4.0, ensuring they will not be dominated by a few big players.

One of the reasons that so much of the strategy has to do with ideas like “empowering people” and “leveraging opportunities” might be that much of the document was distilled from an earlier workshop of 150 randomly selected European citizens. The average person is likely feeling left behind Web 2.0 and out of the loop on Web 3.0.

The European Perspective

“Ensuring that [virtual worlds] will not be dominated by a few big players” may not be a uniquely European feeling, but it’s interesting to note. Meta, in particular, has gotten into trouble in EU member countries like Germany for the equivalent of antitrust concerns, which has opened the way for Pico to make headway in European markets free from its US political struggles.

At the most recent Augmented World Expo – just before Apple announced their first XR headset – some speakers even expressed concern that Apple will be able to throw its weight around the industry in a way that not even Meta enjoys.

“Apple currently holds so much power that they could say ‘This is the way we’re going to go.’ and the Metaverse Standards Forum could stand up and say ‘No.’,” XRSI founder and CEO Kavya Pearlman said during a panel discussion at this year’s AWE.

Standards are a concern everywhere, but this is another area where the approach is somewhat different across the Atlantic. A number of standards groups have formed in the US, but all of them are independent groups rather than governmental initiatives – though some groups are calling for regulators to step into the space over concerns like privacy.

Thinking Globally About Web 4.0

“Europe is, in many ways, a first mover on metaverse policy, and it is putting forward a positive vision for the future of immersive technology,” the XRA’s VP of Public Policy Joan O’Hara said in an email to ARPost. “We very much appreciate the [European Commission’s] approach to balancing user protection and wellbeing with the desire to support innovation and adoption.”

The headquarters of Web 3.0 and Web 4.0 companies might be in one country or another, but most of them are offering international services. Unless they want to have different (and potentially incompatible) versions of those services available for different countries, it behooves those companies to have services that fit all national standards.

So, in the absence of officially codified US standards for immersive worlds, it is likely that the services offered to American audiences might fit into the shape described by groups like the European Commission. Fortunately, most of the organizations already looking at these problems are also international in nature and work with and between national governments.

“This will serve as a model going forward,” said O’Hara. “The XRA has been actively engaged with both European and British colleagues on these issues, and we believe the US interests are largely aligned with those of our friends across the Atlantic.”

Thinking Ahead

US discussions of Web 3.0 have largely spiraled around the nation’s failure to prepare for or recover from Web 2.0. The fact that Europe is already looking forward to Web 4.0 is definitely something to consider. In emerging tech, looking backward instead of forward is a dangerous strategy.


Inspirit Launches Affordable XR STEM Education Platform for Middle and High School Students

XR STEM education has taken a leap forward with the official launch of Inspirit’s Innovative Learning Hub. The digital platform provides educators with affordable access to a premium library of virtual reality and augmented reality experiences designed specifically for middle and high school students. Focusing on enhancing learning outcomes and increasing engagement, Inspirit is revolutionizing the way STEM subjects are taught worldwide.

Breaking Down Barriers With Immersive Learning

Inspirit is a research-driven EdTech startup that pioneers immersive XR experiences for STEM education. The company’s Innovative Learning Hub stands as the premier choice for immersive XR STEM education, encompassing diverse subjects such as mathematics, physics, chemistry, biology, and vocational training.

Through XR experiences, Inspirit’s platform provides students with experiential learning opportunities. By engaging in simulations and exploring 3D models, students gain a deeper understanding of complex STEM concepts.

The advantages of VR education have long been embraced by both teachers and students, who have found immense value in its experiential approach. But with Inspirit’s XR expertise and easy-to-use technology, the platform bridges the gap between theoretical concepts and real-world applications, providing students with a deeper understanding and fostering engagement.

Renowned for its commitment to rigorous research, Inspirit collaborates with Stanford University researchers to unlock the full potential of XR learning. The result is a unified platform that seamlessly integrates into schools, improving learning outcomes and providing teachers with an intuitive system to embed into their curriculum.

Experts in the field, like Jeremy Bailenson, founding director of the Stanford Virtual Human Interaction Lab and professor of education, recognize the impact of Inspirit’s approach, emphasizing the importance of teacher professional development and curriculum alignment for successful integration and long-term usage in the classroom.

Inspirit XR STEM Education Platform

“Inspirit is unique in that it is led by a VR pioneer who puts ‘education first’, with a huge amount of experience in the world of STEM,” said Bailenson, in a press release shared with ARPost.

Unparalleled Access to Immersive XR Content

The Innovative Learning Hub boasts a comprehensive library of age-appropriate XR experiences that align with educational standards. From engaging simulations to interactive lessons, students have the opportunity to explore and study complex concepts, making learning tangible and enjoyable. This cutting-edge content ensures that students receive the highest-quality educational experiences.

Cross-Platform Compatibility for Seamless Learning

Flexibility is a key advantage of Inspirit’s Innovative Learning Hub. Students can access the library of XR content from various devices, including laptops, Chromebooks, and most VR headsets designed for educational use.

XR STEM Education Platform by Inspirit

This compatibility maximizes schools’ existing hardware investments while expanding learning capabilities. By eliminating the need for costly subscriptions and one-off purchases, Inspirit promotes inclusivity and accessibility, allowing all students to benefit from a comprehensive STEM curriculum.

XR STEM Education: Inspiring Students and Shaping Futures

As a firm believer in the transformative power of immersive technology, Aditya Vishwanath, co-founder and CEO of Inspirit, actively champions its potential for revolutionizing XR STEM education.

The Innovative Learning Hub serves as a platform that grants middle and high school students the opportunity to engage with exceptional XR content. “Our research-based methodology ensures all middle and high school students have an opportunity to access top-notch XR content that enhances their learning experience, prepares them for the future, and inspires them to pursue their dreams,” said Vishwanath.

Inspirit Launches Affordable XR STEM Education Platform for Middle and High School Students

ai-to-help-everyone-unleash-their-inner-creator-with-masterpiece-x

AI to Help Everyone Unleash Their Inner Creator With Masterpiece X

Empowering independent creators is an often-touted benefit of AI in XR. We’ve seen examples from professional development studios with little to no public offering, but precious few examples of AI-powered authoring tools for individual users. Masterpiece Studio is adding one more, “Masterpiece X”, to help everyone “realize and elevate more of their creative potential.”

“A New Form of Literacy”

Masterpiece Studio doesn’t just want to release an app – they want to start a movement. The team believes that “everyone is a creator” but the modern means of creation are inaccessible to the average person – and that AI is the solution.

Masterpiece X Meta Quest 3D Remix screenshot

“As our world increasingly continues to become more digital, learning how to create becomes a crucial skill: a new form of literacy,” says a release shared with ARPost.

Masterpiece Studio has already been in the business of 3D asset generation for over eight years now. The company took home the 2021 Auggie Award for Best Creator and Authoring Tool, and is a member of the Khronos Group and the Metaverse Standards Forum.

So, what’s the news? A new AI-powered asset generation platform called Masterpiece X, currently available as a beta application through a partnership with Meta.

The Early Days of Masterpiece X

Masterpiece X is already available on the Quest 2, and it’s already useful if you have your own 3D assets to import. There’s a free asset library, but it only contains sample content at the moment. The big feature of the app – creating 3D models from text prompts – is still rolling out and will (hopefully) result in a more highly populated asset library.

Masterpiece X Meta community library

“Please keep in mind that this is an ‘early release’ phase of the Masterpiece X platform. Some features are still in testing with select partners,” reads the release.

That doesn’t mean it’s too early to get the app. It’s already a powerful tool. Creators who download and master the app now will be better prepared to unlock its full potential when it’s ready.

Creating an account isn’t a lengthy process, but it’s a bit clunky – it can’t be done entirely online or entirely in-app, which means switching between a desktop and the VR headset to enter URLs and passwords. After that, you can take a brief tutorial or experiment on your own.

The app already incorporates a number of powerful tools into the entirely spatial workflow. Getting used to the controls might take some work, though people who already have experience with VR art tools might have a leg up. Users can choose a beginner menu with a cleaner look and fewer tools, or an expert menu with more options.

So far, tools allow users to change the size, shape, color, and texture of assets. Some of these are simple objects, while others come with rigged skeletons that can take on a variety of animations.

I Had a Dream…

For someone like me who isn’t very well-versed in 3D asset editing, now is the moment to spend time in Masterpiece X – honing my skills until the day that asset creation on the platform is streamlined by AI. Maybe then I can finally make a skateboarding Gumby-shaped David Bowie to star in an immersive music video for “Twinkle Song” by Miley Cyrus. Maybe.


virtual-reality-enhances-ketamine-therapy-sessions-with-immersive-experiences

Virtual Reality Enhances Ketamine Therapy Sessions With Immersive Experiences

Recently, TRIPP PsyAssist completed its Phase 1 Feasibility Study to demonstrate its use as a pretreatment tool for patients undergoing ketamine therapy. This VR solution is an example of emerging technologies that facilitate the development of more accessible and transformative mental health care solutions.

Globally, more than 500 million people live with anxiety or depression. That’s more than half of the estimated total of people living with some form of mental illness. However, only about a third receive adequate mental health care. Solutions like TRIPP PsyAssist help mental health clinics provide the care their patients need.

Treating Mental Health Disorders With Ketamine Therapy

Many methods are available to treat depression, anxiety, and similar mental health conditions. Using psychedelics, like ketamine, is gaining ground as a fast-acting, non-invasive treatment option. Low doses of ketamine, a dissociative psychedelic drug, are administered intravenously over several minutes in a clinical setting while the patient is observed. Patients typically go through several rounds of these treatments.

While numerous clinical studies support the effectiveness of ketamine therapy, there remains a need to manage the pre-treatment experience. Patients often feel anxious before their ketamine therapy sessions, and alleviating that distress helps usher in a more relaxed onboarding and treatment session. There’s also a need to integrate their experiences after the treatment, both at the clinic and at home.

Using VR to Improve Ketamine Therapy Pre-Treatment Experience

TRIPP is a California-based company pioneering XR wellness technologies for consumers, enterprises, and clinics. Their research-based platform is available across VR, AR, and mobile to help facilitate a deeper self-connection and create collective well-being.

TRIPP PsyAssist for ketamine therapy 2

TRIPP is best known for its award-winning consumer platform that creates beautiful meditative VR spaces where users can spend time calming their minds and centering their being. Staying true to its mission of using technology to transform the mind, the company introduced TRIPP PsyAssist, its clinical offering aimed at helping medical institutions use XR to improve their practices.

“At TRIPP, we are dedicated to empowering individuals on their path to healing,” said TRIPP’s CEO and founder Nanea Reeves. They believe that virtual reality has the power to enhance therapeutic interventions, and their research encourages them to explore new frontiers in mental health treatment.

The main objective of Phase 1 of the TRIPP PsyAssist study was to assess whether guided, meditative imagery, which was provided through VR using the Pico Neo 3 Pro Eye headset, could be successfully implemented as a pre-treatment program in an actual clinical setting. The study also aimed to evaluate the level of acceptance of this approach.

The study participants were undergoing ketamine therapy for anxiety or depression at Kadima Neuropsychiatry Institute in San Diego. Kadima’s President, David Feifel, MD, PhD, was excited to partner with TRIPP and have this important feasibility study conducted among its patients.

“VR technology has great potential to enhance mental wellness, and TRIPP PsyAssist is at the forefront of translating that potential into reality,” Feifel said. “This study represents an important step in that direction.”

Improving Patient Experience with TRIPP PsyAssist

The results of the feasibility study were very promising. Eighty percent of the users wanted to use the system frequently, while all of them found the different functions well-integrated. Likewise, 100% of the users felt very confident in using the system.

TRIPP’s Clinical Director of Operations, Sunny Strasburg, LMFT, was delighted with the success of the preliminary results of the feasibility study.

TRIPP PsyAssist for ketamine therapy

“These findings inspire us to forge ahead in uncovering new frontiers within clinical settings where technology and psychedelic medicine converge,” she said. Strasburg and her team look forward to expanding their study to explore various TRIPP PsyAssist applications in clinical settings.

With Phase 1 of the study completed, TRIPP PsyAssist is set to discover new ways of integrating innovative VR technology into mainstream clinical practices.

Reeves and Strasburg are also attending the MAPS Psychedelic Science Conference, which is taking place this week, where they are showcasing their research and discussing the impact of emerging technologies on mental health treatment. A Kadima booth will also be present to give attendees a demonstration of the transformative potential of the TRIPP platform.

Final Thoughts

Significant advances in research have elevated our knowledge about mental health. However, it remains a critical global health concern: the number of cases keeps escalating while the available resources remain sparse.

But there’s a light at the end of the tunnel. Initiatives like TRIPP PsyAssist prove that emerging technology can play a significant role in alleviating mental health problems. This gives us the confidence that the future is bright and that our challenges have a solution at hand.


the-intersections-of-artificial-intelligence-and-extended-reality

The Intersections of Artificial Intelligence and Extended Reality

It seems like just yesterday it was all AR this, VR that, metaverse, metaverse, metaverse. Now all anyone can talk about is artificial intelligence. Is that a bad sign for XR? Some people seem to think so. However, people in the XR industry understand that it’s not a competition.

In fact, artificial intelligence has a huge role to play in building and experiencing XR content – and it’s been part of high-level metaverse discussions for a very long time. I’ve never claimed to be a metaverse expert and I’m not about to claim to be an AI expert, so I’ve been talking to the people building these technologies to learn more about how they help each other.

The Types of Artificial Intelligence in Extended Realities

For the sake of this article, there are three main branches of artificial intelligence: computer vision, generative AI, and large language models. AI is more complicated than this, but it helps get us started talking about how AI relates to XR.

Computer Vision

In XR, computer vision helps apps recognize and understand elements in the environment. This lets apps place virtual elements in that environment and sometimes lets those elements react to it. Computer vision is also increasingly being used to streamline the creation of digital twins of physical items or locations.

Niantic is one of XR’s big world-builders using computer vision and scene understanding to realistically augment the world. 8th Wall, an acquisition that runs its own projects but also serves as Niantic’s WebXR division, uses some AI of its own and is compatible with other AI tools, as teams showcased in a recent Innovation Lab hackathon.

“During the sky effects challenge in March, we saw some really interesting integrations of sky effects with generative AI because that was the shiny object at the time,” Caitlin Lacey, Niantic’s Senior Director of Product Marketing told ARPost in a recent interview. “We saw project after project take that spin and we never really saw that coming.”

The winner used generative AI to create the environment that replaced the sky through a recent tool developed by 8th Wall. While some see artificial intelligence (that “shiny object”) as taking the wind out of immersive tech’s sails, Lacey sees this as an evolution rather than a distraction.

“I don’t think it’s one or the other. I think they complement each other,” said Lacey. “I like to call them the peanut butter and jelly of the internet.”

Generative AI

Generative AI takes a prompt and turns it into some form of media, whether an image, a short video, or even a 3D asset. Generative AI is often used in VR experiences to create “skyboxes” – the flat image over the virtual landscape where players have their actual interactions. However, as AI gets stronger, it is increasingly used to create virtual assets and environments themselves.

Artificial Intelligence and Professional Content Creation

Talespin makes immersive XR experiences for training soft skills in the workplace. The company has been using artificial intelligence internally for a while now and recently rolled out a whole AI-powered authoring tool for their clients and customers.

A release shared with ARPost calls the platform “an orchestrator of several AI technologies behind the scenes.” That includes developing generative AI tools for character and world building, but it also includes work with other kinds of artificial intelligence that we’ll explore further in the article, like LLMs.

“One of the problems we’ve all had in the XR community is that there’s a very small contingent of people who have the interest and the know-how and the time to create these experiences, so this massive opportunity is funneled into a very narrow pipeline,” Talespin CEO Kyle Jackson told ARPost. “Internally, we’ve seen a 95-97% reduction in time to create [with AI tools].”

Talespin isn’t introducing these tools to put themselves out of business. On the contrary, Jackson said that his team is able to be even more involved in helping companies workshop their experiences because his team is spending less time building the experiences themselves. Jackson further said this is only one example of a shift happening to more and more jobs.

“What should we be doing to make ourselves more valuable as these things shift? … It’s really about metacognition,” said Jackson. “Our place flipped from needing to know the answer to needing to know the question.”

Artificial Intelligence and Individual Creators

DEVAR launched MyWebAR in 2021 as a no-code authoring tool for WebAR experiences. In the spring of 2023, that platform became more powerful with a neural network for AR object creation.

In creating a 3D asset from a prompt, the network determines the necessary polygon count and replicates the texture. The resulting 3D asset can exist in AR experiences and serve as a marker itself for second-layer experiences.

“A designer today is someone who can not just draw, but describe. Today, it’s the same in XR,” DEVAR founder and CEO Anna Belova told ARPost. “Our goal is to make this available to everyone … you just need to open your imagination.”

Blurring the Lines

“From strictly the making a world aspect, AI takes on a lot of the work,” Mirrorscape CEO Grant Anderson told ARPost. “Making all of these models and environments takes a lot of time and money, so AI is a magic bullet.”

Mirrorscape is looking to “bring your tabletop game to life with immersive 3D augmented reality.” Of course, much of the beauty of tabletop games comes from the fact that players create their own worlds and characters as they go along. While the roleplaying element has been reproduced by other platforms, Mirrorscape is bringing in the individual creativity through AI.

“We’re all about user-created content, and I think in the end AI is really going to revolutionize that,” said Anderson. “It’s going to blur the lines around what a game publisher is.”

Even for those who are professional builders but who might be independent or just starting out, artificial intelligence, whether used to create assets or just for ideation, can help level the playing field. That was a theme of a recent Zapworks workshop, “Can AI Unlock Your Creative Potential? Augmenting Reality With AI Tools.”

“AI is now giving individuals like me and all of you sort of superpowers to compete with collectives,” Zappar executive creative director Andre Assalino said during the workshop. “If I was a one-man band, if I was starting off with my own little design firm or whatever, if it’s just me freelancing, I now will be able to do so much more than I could five years ago.”

NeRFs

Neural Radiance Fields (NeRFs) weren’t included in the introduction because they can be seen as a combination of generative AI and computer vision. A NeRF starts with a special kind of neural network called a multilayer perceptron (MLP). A “neural network” is any artificial intelligence modeled on the human brain, and an MLP is … well, look at it this way:

If you’ve ever taken an engineering course, or even a high school shop class, you’ve been introduced to drafting. Technical drawings represent a 3D structure as a series of 2D images, each showing a different angle of the 3D structure. Over time, you can get pretty good at visualizing the complete structure from these flat images. An MLP can do the same thing.

The difference is the output. When a human does this, the output is a thought – a spatial understanding of the object in your mind’s eye. When an MLP does this, the output is a NeRF – a 3D rendering generated from the 2D images.
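To make that mapping concrete, here is a minimal, untrained sketch in Python: a tiny randomly initialized MLP with the same input/output shape as a NeRF network (five inputs for a 3D position plus viewing direction, four outputs for color and density). The layer sizes and initialization are illustrative assumptions only; a real NeRF is trained on the 2D images, and nothing here is trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    """Random weights and biases for a multilayer perceptron (untrained)."""
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def nerf_query(params, xyz, view_dir):
    """Map a 3D point plus viewing direction (5 numbers) to RGB + density."""
    h = np.concatenate([xyz, view_dir])  # 5 inputs: x, y, z, theta, phi
    for w, b in params[:-1]:
        h = np.maximum(0, h @ w + b)     # ReLU hidden layers
    w, b = params[-1]
    out = h @ w + b                      # 4 raw outputs: r, g, b, density
    rgb = 1 / (1 + np.exp(-out[:3]))     # colors squashed into [0, 1]
    density = np.log1p(np.exp(out[3]))   # density kept non-negative (softplus)
    return rgb, density

# Hypothetical layer sizes, just to show the shape of the computation.
params = init_mlp([5, 64, 64, 4])
rgb, density = nerf_query(params, xyz=np.array([0.1, 0.2, 0.3]),
                          view_dir=np.array([0.0, 1.0]))
```

Rendering a NeRF then amounts to querying this function along camera rays and compositing the colors by density, which is why training it on many 2D views yields a full 3D rendering.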

Early on, this meant feeding countless images into the MLP. However, in the summer of 2022, Apple and the University of British Columbia developed a way to do it with one video. Their approach was specifically interested in generating 3D models of people from video clips for use in AR applications.

Whether a NeRF recreates a human or an object, it’s quickly becoming the fastest and easiest way to make digital twins. Of course, the one downside is that a NeRF can only create digital models of things that already exist in the physical world.

Digital Twins and Simulation

Digital twins can be built with or without artificial intelligence. However, some use cases of digital twins are powered by AI. These include simulations like optimization and disaster readiness. For example, a digital twin of a real campus can be created, but then modified on a computer to maximize production or minimize risk in different simulated scenarios.

“You can do things like scan in areas of a refinery, but then create optimized versions of that refinery … and have different simulations of things happening,” MeetKai co-founder and executive chairwoman Weili Dai told ARPost in a recent interview.

A recent suite of authoring tools launched by the company (which started in AI before branching into XR solutions) includes AI-powered tools for creating virtual environments from the physical world. These can be left as exact digital twins, or they can be edited to streamline the production of more fantastic virtual worlds by providing a foundation built in reality.

Large Language Models

Large language models take in language prompts and return language responses. This is one of the AI interactions that run largely under the hood so that, ideally, users don’t realize they’re interacting with AI. For example, large language models could be the future of NPC interactions and of the “non-human agents” that help us navigate vast virtual worlds.

“In these virtual world environments, people are often more comfortable talking to virtual agents,” Inworld AI CEO Ilya Gelfenbeyn told ARPost in a recent interview. “In many cases, they are acting in some service roles and they are preferable [to human agents].”

Inworld AI makes brains that can animate Ready Player Me avatars in virtual worlds. Creators get to decide what the artificial intelligence knows – or what information it can access from the web – and what its personality is like as it walks and talks its way through the virtual landscape.

“You basically are teaching an actor how it is supposed to behave,” Inworld CPO Kylan Gibbs told ARPost.
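As a rough illustration of that “teaching an actor” pattern, the sketch below shows how an LLM-driven NPC is commonly wired up: a fixed persona plus the rolling conversation history is assembled into a prompt on every turn. The `call_llm` function, the NPC name, and the persona text are hypothetical placeholders; no real LLM service or Inworld API is used here.

```python
def call_llm(prompt: str) -> str:
    # Placeholder: a real system would send `prompt` to a language model
    # and return its generated reply.
    return "Welcome, traveler! Ask me about the harbor."

class NPC:
    def __init__(self, name: str, persona: str):
        self.name = name
        self.persona = persona          # what the "actor" is taught
        self.history: list[str] = []    # rolling dialogue memory

    def respond(self, player_line: str) -> str:
        self.history.append(f"Player: {player_line}")
        # Persona + full history goes into every prompt, so the NPC
        # stays in character and remembers the conversation.
        prompt = "\n".join(
            [f"You are {self.name}. {self.persona}",
             *self.history,
             f"{self.name}:"])
        reply = call_llm(prompt)
        self.history.append(f"{self.name}: {reply}")
        return reply

guide = NPC("Marina", "A friendly dockworker in a virtual port town.")
answer = guide.respond("Hi! Where am I?")
```

The creator-facing knobs Inworld describes, such as personality and accessible knowledge, correspond to what goes into that persona block.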

Large language models are also used by developers to speed up back-end processes like generating code.

How XR Gives Back

So far, we’ve talked about ways in which artificial intelligence makes XR experiences better. However, the opposite is also true, with XR helping to strengthen AI for other uses and applications.

Evolving AI

We’ve already seen that some approaches to artificial intelligence are modeled after the human brain. We know that the human brain developed essentially through trial and error as it rose to meet the needs of our early ancestors. So, what if virtual brains had the same opportunity?

Martine Rothblatt, PhD, describes that very opportunity in the excellent book “Virtually Human: The Promise – and the Peril – of Digital Immortality”:

“[Academics] have even programmed elements of autonomy and empathy into computers. They even create artificial software worlds in which they attempt to mimic natural selection. In these artificial worlds, software structures compete for resources, undergo mutations, and evolve. Experimenters are hopeful that consciousness will evolve in their software as it did in biology, with vastly greater speed.”

Feeding AI

Like any emerging technology, people’s expectations of artificial intelligence can grow faster than AI’s actual capabilities. AI learns by having data entered into it. Lots of data.

For some applications, there is a lot of extant data for artificial intelligence to learn from. But, sometimes, the answers that people want from AI don’t exist yet as data from the physical world.

“One sort of major issue of training AI is the lack of data,” Treble Technologies CEO Finnur Pind told ARPost in a recent interview.

Treble Technologies creates realistic sound in virtual environments. To train an artificial intelligence to work with sound, you need audio files. Historically, these were painstakingly sampled, with different objects producing different sounds in different environments.

Usually, during the early design phases, an architect or automotive designer will approach Treble to predict what audio will sound like in a future space. However, Treble can also use its software to generate specific sounds in specific environments to train artificial intelligence without all of the time- and labor-intensive sampling. Pind calls this “synthetic data generation.”

The AI-XR Relationship Is “and” Not “or”

Holding up artificial intelligence as the new technology on the block that somehow takes away from XR is an interesting narrative. However, experts are in agreement that these two emerging technologies reinforce each other – they don’t compete. XR helps AI grow in new and fantastic ways, while AI makes XR tools more powerful and more accessible. There’s room for both.


virbela-creates-virtual-cannes-lions-festival-with-pwc

Virbela Creates Virtual Cannes Lions Festival With PwC

This year marks the 70th annual Cannes Lions International Festival of Creativity. The event takes place in the iconic town in the south of France, but why shouldn’t a festival of creativity also take place in the virtual world? To extend opportunities for clients and customers to experience the event, PwC worked with Virbela to create a virtual option.

The virtual Cannes Lions Festival coincides with the in-person event from June 19-23, but I got a private advance tour of the custom Virbela campus from two of the event’s creators: Virbela President and co-founder Alex Howland and PwC Global Metaverse Leader Roberto Hernandez.

Welcome to the (Virtual) Cannes Lions

When you think of Cannes, you might not immediately think of a company like PwC. While the Cannes Film Festival attracts the most media headlines, the Lions is an event of a much broader scope as it recognizes creativity across everything from film and music to brands and marketing.

The Cannes Lions Festival is naturally associated with the city in which it takes place – which sits on the southern coast of France on the Mediterranean Sea between Marseille and Nice. Some events consist of the traditional sort of panels and presentations, but companies also present on the beach or on docked boats. Of course, not everyone can make that kind of trip.

Recreating Cannes

“What we are doing with this collaboration is giving people from around the world the opportunity to see what can be done with these kinds of immersive environments,” Hernandez said. “We also wanted to give people the opportunity to join remotely, but not just through a video call.”

Virbela and PwC Cannes Campus Outside

Hernandez acknowledged that remote events already exist around the Cannes Lions, but that they usually consist of someone giving reports day-by-day via a video. Virbela is able to get a lot closer to the real thing – boats and all.

“We wanted to give the campus that you know so well some French Riviera flare,” said Howland.

The intention was never to create an exact twin of the Cannes Lions venue, but to create a festive atmosphere in the familiar campus.

“It really takes you to a different place in a way that video conferencing doesn’t allow you to do,” said Hernandez. “I’m convinced, particularly when I talk to large-brand clients, that venues like these allow you to stretch the experience.”

Virbela With an Accent

That familiar Virbela campus (which got a sweeping redesign and beautiful graphics overhaul last year) is already an island complete with sand beaches, speed boats, volleyball nets, campfire sites, and tall pines. The bespoke world created for the Cannes Lions has more palm trees and yachts, making it a little more Mediterranean and a little less Pacific Northwest.

Virbela and PwC Cannes Yacht

Some of the buildings, particularly the ones more inland, are in the classic Virbela style. However, the main event buildings near the venue’s entrance have gotten a European facelift as well. Combined with flags, photo backdrops, and a red carpet (of course), the campus’ main stretch is decidedly more cosmopolitan.

“It was a match of very creative people from both sides (…) everyone approached this not as different teams working together, but as one team working on a dream experience,” said Hernandez. “We didn’t want this to be just inviting people to an auditorium to hear someone lecture.”

Hernandez, who was particularly excited about driving speedboats in Virbela, specifically mentioned assembling a team consisting of both experienced developers and younger developers who would be able to work together on an attractive, interactive, and engaging experience.

“PwC’s experience with these events and events like it has been very valuable,” said Howland.

Virbela in the Browser

As someone who has been covering Virbela and its events for a number of years, something about this experience excited me that might not register with some of the people joining the virtual Cannes Lions. I joined in a browser.

Jon in Virbela Cannes campus

Historically, Virbela has been app-based – even for limited-time events like iLRN or AfroTech World. WebXR experiences have been handled by a special branch of the company called Frame.

Frame comes with some smaller venues built in, but has been growing in its ability to create larger and more visually impactful virtual spaces. For someone who knows their way around the metaverse, connecting a Ready Player Me avatar is quick and easy. The platform also recently got an avatar upgrade, though the avatars still aren’t as customizable as Virbela’s.

To use the full modular Virbela campus and its more nuanced avatar generation engine, the team worked with NVIDIA to stream the world in-browser rather than require an app download. This also helped to make the experience more accessible for visitors.

“We have a number of clients with very strict firewalls and procedures,” said Hernandez. “For some of them, being able to download anything at all is a big challenge.”

Exploring the Virbela Riviera

While the actual event will obviously have a lot more energy when all of the attendees sign in, a number of attractions were already up and running for me to explore. One of the recurring ironies in immersive worlds is that events meant to replicate in-person events also provide experiences that no in-person party would allow. The virtual Cannes Lions are no different.

Virbela and PwC Cannes Theatre Inside

PwC didn’t just work with Virbela on this one activation, and Virbela doesn’t just do events. Virbela offers virtual offices for global companies and distributed teams, including PwC. One exhibit in the Lions campus allows visitors to see the digital twins of PwC offices around the world.

Another experience, made possible by the event streaming online, uses a Frame integration to transport users into a series of more interactive virtual worlds. These show the effects that climate change could have on cities around the world where PwC has offices. Visitors can also explore different climate scenarios and the results of potential intervention methods.

An Exclusive Event

Virbela is making the Cannes Lions more accessible than ever before. However, it’s still not open to everyone – select PwC clients and partners received email invitations.

“We never conceived this to be something attended by hundreds of thousands of people,” said Hernandez. “We want it to be special.”

The expected headcount? Around a thousand or so. It might sound like a lot for one virtual world, but Howland isn’t worried. Virbela has been tested at higher pressures than that.

“We regularly get over two thousand people in a campus,” said Howland. “One of the benefits of our new avatar system is that it’s a bit more performant, so we’ll be able to see that play out.”

Maybe One Day…

This is an ambitious project and it will be interesting to see whether it scales over time. While this extension of the Cannes Lions isn’t for everyone, we’ve already seen events like Fashion Week and Burning Man grow into immersive worlds and make them accessible to just about everyone. Maybe someday soon we will all be able to attend this event across the metaverse.


“affordable-haptic-glove”-crushes-indiegogo-campaign,-shipping-this-autumn

“Affordable Haptic Glove” Crushes Indiegogo Campaign, Shipping This Autumn

Haptic gloves allow users to interact much more intuitively and convincingly with virtual objects and virtual worlds. As such, they are viewed by many to be the next opportunity to increase the sense of immersion provided by XR technology.

Unfortunately, haptic gloves are still pretty cutting-edge technology. As such, currently available products on the market are largely limited to dev kits priced for researchers or enterprise users. However, one company, Bifrost, just passed its crowdfunding goal for its first product – the Pulse haptic glove – which could make haptics affordable to many more users.

Meet the Pulse Haptic Glove From Bifrost

“We designed Pulse with enthusiasts in mind every step of the way,” reads the product’s crowdfunding page. “After nearly five years of development, we’re proud to say that Pulse is the most accessible haptic glove on the market.”

The glove, which consists of motors resting over the knuckles that apply pressure by pulling caps worn on the fingertips, went on Indiegogo with a goal of $20,000 but had raised more than three times that much by the time the backing window closed. And yes, those finger caps also have integrated trackers for precise controls within XR.

“Our glove offers precise finger tracking and realistic haptic feedback, creating a new level of immersion,” Bifrost founder Sam Baker said in an email to ARPost. “Pulse is a promising alternative to existing haptic devices that are out of reach for most consumers due to the steep price.”

Pulse haptic glove

The gloves are going for $300. Compare that to the $4,500 HaptX G1 launched last year. The glove appears to have a more “open” design than some competitors, but that may not be the final iteration. According to the crowdfunding page, the product is still in the prototyping stage.

Fitting in the Market

There are a few companies out there making gloves for haptics and finger tracking. Other models are larger and more expensive, but that doesn’t mean that they aren’t worth thinking about. Even if they’re out of your price range, understanding other haptic gloves can help you wrap your head around what the team at Bifrost is doing.

Most of the heavy hitters use a similar basic mechanism to Pulse – most are a little smaller and all are more contained. But keep in mind that we’re seeing a prototype from Bifrost. We should be thanking them for showing us such an unobstructed view of how haptics work – even if we hope they make improvements before launch.

Different products also enable different kinds of haptic sensations – sometimes in tandem. Tensile resistance isn’t the only sensation that your hands can feel – even if it is a really good start.

We drew a price comparison with HaptX, but those gloves aren’t 15 times more expensive for no reason. In addition to an arguably fairly standard grip simulation system, HaptX gloves have “hundreds of microfluidic actuators across your fingers and palms” that provide a far more immersive haptic experience – and require a massive piece of hardware called an “Airpack.”

SenseGlove offers a middle ground – a bulkier package than Pulse, but less ambitious haptics than HaptX. The closest thing on the market to Pulse in terms of form factor might be MANUS, though MANUS focuses on tracking rather than haptics and is priced above HaptX.

Would You Try Them On?

The bad news is that Pulse costs almost as much as a new entry-level headset. The good news is that that’s an order of magnitude less than its nearest competitor. While many price-conscious users will no doubt go without even these most affordable haptic gloves, Pulse will likely do what the manufacturer intends – open up the market to enthusiasts and “prosumers.”

“Affordable Haptic Glove” Crushes Indiegogo Campaign, Shipping This Autumn Read More »

10-top-use-cases-of-blockchain-in-virtual-reality

10 Top Use Cases of Blockchain in Virtual Reality

When talking about blockchain technology, we should keep in mind that this is a new and still-evolving industry. Blockchain already has several use cases across different industries, including virtual reality.

Here are the top use cases of blockchain in VR.

1. Decentralized Asset Ownership

Blockchain can be used to verify and track the ownership of virtual assets in VR, making fraudulent transactions far harder to pull off. One example is DeFi (decentralized finance), which reduces the risk of fraud and theft by offering an emerging model for organizing and enabling cryptocurrency-based transactions without relying on intermediaries or traditional banks.
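To make the ownership-tracking idea concrete, here is a minimal, hypothetical sketch in Python – not any particular platform’s implementation. It models an append-only, hash-linked ledger: each transfer record is chained to the previous one, so tampering with any past record is detectable. The asset and owner names are invented for illustration.

```python
import hashlib
import json

class AssetLedger:
    """Illustrative hash-linked ledger of virtual-asset transfers."""

    def __init__(self):
        self.records = []  # list of (record_dict, record_hash)

    def _hash(self, record, prev_hash):
        # Hash the record together with the previous hash to chain entries.
        payload = json.dumps(record, sort_keys=True) + prev_hash
        return hashlib.sha256(payload.encode()).hexdigest()

    def transfer(self, asset_id, new_owner):
        prev_hash = self.records[-1][1] if self.records else "genesis"
        record = {"asset": asset_id, "owner": new_owner}
        self.records.append((record, self._hash(record, prev_hash)))

    def owner_of(self, asset_id):
        # The most recent record for the asset determines ownership.
        for record, _ in reversed(self.records):
            if record["asset"] == asset_id:
                return record["owner"]
        return None

    def verify(self):
        # Recompute every hash; any altered record breaks the chain.
        prev_hash = "genesis"
        for record, h in self.records:
            if self._hash(record, prev_hash) != h:
                return False
            prev_hash = h
        return True

ledger = AssetLedger()
ledger.transfer("vr-parcel-42", "alice")
ledger.transfer("vr-parcel-42", "bob")
print(ledger.owner_of("vr-parcel-42"))  # bob
print(ledger.verify())                  # True
```

Real blockchains add consensus across many nodes so that no single party controls the ledger; this sketch only shows the tamper-evidence property that makes ownership records trustworthy.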

2. VR Advertising and Sponsorship

Blockchain and VR can work together to verify sponsorships and traffic, promoting transparency and trust between brands and consumers. Through blockchain, advertising and sponsorship in virtual reality can be made more secure and reliable.

By using blockchain-based smart contracts, brands and advertisers can ensure that their ads are displayed to their target audience in the virtual reality environment. This, in turn, can increase the relevance of the ads and improve the ROI for the advertisers.

Additionally, blockchain can also help to track and verify the traffic generated by the ads. By recording each interaction with the ad on the blockchain, advertisers can get a better understanding of the effectiveness of their VR advertising campaigns. This level of transparency can help to build trust between the brand and the consumer.

Blockchain can also be used to prevent fraudulent advertising practices. By using a decentralized system that relies on a consensus mechanism, blockchain ensures that the data is secure and tamper-proof. This can prevent scenarios where advertisers pay for fake traffic generated by bots or other illegitimate means.

In conclusion, the use of blockchain technology in virtual reality advertising and sponsorship can offer numerous benefits to both brands and consumers. By increasing transparency, security, and reliability, blockchain can help to build trust and improve the overall experience for everyone involved.

3. Secure VR Marketplaces

Decentralized marketplaces on the blockchain can be used to facilitate secure transactions of virtual goods and services, reducing fraud and ensuring safe transactions.

4. Virtual Reality Gaming

Blockchain-based rewards can be used to incentivize gamers to contribute to better gameplay, thus increasing engagement in VR gaming environments.

5. Rights Management

Another potential use case is the registration of copyright, publisher, and distribution rights to control the distribution and licensing of virtual reality content, ensuring that content creators receive fair compensation.

6. Virtual Real Estate

With the increasing popularity of virtual worlds, blockchain technology can be utilized to enable the ownership and trading of virtual real estate. This allows users to build and own property in virtual worlds and potentially earn revenue from it.

7. Virtual Currency

Virtual currencies used in VR environments can also benefit from blockchain technology. Blockchain can enable secure, transparent, and decentralized transactions for virtual reality commerce, as well as provide a more stable and reliable virtual currency system.

8. Identity Management

Blockchain technology can be used to manage and protect user identities in virtual environments, providing a more secure and trustworthy system for verification and authentication.

9. Cross-Platform Interoperability

Blockchain technology can enable interoperability between different VR platforms, allowing users to seamlessly interact with each other and access content across different platforms.

10. Decentralized Autonomous Organizations (DAOs)

Using blockchain technology, DAOs can be formed in virtual reality environments to establish decentralized decision-making and governance structures. This can enable community-driven development and decision-making in virtual worlds.
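As a hedged sketch of how DAO governance typically works, the Python snippet below models token-weighted voting: each member’s voting power equals their governance-token balance, and a proposal passes once “yes” votes exceed a quorum. In a real DAO this logic runs in an on-chain smart contract; the members, balances, and quorum here are invented for illustration.

```python
from collections import defaultdict

class DAO:
    """Illustrative token-weighted voting for a virtual-world DAO."""

    def __init__(self, balances, quorum=0.5):
        self.balances = balances          # member -> governance-token balance
        self.quorum = quorum              # fraction of total supply needed
        self.votes = defaultdict(dict)    # proposal -> {member: supports?}

    def vote(self, proposal, member, support):
        # Only token holders may vote; a re-vote overwrites the old one.
        if member in self.balances:
            self.votes[proposal][member] = support

    def passes(self, proposal):
        total = sum(self.balances.values())
        yes = sum(self.balances[m]
                  for m, s in self.votes[proposal].items() if s)
        return yes / total > self.quorum

dao = DAO({"alice": 60, "bob": 25, "carol": 15})
dao.vote("build-plaza", "alice", True)
dao.vote("build-plaza", "bob", False)
print(dao.passes("build-plaza"))  # True (60/100 > 0.5)
```

Token-weighted voting is only one governance scheme; some DAOs use one-member-one-vote or quadratic voting instead, but the pattern of recording votes transparently and tallying them by rule is the same.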

Conclusion

The use cases of blockchain in virtual reality demonstrate the potential for these two emerging technologies to work together and create innovative solutions for a variety of industries.

Blockchain technology can provide an additional layer of security and transparency to the virtual world, enabling safer transactions and the protection of users’ personal information.

Guest Post


About the Guest Author(s)

Amir Ashraf

Amir is an expert editor and writer in the blockchain industry. After completing his studies, he began exploring different aspects of the blockchain industry and set out to make them easier to understand by writing about the techniques used in blockchain. He continues to cover new developments in this still-evolving field.

10 Top Use Cases of Blockchain in Virtual Reality Read More »