Immersive Technology


Challenges Behind Applying Real-World Laws to XR Spaces and Ensuring User Safety

Immersive technologies that bridge the gap between the physical and digital worlds can create new business opportunities. However, they also give rise to new challenges in regulation and in applying real-world laws to XR spaces. According to a World Economic Forum report, we have been relatively slow to develop new legal frameworks for emerging technologies like AR and VR.

Common Challenges of Applying Laws to AR and VR

XR technologies like AR and VR are already considered beneficial and are used in industries like medicine and education. However, XR still harbors risks to human rights, according to an Electronic Frontier Foundation (EFF) article.

Issues like data harvesting and online harassment pose real threats to users, and self-regulation when it comes to data protection and ethical guidelines is insufficient in mitigating such risks. Some common challenges that crop up when applying real-world laws to AR and VR include intellectual property, virtual privacy and security, and product liability.

There’s also the need for a new framework tailored to fit emerging technologies, but legislative attempts at regulation may face several hurdles. It’s also worth noting that while regulation can help keep users safe, it may also potentially hamper the development of such technologies, according to Digikonn co-founder Chirag Prajapati.

Can Real-World Laws Be Applied to XR Spaces?

In an interview with IEEE Spectrum in 2018, Robyn Chatwood, an intellectual property and information technology partner at Dentons Australia, gave an example of an incident that occurred in a VR space where a user experienced sexual assault. Unfortunately, Chatwood remarked that there are no laws saying that sexual assault in VR is the same as in the real world. When asked when she thinks these issues will be addressed, Chatwood remarked that, in several years, another incident could draw more widespread attention to the problems in XR spaces. It’s also possible that, through increased adoption, society will begin to recognize the need to develop regulations for XR spaces.

On a more positive note, the trend toward regulating XR spaces has been changing recently. For instance, Meta has rolled out a minimum distance between avatars in Horizon Worlds, its VR social media platform. This boundary prevents other avatars from getting into your avatar’s personal space, and works by halting a user’s forward movement as they approach the boundary.
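In pseudocode terms, a personal-boundary system like the one described above can be reduced to a proximity check before each movement step. The sketch below is a minimal illustration only; the radius, names, and logic are assumptions, not Meta's actual implementation.

```python
import math

# Assumed boundary radius; Meta's actual value and logic are not public here.
BOUNDARY_RADIUS = 1.2  # metres between avatar centres

def clamp_forward_motion(pos, step, others, radius=BOUNDARY_RADIUS):
    """Return the avatar's new 2D position, halting any step that would
    enter another avatar's personal boundary."""
    candidate = (pos[0] + step[0], pos[1] + step[1])
    for other in others:
        if math.dist(candidate, other) < radius:
            return pos  # halt: this step would breach the other avatar's boundary
    return candidate

# A user walking toward another avatar is stopped before entering the boundary.
print(clamp_forward_motion((0.0, 0.0), (0.5, 0.0), [(1.0, 0.0)]))  # (0.0, 0.0)
```

The key design point is that the check runs on every movement update, so a user can never cross into the protected zone even by accumulating small steps.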

There are also new laws being drafted to protect users in online spaces. In particular, the UK’s Online Safety Bill, which had its second reading in the House of Commons in April 2022, aims to protect users by ensuring that online platforms have safety measures in place against harmful and illegal content and covers four new criminal offenses.

In the paper The Law and Ethics of Virtual Assault, author John Danaher proposes a broader definition of virtual sexual assault that allows for what he calls the different “sub-types of virtual sexual assault.” Danaher also offers suggestions on when and how virtual acts of this kind should be criminalized, and touches on topics like consent and criminal responsibility for such crimes.

There’s even a short film that brings to light pressing metaverse concerns. Privacy Lost aims to educate policymakers about the potential dangers, such as manipulation, that come with emerging technologies.

While many legal issues in the virtual world are resolved through criminal courts and tort systems, these approaches, according to Gamma Law’s David B. Hoppe, lack the nuance and context needed to resolve such disputes. Hoppe remarks that real-world laws may not have the specificity to tackle new privacy issues in XR spaces, and argues that a more nuanced legal strategy and tailored legal documents are needed to help protect users in XR spaces.

Issues with Existing Cyber Laws

The novelty of AR and VR technologies makes it challenging to implement legislation. However, for users to maximize the benefits of such technologies, their needs should be considered by developers, policymakers, and organizations that implement them. While cyber laws are in place, persistent issues still need to be tackled, such as challenges in executing sanctions for offenders and the lack of adequate responses.

The United Nations Office on Drugs and Crime (UNODC) also cites several obstacles to cybercrime investigations, such as user anonymity from technologies, attribution, which determines who or what is responsible for the crime, and traceback, which can be time-consuming. The UNODC also notes that the lack of coordinated national cybercrime laws and international standards for evidence can hamper cybercrime investigations.

Creating Safer XR Spaces for Users

Based on guidelines provided by the World Economic Forum, there are several key questions that legislators should consider. These include how laws and regulations apply to XR conduct governed by private platforms and how rules might apply when an XR user’s activities have direct, real-world effects.

The XR Association (XRA) has also provided guidelines to help create safe and inclusive immersive spaces. Its conduct policy tips to address abuse include creating tailored policies that align with a business’ product and community and including notifications of possible violations. Moreover, the XRA has been proactive in rolling out measures for the responsible development and adoption of XR. For instance, it has held discussions on user privacy and safety in mixed reality spaces, zeroing in on how developers, policymakers, and organizations can better promote privacy, safety, and inclusion, as well as tackle issues that are unique to XR spaces. It also works with XRA member companies to create guidelines for age-appropriate use of XR technology, helping develop safer virtual spaces for younger users.

Other Key Players in XR Safety

Aside from the XRA, other organizations are also taking steps to create safer XR spaces. X Reality Safety Intelligence (XRSI), formerly known as X Reality Safety Initiative, is one of the world’s leading organizations focused on providing intelligence and advisory services to promote the safety and well-being of ecosystems for emerging technologies.

It has created a number of programs that help tackle critical issues and risks in the metaverse, focusing on aspects like diversity and inclusion, trustworthy journalism, and child safety. For instance, the organization has shown support for the Kids PRIVACY Act, legislation that aims to implement more robust measures to protect younger users online.

XRSI has also published research and shared guidelines to create standards for XR spaces. It has partnered with Standards Australia to create the first-ever Metaverse Standards whitepaper, which serves as a guide for standards in the metaverse to protect users against metaverse-specific risks, categorized as Human Risks, Regulatory Risks, Financial Risks, and Legal Risks, among others.

The whitepaper is a collaborative effort that brings together cybersecurity experts, VR and AR pioneers, strategists, and AI and metaverse specialists. One of its authors, Dr. Catriona Wallace, is the founder of the social enterprise The Responsible Metaverse Alliance. Cybersecurity professional Kavya Pearlman, the founder and CEO of XRSI, is also one of its authors. Pearlman works with various organizations and governments, advising on policymaking and cybersecurity to help keep users safe in emerging technology ecosystems.

One issue highlighted by XRSI is the set of risks that come with XR data collection in three areas: medical XR and healthcare, learning and education, and employment and work. The report highlights how emerging technologies create new privacy and safety concerns, with risks such as a lack of inclusivity, a lack of equality in education, and a lack of experience in using data collected in XR spaces cropping up.

In light of these issues, the XRSI has created goals and guidelines to help address these risks. Some of the goals include establishing a standards-based workflow to manage XR-collected data and adopting a new approach to classifying such data.

The EU is also taking steps to ensure data protection in emerging technologies, with new EU laws aiming to complement the GDPR’s requirements for XR technologies and services. Moreover, EU data protection law applies to most XR technologies, particularly commercial applications, and a user’s explicit consent may be required to make data processing operations legitimate.

According to the Information Technology & Innovation Foundation (ITIF), policymakers need to mitigate so-called regulatory uncertainty by making it clear how and when laws apply to AR and VR technologies. The same ITIF report stresses that they need to collaborate with stakeholder communities and industry leaders to create and implement comprehensive guidelines and clear standards for AR and VR use.

However, while creating safer XR spaces is of utmost importance, the ITIF also highlights the risks of over-regulation, which can stifle the development of new technologies. To mitigate this risk, policymakers can instead focus on developing regulations that help promote innovation in the field, such as creating best practices for law enforcement agencies to tackle cybercrime and focusing on funding for user safety research.

Moreover, the ITIF also provides some guidelines regarding privacy concerns from AR in public spaces, as well as what steps leaders and policymakers could take to mitigate the risks and challenges that come with the use of immersive technologies.

The EFF also shares that governments need to execute or update data protection legislation to protect users and their data.

There is still a long way to go when applying real-world laws to XR spaces. However, many organizations, policymakers, and stakeholders are already taking steps to help make such spaces safer for users.



Why Emerging Tech is Both the Cause and Solution of Tomorrow’s Labor Challenges

The post-pandemic workforce is experiencing several significant shifts, particularly in how organizations tackle labor challenges and approach talent acquisition. One of the key factors for this disruption is the emergence of new, game-changing technologies like AI and machine learning.

Today’s organizations are facing staffing needs and talent shortages due to the Great Resignation, prompting them to respond to an uncertain future by shifting how they approach the talent acquisition process.

For this article, we interviewed Nathan Robinson, CEO of the workforce learning platform Gemba, to discuss the future of work and the workplace. We’ll also shed more light on how new technologies and developments are shaping the future of talent acquisition.

Rethinking the Traditional Talent Acquisition Process

According to Robinson, today’s talent acquisition process vastly differs from what it was like a few years ago. Thanks to emerging technologies such as AI, VR, and quantum computing, many jobs considered in demand today didn’t even exist a decade ago. He adds that this trend will only become more pronounced as technological advancement continues to accelerate.

“As a result, corporations will no longer be able to rely on higher education to supply a steady stream of necessary talent. Instead, organizations will have to hire candidates based on their ability and willingness to learn and then provide the necessary training themselves,” he remarked.

He added that, up to a year ago, no one had heard of ChatGPT, and no one even knew what “generative AI” meant. Today, you can find job listings for prompt engineers and large language model specialists. Robinson also shared that technological advancement isn’t linear, with each innovation advancing and accelerating the pace of development, which can potentially change how organizations approach the talent acquisition process.

“We can rightly assume that in five or ten years’ time, there will be a whole host of new positions that today we can’t reasonably predict, much less expect there to be a sufficient number of individuals already skilled or trained in that role,” Robinson told us. “That’s why we will almost certainly see a renewed focus on talent development, as opposed to acquisition, in the near future.”

How Emerging Technologies Are Changing How Organizations Look At and Acquire Talent

According to Robinson, some of the factors that have prompted this shift include the pandemic, the rise of remote and hybrid work, the Great Resignation, and Quiet Quitting. He noted that because of these shifts, the “goals and psychology of the modern worker have changed dramatically.”

“This is why now, more than ever before, organizations must be clear and intentional about the culture they cultivate, the quality of life they afford, and the opportunities for learning and growth they provide their employees,” Robinson said. “These types of ‘non-traditional’ considerations are beginning to outweigh the cut-and-dry, compensation-focused costs associated with attracting top talent in some senses.”

He also shared that this new talent acquisition process can impact organizations over time, prompting them to shift away from recruitment and instead focus more on internal employee development. According to a Gartner report, 46% of HR leaders see recruitment as their top priority.

However, Robinson thinks that, as new technologies offer better solutions to labor challenges, such as on-the-job training, this number will steadily decline as HR professionals gradually focus on developing existing talent.

Emerging Tech as Both the Cause and Solution of Future Labor Challenges

“Advanced technologies, such as AI, XR, and quantum computing, are the driving force behind the looming skills gap in that they are leading to the development of new types of roles for which we have very few trained professionals,” said Robinson.

A World Economic Forum report estimates that, by 2027, machines will complete 43% of tasks that used to be completed by humans – a significant shift from 2022’s 34%. Moreover, an estimated 1.1 billion jobs may be transformed by technology in the next ten years.

While emerging technologies are prompting labor challenges, they can also be seen as a solution. Robinson adds that these emerging technologies, particularly XR, can help organizations overcome the skills gap. According to him, such technologies can help organizations facilitate more efficient, cost-effective, and engaging training and development, thus allowing them to overcome such challenges.

To help potential employees overcome the upcoming skills disconnect, Robinson notes that the training should begin with management, using top-down managerial strategies and lean and agile development methodologies.

Overcoming Today’s Labor Challenges

“Today, talent acquisition is seen as a key differentiator between successful and unsuccessful companies. While I think that will continue to hold true, I also think it will soon take a backseat to employee training and development,” Robinson said. “The industry leader will no longer be whoever is able to poach the best talent. It will soon be whoever is able to train and develop their existing talent to keep pace with the changing technological and economic landscape.”

At the end of the day, according to Robinson, embracing the unknown future of work and the workplace is about being ready for anything.

“As the rate of technological advancement continues to accelerate, the gap between what we imagine the near future will be and what it actually looks like will only grow,” Robinson remarked. He suggests that instead of trying to predict every last development, it’s better to be agile and ready for the unpredictable. This means staying on top of new technologies and investing in tools to help organizations become more agile.



European Council Publishes Web 4.0 Strategy

The European Commission is already setting out to tackle Web 4.0. There’s quite a bit to unpack here, including the EC approach, the 4-point plan that they recently published, and – of course – what they mean by Web 4.0.

What Is Web 4.0?

It’s not a typo and you’re not asleep at the wheel. While most of us haven’t gotten the hang of Web 3.0 yet, Europe is already setting the table for Web 4.0. Don’t worry, this is just a new terminology for something that’s already on your radar.

“Beyond the currently developing third generation of the internet, Web 3.0, whose main features are openness, decentralization, and user empowerment, the next generation, Web 4.0, will allow an integration between digital and real objects and environments and enhanced interactions between humans and machines,” reads the EC’s report.

So, essentially, “Web 4.0” is the metaverse. But, why not just call it that?

Webs and the Metaverse

The metaverse discussion at least started out as being largely a conversation within the world of immersive technology, with discussions of Web3 largely being topics within the blockchain and crypto spaces. (“Web3” and “Web 3.0” aren’t exactly the same concept, but both largely revolve around decentralization, so they’re more-or-less interchangeable for most levels of discussion.)

As voices from the cryptocurrency and blockchain communities promised that these technologies would be the future of a cross-platform, self-owned online future, Web3 and the metaverse were increasingly mentioned in the same breath with both being apparently convergent visions of the future.

The explosion of interest in the metaverse was short-lived largely because – while the pieces are certainly falling into place – one connected metaverse hasn’t yet fully materialized. While there are more-or-less realized metaverse spaces and use cases, the all-encompassing digital layer of reality isn’t here yet. Web3, while struggling with adoption, is largely functional today.

While some may groan at the introduction of yet another idealistic tech concept, “Web 4.0” does offer some clarity at least with regard to what the EC is talking about. First, it respects that the metaverse is still a thing of the (near?) future. Second, it ties in the themes of openness and decentralization that were lacking in many metaverse discussions.

Finally, it ties in “interactions between humans and machines.” While some technologists have long included this aspect in their discussions of the metaverse, recent developments in AI have driven increased interest in this field, even after blockchain and the metaverse had their moments in the media over the last few years.

Bracing for Web 4.0

While it’s easy to feel like much of the world is still catching up with the previous generation of the internet, how is Europe planning to get ahead of the next generation of the internet? A lot of it has to do with knowing where current experts are and creating pathways for future builders.

To make that happen, the report outlines four “Key Strategy Pillars”:

  1. Empowering people and reinforcing skills to foster awareness, access to trustworthy information, and building a talent pool of virtual world specialists.
  2. Supporting a European Web 4.0 industrial ecosystem to scale up excellence and address fragmentation.
  3. Supporting local progress and virtual public services to leverage the opportunities virtual worlds can offer.
  4. Shaping global standards for open and interoperable virtual worlds and Web 4.0, ensuring they will not be dominated by a few big players.

One of the reasons that so much of the strategy has to do with ideas like “empowering people” and “leveraging opportunities” might be that much of the document was distilled from an earlier workshop of 150 randomly selected European citizens. The average person is likely feeling left behind by Web 2.0 and out of the loop on Web 3.0.

The European Perspective

“Ensuring that [virtual worlds] will not be dominated by a few big players” may not be a uniquely European feeling, but it’s interesting to note. Meta, in particular, has gotten into trouble in EU member countries like Germany for the equivalent of antitrust concerns, which has opened the way for Pico to make headway in European markets free from its US political struggles.

At the most recent Augmented World Expo – just before Apple announced their first XR headset – some speakers even expressed concern that Apple will be able to throw its weight around the industry in a way that not even Meta enjoys.

“Apple currently holds so much power that they could say, ‘This is the way we’re going to go,’ and the Metaverse Standards Forum could stand up and say, ‘No,’” XRSI founder and CEO Kavya Pearlman said during a panel discussion at this year’s AWE.

Standards are a concern everywhere, but this is another area where the approach is somewhat different across the Atlantic. A number of standards groups have formed in the US, but all of them are independent groups rather than governmental initiatives – though some groups are calling for regulators to step into the space over concerns like privacy.

Thinking Globally About Web 4.0

“Europe is, in many ways, a first mover on metaverse policy, and it is putting forward a positive vision for the future of immersive technology,” the XRA’s VP of Public Policy Joan O’Hara said in an email to ARPost. “We very much appreciate the [European Commission’s] approach to balancing user protection and wellbeing with the desire to support innovation and adoption.”

The headquarters of Web 3.0 and Web 4.0 companies might be in one country or another, but most of them are offering international services. Unless they want to have different (and potentially incompatible) versions of those services available for different countries, it behooves those companies to have services that fit all national standards.

So, in the absence of officially codified US standards for immersive worlds, it is likely that the services offered to American audiences might fit into the shape described by groups like the European Commission. Fortunately, most of the organizations already looking at these problems are also international in nature and work with and between national governments.

“This will serve as a model going forward,” said O’Hara. “The XRA has been actively engaged with both European and British colleagues on these issues, and we believe the US interests are largely aligned with those of our friends across the Atlantic.”

Thinking Ahead

US discussions of Web 3.0 have largely spiraled around the nation’s failure to prepare for or recover from Web 2.0. The fact that Europe is already looking forward to Web 4.0 is definitely something to consider. In emerging tech, looking backward instead of forward is a dangerous strategy.



“PRIVACY LOST”: New Short Film Shows Metaverse Concerns

Experts have been warning that, as exciting as AI and the metaverse are, these emerging technologies may have negative effects if used improperly. However, it seems like the promise of these technologies may be easier to convey than some of the concerns. A new short film, titled PRIVACY LOST, is a theatrical exploration of some of those concerns.

To learn more, ARPost talked with the writer of PRIVACY LOST – CEO and Chief Scientist of Unanimous AI and a long-time emerging technology engineer and commentator, Dr. Louis Rosenberg.

PRIVACY LOST

Parents and their son sit in a restaurant. The parents are wearing slim AR glasses while the child plays on a tablet.

As the parents argue with one another, their glasses display readouts of the other’s emotional state. The husband is made aware when his wife is getting angry and the wife is made aware when her husband is lying.


A waiter appears and the child puts down the tablet and puts on a pair of AR glasses. The actual waiter never appears on screen but appears to the husband as a pleasant-looking tropical server, to the wife as a fit surf-bro, and to the child as an animated stuffed bear.


Just as the husband and wife used emotional information about one another to try to navigate their argument, the waiter uses emotional information to try to most effectively sell menu items – aided through 3D visual samples. The waiter takes drink orders and leaves. The couple resumes arguing.


PRIVACY LOST presents what could be a fairly typical scene in the near future. But, should it be?

“It’s short and clean and simple, which is exactly what we aimed for – a quick way to take the complex concept of AI-powered manipulation and make it easily digestible by anyone,” Rosenberg says of PRIVACY LOST.

Creating the Film

“I’ve been developing VR, AR, and AI for over 30 years because I am convinced they will make computing more natural and human,” said Rosenberg. “I’m also keenly aware that these technologies can be abused in very dangerous ways.”

For as long as Rosenberg has been developing these technologies, he has been warning about their potential societal ramifications. However, for much of that career, people have viewed his concerns as largely theoretical. As first the metaverse and now AI have developed and attained their moments in the media, Rosenberg’s concerns take on a new urgency.

“ChatGPT happened and suddenly these risks no longer seemed theoretical,” said Rosenberg. “Almost immediately, I got flooded by interest from policymakers and regulators who wanted to better understand the potential for AI-powered manipulation in the metaverse.”

Rosenberg reached out to the Responsible Metaverse Alliance. With support from them, the XR Guild, and XRSI, Rosenberg wrote a script for PRIVACY LOST, which was produced with help from Minderoo Pictures and HeadQ Production & Post.

“The goal of the video, first and foremost, is to educate and motivate policymakers and regulators about the manipulative dangers that will emerge as AI technologies are unleashed in immersive environments,” said Rosenberg. “At the same time, the video aims to get the public thinking about these issues because it’s the public that motivates policymakers.”

Finding Middle Ground

While Rosenberg is far from the only person calling for regulation in emerging tech, that concept is still one that many see as problematic.

“Some people think regulation is a dirty word that will hurt the industry. I see it the opposite way,” said Rosenberg. “The one thing that would hurt the industry most of all is if the public loses trust. If regulation makes people feel safe in virtual and augmented worlds, the industry will grow.”

The idea behind PRIVACY LOST isn’t to prevent the development of any of the technologies shown in the video – most of which already exist, even though they don’t work together or to the exact ends displayed in the cautionary vignette. These technologies, like any technology, have the capacity to be useful but could also be used and abused for profit, or worse.

For example, sensors that could be used to determine emotion are already used in fitness apps to allow for more expressive avatars. If this data is communicated to other devices, it could enable the kinds of manipulative behavior shown in PRIVACY LOST. If it is stored and studied over time, it could be used at even greater scales and potentially for more dangerous uses.

“We need to allow for real-time emotional tracking, to make the metaverse more human, but ban the storage and profiling of emotional data, to protect against powerful forms of manipulation,” said Rosenberg. “It’s about finding a smart middle ground and it’s totally doable.”
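Rosenberg's proposed middle ground maps onto a simple software pattern: consume emotion signals transiently to drive an experience, and deliberately never persist or aggregate them. The sketch below illustrates that pattern only; the class, fields, and thresholds are illustrative assumptions, not any real platform's API.

```python
from dataclasses import dataclass

@dataclass
class EmotionSample:
    valence: float   # -1.0 (negative) .. 1.0 (positive), assumed scale
    arousal: float   # 0.0 (calm) .. 1.0 (excited), assumed scale

def expression_for(sample: EmotionSample) -> str:
    """Map a transient emotion sample to an avatar expression.
    The sample is used in real time and is not stored or profiled,
    reflecting the 'use but don't retain' middle ground."""
    if sample.valence > 0.3:
        return "smile"
    if sample.valence < -0.3 and sample.arousal > 0.5:
        return "frown"
    return "neutral"

print(expression_for(EmotionSample(valence=0.6, arousal=0.4)))  # smile
```

The point of the pattern is architectural: because nothing downstream of `expression_for` ever writes the sample to storage, there is no emotional history that could later be mined for manipulation.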

The Pace of Regulation

Governments around the world respond to emerging technologies in different ways and at different paces, according to Rosenberg. However, across the board, policymakers tend to be “receptive but realistic, which generally means slow.” That’s not for lack of interest or effort – after all, the production of PRIVACY LOST was prompted by policymaker interest in these technologies.

“I’ve been impressed with the momentum in the EU and Australia to push regulation forward, and I am seeing genuine efforts in the US as well,” said Rosenberg. “I believe governments are finally taking these issues very seriously.”

The Fear of (Un)Regulated Tech

Depending on how you view the government, regulation can seem scary. In the case of technology, however, it seems to never be as scary as no regulation. PRIVACY LOST isn’t an exploration of a world where a controlling government prevents technological progress, it’s a view of a world where people are controlled by technology gone bad. And it doesn’t have to be that way.



Rethinking Digital Twins

The idea of digital twins has been conceptually important to immersive technology and related ideas like the metaverse for some time – not to mention their practical employment, particularly in enterprise. However, the term doesn’t necessarily mean what it sounds like, and the actual technology has so far been of only limited usefulness for spatial computing.

Advances in immersive technology itself are opening up more nuanced and exciting applications for digital twins as fully-featured virtual artifacts, environments, and interfaces to the point that even experts who have been working with digital twins for decades are starting to rethink the concept.

Understanding Digital Twins

What exactly constitutes a digital twin still differs somewhat from company to company. ARPost defines a digital twin as “a virtual version of something that also exists as a physical object.” This basic definition includes now arguably antiquated iterations of the technology that wouldn’t be of much interest to the immersive tech crowd.

Strictly speaking, a digital twin does not have to be interactive, dynamic, or even visually representative of the physical twin. In academia and enterprise where this concept has been practically employed for decades, a digital twin might be a spreadsheet or a database.

We often think about the metaverse as being like The Matrix – but specifically the way Neo experiences the Matrix, from within. In that same analogy, digital twins are the Matrix as Tank and Dozer experience it: endless numbers that look like nothing but numbers to the uninitiated, yet paint detailed pictures for those in the know.

While that version certainly continues to have its practical applications, it’s not exactly what most readers will have in mind when they encounter the term.

The Shifting View of Digital Twins

“The traditional view of a digital twin is a row in a database that’s updated by a device,” Nstream founder and CEO Chris Sachs told ARPost. “I don’t think that view is particularly interesting or particularly useful.”

Nstream is “a vertically integrated streaming data application platform.” Its work includes digital twins in the conventional sense, but it also includes more nuanced uses that incorporate the conventional while stretching it into new fields. That’s why companies aren’t just comparing definitions; they’re also rethinking how they use these terms internally.

“How Unity talks about digital twins – real-time 3D in industry – I think we need to revamp what that means as we go along,” Unity VP of Digital Twins Rory Armes told ARPost. “We’ve had digital twins for a while […] our evolution or our kind of learning is the visualization of that data.”

This evolution naturally has a lot to do with technological advances, but Armes hypothesizes that it’s also the result of a generational shift. People who have lived their whole lives as regular computer users and gamers have a different approach to technology and its applications.

“There’s a much younger group coming into the industry […] the way they think and the way they operate is very different,” said Armes. “Their ability to digest data is way beyond anything I could do when I was 25.”

Data doesn’t always sound interesting and it doesn’t always look exciting. That is, until you remember that the metaverse isn’t just a collection of virtual worlds – it also means augmenting the physical world. That means lots of data – and doing new things with it.

Digital Twins as a User Interface

“If you have a virtual representation of a thing, you can run software on that representation as though it was running on the thing itself. That’s easier, it’s more usable, it’s more agile,” said Sachs. “You can sort of program the world by programming the digital twin.”

This approach allows limited hardware to provide minimal input to the digital twin, which in turn sends minimal output back to the devices, creating an automated, more affordable, more responsive Internet of Things.

“You create a kind of virtual world […] whatever they decide in the virtual world, they send it back to the real world,” said Sachs. “You can create a smarter world […] but you can’t do it one device at a time. You have to get them to work together.”

This virtual world can be controlled from the backend by VR. It can also be navigated as a user interface in AR.

“In AR, you can kind of intuit what’s happening in the world. That’s such a boost to understanding this complex technical world that we’ve built,” said Sachs. “Google and Niantic haven’t solved it, they’ve solved the photon end of it, the rendering of it, but they haven’t solved the interactivity of it […] the problem is the fabric of the web. It doesn’t work.”

To Sachs, this process of creating connected digital twins of essentially every piece of infrastructure and utility on earth isn’t just the next thing that we do with the internet – it’s how the next generation of the internet comes about.

“The world wide web was designed as a big data repository. The problem is that not everything is a document,” said Sachs. “We’re trying to upgrade the web so everything, instead of being a web page is a web agent, […] instead of a document, everything is a process.”

Rebuilding the World Wide Web

While digital twins can be a part of reinventing the internet, a lot of the tools used to build digital twins are also not made for that particular task. It doesn’t mean that they can’t do the job, it just means that providers and people using those services have to be creative about it.

“The Unity core was never developed for these VR training and geospatial data uses. […] Old-school 3D modeling like Maya was never designed for [spatial data],” said Armes. “That’s where the game engine starts.”

Unity – which is a game engine at heart – isn’t shying away from digital twins. The company works with groups, particularly in industry, to use Unity’s visual resources in these more complex use cases – often behind the scenes on internal projects.

“There are tons of companies that have bought Unity and are using it to visualize their data in whatever form,” said Armes. “People don’t necessarily use Unity to bring a product to the community, they’re using it as an asset process and that’s what Unity does really well.”

While Unity “was never developed for” those use cases, the toolkit can do it and do it well.

“We have a large geospatial model, we slap it into an engine, we’re running that engine,” said Armes. “We’re now bringing multiple layers to a world and being able to render that out.”

“Bringing the Worlds Together”

A digital twin of the real world powered by real-time data – a combination of the worlds described by Armes and Sachs – has huge potential as a way of both understanding and managing the world.

“We’re close to bringing the worlds together, in a sense,” said Armes. “Suddenly now, we’re starting to bring the pieces together […] we’re getting to that space.”

The Orlando Economic Partnership (OEP) is working on just such a platform, with a prototype already on offer. I was fortunate enough to see a presentation of the Orlando digital twin at the Augmented World Expo. The plan is for the twin to one day show real-time information on the city in a room-scale experience accessible to planners, responders, and the government.

“It’s going to become a platform for the city to build on,” said Justin Braun, OEP Director of Marketing and Communications.

Moving Toward Tomorrow

Digital twins have a lot of potential. But, many are stuck between thinking about how digital twins have always worked and thinking about the ways that we would like them to work. The current reality is somewhere in the middle – but, like everything else in the world of emerging and converging technology – it’s moving in an interesting direction.


A Very Interesting VR/AR Association Enterprise & Training Forum

The VR/AR Association held a VR Enterprise and Training Forum yesterday, May 24. The one-day event, hosted on the Hopin remote conference platform, brought together industry experts to discuss the business applications of a range of XR techniques and topics, including digital twins, virtual humans, and generative AI.

The VR/AR Association Gives Enterprise the Mic

The VR/AR Association hosted the event, though non-members were welcome to attend. In addition to keynotes, talks, and panel discussions, the event included opportunities for networking with other remote attendees.

“Our community is at the heart of what we do: we spark innovation and we start trends,” said VR/AR Association Enterprise Committee Co-Chair, Cindy Mallory, during a welcome session.

While there were some bona fide “technologists” on the panels, most speakers were people using the technology in industry themselves. While hearing from “the usual suspects” is nice, VR/AR Association fora are rare opportunities for industry professionals to hear from one another on how they approach problems and solutions in a rapidly changing workplace.

“I feel like there are no wrong answers,” VR/AR Association Training Committee Co-Chair, Bobby Carlton, said during the welcome session. “We’re all explorers asking where these tools fit in and how they apply.”

The Convergence

One of the reasons that the workplace is changing so rapidly has to do not only with the pace at which technologies are changing, but also with the pace at which they are becoming reliant on one another. This is a trend that a number of commentators have labeled “the convergence.”

“When we talk about the convergence, we’re talking about XR but we’re also talking about computer vision and AI,” CGS Inc President of Enterprise Learning and XR, Doug Stephen, said in the keynote that opened the event, “How Integrated XR Is Creating a Connected Workplace and Driving Digital Transformation.”

CGS Australia Head, Adam Shah, was also a speaker. Together the pair discussed how using XR with advanced IT strategies, AI, and other emerging technologies creates opportunities as well as confusion for enterprise. Both commented that companies can only seize the opportunities provided by these emerging technologies through ongoing education.

“When you put all of these technologies together, it becomes harder for companies to get started on this journey,” said Shah. “Learning is the goal at the end of the day, so we ask ‘What learning outcomes do you want to achieve?’ and we work backwards from there.”

The convergence isn’t only changing how business is done, it’s changing who’s doing what. That was much of the topic of the panel discussion “What Problem Are You Trying to Solve For Your Customer? How Can Generative AI and XR Help Solve It? Faster, Cheaper, Better!”

“Things are becoming more dialectical between producers and consumers, or that line is melting where consumers can create whatever they want,” said Virtual World Society Executive Director Angelina Dayton. “We exist as both creators and as consumers … We see that more and more now.”

“The Journey” of Emerging Technology

The figure of “the journey” was also used by Overlay founder and CEO, Christopher Morace, in his keynote “Asset Vision – Using AI Models and VR to get more out of Digital Twins.” Morace stressed that we have to talk about the journey because a number of the benefits that the average user wants from these emerging technologies still aren’t practical or possible.

“The interesting thing about our space is that we see this amazing future and all of these visionaries want to start at the end,” said Morace. “How do we take people along on this journey to get to where we all want to be while still making the most out of the technology that we have today?”

Morace specifically cited ads by Meta showing software that barely exists running on hardware that’s still a few years away (though other XR companies have been guilty of this as well). The good news is that extremely practical XR technologies do exist today, including for enterprise – we just need to accept that they’re on mobile devices and tablets right now.

Digital Twins and Virtual Humans

We might first think of digital twins of places or objects – and that’s how Morace was speaking of them. However, there are also digital twins of people. Claire Hedgespeth, Head of Production and Marketing at Avatar Dimension, addressed their opportunities and obstacles in her talk, “Business of Virtual Humans.”

“The biggest obstacle for most people is the cost. … Right now, 2D videos are deemed sufficient for most outlets but I do feel that we’re missing an opportunity,” said Hedgespeth. “The potential for using virtual humans is only as limited as your imagination.”

The language of digital twins was also used on a global scale by AR Mavericks founder and CEO, William Wallace, in his talk “Augmented Reality and the Built World.” Wallace presented a combination of AR, advanced networks, and virtual positioning coming together to create an application layer he calls “The Tagisphere.”

“We can figure out where a person is so we can match them to the assets that are near them,” said Wallace. “It’s like a 3D model that you can access on your desktop, but we can bring it into the real world.”

It may sound a lot like the metaverse to some, but that word is out of fashion at the moment.

And the Destination Is … The Metaverse?

“We rarely use the M-word. We’re really not using it at all right now,” Qualcomm’s XR Senior Director, Martin Herdina, said in his talk “Spaces Enabling the Next Generation of Enterprise MR Experiences.”

Herdina put extra emphasis on computing advancements like cloud computing over the usual discussions of visual experience and form factor in his discussion of immersive technology. He also presented modern AR as a stepping stone to a largely MR future for enterprise.

“We see MR being a total game changer,” said Herdina. “Companies who have developed AR, who have tested those waters and built experience in that space, they will be first in line to succeed.”

VR/AR Association Co-Chair, Mark Gröb, expressed similar sentiments regarding “the M-word” in his VRARA Enterprise Committee Summary, which closed out the event.

“Enterprise VR had a reality check,” said Gröb. “The metaverse really was a false start. The hype redirected to AI-generated tools may or may not be a bad thing.”

Gröb further commented that people in the business of immersive technology specifically may be better able to get back to business with some of that outside attention drawn toward other things.

“Now we’re focusing on the more important thing, which was XR training,” said Gröb. “All of the business cases that we talked about today, it’s about consistent training.”

Business as Usual in the VR/AR Association

There has been a lot of discussion recently regarding “the death of the metaverse” – a topic which, arguably, hadn’t yet been born in the first place. Whether it was ever more than hype, and the extent to which that hype has been entirely replaced by AI, remains to be seen.

While there were people talking about “the enterprise metaverse” – particularly referring to things like remote collaboration solutions – the metaverse is arguably more of a social technology anyway. While enterprise does enterprise, someone else will build the metaverse (or whatever we end up calling it) – and they’ll probably come from within the VR/AR Association as well.


The Expansion of Immersive Therapeutics in Healthcare

With better accessibility and affordability, immersive therapeutics is transforming how patients receive care. Particularly in mental health and physical therapy, it has been instrumental in improving treatment outcomes and helping patients overcome anxiety, discomfort, and other challenges to their recovery.

For several years now, we’ve seen virtual reality at work in healthcare. VR is now widely used in virtual sessions for psychological therapy, training simulations for medical professionals, gamification of exercises for physical therapy, and healthcare marketing.

Today, developments in immersive technologies are widening the applications of immersive treatment options for various healthcare issues. One of these is immersive therapeutics for the treatment of intractable health conditions.

But what exactly is immersive therapeutics and how does it truly impact healthcare? Here, we dissect what this emerging treatment approach is and share feedback from tech experts and users alike.

What Is Immersive Therapeutics?

Immersive therapeutics is an evolving field of medicine that delivers treatment using advanced technologies such as virtual reality, augmented reality, and artificial intelligence. It alleviates patient suffering and enhances treatment by placing patients in highly immersive and sensory-rich environments.

Through immersive therapeutics, patients connect with virtual environments at a deep emotional level that can alter the brain’s perception of pain and divert their attention.

According to Gita Barry, President of Immersive Healthcare at Penumbra Inc., “By captivating the patients in a virtual environment, patients can engage with serene beachscapes or play cognition games to cope with the craving feeling and safeguard their path to recovery.”

Being highly effective in distracting patients from pain and cravings, immersive therapeutics has great potential for use in physical rehabilitation and addiction treatment.

“The more immersive an experience is, the more it can be engaging, positively distracting, entertaining, and effective from a therapeutic and clinical standpoint,” says Joel Breton, game designer and president of Immersive Healthcare Studios at REAL System by Penumbra. This is what makes immersive therapeutics effective in addressing challenges in therapeutic treatments.

Advancing Immersive Therapeutics for Better Healthcare

As immersive therapeutics continues to evolve, more companies are looking to increase the accessibility of these therapies, broadening their applications in new treatment options for various healthcare issues.

Some of the most promising clinical uses include treating stress, anxiety, fears, disorders, and phobias. Its applications for pain management, rehabilitation, wellness, and healthcare optimization are also expanding fast.

One company that is at the forefront of advancing the use of immersive therapeutics is Penumbra. Technologies like Penumbra’s REAL System are already showing real impact in the field of immersive therapeutics.

With platforms like the REAL i-Series for VR/mental health and the REAL y-series for VR/physical therapy, patients can access VR-based treatments and self-manage their conditions from the comfort of their own homes.

Penumbra REAL y-Series

Of the 40 million US adults with substance abuse disorders, about 40 to 60% relapse at some point in their lives. While contact with drugs is the most obvious cause for relapse, stress cues linked to substance abuse are also common triggers.

This is where immersive therapeutics becomes most helpful. By helping counteract disruptive effects on the brain and behavior, immersive experiences help those in recovery regain control of their lives.

Real Impact of the REAL System

Barry believes that immersive experiences have the potential to impact millions of patients across a range of conditions. Developed using clinical evidence, Penumbra’s REAL System effectively supports the physical rehabilitation, cognitive, and wellness needs of patients in recovery.

The REAL i-Series, for instance, is currently in use at the Chemical Dependency unit of Hoag Health. The VR-based solution has been incorporated into group therapy sessions to create positive shared experiences and boost communal engagement.

Penumbra REAL i-Series

The use of the system helps patients feel at ease and more open to participating in group therapy. Seeing the benefits of the i-Series, Hoag Health is now also using it to support staff well-being and retention.

As REAL System President, Breton explains that REAL’s VR experiences are designed to address physical therapy, occupational therapy, and speech-language rehab as well as general mental wellness. According to Breton, the sense of immersion VR provides distracts patients from pain and fatigue. By keeping them engaged and entertained, the patients are more likely to adhere to their treatment programs.

Transforming the Future of Health

Immersive therapeutics is undoubtedly transforming the future of healthcare by providing patients with a higher level and quality of care.

“With greater awareness of the benefits VR-based tools can provide in addiction treatment and broader healthcare, we anticipate that clinicians will identify both new applications and also new patient populations who can benefit from the immersive experience VR provides,” says Barry.

The goal of immersive therapeutics is to widen access to transformative care. Whether patients are in health facilities or at home, immersive solutions can help them overcome health challenges and ensure optimal recovery. Through the continued collaboration of tech experts and health practitioners, immersive therapeutics has the potential to transform the entire healthcare industry.


Treedis Transforms Physical Spaces Into Hybrid Experiences With a New Augmented Reality App

Augmented reality (AR) transforms how we view the world and do things. Since its introduction in the 1960s, it has developed rapidly and is now used extensively in fashion, marketing, the military, aviation, manufacturing, tourism, and many other fields.

Consumers are increasingly becoming adept at using augmented reality apps to try on products, learn new things, and discover information about their surroundings. Research shows that 56% of shoppers cite AR as giving them more confidence about a product’s quality, and 61% prefer to shop with retailers with AR experiences.

Aside from its impact on brands, AR is also transforming how companies operate internally by introducing better ways to perform jobs, train employees, and develop new designs.

No-Code Platform for Creating Your Own Immersive Experience

Creating AR experiences is no walk in the park. Firms that want to implement their own augmented reality apps must either work with talented in-house app builders or purchase from third-party app builders, with costs ranging from tens to hundreds of thousands of dollars.

Treedis platform

Treedis makes the process simple with its Software-as-a-Service platform, which helps users create immersive experiences using a no-code drag-and-drop visual editor. Users can create digital, virtual reality, and augmented reality dimensions of their digital twin with just a single scan.

Digital twins are immersive, interactive, and accurate 3D models of physical spaces. They’re a digital replica of devices, people, processes, and systems whose purpose is to create cost-effective simulations that help decision-makers make data-driven choices.

Powered by Matterport technology, Treedis helps companies create these immersive experiences for retail, training, marketing, onboarding, games, and more.

Enhancing Digital Twins With an Augmented Reality App

According to Treedis CEO Omer Shamay, the Treedis augmented reality app helps you “view enhanced versions of your digital twins within their physical counterparts.” You can visualize any changes or modifications in real time and view all the 3D objects, tags, directions, and content in the digital twin.

“Any changes made to your digital twin will be instantly visible in AR, ensuring seamless collaboration and communication across your team,” Shamay adds.

The platform helps 3D creators and enterprises create an immersive and powerful digital experience for their users, so they can fully harness the benefits of AR solutions without huge developmental costs or challenges.

It can be used extensively for creating unique shopping experiences that incorporate elements of virtual commerce and gamification features. It’s ideal for developing immersive learning experiences to help learners grasp concepts better through physical interaction with their environment. The app can also be used to provide indoor navigation for guiding visitors to different access points and key locations within a space.

Treedis augmented reality app

The app is already available for Treedis’ enterprise users and promises to be “an accessible app with low prices and an easy-to-use AR solution,” according to Shamay.

With AR becoming more accessible, it won’t be long before more brands and firms adopt the technology and provide better, enhanced experiences to their audiences.


Spatial Releases Toolkit for “Gaming and Interactivity”

Spatial started out as an enterprise remote collaboration solution. Then, it changed lanes to offer virtual worlds for consumer social uses. Now, it could become an immersive gaming platform. At least, in part.

A Look at the Toolkit

The new “Spatial Creator Toolkit” is a Unity-powered interface that allows users to create custom avatars, items, and “quests.” The quests can be “games and immersive stories” as well as “interactive exhibitions” according to a release shared with ARPost.

Spatial Creator Toolkit

“This evolution to gamified and interactive co-experiences is a natural expansion for the platform and the internet,” said Jinha Lee, CPO and co-founder. “With more than 1 million registered creators on the platform today, and almost 2 million worlds, we are committed to empowering all creators.”

The toolkit also features advanced tools for linking virtual worlds together. All of it is powered by visual scripting as opposed to conventional coding, which the company says allows “zero learning curve and instant scalability.” During a closed alpha phase that began in December, companies with advance access, including Vogue and McDonald’s, broke in the toolkit.

Spatial’s Room to Grow

According to the release, the company hopes to become the YouTube of 3D games. “As Adobe is for 2D video, Unity is the software unlocking 3D games and the new medium of the internet. Spatial is like the YouTube for these games, enabling instant publishing to the mass market,” said CEO and co-founder of Spatial, Anand Agarawala. “Anyone can build, the key is unlocking the capabilities to allow the magic to happen.”

Considering its plans for a creator marketplace by the end of the year, the company’s new business model also resembles platforms like Roblox. That platform is a flagship of the gaming creator economy but has so far stayed away from NFTs.

Having fully embraced NFTs, along with other Web3 building blocks like cross-platform avatar compatibility through Ready Player Me, Spatial has a lot of opportunities and tools at its disposal that platforms like Roblox don’t. These include partnerships in the larger Web3 community, and at least some level of interoperability with other immersive platforms.

In short, we still have to see where this direction takes the company. But, it looks like calling the platform a “YouTube” or a “Roblox” might be selling it short. Both of those are massive creator-driven online marketplaces and communities, but both of them are limited by their own walls and that might not be true of this new side of Spatial.

Let’s See How Far it Goes

Skepticism about what may seem like another blockchain game drive is understandable. However, blockchain games that have let users down in the past were largely trying to shill their own products with questionable infrastructure. Spatial is a proven company with an open ecosystem that has nothing to gain by anyone losing. This should be fun.


Treble Technologies Brings Realistic Sound to Virtual Spaces

Immersive spaces can be very immersive visually. But they can still sound pretty flat. This can disrupt immersion in games and social applications, but in architecture, engineering, and construction (AEC), understanding how a space sounds can be crucial to the design process. That’s why Treble is working on realistic sound reproduction for virtual spaces.

We spoke with Treble CEO Finnur Pind about the opportunities and obstacles in believable immersive sound in enterprise and beyond.

Sound Simulation and Rendering

A conversation inside a car can sound a lot different than a conversation in your living room. A conversation in your living room can sound a lot different than one in an auditorium. And if you’re trying to hear that conversation through an assistive device like hearing aids, the situation can be even more complicated.

Right now, a conversation in any of those spaces recreated in a virtual environment probably sounds about the same. Designers can include environmental sound like water or wind or a crackling fire as they often do for games, but the sonic profile of the environment itself is difficult to replicate.

That’s because sound is caused by vibrations of the air. Each physical environment absorbs and reflects those vibrations in unique ways based on its physical properties. But virtual environments don’t have physical properties, and their sound is conveyed electronically rather than acoustically.

The closest we’ve come to real immersive sound is “spatial audio.” Spatial audio represents where a sound is coming from and how far away it is from the listener by manipulating stereo volume, but it still doesn’t account for environmental factors. That doesn’t mean spatial audio isn’t good enough. It does what it does, and it plays a part in “sound simulation and rendering.”
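To make that concrete, here is an illustrative sketch (not any particular engine’s API) of the kind of stereo-volume manipulation described above: overall loudness falls off with distance, and the left/right balance follows the source’s direction relative to the listener:

```python
import math

# Basic spatial audio as stereo-volume manipulation: distance controls
# overall loudness, horizontal direction controls the stereo balance
# (constant-power panning). Purely illustrative; real engines also use
# head-related transfer functions and more elaborate distance models.
def stereo_gains(source_x, source_y, listener_x=0.0, listener_y=0.0):
    dx, dy = source_x - listener_x, source_y - listener_y
    distance = max(math.hypot(dx, dy), 1.0)   # clamp to avoid a blowup at 0
    attenuation = 1.0 / distance              # inverse-distance falloff
    pan = max(-1.0, min(1.0, dx / distance))  # -1 fully left, +1 fully right
    angle = (pan + 1.0) * math.pi / 4.0       # map pan to [0, pi/2]
    left = attenuation * math.cos(angle)
    right = attenuation * math.sin(angle)
    return left, right

# A source 2 m directly to the listener's right: quieter overall, and
# almost entirely in the right channel.
left, right = stereo_gains(2.0, 0.0)
```

Note what this sketch does not model: nothing about the room – its size, shape, or materials – affects the result, which is exactly the “environmental factors” gap the article describes.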

Sound simulation and sound rendering are “two sides of the same coin,” according to Pind. The process, which has its roots in academia before Treble started in 2020, involves simulating acoustics and rendering the environment that produces them.

How Treble Rethinks Virtual Sound

“Solving the mathematics of sound has been developed for some time but it never found practice because it’s too computationally heavy,” said Pind. “What people have been doing until now is this kind of ray-tracing simulation. … It works up to a certain degree.”

Treble - Acoustic simulation suite

Treble uses a “wave-based approach” that accounts for the source of the audio, as well as the geometry of the space and the physical properties of the building material. In the event that the virtual space includes fantastical or unspecified materials, the company assigns a set of physical characteristics from a known real-world material.
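That material-substitution step can be pictured as a simple lookup with a fallback. The coefficients below are rough ballpark octave-band absorption values of the kind found in published acoustics tables, not Treble’s actual data, and the material names are invented for illustration:

```python
# Hypothetical illustration: per-material absorption coefficients across six
# octave bands (125 Hz - 4 kHz). An unknown or fantastical material falls
# back to the profile of a known real-world material.
ABSORPTION = {
    "concrete": [0.01, 0.01, 0.02, 0.02, 0.02, 0.03],  # hard, reflective
    "carpet":   [0.08, 0.24, 0.57, 0.69, 0.71, 0.73],  # soft, absorptive
    "glass":    [0.18, 0.06, 0.04, 0.03, 0.02, 0.02],
}

def absorption_for(material, fallback="concrete"):
    """Return octave-band absorption, substituting a known material if needed."""
    return ABSORPTION.get(material, ABSORPTION[fallback])

coeffs = absorption_for("enchanted_obsidian")  # unknown -> concrete profile
```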

That kind of situation hasn’t arisen often so far because, while Pind is open to Treble working with entertainment and consumer applications, the company is mainly focused on enhancing digital design models for the AEC industry.

“It’s not just seeing what your building will look like, but hearing what your building will sound like,” said Pind. “As long as you have a 3D building model … our platform connects directly, understands the geometry, building models, and sound sources.”

Pind says that the concept may one day have applications in augmented reality and mixed reality as well. In a platform like Microsoft Mesh or Varjo Reality Cloud, where users essentially share or exchange surroundings via VR, recreating the real space of one user as the virtual space of another could greatly aid immersion and realism.

Treble - sound in VR

“Research has shown that having realistic sound in a VR environment improves the immersion,” said Pind. “In AR it’s more the idea of being in a real space but having sound augmented.”

Machine Learning, R&D, and Beyond

As strange as it may sound, this approach also works essentially backwards. Instead of recreating a physical environment, Treble can create sound profiles for physically plausible spaces that don’t exist – and may never exist. Why? To model how sound would behave in that environment. It’s an approach called “synthetic data generation.”

Treble - synthetic data generation

“AI is kind of the talk of the town these days and one of the major issues of training AI is a lack of data,” said Pind. Training AI to work with sound requires a lot of audio which, historically, had to be sourced from physical equipment transported and set up in physical environments. “Now they’re starting to come to us to synthetically generate it.”

This same approach is increasingly being used to test audio hardware ranging from hearing aids to XR headsets.

Sounds Pretty Good

Pind thinks that the idea of using sound simulation and rendering for things like immersive concerts is interesting, even though that’s not what Treble does right now. It’s another resource already in the hands of forward-thinking companies and potentially soon coming to an XR venue in your headset.


Assisted Reality: The Other AR

“AR” stands for “augmented reality,” right? Almost always. However, there is another “AR” – assisted reality. The term is used almost exclusively in industry applications, and it isn’t necessarily mutually exclusive with augmented reality. There are usually some subtle differences.

Isn’t Augmented Reality Tricky Enough?

“AR” can already be confusing, particularly given its proximity to “mixed reality.” When ARPost describes something as “mixed reality,” it means that digital elements and physical objects and environments can interact with one another.

This includes hand tracking beyond simple menus. If you’re able to pick something up, for example, that counts as mixed reality. In augmented reality, you might be able to do something like position an object on a table, or see a character in your environment, but you can’t realistically interact with them and they can’t realistically interact with anything else.

So, What Is “Assisted Reality?”

Assisted reality involves having a hands-free, heads-up digital display that doesn’t interact with the environment or the environment’s occupants. It might recognize the environment to do things like generate heatmaps, or incorporate data from a digital twin, but the priority is information rather than interaction.

The camera on the outside of an assisted reality device might show the frontline worker’s view to a remote expert. It might also identify information on packaging like barcodes to instruct the frontline worker how to execute an action or where to bring a package. This kind of use case is sometimes called “data snacking” – it provides just enough information exactly when needed.

Sometimes, assisted reality isn’t even that interactive. It might be used to do things like support remote instruction by enabling video calls or displaying workflows.

Part of the objective of these devices is arguably to avoid interaction with digital elements and with the device itself. As they are used in enterprise, wearers often need their hands for completing tasks rather than for working an AR device or even gesturing with one.

These less technologically ambitious use cases also require a lot less compute power and a significantly smaller display. This means that they can occupy a much smaller form factor than augmented reality or mixed reality glasses. This makes them lighter, more durable, easier to integrate into personal protective equipment, and easier to power for a full shift.

Where It Gets Tricky

One of the most popular uses for augmented reality, both in industry and in current consumer applications, is virtual screens. In consumer applications, these are usually media viewers for watching videos or even playing games.

However, in enterprise applications, virtual screens might be used for expanding a virtual desktop by displaying email, text documents, and other productivity tools. This is arguably an assisted reality rather than an augmented reality use case because the digital elements are working over the physical environment rather than working with it or in it.

In fact, some people in augmented reality refer to these devices as “viewers” rather than “augmented reality glasses.” This isn’t entirely fair: while some devices are primarily used as viewers, they also have augmented reality applications and interactions – Nreal Air (review) being a prime example. Still, virtually all assisted reality devices are largely “viewers.”

Jon wearing Nreal Air

Words, Words, Words

All of these terms can feel overwhelming, particularly when the lines between one definition and another aren’t always clear. However, emerging technology has emerging use cases and, naturally, an emerging vocabulary. Terms like “assisted reality” might not always be with us, but they can help us stay on the same page in these early days.


redefining-immersive-virtual-experiences-with-embodied-audio

Redefining Immersive Virtual Experiences With Embodied Audio

EDGE Sound Research is pioneering “embodied audio,” a new technology that changes the way we experience virtual reality. When we think of “virtual reality,” the focus seems to fall solely on engaging our sense of sight. EDGE Sound Research’s embodied audio aims to revolutionize how we experience audio in VR worlds through its use of audible and tactile frequencies.

One of the things that sets this technology apart is that it stems from co-founder Ethan Castro’s personal experience. Castro had hearing issues and, as a result, came to rely on feeling sound as much as hearing it. A music lover, he went on to become a professional audio engineer and composer, researching how sound can be perceived by combining hearing and feeling. Eventually, he teamed up with co-founder Val Salomaki to start EDGE Sound Research.

Bringing Embodied Audio to Life

Embodied audio adds realism to sound. This groundbreaking technology combines the auditory and physical sensations of sound in an “optimized and singular embodiment.”

“This means a user can enjoy every frequency range they can hear (acoustic audio) and feel (haptic and tactile audio, also known as physical audio),” said Castro and Salomaki.

Castro and Salomaki go on to explain that they invented a new patent-pending embodied audio technology, which they dubbed ResonX™. Nominated for the CES Innovation Award, ResonX™ can transform any physical space or environment into an embodied audio experience, reproducing an expansive range of physical (7-5,000+ Hz) and acoustic (80-17,000 Hz) audio frequencies.
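The two frequency ranges quoted above can be made concrete with a toy band-split. The code below is not EDGE Sound Research’s method – how ResonX™ actually routes signal to a surface is proprietary – it simply shows one signal divided into a “felt” band and a “heard” band using the article’s numbers:

```python
import numpy as np

sample_rate = 44_100  # Hz

# Bands from the article: "physical" (felt) and "acoustic" (heard).
PHYSICAL_BAND = (7.0, 5_000.0)
ACOUSTIC_BAND = (80.0, 17_000.0)

def band_pass(signal: np.ndarray, low: float, high: float) -> np.ndarray:
    """Crude brick-wall band-pass filter via the real FFT:
    zero out every frequency bin outside [low, high] and invert."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    spectrum[(freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

# 1 s test mix: a 30 Hz rumble, a 1 kHz tone, and a 16 kHz hiss.
t = np.arange(sample_rate) / sample_rate
mix = (np.sin(2 * np.pi * 30 * t)
       + np.sin(2 * np.pi * 1_000 * t)
       + np.sin(2 * np.pi * 16_000 * t))

felt = band_pass(mix, *PHYSICAL_BAND)    # keeps 30 Hz and 1 kHz
heard = band_pass(mix, *ACOUSTIC_BAND)   # keeps 1 kHz and 16 kHz
```

Note that the bands overlap (roughly 80 Hz to 5 kHz), which matches the co-founders’ description of users both hearing and feeling the same mid-range content.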

Crafting New Experiences With the ResonX™ System

“The ResonX™ system is a combination of hardware and software. A user places the ResonX™ Core (hardware component) on the surface of a material and the ResonX™ software calibrates the surface of the material to resonate reliable hi-fidelity sound that the user can hear and feel,” said Castro and Salomaki.

ResonX Core - Embodied audio by Edge Sound Research

For example, when someone uses the ResonX™ system at home, they can attach the ResonX™ Core to their couch, effectively turning it into an embodied audio experience. When they sit on the couch while watching their favorite show – say, a basketball game – they will feel as if they’re there in person, hearing every sound from the ball being dribbled down to subtler details like the squeak of sneakers.

According to Castro and Salomaki, if a user wants to take their movie-viewing experience to the next level, here’s what they can do:

“An individual can attach the ResonX™ to flooring and then be fully immersed in walking around a new planet by hearing and feeling every moment to make the experience feel life-like.”

Aside from enriching users’ experiences in the metaverse, this new technology finally enables us to engage our other senses, thus adding a new dimension to how we experience music, games, live entertainment, and more.

Embodied audio - traditional sound vs ResonX

“This opens the door to new possibilities in storytelling and connectivity around the world as an experience can now begin to blur what is real because of three senses simultaneously informing a user that a moment is happening. Not as an effect, but as an embodied reality,” shared the EDGE Sound Research co-founders.

Embracing Innovation in the VR Space

With ResonX™ and its ability to bring embodied audio to life, users can now have richer experiences in virtual worlds. Not only will they engage their sense of sight, but they’ll also experience these virtual worlds through their senses of hearing and touch. Users now have the chance to transform their physical environment into a cohesive sound system.

The good news is, users can enjoy the embodied audio experience in many public venues. According to Castro and Salomaki, they’ve already deployed the ResonX™ in various sports stadiums, bars, and art installations. Furthermore, if you want to bring home the ResonX™ experience, you can get in touch with EDGE Sound Research for a custom installation.

What will embodied audio look like in the future?

It’s likely going to become more widely accessible. “Over time, we will release a more widely available consumer version of the ResonX™ system that will make this ResonX™ technology more accessible to all,” said Castro and Salomaki.
