digital twin


New simulation of Titanic’s sinking confirms historical testimony


NatGeo documentary follows a cutting-edge undersea scanning project to make a high-resolution 3D digital twin of the ship.

The bow of the Titanic Digital Twin, seen from above at forward starboard side. Credit: Magellan Limited/Atlantic Productions

In 2023, we reported on the unveiling of the first full-size 3D digital scan of the remains of the RMS Titanic—a “digital twin” that captured the wreckage in unprecedented detail. Magellan Ltd, a deep-sea mapping company, and Atlantic Productions conducted the scans over a six-week expedition. That project is the subject of the new National Geographic documentary Titanic: The Digital Resurrection, detailing several fascinating initial findings from experts’ ongoing analysis of that full-size scan.

Titanic met its doom just four days into the Atlantic crossing, roughly 375 miles (600 kilometers) south of Newfoundland. At 11:40 pm ship’s time on April 14, 1912, Titanic hit that infamous iceberg and began taking on water, flooding five of its 16 watertight compartments, thereby sealing its fate. More than 1,500 passengers and crew perished; only around 710 of those on board survived.

Titanic remained undiscovered at the bottom of the Atlantic Ocean until an expedition led by Jean-Louis Michel and Robert Ballard reached the wreck on September 1, 1985. The ship split apart as it sank, with the bow and stern sections lying roughly one-third of a mile apart. The bow proved to be surprisingly intact, while the stern showed severe structural damage, likely flattened from the impact as it hit the ocean floor. There is a debris field spanning a 5×3-mile area, filled with furniture fragments, dinnerware, shoes and boots, and other personal items.

The joint mission by Magellan and Atlantic Productions deployed two submersibles, nicknamed Romeo and Juliet, to map every millimeter of the wreck, including the debris field spanning some three miles. The result was a whopping 16 terabytes of data, along with over 715,000 still images and 4K video footage. That raw data was then processed to create the 3D digital twin. The resolution is so good that one can make out part of the serial number on one of the propellers.

“I’ve seen the wreck in person from a submersible, and I’ve also studied the products of multiple expeditions—everything from the original black-and-white imagery from the 1985 expedition to the most modern, high-def 3D imagery,” deep ocean explorer Parks Stephenson told Ars. “This still managed to blow me away with its immense scale and detail.”

The Juliet ROV scans the bow railing of the Titanic wreck site. Magellan Limited/Atlantic Productions

The NatGeo series focuses on some of the fresh insights gained from analyzing the digital scan, enabling Titanic researchers like Stephenson to test key details from eyewitness accounts. For instance, some passengers reported ice coming into their cabins after the collision. The scan shows there is a broken porthole that could account for those reports.

One of the clearest portions of the scan is Titanic’s enormous boiler rooms, located at the rear of the bow section where the ship snapped in half. Eyewitness accounts reported that the ship’s lights were still on right up until the sinking, thanks to the tireless efforts of Joseph Bell and his team of engineers, all of whom perished. The boilers show up as concave on the digital replica of Titanic, and one of the valves is in an open position, supporting those accounts.

The documentary spends a significant chunk of time on a new simulation of the actual sinking, taking into account the ship’s original blueprints, as well as information on speed, direction, and position. Researchers at University College London were also able to extrapolate how the flooding progressed. Furthermore, a substantial portion of the bow hit the ocean floor with so much force that much of it remains buried under mud. Romeo’s scans of the debris field scattered across the ocean floor enabled researchers to reconstruct the damage to the buried portion.

Titanic was famously designed to stay afloat if up to four of its watertight compartments flooded. But the ship struck the iceberg from the side, causing a series of punctures along the hull across 18 feet, affecting six of the compartments. Some of those holes were quite small, about the size of a piece of paper, but water could nonetheless seep in and eventually flood the compartments. So the analysis confirmed the testimony of naval architect Edward Wilding—who helped design Titanic—as to how a ship touted as unsinkable could have met such a fate. And as Wilding hypothesized, the simulations showed that had Titanic hit the iceberg head-on, she would have stayed afloat.

These are the kinds of insights that can be gleaned from the 3D digital model, according to Atlantic Productions CEO Anthony Geffen, who produced the NatGeo series. “It’s not really a replica. It is a digital twin, down to the last rivet,” he told Ars. “That’s the only way that you can start real research. The detail here is what we’ve never had. It’s like a crime scene. If you can see what the evidence is, in the context of where it is, you can actually piece together what happened. You can extrapolate what you can’t see as well. Maybe we can’t physically go through the sand or the silt, but we can simulate anything because we’ve actually got the real thing.”

Ars caught up with Stephenson and Geffen to learn more.

A CGI illustration of the bow of the Titanic as it sinks into the ocean. National Geographic

Ars Technica: What is so unique and memorable about experiencing the full-size 3D scan of Titanic, especially for those lucky enough to have seen the actual wreckage first-hand via submersible?

Parks Stephenson: When you’re in the submersible, you are restricted to a 7-inch viewport and as far as your light can travel, which is less than 100 meters or so. If you have a camera attached to the exterior of the submersible, you can only get what comes into the frame of the camera. In order to get the context, you have to stitch it all together somehow, and, even then, you still have human bias that tends to make the wreck look more like the original Titanic of 1912 than it actually does today. So in addition to seeing it full-scale and well-lit wherever you looked, able to wander around the wreck site, you’re also seeing it for the first time as a purely data-driven product that has no human bias. As an analyst, this is an analytical dream come true.

Ars Technica: One of the most visually arresting images from James Cameron’s blockbuster film Titanic was the ship’s stern sticking straight up out of the water after breaking apart from the bow. That detail was drawn from eyewitness accounts, but a 2023 computer simulation called it into question. What might account for this discrepancy? 

Parks Stephenson: One thing that’s not included in most pictures of Titanic sinking is the port heel that she had as she’s going under. Most of them show her sinking on an even keel. So when she broke with about a 10–12-degree port heel that we’ve reconstructed from eyewitness testimony, that stern would tend to then roll over on her side and go under that way. The eyewitness testimony talks about the stern sticking up as a finger pointing to the sky. If you even take a shallow angle and look at it from different directions—if you put it in a 3D environment and put lifeboats around it and see the perspective of each lifeboat—there is a perspective where it does look like she’s sticking up like a finger in the sky.

Titanic analyst Parks Stephenson, metallurgist Jennifer Hooper, and master mariner Captain Chris Hearn find evidence exonerating First Officer William Murdoch, long accused of abandoning his post.

This points to a larger thing: the Titanic narrative as we know it today can be challenged. I would go as far as to say that most of what we know about Titanic now is wrong. With all of the human eyewitnesses having passed away, the wreck is our only remaining witness to the disaster. This photogrammetry scan is providing all kinds of new evidence that will help us reconstruct that timeline and get closer to the truth.

Ars Technica: What more are you hoping to learn about Titanic’s sinking going forward? And how might those lessons apply more broadly?

Parks Stephenson: The data gathered in this 2022 expedition yielded more new information that could be put into this program. There’s enough material already to have a second show. There are new indicators about the condition of the wreck and how long she’s going to be with us and what happens to these wrecks in the deep ocean environment. I’ve already had a direct application of this. My dives to Titanic led me to another shipwreck, which led me to my current position as executive director of a museum ship in Louisiana, the USS Kidd.

She’s now in dry dock, and there’s a lot that I’m understanding about some of the corrosion issues we experienced with that ship based on corrosion experiments that have been conducted at the Titanic wreck site—specifically how metal acts underwater over time if it’s been stressed on the surface. It corrodes differently than metal that’s simply been submerged. There are all kinds of applications for this information. This is a new ecosystem that has taken root in Titanic. I would say that between my dives in 2005 and 2019, I saw an explosion of life over that 14-year period. It’s its own ecosystem now. It belongs more to the creatures down there than it does to us anymore.

The bow of the Titanic Digital Twin. Magellan Limited/Atlantic Productions

As far as Titanic itself is concerned, this is key to establishing the wreck site, one of the world’s largest archeological sites, as a site that follows archeological rigor and standards. This underwater technology—which Titanic has accelerated because of its popularity—is the way of the future for deep-ocean exploration. And the deep ocean is where our future is. It’s where green technology is going to continue to get its raw elements and minerals from. If we don’t do it responsibly, we could screw up the ocean bottom in ways that would destroy our atmosphere faster than all the cars on Earth could. So it’s not just for the Titanic story, it’s for the future of deep-ocean exploration.

Anthony Geffen: This is the beginning of the work on the digital scan. It’s a world first. Nothing’s ever been done like this under the ocean before. This film looks at the first set of things [we’ve learned], and they’re very substantial. But what’s exciting about the digital twin is, we’ll be able to take it to location-based experiences where the public will be able to engage with the digital twin themselves, walk on the ocean floor. Headset technology will allow the audience to do what Parks did. I think that’s really important for citizen science. I also think the next generation is going to engage with the story differently. New tech and new platforms are going to be the way the next generation understands the Titanic. Any kid, anywhere on the planet, will be able to walk in and engage with the story. I think that’s really powerful.

Titanic: The Digital Resurrection premieres on April 11, 2025, on National Geographic. It will be available for streaming on Disney+ and Hulu on April 12, 2025.


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



Rethinking Digital Twins

The idea of digital twins has been conceptually important to immersive technology and related ideas like the metaverse for some time, not to mention their practical employment, particularly in enterprise. However, the term hasn’t necessarily meant what it sounds like, and the actual technology has so far had only limited usefulness for spatial computing.

Advances in immersive technology itself are opening up more nuanced and exciting applications for digital twins as fully-featured virtual artifacts, environments, and interfaces to the point that even experts who have been working with digital twins for decades are starting to rethink the concept.

Understanding Digital Twins

What exactly constitutes a digital twin still differs somewhat from company to company. ARPost defines a digital twin as “a virtual version of something that also exists as a physical object.” This basic definition includes arguably antiquated iterations of the technology that wouldn’t be of much interest to the immersive tech crowd.

Strictly speaking, a digital twin does not have to be interactive, dynamic, or even visually representative of the physical twin. In academia and enterprise where this concept has been practically employed for decades, a digital twin might be a spreadsheet or a database.

We often liken the metaverse to The Matrix, but usually as Neo experiences the Matrix – from within. In that same analogy, digital twins are the Matrix as Tank and Dozer experience it – endless streams of numbers that look like mere numbers to the uninitiated but paint detailed pictures for those in the know.

While that version certainly continues to have its practical applications, it’s not exactly what most readers will have in mind when they encounter the term.

The Shifting View of Digital Twins

“The traditional view of a digital twin is a row in a database that’s updated by a device,” Nstream founder and CEO Chris Sachs told ARPost. “I don’t think that view is particularly interesting or particularly useful.”

Nstream is “a vertically integrated streaming data application platform.” Its work includes digital twins in the conventional sense, but also more nuanced uses that build on the conventional and stretch it into new fields. That’s why companies aren’t just comparing definitions, they’re also rethinking how they use these terms internally.

“How Unity talks about digital twins – real-time 3D in industry – I think we need to revamp what that means as we go along,” Unity VP of Digital Twins Rory Armes told ARPost. “We’ve had digital twins for a while […] our evolution or our kind of learning is the visualization of that data.”

This evolution naturally has a lot to do with technological advances, but Armes hypothesizes that it’s also the result of a generational shift. People who have lived their whole lives as regular computer users and gamers have a different approach to technology and its applications.

“There’s a much younger group coming into the industry […] the way they think and the way they operate is very different,” said Armes. “Their ability to digest data is way beyond anything I could do when I was 25.”

Data doesn’t always sound interesting and it doesn’t always look exciting. That is, until you remember that the metaverse isn’t just a collection of virtual worlds – it also means augmenting the physical world. That means lots of data – and doing new things with it.

Digital Twins as a User Interface

“If you have a virtual representation of a thing, you can run software on that representation as though it was running on the thing itself. That’s easier, it’s more usable, it’s more agile,” said Sachs. “You can sort of program the world by programming the digital twin.”

This approach allows limited hardware to provide minimal input to the digital twin, which in turn provides minimal output back to the devices, creating an automated, more affordable, more responsive Internet of Things.

“You create a kind of virtual world […] whatever they decide in the virtual world, they send it back to the real world,” said Sachs. “You can create a smarter world […] but you can’t do it one device at a time. You have to get them to work together.”
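To make that pattern concrete, here is a minimal sketch of “programming the digital twin instead of the device.” Everything in it is hypothetical for illustration (the DeviceTwin class, the thermostat example, the send_command callback) and is not Nstream’s actual API; it simply shows the shape of the idea: constrained hardware reports a little state, control logic runs against the virtual representation, and a small command flows back to the real world.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict

@dataclass
class DeviceTwin:
    """Virtual stand-in for one physical device: holds its last-known state
    and decides what command, if any, to send back to the hardware."""
    device_id: str
    state: Dict[str, Any] = field(default_factory=dict)
    # Downlink to the real device; a no-op by default. Hypothetical hook.
    send_command: Callable[[str, Dict[str, Any]], None] = lambda device_id, cmd: None

    def on_report(self, report: Dict[str, Any]) -> None:
        # Minimal input from constrained hardware: a small state report.
        self.state.update(report)
        self._decide()

    def _decide(self) -> None:
        # Control logic runs against the twin, not on the device itself.
        if self.state.get("temperature_c", 0) > 28 and not self.state.get("fan_on"):
            # Minimal output back to the real world: one small command.
            self.send_command(self.device_id, {"fan_on": True})

# Usage: wire a twin to a stubbed downlink and feed it one sensor report.
twin = DeviceTwin("thermostat-42", send_command=lambda d, c: print(d, "->", c))
twin.on_report({"temperature_c": 30.5, "fan_on": False})
```

The point of the sketch is the division of labor: the device only reports and obeys, while the decision-making lives in the twin, where it can be coordinated with any number of other twins.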

This virtual world can be controlled from the backend by VR. It can also be navigated as a user interface in AR.

“In AR, you can kind of intuit what’s happening in the world. That’s such a boost to understanding this complex technical world that we’ve built,” said Sachs. “Google and Niantic haven’t solved it, they’ve solved the photon end of it, the rendering of it, but they haven’t solved the interactivity of it […] the problem is the fabric of the web. It doesn’t work.”

To Sachs, this process of creating connected digital twins of essentially every piece of infrastructure and utility on earth isn’t just the next thing that we do with the internet – it’s how the next generation of the internet comes about.

“The world wide web was designed as a big data repository. The problem is that not everything is a document,” said Sachs. “We’re trying to upgrade the web so everything, instead of being a web page is a web agent, […] instead of a document, everything is a process.”

Rebuilding the World Wide Web

While digital twins can be a part of reinventing the internet, a lot of the tools used to build digital twins weren’t made for that particular task. That doesn’t mean they can’t do the job; it just means that providers and the people using those services have to be creative about it.

“The Unity core was never developed for these VR training and geospatial data uses. […] Old-school 3D modeling like Maya was never designed for [spatial data],” said Armes. “That’s where the game engine starts.”

Unity – which is a game engine at heart – isn’t shying away from digital twins. The company works with groups, particularly in industry, to use Unity’s visual resources in these more complex use cases – often behind the scenes on internal projects.

“There are tons of companies that have bought Unity and are using it to visualize their data in whatever form,” said Armes. “People don’t necessarily use Unity to bring a product to the community, they’re using it as an asset process and that’s what Unity does really well.”

While Unity “was never developed for” those use cases, the toolkit can do it and do it well.

“We have a large geospatial model, we slap it into an engine, we’re running that engine,” said Armes. “We’re now bringing multiple layers to a world and being able to render that out.”

“Bringing the Worlds Together”

A digital twin of the real world powered by real-time data – a combination of the worlds described by Armes and Sachs – has huge potential as a way of both understanding and managing the world.

“We’re close to bringing the worlds together, in a sense,” said Armes. “Suddenly now, we’re starting to bring the pieces together […] we’re getting to that space.”

The Orlando Economic Partnership (OEP) is working on just such a platform, with a prototype already on offer. I was fortunate enough to see a presentation of the Orlando digital twin at the Augmented World Expo. The plan is for the twin to one day show real-time information on the city in a room-scale experience accessible to planners, responders, and the government.

“It’s going to become a platform for the city to build on,” said Justin Braun, OEP Director of Marketing and Communications.

Moving Toward Tomorrow

Digital twins have a lot of potential. But many are stuck between thinking about how digital twins have always worked and thinking about the ways we would like them to work. The current reality is somewhere in the middle – but, like everything else in the world of emerging and converging technology, it’s moving in an interesting direction.



How Cities Are Taking Advantage of AR Tech and How Apple’s Vision Pro Could Fuel Innovation

Apple unveiled its first mixed reality headset, the Vision Pro, at this year’s Worldwide Developers Conference (WWDC) on June 5, 2023. The company’s “first spatial computer” will enable users to interact with digital content like never before by leveraging a new 3D interface to deliver immersive spatial experiences.

The Vision Pro marks a new era for immersive technologies, and it can potentially strengthen efforts to use such technologies to improve communities.

How the Vision Pro Headset Can Strengthen Efforts to Transform Orlando

Cities around the world are starting to apply new technologies to help improve their communities. City, University of London, for instance, has launched an initiative that will bring about the UK’s largest AR, VR, and metaverse training center. London has also been mapped in 3D, allowing locals and visitors to have an immersive view of the city.

In 2021, Columbia University started a project called the “Hybrid Twins for Urban Transportation”, which creates a digital twin of New York’s key intersections to help optimize traffic flows.

Using New Technologies to Enhance Orlando’s Digital Twin Initiative

With Orlando, Florida, designated as the metaverse’s MetaCenter, new MR headsets like Apple’s Vision Pro could drive radical changes that bolster the city’s digital twin efforts and accelerate Orlando’s metaverse capabilities.

Apple Vision Pro

In an interview with ARPost, Tim Giuliani, the President and CEO of the Orlando Economic Partnership (OEP), shared that emerging technologies like the digital twin enable the organization to showcase the region to executives who are planning to relocate their companies to Orlando.

Moreover, the digital twin helps local leaders ensure that the city has a robust infrastructure to support its residents, thus positively impacting the city’s economy and prosperity.

The digital twin’s physical display is currently housed at the OEP’s headquarters in downtown Orlando. However, Giuliani shared that AR headsets can make it more accessible.

“We can use the headset’s technology to take our digital twin to trade shows or whenever it goes out to market to companies,” said Giuliani. According to Giuliani, utility companies and city planners can use the 3D model to access a holographic display when mapping out proposed infrastructure improvements. Stakeholders can also use it to create 3D models using their own data for simulations like climate change and infrastructure planning.

He added that equipment like the Vision Pro can help make VR, AR, and 3D simulation more widespread. According to Giuliani, while the Vision Pro is the first such headset to come out, other devices will follow in the coming years, and the competition will turn them into consumer devices.

“Apple’s announcement cements the importance of the MetaCenter. The Orlando region has been leading in VR and AR and 3D simulation for over a decade now. So, all the things that we have been saying of why we are the MetaCenter, this hardware better positions us to continue leading in this territory,” he told us.

Leveraging the Vision Pro and MR to Usher in New Innovations

Innovate Orlando CEO and OEP Chief Information Officer David Adelson noted that aside from companies, ordinary individuals who aren’t keenly interested in immersive tech for development or work can also use devices like the Vision Pro to help Orlando with its effort to become the MetaCenter.

“These new devices are one of the hardware solutions that this industry has been seeking. Through these hardware devices, the software platforms and simulation market that has been building for decades will now be enabled on a consumer and a business interface,” said Adelson.

Adelson also shared that Orlando has been leading in the spatial computing landscape and that the emergence of a spatial computing headset like the Vision Pro brings this particular sector into the spotlight.

How can businesses leverage the new Vision Pro headset and other MR technologies to usher in new developments?

According to Giuliani, businesses can use these technologies to provide a range of services, such as consulting services, as well as help increase customer engagement, cut costs, and make informed decisions faster.

“AR can be a powerful tool to provide remote expertise, and remote assistance with AR helps move projects forward and provide services that would otherwise require multiple site visits. This is what we are taking advantage of with the digital twin,” said Giuliani.

Giuliani also noted that such technologies can be a way for companies to empower both employees and customers by enhancing productivity, improving services, and fostering better communication.

Potential Drawbacks of Emerging Technologies

Given that these are still relatively new pieces of technology, it’s possible that they’ll have some drawbacks. However, according to Adelson, these can be seen as a positive movement that can potentially change the Web3 landscape. Giuliani echoes this sentiment.

“We like to focus on the things that can unite us and help us move forward to advance broad-based prosperity, and this means working with the new advancements created and finding ways to make them work and facilitate the work we all do,” he told us.



A Very Interesting VR/AR Association Enterprise & Training Forum

The VR/AR Association held a VR Enterprise and Training Forum yesterday, May 24. The one-day event, hosted on the Hopin remote conference platform, brought together industry experts to discuss the business applications of a range of XR techniques and topics, including digital twins, virtual humans, and generative AI.

The VR/AR Association Gives Enterprise the Mic

The VR/AR Association hosted the event, though non-members were welcome to attend. In addition to keynotes, talks, and panel discussions, the event included opportunities for networking with other remote attendees.

“Our community is at the heart of what we do: we spark innovation and we start trends,” said VR/AR Association Enterprise Committee Co-Chair, Cindy Mallory, during a welcome session.

While there were some bona fide “technologists” on the panels, most speakers were people using the technology in industry themselves. While hearing from “the usual suspects” is nice, VR/AR Association fora are rare opportunities for industry professionals to hear from one another about how they approach problems and solutions in a rapidly changing workplace.

“I feel like there are no wrong answers,” VR/AR Association Training Committee Co-Chair Bobby Carlton said during the welcome session. “We’re all explorers asking where these tools fit in and how they apply.”

The Convergence

One of the reasons the workplace is changing so rapidly has to do not only with the pace at which technologies are changing, but also with the pace at which they are becoming reliant on one another. This is a trend that a number of commentators have labeled “the convergence.”

“When we talk about the convergence, we’re talking about XR but we’re also talking about computer vision and AI,” CGS Inc President of Enterprise Learning and XR, Doug Stephen, said in the keynote that opened the event, “How Integrated XR Is Creating a Connected Workplace and Driving Digital Transformation.”

CGS Australia Head, Adam Shah, was also a speaker. Together the pair discussed how using XR with advanced IT strategies, AI, and other emerging technologies creates opportunities as well as confusion for enterprise. Both commented that companies can only seize the opportunities provided by these emerging technologies through ongoing education.

“When you put all of these technologies together, it becomes harder for companies to get started on this journey,” said Shah. “Learning is the goal at the end of the day, so we ask ‘What learning outcomes do you want to achieve?’ and we work backwards from there.”

The convergence isn’t only changing how business is done, it’s changing who’s doing what. That was much of the topic of the panel discussion “What Problem Are You Trying to Solve For Your Customer? How Can Generative AI and XR Help Solve It? Faster, Cheaper, Better!”

“Things are becoming more dialectical between producers and consumers, or that line is melting where consumers can create whatever they want,” said Virtual World Society Executive Director Angelina Dayton. “We exist as both creators and as consumers … We see that more and more now.”

“The Journey” of Emerging Technology

The figure of “the journey” was also used by Overlay founder and CEO, Christopher Morace, in his keynote “Asset Vision – Using AI Models and VR to get more out of Digital Twins.” Morace stressed that we have to talk about the journey because a number of the benefits that the average user wants from these emerging technologies still aren’t practical or possible.

“The interesting thing about our space is that we see this amazing future and all of these visionaries want to start at the end,” said Morace. “How do we take people along on this journey to get to where we all want to be while still making the most out of the technology that we have today?”

Morace specifically cited ads by Meta showing software that barely exists running on hardware that’s still a few years away (though other XR companies have been guilty of this as well). The good news is that extremely practical XR technologies do exist today, including for enterprise – we just need to accept that they’re on mobile devices and tablets right now.

Digital Twins and Virtual Humans

We might first think of digital twins of places or objects – and that’s how Morace was speaking of them. However, there are also digital twins of people. Claire Hedgespeth, Head of Production and Marketing at Avatar Dimension, addressed their opportunities and obstacles in her talk, “Business of Virtual Humans.”

“The biggest obstacle for most people is the cost. … Right now, 2D videos are deemed sufficient for most outlets but I do feel that we’re missing an opportunity,” said Hedgespeth. “The potential for using virtual humans is only as limited as your imagination.”

The language of digital twins was also used on a global scale by AR Mavericks founder and CEO, William Wallace, in his talk “Augmented Reality and the Built World.” Wallace presented a combination of AR, advanced networks, and virtual positioning coming together to create an application layer he calls “The Tagisphere.”

“We can figure out where a person is so we can match them to the assets that are near them,” said Wallace. “It’s like a 3D model that you can access on your desktop, but we can bring it into the real world.”

It may sound a lot like the metaverse to some, but that word is out of fashion at the moment.

And the Destination Is … The Metaverse?

“We rarely use the M-word. We’re really not using it at all right now,” Qualcomm’s XR Senior Director, Martin Herdina, said in his talk “Spaces Enabling the Next Generation of Enterprise MR Experiences.”

In his discussion of immersive technology, Herdina put extra emphasis on computing advancements like cloud computing rather than the usual topics of visual experience and form factor. He also presented modern AR as a stepping stone to a largely MR future for enterprise.

“We see MR being a total game changer,” said Herdina. “Companies who have developed AR, who have tested those waters and built experience in that space, they will be first in line to succeed.”

VR/AR Association Co-Chair, Mark Gröb, expressed similar sentiments regarding “the M-word” in his VRARA Enterprise Committee Summary, which closed out the event.

“Enterprise VR had a reality check,” said Gröb. “The metaverse really was a false start. The hype redirected to AI-generated tools may or may not be a bad thing.”

Gröb further commented that people in the business of immersive technology specifically may be better able to get back to business with some of that outside attention drawn toward other things.

“Now we’re focusing on the more important thing, which was XR training,” said Gröb. “All of the business cases that we talked about today, it’s about consistent training.”

Business as Usual in the VR/AR Association

There has been a lot of discussion recently regarding “the death of the metaverse” – a topic which, arguably, hadn’t yet been born in the first place. Whether it was always just a gas, and the extent to which that gas has been entirely replaced by AI, remain to be seen.

While there were people talking about “the enterprise metaverse” – particularly referring to things like remote collaboration solutions – the metaverse is arguably more of a social technology anyway. While enterprise does enterprise, someone else will build the metaverse (or whatever we end up calling it) – and they’ll probably come from within the VR/AR Association as well.



Treedis Transforms Physical Spaces Into Hybrid Experiences With a New Augmented Reality App

Augmented reality (AR) transforms how we view the world and do things. Since its introduction in the 1960s, it has developed rapidly and been used extensively in fashion, marketing, the military, aviation, manufacturing, tourism, and many other fields.

Consumers are increasingly becoming adept at using augmented reality apps to try on products, learn new things, and discover information about their surroundings. Research shows that 56% of shoppers cite AR as giving them more confidence about a product’s quality, and 61% prefer to shop with retailers with AR experiences.

Aside from its impact on brands, AR is also transforming how companies operate internally by introducing better ways to perform jobs, train employees, and develop new designs.

No-Code Platform for Creating Your Own Immersive Experience

Creating AR experiences is no walk in the park. Firms that want to implement their own augmented reality apps must work with talented in-house app builders or purchase from third-party app builders, with costs ranging from tens to hundreds of thousands of dollars.

Treedis platform

Treedis makes the process simple with its Software-as-a-Service platform, which helps users create immersive experiences using a no-code drag-and-drop visual editor. Users can create digital, virtual reality, and augmented reality dimensions of their digital twin with just a single scan.

Digital twins are immersive, interactive, and accurate 3D models of physical spaces. They’re a digital replica of devices, people, processes, and systems whose purpose is to create cost-effective simulations that help decision-makers make data-driven choices.

Powered by Matterport technology, Treedis helps companies create these immersive experiences for retail, training, marketing, onboarding, games, and more.

Enhancing Digital Twins With an Augmented Reality App

According to Treedis CEO Omer Shamay, the Treedis augmented reality app helps you “view enhanced versions of your digital twins within their physical counterparts.” You can visualize any changes or modifications in real time and view all the 3D objects, tags, directions, and content in the digital twin.

“Any changes made to your digital twin will be instantly visible in AR, ensuring seamless collaboration and communication across your team,” Shamay adds.

The platform helps 3D creators and enterprises create immersive, powerful digital experiences for their users so they can fully harness the benefits of AR solutions without huge development costs or challenges.

It can be used extensively for creating unique shopping experiences that incorporate elements of virtual commerce and gamification features. It’s ideal for developing immersive learning experiences to help learners grasp concepts better through physical interaction with their environment. The app can also be used to provide indoor navigation for guiding visitors to different access points and key locations within a space.

Treedis augmented reality app

The app is already available for Treedis’ enterprise users and promises to be “an accessible app with low prices and an easy-to-use AR solution,” according to Shamay.

With AR becoming more accessible, it won’t be long before more brands and firms adopt the technology and deliver enhanced experiences to their audiences.
