Guest Article


Designing Mixed Reality Apps That Adapt to Different Spaces

Laser Dance is an upcoming mixed reality game that seeks to use Quest’s passthrough capability as more than just a background. In this Guest Article, developer Thomas Van Bouwel explains his approach to designing an MR game that adapts to different environments.

Guest Article by Thomas Van Bouwel

Thomas is a Belgian-Brazilian VR developer currently based in Brussels. Although his original background is in architecture, his work in VR spans from indie games like Cubism to enterprise software for architects and engineers like Resolve. His latest project, Laser Dance, is coming to Quest 3 late next year.

For the past year I’ve been working on a new game called Laser Dance. Built from the ground up for Mixed Reality (MR), it aims to turn any room in your house into a laser obstacle course. Players walk back and forth between two buttons, and each button press spawns a new parametric laser pattern they have to navigate through. The game is still in full development, aiming for a release in 2024.

If you’d like to sign up for playtesting Laser Dance, you can do so here!

Laser Dance’s teaser trailer, which was first shown right after Meta Connect 2023

The main challenge with a game like this, and possibly any roomscale MR game, is to make levels that adapt well to any room regardless of its size and layout. Furthermore, since Laser Dance is a game that requires a lot of physical motion, the game should also try to accommodate differences in people’s level of mobility.

To try and overcome these challenges, having good room-emulation tools that enable quick level design iteration is essential. In this article, I want to go over how levels in Laser Dance work, and share some of the developer tools that I’m building to help me create and test the game’s adaptive laser patterns.

Laser Pattern Definition

To understand how Laser Dance’s room emulation tools work, we first need to cover how laser patterns work in the game.

A level in Laser Dance consists of a sequence of laser patterns – players walk (or crawl) back and forth between two buttons on opposite ends of the room, and each button press enables the next pattern. These laser patterns will try to adapt to the room size and layout.

Since the laser patterns in Laser Dance’s levels need to adapt to different types of spaces, the specific positions of lasers aren’t pre-determined, but calculated parametrically based on the room.

Several methods are used to position the lasers. The most straightforward one is to apply a uniform pattern over the entire room. An example is shown below of a level that applies a uniform grid of swinging lasers across the room.

An example of a pattern-based level: a uniform pattern of movement is applied to a grid of lasers covering the entire room.
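To make this concrete, here is a minimal Unity sketch of how a uniform grid of swinging lasers could be generated from a room’s bounds. All names, values, and the LaserSwing component are illustrative stand-ins, not Laser Dance’s actual code:

```csharp
using UnityEngine;

// Hypothetical sketch: tile a room's floor area with swinging lasers.
// 'roomBounds' and 'laserPrefab' stand in for Laser Dance's real data.
public class UniformLaserGrid : MonoBehaviour
{
    public Bounds roomBounds;        // from the headset's room scan
    public GameObject laserPrefab;   // a single vertical laser beam
    public float spacing = 0.75f;    // meters between lasers
    public float swingAngle = 30f;   // degrees of swing to either side
    public float swingSpeed = 1f;    // swing cycles per second

    void Start()
    {
        // Walk the room's footprint in uniform steps, ignoring height.
        for (float x = roomBounds.min.x; x <= roomBounds.max.x; x += spacing)
        for (float z = roomBounds.min.z; z <= roomBounds.max.z; z += spacing)
        {
            var pos = new Vector3(x, roomBounds.min.y, z);
            var laser = Instantiate(laserPrefab, pos, Quaternion.identity, transform);

            // Offset each laser's swing phase by its grid position so the
            // whole grid reads as one continuous wave.
            var swing = laser.AddComponent<LaserSwing>();
            swing.phase = (x + z) * 0.5f;
            swing.angle = swingAngle;
            swing.speed = swingSpeed;
        }
    }
}

public class LaserSwing : MonoBehaviour
{
    public float phase, angle, speed;

    void Update()
    {
        float t = Mathf.Sin((Time.time * speed + phase) * 2f * Mathf.PI);
        transform.localRotation = Quaternion.Euler(t * angle, 0f, 0f);
    }
}
```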

Other levels may use the buttons’ orientation relative to each other to determine the laser pattern. The example below shows a pattern that creates a sequence of blinking laser walls between the buttons.

Blinking walls of lasers are oriented perpendicular to the imaginary line between the two buttons.
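A sketch of the same idea: derive the axis between the two buttons and spawn walls facing perpendicular to it, blinking in sequence. Again, the prefab, the Blinker component, and all timing values are hypothetical:

```csharp
using UnityEngine;

// Hypothetical sketch: place blinking laser walls along the line between
// the two buttons, each wall plane perpendicular to that line.
public class LaserWallSequence : MonoBehaviour
{
    public Transform buttonA, buttonB; // the two buttons on opposite ends
    public GameObject wallPrefab;      // one wall of lasers
    public int wallCount = 4;
    public float blinkInterval = 1.5f; // seconds each wall stays on

    void Start()
    {
        Vector3 axis = buttonB.position - buttonA.position;
        // A wall's forward vector points along the button axis, so the
        // wall plane itself is perpendicular to it.
        Quaternion facing = Quaternion.LookRotation(axis.normalized, Vector3.up);

        for (int i = 0; i < wallCount; i++)
        {
            // Spread the walls evenly between the two buttons.
            float t = (i + 1f) / (wallCount + 1f);
            var wall = Instantiate(wallPrefab, buttonA.position + axis * t, facing, transform);

            // Stagger the on/off timing so the walls blink in sequence.
            wall.AddComponent<Blinker>().onTime = i * blinkInterval;
        }
    }
}

public class Blinker : MonoBehaviour
{
    public float onTime;          // when this wall's slot starts in the cycle
    public float cycle = 6f;      // full sequence length in seconds
    public float onDuration = 1.5f;

    void Update()
    {
        float t = (Time.time - onTime) % cycle;
        bool on = t >= 0f && t < onDuration;
        // Toggle renderers only; a real implementation would also toggle
        // whatever colliders or damage volumes the lasers use.
        foreach (var r in GetComponentsInChildren<Renderer>()) r.enabled = on;
    }
}
```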

One of the more versatile tools for level generation is a custom pathfinding algorithm, which was written for Laser Dance by Mark Schramm, guest developer on the project. This algorithm tries to find paths between the buttons that maximize the distance from furniture and walls, making a safer path for players.

The paths created by this algorithm allow for several laser patterns, like a tunnel of lasers, or placing a laser obstacle in the middle of the player’s path between the buttons.

This level uses pathfinding to spawn a tunnel of lasers that snakes around the furniture in this room.
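Mark’s actual algorithm isn’t public, but the stated idea, finding paths that maximize distance from furniture and walls, can be illustrated with a clearance-weighted Dijkstra search over a floor grid. Everything below is an assumption-laden sketch, not the shipping code:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch of clearance-aware pathfinding on a 2D floor grid:
// stepping into a cell costs more the closer it is to an obstacle, so the
// cheapest path is also the "safest" one.
public class ClearancePathfinder
{
    readonly bool[,] blocked;     // true where walls/furniture occupy a cell
    readonly float[,] clearance;  // distance (in cells) to nearest obstacle
    readonly int w, h;

    public ClearancePathfinder(bool[,] blockedCells)
    {
        blocked = blockedCells;
        w = blocked.GetLength(0);
        h = blocked.GetLength(1);
        clearance = ComputeClearance();
    }

    // Brute-force distance transform; fine for room-sized grids.
    float[,] ComputeClearance()
    {
        var c = new float[w, h];
        for (int x = 0; x < w; x++)
        for (int y = 0; y < h; y++)
        {
            float best = float.MaxValue;
            for (int ox = 0; ox < w; ox++)
            for (int oy = 0; oy < h; oy++)
                if (blocked[ox, oy])
                    best = Mathf.Min(best,
                        Vector2.Distance(new Vector2(x, y), new Vector2(ox, oy)));
            c[x, y] = best;
        }
        return c;
    }

    public List<Vector2Int> FindPath(Vector2Int start, Vector2Int goal)
    {
        var dist = new Dictionary<Vector2Int, float> { [start] = 0f };
        var prev = new Dictionary<Vector2Int, Vector2Int>();
        var open = new List<Vector2Int> { start };
        var dirs = new[] { Vector2Int.up, Vector2Int.down, Vector2Int.left, Vector2Int.right };

        while (open.Count > 0)
        {
            // Pop the open cell with the smallest tentative cost.
            open.Sort((a, b) => dist[a].CompareTo(dist[b]));
            var cur = open[0];
            open.RemoveAt(0);
            if (cur == goal) break;

            foreach (var d in dirs)
            {
                var next = cur + d;
                if (next.x < 0 || next.y < 0 || next.x >= w || next.y >= h) continue;
                if (blocked[next.x, next.y]) continue;

                // Cells near furniture/walls are expensive; open space is cheap.
                float stepCost = 1f + 4f / (clearance[next.x, next.y] + 0.1f);
                float nd = dist[cur] + stepCost;
                if (!dist.ContainsKey(next) || nd < dist[next])
                {
                    dist[next] = nd;
                    prev[next] = cur;
                    if (!open.Contains(next)) open.Add(next);
                }
            }
        }

        // Walk backwards from the goal to reconstruct the path.
        var path = new List<Vector2Int>();
        if (goal != start && !prev.ContainsKey(goal)) return path; // unreachable
        for (var p = goal; ; p = prev[p])
        {
            path.Add(p);
            if (p == start) break;
        }
        path.Reverse();
        return path;
    }
}
```

The resulting list of cells can then drive any of the laser patterns mentioned above, such as lining both sides of the path with lasers to form a tunnel.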

Room Emulation

The different techniques described above for creating adaptive laser patterns can sometimes lead to unexpected results or bugs in specific room layouts. Additionally, it can be challenging to design levels while trying to keep different types of rooms in mind.

To help with this, I spent much of Laser Dance’s early development building a set of room-emulation tools that let me simulate and directly compare what a level will look like across different room layouts.

Rooms are stored in-game as a simple text file containing all wall and furniture positions and dimensions. The emulation tool can take these files, and spawn several rooms next to each other directly in the Unity editor.

You can then swap out different levels, or even just individual laser patterns, and emulate these side by side in various rooms to directly compare them.

A custom tool built in Unity spawns several rooms side by side in an orthographic view, showing how a certain level in Laser Dance would look in different room layouts.
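The exact file format isn’t public, but the description, a simple text file of wall and furniture positions and dimensions, suggests something like the following. This sketch assumes a JSON schema read with Unity’s JsonUtility; all field names are invented:

```csharp
using System;
using UnityEngine;

// Hypothetical room file format: a room's extents plus a list of boxes
// for furniture. The real schema used by Laser Dance may differ.
[Serializable]
public class RoomData
{
    public float width, depth, height; // wall extents
    public BoxData[] furniture;        // couches, tables, beds...
}

[Serializable]
public class BoxData
{
    public Vector3 center;
    public Vector3 size;
}

public static class RoomEmulator
{
    // Spawn a saved room at an offset, so several rooms can sit side by
    // side in the editor for direct comparison.
    public static void Spawn(string json, Vector3 offset)
    {
        var room = JsonUtility.FromJson<RoomData>(json);
        var root = new GameObject("EmulatedRoom");
        root.transform.position = offset;

        foreach (var box in room.furniture)
        {
            var cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
            cube.transform.SetParent(root.transform, false);
            cube.transform.localPosition = box.center;
            cube.transform.localScale = box.size;
        }
    }
}
```

Spawning several saved files at different offsets is what places the rooms side by side for direct comparison in the editor.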

Accessibility and Player Emulation

Just as the rooms people play in differ, so do the players themselves. Not everyone may be able to crawl on the floor to dodge lasers, or feel capable of squeezing through a narrow corridor of lasers.

Because of the physical nature of Laser Dance’s gameplay, there will always be a limit to its accessibility. However, to the extent possible, I would still like to try and have the levels adapt to players in the same way they adapt to rooms.

Currently, Laser Dance allows players to set their height, shoulder width, and the minimum height they’re able to crawl under. Levels will try and use these values to adjust certain parameters of how they’re spawned. An example is shown below, where a level would typically expect players to crawl underneath a field of lasers. When adjusting the minimum crawl height, this pattern adapts to that new value, making the level more forgiving.

Accessibility settings allow players to tailor some of Laser Dance’s levels to their body type and mobility restrictions. This example shows how a level that would have players crawl on the floor, can adjust itself for folks with more limited vertical mobility.
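As a hedged sketch of how such an adjustment could work in Unity, the pattern below raises a crawl-under laser field so it never spawns below the player’s reported minimum crawl height. The PlayerProfile fields mirror the settings mentioned above; everything else is invented for illustration:

```csharp
using UnityEngine;

// Hypothetical sketch: adapt a crawl-under laser field to the player's
// self-reported mobility values.
public class CrawlFieldPattern : MonoBehaviour
{
    public float defaultFieldHeight = 0.6f; // authored height of the laser field
    public float safetyMargin = 0.1f;       // extra headroom in meters

    public void Apply(PlayerProfile profile)
    {
        // Never place the field lower than the player says they can crawl.
        float height = Mathf.Max(defaultFieldHeight, profile.minCrawlHeight + safetyMargin);
        transform.position = new Vector3(transform.position.x, height, transform.position.z);
    }
}

// The player-tunable values described in the article.
[System.Serializable]
public class PlayerProfile
{
    public float height = 1.75f;        // player height
    public float shoulderWidth = 0.45f; // could widen laser corridors
    public float minCrawlHeight = 0.6f; // lowest gap the player can crawl under
}
```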

These player values can also be emulated in the custom tools I’m building. Different player presets can be swapped out to directly compare how a level may look different between two players.

Laser Dance’s emulation tools allow you to swap out different preset player values to test their effect on the laser patterns. In this example, you can notice how swapping to a more accessible player value preset makes the tunnel of lasers wider.

Data, Testing, and Privacy

A key problem with designing an adaptive game like Laser Dance is that unexpected room layouts and environments might break some of the levels.

To try and prepare for this during development, there is a button in the settings that players can press to share their room data with me. Using these emulation tools, I can then try and reproduce their issue in an effort to resolve it.

Playtesters can press a button in the settings to share their room layout. This allows for local reproduction of potential issues they may have seen, using the emulation tools mentioned above.

This of course should raise some privacy concerns, as players are essentially sharing parts of their home layout with me. From a developer’s standpoint, it has a clear benefit to the design and quality-control process, but as consumers of MR we should also take an active interest in what personal data developers have access to and how it is used.

Personally, I think it’s important that sharing sensitive data like this requires active consent of the player each time it is shared – hence the button that needs to be actively pressed in the settings. Clear communication on why this data is needed and how it will be used is also important, which is a big part of my motivation for writing this article.

When it comes to MR platforms, an active discussion on data privacy is important too. We can’t always assume sensitive room data will be used in good faith by all developers, so as players we should expect clear communication and clear limitations from platforms regarding how apps can access and use this type of sensitive data, and stay vigilant on how and why certain apps may request access to this data.

Do You Need to Build Custom Tools?

Is building a handful of custom tools a requirement for developing adaptive Mixed Reality? Luckily the answer to that is: probably not.

We’re already seeing Meta and Apple come out with mixed reality emulation tools of their own, letting developers test their apps in a simulated virtual environment, even without a headset. These tools are likely to only get better and more robust in time.

There is still merit to building custom tools in some cases, since they will give you the most flexibility to test against your specific requirements. Being able to emulate and compare between multiple rooms or player profiles at the same time in Laser Dance is a good example of this.

– – — – –

Development of Laser Dance is still in full swing. My hope is that I’ll end up with a fun game that can also serve as an introduction to mixed reality for newcomers to the medium. Though it took some time to build out these emulation tools, they will hopefully both enable and speed up the level design process to help achieve this goal.

If you would like to help with the development of the game, please consider signing up for playtesting!


If you found these insights interesting, check out Van Bouwel’s other Guest Articles:



Crafting Memorable VR Experiences – The Interaction Design of ‘Fujii’

Creating a VR experience that truly immerses the user is no easy feat. To pull this off correctly requires a careful blend of graphics, animations, audio, and haptics that work together in deliberate concert to suspend disbelief and engross the user. Fujii is a joyful interactive adventure and a masterclass in rich VR interactions. The President of Funktronic Labs, the studio behind the game, is here to tell us more about his design approach.

Guest Article by Eddie Lee

Eddie Lee is the President and co-founder of Funktronic Labs, an LA-based independent game studio that focuses on delivering high-quality experiences through games, XR, and other interactive media. His experience spans nearly 15 years in the fields of graphics, game design, and computer simulations.

Today, we are thrilled to pull back the curtain and give you an inside look into our thought process while developing Fujii, a title that has been a labor of love for us at Funktronic Labs. As the landscape of virtual reality continues its dynamic evolution, we saw a golden opportunity not just to adapt, but to breathe new life into Fujii. We’re eager to re-introduce our experience to a burgeoning new community of VR enthusiasts. Stick with us as we delve into the design process that originally brought this magical floral adventure to life.

A Brief Foray into Funktronic Labs

Founded a decade ago at the intersection of art, technology, and design, Funktronic Labs took the plunge into VR development back in 2015, a time when the industry was still in its infancy and precedents were scarce. This compelled us to adopt a ground-up, first-principles approach to game design and VR interactions—an ethos that has become the backbone of all our projects since then—from our pioneering VR venture, Cosmic Trip, to Fujii, and all the way to our latest release, Light Brigade.

Fujii – A Harmonious Blend of Nature and Technology

Fujii first made its debut as an auteur, art-focused launch title for the release of Quest 1 in May 2019. This project holds a special place in our hearts as a resonant blend of artistic vision and interactive design, exploring the wonders of humanity’s connection with nature. Conceived as a soulful sojourn, Fujii interweaves the realms of nature exploration and whimsical gardening, creating an interactive meditative space for players to lose themselves in.

In an industry landscape where unconventional, art-focused projects often struggle to find support, we were extraordinarily fortunate to connect with Meta (at the time known as Oculus). Recognizing the artistic merit and unique potential in our vision, they granted us the exceptional opportunity and support to bring this artsy-fartsy, non-core experience to fruition.

Fujii’s Overall Design Philosophy

During Fujii’s development, we were acutely aware that a substantial portion of our audience would be stepping into the realm of VR for the first time via the Quest 1—the industry’s first major standalone 6DoF headset.

This keen insight significantly sculpted our design approach. We opted for intuitive, physics-driven interactions that mirror the tactile simplicity of the natural world, consciously avoiding complex VR interactions, elaborate interfaces or dense text.

By refraining from controls that demand steep learning curves, we zeroed in on cultivating immediate, natural interactions, thereby offering a warm invitation to VR newcomers of all ages and gameplay experience. Remarkably, this has led to an incredibly diverse player base, attracting everyone from young children to the elderly, many of whom have found Fujii to be an accessible and joyous experience. [Editor’s note: we quite liked the game too].

VR as a New Interaction Paradigm

It’s an oversimplification to regard VR as merely a ‘stereoscopic monitor strapped to your face.’ We see it as much more than just a visual spectacle; VR introduces a groundbreaking paradigm shift in user interaction. With its 6DoF capabilities, VR transcends conventional gaming by enabling intuitive physical actions like grabbing, touching, and gesturing.

This new paradigm unlocks a whole new layer of tactile engagement and immersion, connecting players directly with their virtual surroundings. This stands in contrast to the abstract, button-press or cursor interactions that characterize traditional, non-VR games. In essence, VR offers a far more integrated and visceral form of engagement, elevating the gaming experience to a whole new level.

Physics-based Inventory

In the realm of VR, the addition of physics and animations to objects isn’t just aesthetic; it serves as a vital conduit for player engagement and understanding. The enjoyment derived from physics-based interactions comes from the brain’s innate satisfaction in grasping the object’s physical properties—be it weight, drag, or inertia.

Absent these nuanced physics, interactions feel insubstantial and weightless, breaking the immersive spell. As a guiding principle, consider incorporating physics into every touchpoint, enriching the player’s tactile connection to the game world and making interactions incredibly rewarding.

To illustrate, let’s delve into the inventory system in Fujii. Far from being a mere menu or grid, our inventory system is organically woven into the fabric of the game’s universe. We’ve opted for a physically-driven inventory, where items like seeds find their homes in “natural slots” in the virtual environment, echoing real-world interactions.

This design choice is not only intuitive but negates the need for a separate tutorial. To further enhance this connection, we’ve enriched these interactions with animations and robust physics feedback, providing an additional layer of tangibility that helps players more fully connect with their virtual environment.
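Funktronic hasn’t published their implementation, but a “natural slot” along these lines can be sketched in Unity as a trigger volume that physically pulls a released seed in and seats it, rather than snapping it into an abstract menu. The tag, forces, and thresholds here are all assumptions:

```csharp
using UnityEngine;

// Hypothetical sketch of a "natural slot": a trigger (set as such in the
// inspector) that gently attracts a seed and seats it with physics rather
// than an instant teleport, keeping the interaction tactile.
[RequireComponent(typeof(SphereCollider))]
public class NaturalSlot : MonoBehaviour
{
    public float pullStrength = 8f;    // how hard the slot attracts a seed
    public float seatDistance = 0.02f; // close enough to count as seated

    void OnTriggerStay(Collider other)
    {
        var seed = other.attachedRigidbody;
        if (seed == null || !other.CompareTag("Seed")) return;

        Vector3 toSlot = transform.position - seed.position;
        if (toSlot.magnitude < seatDistance)
        {
            // Seat the seed: park the rigidbody so it rests in the slot.
            seed.velocity = Vector3.zero;
            seed.isKinematic = true;
            seed.transform.position = transform.position;
        }
        else
        {
            // A physics-based pull keeps the item feeling weighty on its
            // way in, instead of vanishing into a menu.
            seed.AddForce(toSlot * pullStrength, ForceMode.Acceleration);
        }
    }
}
```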

Plants and Touch

Another compelling instance of the importance of physics-based design in VR can be found in our intricate interaction model for plants within Fujii. Human interaction with plants is often tactile and visceral; we touch, we feel, we connect. Our aim was to preserve that authentic texture and intimacy in a virtual context. But we went a step further by infusing every plant with musical responsiveness, adding an ethereal layer of magic and wonder to your botanical encounters.

In Fujii, each interaction with plant life is designed to resonate on a meaningful level. Every plant, leaf, and stem adheres to its own tailored set of physics rules. Whether it’s the gentle sway of a leaf in response to your touch or the subtle recoil of a stem, our objective has been to make these virtual interactions indistinguishable from real-life ones.

Achieving this required painstaking attention to detail, coupled with robust physics simulations, ensuring that each touch aligns with natural expectations, thereby deepening your immersion in this magical realm.
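As an illustration of the kind of tailored physics rules described here (not Fujii’s actual code), a stem that bends away from a nearby hand and springs back could look something like this in Unity:

```csharp
using UnityEngine;

// Hypothetical sketch: a stem that recoils from a touch and springs back,
// a common way to fake per-plant physics without a full soft-body sim.
public class SpringyStem : MonoBehaviour
{
    public Transform hand;           // the player's tracked hand
    public float pushRadius = 0.15f; // how close a hand must be to bend the stem
    public float stiffness = 40f;    // spring pulling back to rest
    public float damping = 6f;       // keeps the sway from oscillating forever

    Quaternion restRotation;
    Vector3 angularVelocity;         // degrees/sec around x and z

    void Start() => restRotation = transform.localRotation;

    void Update()
    {
        // Bend away from the hand when it's inside the push radius.
        Vector3 toHand = hand.position - transform.position;
        Vector3 target = Vector3.zero;
        if (toHand.magnitude < pushRadius)
        {
            Vector3 away = -toHand.normalized;
            target = new Vector3(away.z, 0f, -away.x) * 30f; // lean ~30 degrees away
        }

        // Damped spring toward the target bend angle.
        Vector3 current = CurrentBend();
        Vector3 accel = (target - current) * stiffness - angularVelocity * damping;
        angularVelocity += accel * Time.deltaTime;
        Vector3 bend = current + angularVelocity * Time.deltaTime;
        transform.localRotation = restRotation * Quaternion.Euler(bend.x, 0f, bend.z);
    }

    Vector3 CurrentBend()
    {
        var delta = Quaternion.Inverse(restRotation) * transform.localRotation;
        var e = delta.eulerAngles;
        // Convert 0..360 angles to signed -180..180.
        return new Vector3(Mathf.DeltaAngle(0f, e.x), 0f, Mathf.DeltaAngle(0f, e.z));
    }
}
```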

Watering

Watering plants in Fujii isn’t just a game mechanic; it’s crafted to be a tactile and immersive VR experience that mimics the soothing and nurturing act of watering real plants. From the way the water cascades to how it nourishes the flora, every detail has been considered. Even the extension of your arms into playful, jiggly water hoses has been designed to offer a sense of whimsy while maintaining an air of naturalism. The water interacts realistically with both the plants and the landscape, underlining the game’s commitment to intuitive, lifelike design.

To infuse an additional layer of enchantment into this seemingly simple act, we’ve introduced a delightful touch: any water droplets that fall onto the ground trigger a temporary, flower-sprouting animation. This whimsical feature serves to amplify the ‘reality’ of the droplets, allowing them to interact with the world in a way that grounds them.
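A minimal sketch of that droplet behavior in Unity might look like the following; the tag, prefab, and lifetime are invented for illustration:

```csharp
using UnityEngine;

// Hypothetical sketch: when a water droplet hits the ground, spawn a
// short-lived flower-sprout effect at the impact point, then clean up.
public class WaterDroplet : MonoBehaviour
{
    public GameObject sproutPrefab;   // the temporary flower animation
    public float sproutLifetime = 2f;

    void OnCollisionEnter(Collision collision)
    {
        // Only sprout on the ground, not on plants or the player.
        if (!collision.gameObject.CompareTag("Ground")) return;

        var point = collision.GetContact(0).point;
        var sprout = Instantiate(sproutPrefab, point, Quaternion.identity);
        Destroy(sprout, sproutLifetime); // the flower is deliberately temporary
        Destroy(gameObject);             // the droplet is spent
    }
}
```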

The Symphony of Sound Design

In Fujii, sound design is far from peripheral; it’s an integral facet of the game’s immersive landscape. Sound doesn’t merely serve as an auditory backdrop; it plays a pivotal role in how humans subconsciously interpret the physical makeup of the objects they interact with.

When sound, physics, and visuals synergize, they allow the brain to construct a comprehensive mental model of the object’s material properties. Numerous studies have even demonstrated that superior sound design can elevate players’ perception of the graphics, making them appear more lifelike, despite no actual change in visual quality (see this and this).

Seizing this opportunity, we’ve added a unique aural dimension to Fujii. Instead of sticking strictly to realistic, organic sounds, we’ve imbued interactions with melody, notes, and keys, creating an atmosphere of musical exploration and wonder. It’s as if you’re navigating through a symphonic wonderland, amplifying the sense of enchantment and, ideally, offering players a synesthetic experience that enriches their immersion in this captivating virtual world.

Trust the Design Process

In the course of game development, we’ve learned that it’s often impractical, if not impossible, to map out every component of a game’s design during pre-production. Instead, we’ve increasingly embraced a mindset of ‘discovery’ rather than ‘invention’.

While we adhere to certain design principles, the elusive process of ‘finding the fun’ in a VR experience continues to be a mystifying yet exciting challenge, even with over a decade of experience under our belts. The magic often unfolds when the game seems to take on a life of its own, almost as if it wishes to manifest itself in a particular way.

To best facilitate this organic process, we’ve found that maintaining a high degree of flexibility and adopting an iterative mindset is crucial—especially in VR development, where ideas don’t always translate well into enjoyable VR interactions.

Take, for example, the design of our watering mechanic (from earlier): initial concepts like grabbable watering cans or throwable water orbs seemed engaging on paper but fell flat in practice. It wasn’t until we stumbled upon the random idea of water shooting magically from the player’s hands that everything seemed to click into place. Allowing room for such iterative spontaneity has often led us to unexpected yet delightful game mechanics.

– – — – –

In the development of Fujii, our aim was to establish a meaningful benchmark for what can be achieved through simple yet thoughtful interaction design in VR. As technology marches forward, we anticipate that the fidelity of these virtual experiences will continue to gain depth and realism. Yet, the essence of our objective remains constant: to forge not just visually impressive virtual landscapes, but also highly interactive and emotionally resonant experiences.

Members of Funktronic Labs

We hope this in-depth technical exploration has offered you valuable insights into the thought process that goes into shaping a VR experience like Fujii. As we continue on this journey, we invite you to explore and to keep your faith in the limitless possibilities that VR offers. Thank you for sharing this journey with us.


Fujii – A Magical Gardening Adventure is now available at the new low price of $10 on Meta Quest, SteamVR and PSVR 1.



A Concise Beginner’s Guide to Apple Vision Pro Design & Development

Apple Vision Pro has brought new ideas to the table about how XR apps should be designed, controlled, and built. In this Guest Article, Sterling Crispin offers up a concise guide for what first-time XR developers should keep in mind as they approach app development for Apple Vision Pro.

Guest Article by Sterling Crispin

Sterling Crispin is an artist and software engineer with a decade of experience in the spatial computing industry. His work has spanned between product design and the R&D of new technologies at companies like Apple, Snap Inc, and various other tech startups working on face computers.

Editor’s Note:  The author would like to remind readers that he is not an Apple representative; this info is personal opinion and does not contain non-public information. Additionally, more info on Vision Pro development can be found in Apple’s WWDC23 videos (select Filter → visionOS).

Ahead is my advice for designing and developing products for Vision Pro. This article includes a basic overview of the platform, tools, porting apps, general product design, prototyping, perceptual design, business advice, and more.

Overview

Apps on visionOS are organized into ‘scenes’, which are Windows, Volumes, and Spaces.

Windows are a spatial version of what you’d see on a normal computer. They’re bounded rectangles of content that users surround themselves with. These may be windows from different apps or multiple windows from one app.

Volumes are things like 3D objects, or small interactive scenes. Like a 3D map, or small game that floats in front of you rather than being fully immersive.

Spaces are fully immersive experiences where only one app is visible. That could be full of many Windows and Volumes from your app, or like VR games where the system goes away and it’s all fully immersive content that surrounds you. You can think of visionOS itself like a Shared Space where apps coexist together and you have less control, whereas Full Spaces give you the most control and immersiveness, but don’t coexist with other apps. Spaces have immersion styles: mixed, progressive, and full, which define how much or how little of the real world the user sees.

User Input

Users can look at the UI and pinch like the Apple Vision Pro demo videos show. But you can also reach out and tap on windows directly, sort of like it’s actually a floating iPad. Or use a bluetooth trackpad or video game controller. You can also look and speak in search bars. There’s also a Dwell Control for eyes-only input, but that’s really an accessibility feature. For a simple dev approach, your app can just use events like a TapGesture. In this case, you won’t need to worry about where these events originate from.

Spatial Audio

Vision Pro has an advanced spatial audio system that makes sounds seem like they’re really in the room by considering the size and materials in your room. Using subtle sounds for UI interaction and taking advantage of sound design for immersive experiences is going to be really important. Make sure to take this topic seriously.

Development

If you want to build something that works between Vision Pro, iPad, and iOS, you’ll be operating within the Apple dev ecosystem, using tools like Xcode and SwiftUI. However, if your goal is to create a fully immersive VR experience for Vision Pro that also works on other headsets like Meta’s Quest or PlayStation VR, you have to use Unity.

Apple Tools

For Apple’s ecosystem, you’ll use SwiftUI to create the UI the user sees and the overall content of your app. RealityKit is the 3D rendering engine that handles materials, 3D objects, and light simulations. You’ll use ARKit for advanced scene understanding, like if you want someone to throw virtual darts and have them collide with their real wall, or do advanced things with hand tracking. But those rich AR features are only available in Full Spaces. There’s also Reality Composer Pro which is a 3D content editor that lets you drag things around a 3D scene and make media rich Spaces or Volumes. It’s like diet-Unity that’s built specifically for this development stack.

One cool thing with Reality Composer is that it’s already full of assets, materials, and animations. That helps developers who aren’t artists build something quickly and should help to create a more unified look and feel to everything built with the tool. Pros and cons to that product decision, but overall it should be helpful.

Existing iOS Apps

If you’re bringing an iPad or iOS app over, it will probably work unmodified as a Window in the Shared Space. If your app supports both iPad and iPhone, the headset will use the iPad version.

To customize your existing iOS app to take better advantage of the headset, you can use the Ornament API to make little floating islands of UI in front of or beside your app, to make it feel more spatial. Ironically, if your app is using a lot of ARKit features, you’ll likely need to ‘reimagine’ it significantly to work on Vision Pro, as ARKit has been upgraded a lot for the headset.

If you’re excited about building something new for Vision Pro, my personal opinion is that you should prioritize how your app will provide value across iPad and iOS too. Otherwise you’re losing out on hundreds of millions of users.

Unity

You can build to Vision Pro with the Unity game engine, which is a massive topic. Again, you need to use Unity if you’re building to Vision Pro as well as a Meta headset like the Quest or PSVR 2.

Unity supports building Bounded Volumes for the Shared Space, which exist alongside native Vision Pro content, and Unbounded Volumes for immersive content that may leverage advanced AR features. Finally, you can also build more VR-like apps which give you more control over rendering but seem to lack support for ARKit scene understanding like plane detection. The Volume approach gives RealityKit more control over rendering, so you have to use Unity’s PolySpatial tool to convert materials, shaders, and other features.

Unity support for Vision Pro includes tons of interactions you’d expect to see in VR, like teleporting to a new location or picking up and throwing virtual objects.

Product Design

You could just make an iPad-like app that shows up as a floating window, use the default interactions, and call it a day. But like I said above, content can exist in a wide spectrum of immersion, locations, and use a wide range of inputs. So the combinatorial range of possibilities can be overwhelming.

If you haven’t spent 100 hours in VR, get a Quest 2 or 3 as soon as possible and try everything. It doesn’t matter if you’re a designer, or product manager, or a CEO, you need to get a Quest and spend 100 hours in VR to begin to understand the language of spatial apps.

I highly recommend checking out Hand Physics Lab as a starting point and overview for understanding direct interactions. There’s a lot of subtle things they do which imbue virtual objects with a sense of physicality. And the Youtube VR app that was released in 2019 looks and feels pretty similar to a basic visionOS app, it’s worth checking out.

Keep a diary of what works and what doesn’t.

Ask yourself: ‘What app designs are comfortable, or cause fatigue?’, ‘What apps have the fastest time-to-fun or value?’, ‘What’s confusing and what’s intuitive?’, ‘What experiences would you even bother doing more than once?’ Be brutally honest. Learn from what’s been tried as much as possible.

General Design Advice

I strongly recommend the IDEO style design thinking process, it works for spatial computing too. You should absolutely try it out if you’re unfamiliar. There’s Design Kit with resources and this video which, while dated, is a great example of the process.

The road to spatial computing is a graveyard of utopian ideas that failed. People tend to spend a very long time building grand solutions for the imaginary problems of imaginary users. It sounds obvious, but instead you should try to build something as fast as possible that fills a real human need, and then iteratively improve from there.

Continue on Page 2: Spatial Formats and Interaction »



Cloudhead Games CEO: Apple Vision Pro is an AR Headset Wearing VR Clothes

Cloudhead Games is one of the most successful and senior VR studios in the industry. In this Guest Article, studio head Denny Unger shares his thoughts on Apple’s entrance into the space.

Guest Article by Denny Unger

Denny Unger is CEO and CCO at Cloudhead Games. Based in British Columbia and founded in 2012, Cloudhead pioneered an approach to VR that gave rise to broadly adopted movement standards, including Snap Turns and Teleportation. Working closely with Valve, Sony, and Meta, Cloudhead is best known for their title Pistol Whip and has shipped four popular VR titles (Pistol Whip, Valve’s Aperture Hand Labs, Call of the Starseed, and Heart of the Emberstone).

So let’s get the obvious out of the way first: Apple Vision Pro is Apple’s first-generation attempt at AR glasses using a mixed reality VR headset. AVP is a development platform also serving an enthusiast demographic. Make no mistake, this no-compromise MR device appears to get many things right for AR at a premium cost. Will Cloudhead Games be buying one to better understand Apple’s approach? Heck yes. AVP will give developers a powerful foundation and ecosystem for which to develop AR apps for a future ‘glasses form factor’ device in that mythical 5–10 year window. And to the victor, the spoils of a smartphone-replacing device.

No doubt (and if rumors are true) there were many debates at Apple HQ about VR. Whether or not to open the device up to VR studios and successful titles. Whether or not to include controllers to support legacy VR titles. Whether to allow users to full-dive into Virtual Reality, freely move around, and be active in the medium. But in an effort to sharpen their messaging, and to command a dominating lead within the AR space, VR and its many benefits were expertly omitted on nearly every level. Do I understand the strategy to strike a different chord as an XR business owner? Absolutely. Does it frustrate me as a VR-centric studio owner? You bet it does.

Image courtesy Apple

I question why the AVP didn’t maximize its potential, leveraging almost a decade of know-how from the VR community working within this space. Why not set a vision for a future device that would accommodate both AR and VR as complementary mediums? Apple could have embraced a dual launch strategy with a rich and proven catalog of best-selling VR games, perfectly tuned to onboard a completely new audience to XR. Apple could have expanded into VR’s recent success, growth and competition within the current market. In their recent presentation VR is essentially reduced to a gimmick, the thing you lightly touch the edges of, instead of a complementary and equally important medium. Unity engine support is promised but with no plans for motion control support, Apple has cut out any possibility of porting most of the existing or future VR catalog to its platform.

Hand-tracking is a logical affordance for AR based spatial computing and no doubt some experiences will work well with that design philosophy. However it is important to point out that most VR games built over the last 10 years (and many more in production) are not compatible with, nor will they ever be “portable” to, hand-tracking-only design. Input and haptics are incredibly important to Virtual Reality as a major tenet in reinforcing immersion and tactile interaction with virtual objects. Buttons pushed, triggers pulled, vibrational feedback experienced, objects held, thrown or touched, alternative movement schemes supported. There is a comfort in understanding the topological landscape of a controller and a physical touchpoint within the virtual environments themselves. When introducing users to a radically different medium like VR, convention & feedback matters. And over the last 50 years in gaming, input has evolved to encourage a suite of highly refined game design standards, creating a particular kind of muscle memory in the gaming population. Say what you will about which genres remain popular in this $450 billion industry but it does strain belief to think we’ll all be playing with finger guns in the latest and greatest shooter.

I know what some are likely to say: “there will be new innovative standards and we’ll look back on controllers as a crutch,” but I would push back and say hand-tracked or not, moving away from future haptic devices and innovation is a backwards step in XR design. Even smartphone games utilize basic haptics, because touch is foundational to the human experience.

In the aftermath of the AVP launch some would argue that VR is not yet mainstream and that Apple did the right thing by ignoring it. I would argue that VR turned a significant mainstream corner when Quest 2 outsold Xbox, when Sony reentered the market with PSVR2, and when Google teamed up with Samsung to work on what’s next, and on it goes. Over its 10-year rebirth, the last 3 years of VR have experienced hockey-stick levels of growth. OEMs have increased investments, and significant indicators keep coming with more titles earning revenues north of $20 million. Fully immersive VR is a legitimized medium not because I say it is but because people like it, and are willing to part with their hard-earned money to experience it.

Image courtesy Apple

I hope Apple is more inclusive of VR over time, but the Apple Vision Pro appears to be a VR headset pretending not to be a VR headset. Because of this strategy it represents a unique opportunity for Apple’s competitors to double down on supporting Virtual Reality at a more affordable entry point. Sure, they can all wage the 5–10 year war for a smartphone replacement, but why in the world would one ignore an equally compelling revenue stream within a blended MR ecosystem? Maybe because it took too long to go mainstream? Sorry all, we had to learn a few things along the way but I’m happy to say that after 10 years, the trail ahead has never been this clear.



The Hidden Design Behind the Ingenious Room-Scale Gameplay in ‘Eye of the Temple’

Eye of the Temple is one of the rare VR games that focuses not just on pure room-scale movement, but on dynamic room-scale movement. The result is a uniquely immersive experience that required some clever design behind the scenes to make it all work. This guest article by developer Rune Skovbo Johansen explains the approach.

Guest Article by Rune Skovbo Johansen

Rune Skovbo Johansen is a Danish independent game developer based in Turku, Finland. His work spans games and other interactive experiences, focused on tech, wonder, and exploration. After positive reception of the 2016 VR game jam game Chrysalis Pyramid, he started working on a more ambitious spiritual successor, Eye of the Temple, and at the end of 2020 he quit his day job to pursue indie game development full-time.

In Eye of the Temple, you move through a vast environment, not by teleportation or artificial locomotion, but by using your own feet. It makes unique use of room-scale VR to deliver an experience of navigating an expansive space.

In Eye of the Temple you move around large environments using your own feet

But how does it work behind the scenes? To mark the upcoming release of Eye of the Temple on Quest 2, I wanted to take the time to explain these aspects of the game’s design, which I’ve never fully gone into detail about before. In this article we’ll go over a variety of the tricks the game uses to make it all work. Let’s start with the basics of keeping the player in the play area.

Keeping the Player in the Play Area

Say you need to go from one tall pillar in the game to another via a moving platform. You step forward onto the platform, the platform moves, and then you step forward onto the next pillar. But now you’re outside your physical play area.

Moving platforms are positioned in a way to keep players inside the play area

If we instead position the moving platform to the side, it goes like this: You sidestep onto the platform, it moves, and you sidestep onto the next pillar. Since you took a step right, and then left, you’re back where you started in the center of the play area. So the game’s tricks are all about how the platforms are positioned relative to each other.
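One way to express that bookkeeping in code, as a hypothetical Unity sketch rather than the game’s actual implementation, is to give every platform one of nine designated spots in the play area and check that the physical step between two spots matches the virtual offset between the platforms at the moment they line up:

```csharp
using UnityEngine;

// Hypothetical sketch of the core bookkeeping: every platform occupies one
// of nine designated spots in the play area, so the player's net physical
// displacement always stays inside the tracked space.
public enum PlayAreaSpot
{
    NorthWest, North, NorthEast,
    West,      Center, East,
    SouthWest, South, SouthEast,
}

public static class PlayArea
{
    // Half-extent of the usable play area, in meters (invented value).
    const float Half = 1.0f;

    // Where a given spot sits, relative to the play area's center.
    public static Vector2 Offset(PlayAreaSpot spot)
    {
        int i = (int)spot;
        int col = i % 3 - 1;  // -1, 0, +1 across
        int row = 1 - i / 3;  // +1, 0, -1 deep
        return new Vector2(col * Half, row * Half);
    }

    // A step from one platform to the next is only legal if the physical
    // offset it demands matches the virtual offset between the platforms
    // at the moment they line up.
    public static bool StepIsConsistent(
        PlayAreaSpot fromSpot, Vector2 fromVirtual,
        PlayAreaSpot toSpot, Vector2 toVirtual)
    {
        Vector2 physical = Offset(toSpot) - Offset(fromSpot);
        Vector2 virtualDelta = toVirtual - fromVirtual;
        return Vector2.Distance(physical, virtualDelta) < 0.01f;
    }
}
```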

Now, to get a better sense for it, let’s look at some mixed reality footage (courtesy of Naysy) where a grid representing the play area is overlaid on top.

Mixed reality footage with a grid overlaid on top which represents the play area

Keeping an Overview in the Level Design

Now that we’ve seen how the trick works, let’s take a look at how I keep track of it all when doing the level design for the game. First things first – I made this pattern, which represents the player’s entire play area – or the part of it the game takes advantage of anyway:

A pattern representing the physical play area

As you can see, there’s a thick white border along the edge, and a thick circle in the center.

Every platform in the game has a designated spot in the play area and a pattern overlay that shows what that spot is. For platforms that are a single tile large, it’s generally one of nine positions. The overlay makes it easy to see if a given platform is positioned in the center of the play area, or at an edge or corner.

The play area pattern overlaid on each platform and its end positions make it easy to see if they are lined up correctly in the level design

Additional overlays show a ghostly version of the pattern at both the start and end positions of a moving platform. This is the real trick of keeping track of how the platforms connect together, because these ghostly overlays at the end positions make it trivial to see if the platforms are lined up correctly in the level design when they touch each other. If the adjacent ghostly patterns are continuous like puzzle pieces that fit together, then the platforms work correctly together.
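Reusing the PlayAreaSpot idea from the sketch above, the overlay itself could be drawn with editor gizmos at a platform’s start and end positions, so misaligned neighbors stand out in the scene view. The fields and visuals here are invented, not the game’s actual tooling:

```csharp
using UnityEngine;

// Hypothetical sketch of the level-design overlay: draw the play-area
// pattern at a moving platform's start and end positions so misaligned
// neighbors stand out in the scene view.
public class PlatformOverlayGizmo : MonoBehaviour
{
    public PlayAreaSpot spot;    // which of the nine spots this platform uses
    public Vector3 endPosition;  // where the moving platform stops
    public float tileSize = 1f;

    void OnDrawGizmos()
    {
        DrawPattern(transform.position, Color.white);           // start position
        DrawPattern(endPosition, new Color(1f, 1f, 1f, 0.3f));  // ghost at end
    }

    void DrawPattern(Vector3 origin, Color color)
    {
        Gizmos.color = color;
        // Shift the play-area outline so this platform's designated spot
        // lines up with the platform itself; if two adjacent platforms'
        // outlines don't fit together like puzzle pieces, they're misaligned.
        Vector2 offset = PlayArea.Offset(spot);
        Vector3 center = origin - new Vector3(offset.x, 0f, offset.y);
        Gizmos.DrawWireCube(center, new Vector3(tileSize * 3f, 0.01f, tileSize * 3f));
        // Circle marking the center of the play area.
        Gizmos.DrawWireSphere(center, tileSize * 0.25f);
    }
}
```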

It still took a lot of ingenuity to work out how to position all the platforms so they both fit correctly together and also take the player where they need to go in the virtual world, but now you know how I kept the complexity of it manageable.

Getting the Player’s Cooperation

The whole premise of getting around the world via these moving platforms is based on an understanding that the player should step from one platform to another when they’re lined up, and not at other times. The most basic way the game establishes this is by just telling it outright to the player in safety instructions displayed prior to starting the game.

One of the safety instructions shown before the game begins

This instruction is shown for two reasons:

One is safety. You should avoid jumping over gaps, otherwise you would risk jumping right out of your play area and into a wall, for example.

The other is that the game’s system of traversal only works correctly when the player steps from one platform to another while they’re lined up. This is not as critical – I’ll come back to what happens when stepping onto a misaligned platform – but it still provides the best play experience.

Apart from the explicit instructions, the game also employs more subtle tricks to help ensure the player only steps over when blocks are correctly aligned. Consider the following example of a larger 2 x 2 tile static platform the player can step onto. A moving platform arrives from the side in a way that would allow the player to step off well before the platform has stopped moving, but that would break the game’s traversal logic.

In this room, ‘foot fences’ are used to discourage the player from stepping from one platform to another when they are not correctly aligned

To avoid this, “foot fences” were placed to discourage the player from stepping over onto the static platform (or away from it) at incorrect positions. The fences are purely visual and don’t technically prevent anything. The player can still step over them if they try, or right through them for that matter. However, psychologically it feels like less effort to not step over or through a fence and instead step onto the static platform where there’s a gap in the fence. In this way, a purely non-technical solution is used as part of the game’s arsenal of tricks.

Continued on Page 2: Correcting for Unaligned Platforms »
