AR Design

These Details Make ‘Half-Life: Alyx’ Unlike Any Other VR Game – Inside XR Design

In Inside XR Design we examine specific examples of great VR design. Today we’re looking at the details of Half-Life: Alyx and how they add an immersive layer to the game rarely found elsewhere.

You can find the complete video below, or continue reading for an adapted text version.

Intro

Now listen, I know you’ve almost certainly heard of Half-Life: Alyx (2020); it’s one of the best VR games made to date. And there are tons of reasons why it’s so well regarded: great graphics, fun puzzles, memorable set-pieces, an interesting story… and on and on. We all know this already.

But the scope of Alyx allows the game to go above and beyond what we usually see in VR with some awesome immersive details that really make it shine. Today I want to examine a bunch of those little details—and even if you’re an absolute master of the game, I hope you’ll find at least one thing you didn’t already know about.

Inertia Physics

First is the really smart way that Alyx handles inertia physics. Lots of VR games use inertia to give players the feeling that objects have different weights; moving a small, light object feels totally different than moving a large, heavy one. But this usually comes with a sacrifice: larger objects become much more challenging to throw, because the player has to account for the inertia sway as they throw the object.

Alyx makes a tiny but clever tweak to this formula: it ignores the inertia sway in its throwing calculation only. That means if you’re trying to accurately throw a large object, you can just swing your arm and release in a way that feels natural, and you’ll get an accurate throw even if you didn’t account for the object’s inertia.

This gives the game the best of both worlds—an inertia system to convey weight but without sacrificing the usability of throwing.
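
If you’re curious what that might look like in code, here’s a rough Swift sketch of the general technique (my own illustration, not Valve’s actual implementation): the rendered object eases toward the hand with a lag to convey weight, while the throw reads velocity straight from the hand.

```swift
import simd

// Hypothetical held-object state: the rendered position sways behind
// the hand to convey weight, but throwing ignores that sway entirely.
struct HeldObject {
    var visualPosition: simd_float3  // lagging position used for rendering
    var heaviness: Float             // seconds of lag; bigger = heavier feel
}

// Ease the visual position toward the hand; heavier objects lag more.
func updateHeldObject(_ object: inout HeldObject,
                      handPosition: simd_float3,
                      deltaTime: Float) {
    let t = min(1, deltaTime / max(object.heaviness, 1e-4))
    object.visualPosition += (handPosition - object.visualPosition) * t
}

// The key trick: on release, the throw uses the hand's velocity, not
// the swaying visual object's, so throws feel natural and accurate.
func releaseVelocity(handVelocity: simd_float3) -> simd_float3 {
    handVelocity
}
```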

I love this kind of attention to detail because it makes the experience better without players realizing anything is happening.

Sound Design

When it comes to sound design, Alyx is really up there, not just in terms of quality but in detail too. One of my absolute favorite details in this game is that almost every object has a completely unique sound when shaken. And this reads especially well because it’s spatial audio, so you’ll hear it most from the ear that’s closest to the shaken object.

This is something that no flatscreen game needs because only in VR do players have the ability to pick up practically anything in the game.

I can just imagine the sound design team looking at the game’s extensive list of props and realizing they need to come up with what a VHS tape or a… TV sounds like when shaken.

That’s a ton of work for this little detail that most people won’t notice, but it really helps keep players immersed when they pick up, say, a box of matches and hear the exact sound they would expect to hear if they shook it in real life.
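
For the curious, here’s a rough Swift sketch of how a shake trigger like this could work (a general technique, not Valve’s actual code): treat rapid reversals of the held object’s velocity as a shake, then play the prop’s unique sound at the object’s position so the spatial audio system does the rest. The `onShake` callback stands in for a hypothetical engine hook.

```swift
import simd

// Hypothetical shake detection: count sharp direction reversals of the
// held object's velocity within a short time window.
final class ShakeDetector {
    private var lastVelocity = simd_float3.zero
    private var reversals = 0
    private var window: Float = 0

    func update(velocity: simd_float3, deltaTime: Float, onShake: () -> Void) {
        window += deltaTime
        // A reversal: velocity flips direction while moving fast enough.
        if simd_dot(velocity, lastVelocity) < 0,
           simd_length(velocity) > 0.5 {      // m/s threshold, tuned by feel
            reversals += 1
        }
        lastVelocity = velocity
        if window > 0.5 {                     // evaluate every half second
            if reversals >= 2 { onShake() }   // two flips ≈ one shake
            reversals = 0
            window = 0
        }
    }
}
```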

Gravity Gloves In-depth

Ok so everyone knows the Gravity Gloves in Alyx are a diegetic way to give players a force pull capability so it’s easier to grab objects at a distance. And practically everyone I’ve talked to agrees they work exceptionally well. They’re not only helpful, but fun and satisfying to use.

But what exactly makes the gravity gloves perhaps the single best force-pull implementation seen in VR to date? Let’s break it down.

In most VR games, force-pull mechanics have two stages:

  1. The first, which we’ll call ‘selection’, is pointing at an object and seeing it highlighted.
  2. The second, which we’ll call ‘confirmation’, is pressing the grab button which pulls the object to your hand.

Half-Life: Alyx adds a third stage to this formula which is the key to why it works so well:

  1. First is ‘selection’, where the object glows so you know what is being targeted.
  2. The second—let’s call it ‘lock-on’—involves pulling the trigger to confirm your selection. Once you do, the selection is locked on; even if you move your hand now, the selection won’t change to any other object.
  3. The final stage, ‘confirmation’, requires not a button press but a pulling gesture to finally initiate the force pull.

Adding that extra lock-on stage to the process significantly improves reliability because it ensures that both the player and the game are on the same page before the object is pulled.
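
Expressed as code, the three-stage flow is essentially a small state machine. Here’s a minimal Swift sketch using my own names (Valve’s implementation isn’t public); `pull` stands in for whatever kicks off the object’s flight, and `Int` stands in for an object handle.

```swift
// Hypothetical three-stage gravity glove flow as an explicit state machine.
enum GravityGloveState {
    case idle
    case selected(target: Int)   // stage 1: object highlighted while pointed at
    case lockedOn(target: Int)   // stage 2: trigger pulled; selection frozen
}

struct GravityGlove {
    var state: GravityGloveState = .idle

    mutating func update(pointedAt target: Int?,
                         triggerHeld: Bool,
                         pullGestureDetected: Bool,
                         pull: (Int) -> Void) {
        switch state {
        case .idle, .selected:
            // Selection freely follows wherever the hand points.
            if let target {
                state = triggerHeld ? .lockedOn(target: target)
                                    : .selected(target: target)
            } else {
                state = .idle
            }
        case .lockedOn(let target):
            // While locked on, pointing elsewhere does NOT change the target.
            if !triggerHeld {
                state = .idle                 // trigger released without a pull
            } else if pullGestureDetected {
                pull(target)                  // stage 3: the flick starts the pull
                state = .idle
            }
        }
    }
}
```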

And it should be noted that each of these stages has its own distinct sound, which makes it even clearer to the player what’s being selected, so they know that everything is going according to their intentions.

The use of a pulling gesture makes the whole thing more immersive by making it feel like the game world is responding to your physical actions, rather than the press of a button.

There’s also a little bit of magic to the exact speed and trajectory the objects follow, like how the trajectory can shift in real-time to reach the player’s hand. Those parameters are carefully tuned to feel satisfying without feeling like the object just automatically attaches to your hand every time.
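
Here’s a rough Swift sketch of that kind of real-time retargeting (illustrative numbers, not Valve’s tuning): each frame, the flying object’s velocity is steered toward the hand’s current position, with gravity softened for feel.

```swift
import simd

// Hypothetical per-frame update for an object mid-pull.
func pullStep(position: inout simd_float3,
              velocity: inout simd_float3,
              hand: simd_float3,
              deltaTime: Float) {
    let toHand = hand - position
    guard simd_length(toHand) > 0.05 else { return }     // close enough to catch
    let gravity = simd_float3(0, -9.8, 0)
    let steering = simd_normalize(toHand) * 8.0          // homing strength (tuned)
    velocity += (gravity * 0.25 + steering) * deltaTime  // soften gravity for feel
    position += velocity * deltaTime
}
```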

This strikes me as something that an animator may even have weighed in on to say, “how do we get that to feel just right?”

Working Wearables

It’s natural for players in VR to try to put a hat on their head when they find one, but did you know that wearing a hat protects you from barnacles? And yes, that’s the official name for those horrible creatures that stick to the ceiling.

But it’s not just hats you can wear. The game is surprisingly good about letting players wear anything that’s even vaguely hat-shaped. Like cones or even pots.

I figure this is something that Valve added after watching more than a few playtesters attempt to wear those objects on their heads during development.

Speaking of wearing props, you can also wear gas masks. And the game takes this one step further… the gas masks actually work. One part of the game requires you to hold your hand up to cover your mouth to avoid breathing spores, which make you cough and give away your position.

If you wear a gas mask you are equally protected, but you also get the use of both hands, which gives the gas mask an advantage over covering your mouth with your hand.

The game never explicitly tells you that the gas mask will also protect you from the spores, it just lets players figure it out on their own—sort of like a functional easter egg.

Spectator View

Next up is a feature that’s easy to forget about unless you’ve spent a lot of time watching other people play Half-Life: Alyx… the game has an optional spectator interface which shows up only on the computer monitor. The interface gives viewers the exact same information that the actual player has while in the game: which weapons they have unlocked or equipped, and how much health and resin they have. The interface even shows what items are stowed in the player’s ‘hand-pockets’.

And Valve went further than just adding an interface for spectators; they also added built-in camera smoothing, zoom levels, and even a selector to pick which eye the camera will look through.

The last one might seem like a minor detail, but because people are either left- or right-eye dominant, being able to choose your dominant eye means the spectator will correctly see what you’re aiming at when you’re aiming down the scope of a gun.
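
A simple version of both ideas might look like this in Swift (my own sketch; the smoothing constant and IPD value are illustrative):

```swift
import simd

// Hypothetical spectator camera: low-pass filter the HMD position so
// outside viewers don't see every head jitter, and offset toward the
// chosen dominant eye so scoped aiming reads correctly on the monitor.
struct SpectatorCamera {
    var position = simd_float3.zero
    var smoothing: Float = 8              // higher = snappier follow
    var dominantEyeOffset: Float = 0.032  // half of a ~64 mm IPD, right eye

    mutating func update(hmdPosition: simd_float3,
                         hmdRight: simd_float3,
                         deltaTime: Float) {
        let eyePosition = hmdPosition + hmdRight * dominantEyeOffset
        let t = min(1, smoothing * deltaTime)    // simple framerate-scaled lerp
        position += (eyePosition - position) * t
    }
}
```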

Multi-modal Menu

While we’re looking at the menus here, it’s also worth noting that the game menu is primarily designed for laser-pointer interaction, but it also works like a touchscreen.

While this seems maybe trivial today, let’s remember that Alyx was released almost four years ago(!). The foresight to offer both modalities means that whether the player’s first instinct is to touch the menu or to use the laser, both choices are equally correct.
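
Conceptually, both modalities can resolve to the same point on the menu plane, so a single handler serves laser and touch alike. A rough Swift sketch (the plane representation is my own, hypothetical):

```swift
import simd

// Hypothetical menu surface: an infinite plane defined by a point and
// a normal. Both input modes reduce to "where on the plane?"
struct MenuPlane {
    var origin: simd_float3
    var normal: simd_float3

    // Laser pointer: classic ray-plane intersection.
    func hit(rayOrigin: simd_float3, rayDirection: simd_float3) -> simd_float3? {
        let denom = simd_dot(normal, rayDirection)
        guard abs(denom) > 1e-6 else { return nil }        // ray parallel to menu
        let t = simd_dot(origin - rayOrigin, normal) / denom
        return t >= 0 ? rayOrigin + rayDirection * t : nil
    }

    // Touch: a fingertip within a couple of centimeters counts as contact.
    func hit(fingertip: simd_float3, touchRange: Float = 0.02) -> simd_float3? {
        let distance = simd_dot(fingertip - origin, normal)
        return abs(distance) <= touchRange ? fingertip - normal * distance : nil
    }
}
```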

Guiding Your Eye

All key items in Alyx have subtle lights on them to draw your attention. This is basic game design stuff, but I have to say that Alyx’s approach is much less immersion-breaking than that of many VR games, where key objects are highlighted with a glaringly obvious yellow mesh.

For the pistol magazine, the game makes it clear even at a distance how many bullets are in the magazine… in fact, it does this in two different ways.

First, every bullet has a small light on it which lets you see from the side of the magazine roughly how full it is.

And then on the bottom of the magazine there’s a radial indicator that depletes as the ammo runs down.

Because this is all done with light, if the magazine is half full, it will be half as bright—making it easy for players to tell just how ‘valuable’ the magazine is with just a glance, even at a distance. Completely empty magazines emit no light so you don’t mistake them for something useful. Many players learn this affordance quickly, even without thinking much about it.
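
The underlying rule is simple enough to capture in a few lines; here’s a hypothetical Swift sketch of the affordance:

```swift
// Emissive brightness scales linearly with remaining ammo, and an
// empty magazine emits nothing, so brightness reads as 'value' at a glance.
func magazineGlow(rounds: Int, capacity: Int) -> Float {
    guard rounds > 0, capacity > 0 else { return 0 }  // empty mags stay dark
    return Float(rounds) / Float(capacity)            // also drives the radial dial
}
```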

The takeaway here is that a game’s most commonly used items—the things players will interact with the most—should be the things that are most thoughtfully designed. Players will collect and reload literally hundreds of magazines throughout the game, so spending time to add these subtle details meaningfully improves the entire experience.

Collaborative Spatial Design App ‘ShapesXR’ Raises $8.6M, Expanding to Apple Vision Pro & Other Headsets

ShapesXR is a collaborative spatial design app built to make it easy to prototype spatial interfaces, interactions, and environments. The company announced today it has raised an $8.6 million seed investment, part of which the company plans to use to expand to more headsets.

While so many VR interfaces and interactions borrow heavily (if not entirely) from existing ‘flat’ design paradigms, ShapesXR is built on the premise that in order to build spatial applications you need spatial design tools. With that in mind, the app functions like a freeform canvas that allows users to mock up designs inside of VR to understand how everything fits together at scale and in 3D. With collaborative functionality, multiple people can work on projects simultaneously.

Right now that collaboration is limited to those with a Quest headset, but with the new funding, ShapesXR says it plans to expand the app to Apple Vision Pro, Pico, and Magic Leap headsets, opening the door to broader accessibility and cross-headset collaboration.

Image courtesy ShapesXR

The seed investment was led by Supernode Global, with participation from Triptyq VC, Boost VC, Hartmann Capital, and Geek Ventures.

Inga Petryaevskaya, CEO and Founder of ShapesXR says, “VR has such huge potential to transform how we all collaborate on projects and design new products, however, one of the main barriers to entry is the level of technical skill required to get started. ShapesXR has been built to remove these hurdles—it’s as easy to learn as PowerPoint. This truly democratizes 3D content creation and enables anyone to become a VR, AR and mixed reality storyteller.”

Beyond just creating shapes and scenes, ShapesXR also has a ‘layers’ function which lets users create slideshows of spatial content. This works like a simple flip-book animation, except in a 3D environment instead of a flat doodle at the corner of your notebook. Using the layers function, designers can prototype and show how spatial content should interact with the user, which allows design work to be done before any of the interactions are actually programmed.

“ShapesXR’s goal is to become the de facto industry standard for [spatial] UI/UX design—achieving for spatial computing what Figma did for the mobile computing era,” the company said in its seed investment announcement.

A Concise Beginner’s Guide to Apple Vision Pro Design & Development

Apple Vision Pro has brought new ideas to the table about how XR apps should be designed, controlled, and built. In this Guest Article, Sterling Crispin offers up a concise guide for what first-time XR developers should keep in mind as they approach app development for Apple Vision Pro.

Guest Article by Sterling Crispin

Sterling Crispin is an artist and software engineer with a decade of experience in the spatial computing industry. His work has spanned product design and the R&D of new technologies at companies like Apple, Snap Inc, and various other tech startups working on face computers.

Editor’s Note: The author would like to remind readers that he is not an Apple representative; this info is personal opinion and does not contain non-public information. Additionally, more info on Vision Pro development can be found in Apple’s WWDC23 videos (select Filter → visionOS).

Ahead is my advice for designing and developing products for Vision Pro. This article includes a basic overview of the platform, tools, porting apps, general product design, prototyping, perceptual design, business advice, and more.

Overview

Apps on visionOS are organized into ‘scenes’, which are Windows, Volumes, and Spaces.

Windows are a spatial version of what you’d see on a normal computer. They’re bounded rectangles of content that users surround themselves with. These may be windows from different apps or multiple windows from one app.

Volumes are things like 3D objects or small interactive scenes, such as a 3D map or a small game that floats in front of you rather than being fully immersive.

Spaces are fully immersive experiences in which only one app is visible. A Space could be full of many Windows and Volumes from your app, or, as in a VR game, the system could fall away and surround you entirely with immersive content. You can think of visionOS itself as a Shared Space where apps coexist together and you have less control, whereas Full Spaces give you the most control and immersiveness but don’t coexist with other apps. Spaces have immersion styles: mixed, progressive, and full, which define how much or how little of the real world the user sees.
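
In SwiftUI, an app declares these scene types directly. Here’s a minimal sketch (the view names and IDs are placeholders):

```swift
import SwiftUI

// Placeholder views standing in for your app's real content.
struct MainView: View { var body: some View { Text("Main window") } }
struct GlobeView: View { var body: some View { Text("3D volume") } }
struct SpaceView: View { var body: some View { Text("Immersive space") } }

@main
struct ExampleApp: App {
    var body: some Scene {
        // A bounded 2D Window that lives in the Shared Space.
        WindowGroup(id: "main") {
            MainView()
        }

        // A Volume: bounded 3D content, like a map or a board game.
        WindowGroup(id: "globe") {
            GlobeView()
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.6, height: 0.6, depth: 0.6, in: .meters)

        // A Space: fully immersive content with a chosen immersion style.
        ImmersiveSpace(id: "space") {
            SpaceView()
        }
        .immersionStyle(selection: .constant(.mixed), in: .mixed, .progressive, .full)
    }
}
```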

User Input

Users can look at the UI and pinch, like the Apple Vision Pro demo videos show. But you can also reach out and tap on windows directly, sort of like it’s actually a floating iPad, or use a Bluetooth trackpad or video game controller. You can also look and speak into search bars. There’s also a Dwell Control for eyes-only input, but that’s really an accessibility feature. For a simple dev approach, your app can just use events like a TapGesture. In this case, you won’t need to worry about where these events originate.
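
For example, a plain SwiftUI tap handler fires the same way regardless of which modality produced it. A minimal sketch:

```swift
import SwiftUI

struct TapCounter: View {
    @State private var taps = 0

    var body: some View {
        // This one handler fires whether the user looks-and-pinches,
        // reaches out and touches the window, or clicks a trackpad.
        Text("Taps: \(taps)")
            .padding()
            .onTapGesture { taps += 1 }
        // For taps on 3D content, RealityKit entities can instead receive
        // TapGesture().targetedToAnyEntity() attached to a RealityView.
    }
}
```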

Spatial Audio

Vision Pro has an advanced spatial audio system that makes sounds seem like they’re really in the room by considering the size and materials of your room. Using subtle sounds for UI interaction and taking advantage of sound design for immersive experiences is going to be really important. Make sure to take this topic seriously.
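
As a minimal sketch of attaching spatial sound to an object with RealityKit (the asset name is hypothetical):

```swift
import RealityKit

// Attach a spatial audio source to an entity so playback is localized
// to the object's position in the room. "chime.wav" is a hypothetical
// audio asset bundled with the app.
func playChime(on entity: Entity) throws {
    var audio = SpatialAudioComponent()
    audio.gain = -5                    // slight attenuation, in decibels
    entity.components.set(audio)
    let resource = try AudioFileResource.load(named: "chime.wav")
    entity.playAudio(resource)
}
```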

Development

If you want to build something that works across Vision Pro, iPad, and iOS, you’ll be operating within the Apple dev ecosystem, using tools like Xcode and SwiftUI. However, if your goal is to create a fully immersive VR experience for Vision Pro that also works on other headsets like Meta’s Quest or PlayStation VR, you have to use Unity.

Apple Tools

For Apple’s ecosystem, you’ll use SwiftUI to create the UI the user sees and the overall content of your app. RealityKit is the 3D rendering engine that handles materials, 3D objects, and light simulations. You’ll use ARKit for advanced scene understanding, like if you want someone to throw virtual darts and have them collide with their real wall, or do advanced things with hand tracking. But those rich AR features are only available in Full Spaces. There’s also Reality Composer Pro, a 3D content editor that lets you drag things around a 3D scene and make media-rich Spaces or Volumes. It’s like a diet Unity that’s built specifically for this development stack.

One cool thing with Reality Composer is that it’s already full of assets, materials, and animations. That helps developers who aren’t artists build something quickly and should help to create a more unified look and feel to everything built with the tool. Pros and cons to that product decision, but overall it should be helpful.

Existing iOS Apps

If you’re bringing an iPad or iOS app over, it will probably work unmodified as a Window in the Shared Space. If your app supports both iPad and iPhone, the headset will use the iPad version.

To customize your existing iOS app to take better advantage of the headset, you can use the Ornament API to make little floating islands of UI in front of, or beside, your app to make it feel more spatial. Ironically, if your app is using a lot of ARKit features, you’ll likely need to ‘reimagine’ it significantly to work on Vision Pro, as ARKit has been upgraded a lot for the headset.
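
Here’s a minimal sketch of an ornament in SwiftUI (the media controls are illustrative):

```swift
import SwiftUI

struct LibraryView: View {
    var body: some View {
        Text("Library")  // placeholder for the app's existing content
            // A floating strip of controls anchored below the window,
            // giving a ported iPad app a more spatial feel.
            .ornament(attachmentAnchor: .scene(.bottom)) {
                HStack {
                    Button("Play", systemImage: "play.fill") { }
                    Button("Next", systemImage: "forward.fill") { }
                }
                .padding()
                .glassBackgroundEffect()
            }
    }
}
```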

If you’re excited about building something new for Vision Pro, my personal opinion is that you should prioritize how your app will provide value across iPad and iOS too. Otherwise you’re losing out on hundreds of millions of users.

Unity

You can build for Vision Pro with the Unity game engine, which is a massive topic. Again, you need to use Unity if you’re building for Vision Pro as well as other headsets like Meta’s Quest or PSVR 2.

Unity supports building Bounded Volumes for the Shared Space, which exist alongside native Vision Pro content, and Unbounded Volumes for immersive content that may leverage advanced AR features. Finally, you can also build more VR-like apps, which give you more control over rendering but seem to lack support for ARKit scene understanding like plane detection. The Volume approach gives RealityKit more control over rendering, so you have to use Unity’s PolySpatial tool to convert materials, shaders, and other features.

Unity support for Vision Pro includes tons of interactions you’d expect to see in VR, like teleporting to a new location or picking up and throwing virtual objects.

Product Design

You could just make an iPad-like app that shows up as a floating window, use the default interactions, and call it a day. But like I said above, content can exist in a wide spectrum of immersion, locations, and use a wide range of inputs. So the combinatorial range of possibilities can be overwhelming.

If you haven’t spent 100 hours in VR, get a Quest 2 or 3 as soon as possible and try everything. It doesn’t matter if you’re a designer, or product manager, or a CEO, you need to get a Quest and spend 100 hours in VR to begin to understand the language of spatial apps.

I highly recommend checking out Hand Physics Lab as a starting point and overview for understanding direct interactions. There are a lot of subtle things it does which imbue virtual objects with a sense of physicality. And the YouTube VR app that was released in 2019 looks and feels pretty similar to a basic visionOS app; it’s worth checking out.

Keep a diary of what works and what doesn’t.

Ask yourself: ‘What app designs are comfortable, or cause fatigue?’, ‘What apps have the fastest time-to-fun or value?’, ‘What’s confusing and what’s intuitive?’, ‘What experiences would you even bother doing more than once?’ Be brutally honest. Learn from what’s been tried as much as possible.

General Design Advice

I strongly recommend the IDEO-style design thinking process; it works for spatial computing too. You should absolutely try it out if you’re unfamiliar. There’s Design Kit with resources and this video which, while dated, is a great example of the process.

The road to spatial computing is a graveyard of utopian ideas that failed. People tend to spend a very long time building grand solutions for the imaginary problems of imaginary users. It sounds obvious, but instead you should try to build something as fast as possible that fills a real human need, and then iteratively improve from there.
