VR development


Designing Mixed Reality Apps That Adapt to Different Spaces

Laser Dance is an upcoming mixed reality game that seeks to use Quest’s passthrough capability as more than just a background. In this Guest Article, developer Thomas Van Bouwel explains his approach to designing an MR game that adapts to different environments.

Guest Article by Thomas Van Bouwel

Thomas is a Belgian-Brazilian VR developer currently based in Brussels. Although his original background is in architecture, his work in VR spans from indie games like Cubism to enterprise software for architects and engineers like Resolve. His latest project, Laser Dance, is coming to Quest 3 late next year.

For the past year I’ve been working on a new game called Laser Dance. Built from the ground up for mixed reality (MR), the game aims to turn any room in your house into a laser obstacle course. Players walk back and forth between two buttons, and each button press spawns a new parametric laser pattern they have to navigate through. The game is still in full development, aiming for a release in 2024.

If you’d like to sign up for playtesting Laser Dance, you can do so here!

Laser Dance’s teaser trailer, which was first shown right after Meta Connect 2023

The main challenge with a game like this, and possibly any roomscale MR game, is to make levels that adapt well to any room regardless of its size and layout. Furthermore, since Laser Dance is a game that requires a lot of physical motion, the game should also try to accommodate differences in people’s level of mobility.

To overcome these challenges, good room-emulation tools that enable quick level-design iteration are essential. In this article, I want to go over how levels in Laser Dance work, and share some of the developer tools I’m building to help me create and test the game’s adaptive laser patterns.

Laser Pattern Definition

To understand how Laser Dance’s room emulation tools work, we first need to cover how laser patterns work in the game.

A level in Laser Dance consists of a sequence of laser patterns – players walk (or crawl) back and forth between two buttons on opposite ends of the room, and each button press enables the next pattern. These laser patterns will try to adapt to the room size and layout.

Since the laser patterns in Laser Dance’s levels need to adapt to different types of spaces, the specific positions of lasers aren’t pre-determined, but calculated parametrically based on the room.

Several methods are used to position the lasers. The most straightforward one is to apply a uniform pattern over the entire room. The example below shows a level that applies a uniform grid of swinging lasers across the room.

An example of a pattern-based level: a uniform pattern of movement is applied to a grid of lasers covering the entire room.
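
Laser Dance is built in Unity, so the real pattern code is C#; still, the core idea of deriving laser positions from the room’s dimensions rather than from fixed coordinates can be sketched in a few lines. The Swift sketch below is purely illustrative, and all names and values (RoomBounds, LaserPlacement, the 0.75 m spacing) are invented for the example.

```swift
import Foundation

/// Hypothetical room description: an axis-aligned footprint in meters.
struct RoomBounds {
    var width: Double   // x extent
    var depth: Double   // z extent
}

/// A single laser emitter position on the floor plan.
struct LaserPlacement {
    var x: Double
    var z: Double
    var phase: Double   // offset used to stagger the swinging motion
}

/// Lay a uniform grid of lasers over the whole room, regardless of its size.
/// `spacing` controls density; the phase offset makes neighboring lasers
/// swing out of sync so a walkable gap always exists somewhere.
func uniformGrid(in room: RoomBounds, spacing: Double = 0.75) -> [LaserPlacement] {
    var placements: [LaserPlacement] = []
    var x = spacing / 2
    while x < room.width {
        var z = spacing / 2
        while z < room.depth {
            placements.append(LaserPlacement(x: x, z: z,
                                             phase: (x + z).truncatingRemainder(dividingBy: 1.0)))
            z += spacing
        }
        x += spacing
    }
    return placements
}
```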

Other levels may use the orientation of the buttons relative to each other to determine the laser pattern. The example below shows a pattern that creates a sequence of blinking laser walls between the buttons.

Blinking walls of lasers are oriented perpendicular to the imaginary line between the two buttons.
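
The same parametric idea applies here, except the driving input is the two button positions rather than the room bounds. Again, this is a hedged, illustrative Swift sketch rather than the game’s actual code, with invented names throughout.

```swift
/// Hypothetical sketch: place `count` laser walls at even intervals along the
/// line between the two buttons, each oriented perpendicular to that line.
/// Positions are 2D floor coordinates in meters.
func laserWalls(from buttonA: SIMD2<Double>,
                to buttonB: SIMD2<Double>,
                count: Int) -> [(center: SIMD2<Double>, normal: SIMD2<Double>)] {
    guard count > 0 else { return [] }
    let delta = buttonB - buttonA
    let length = (delta.x * delta.x + delta.y * delta.y).squareRoot()
    let axis = delta / length                       // direction of travel between the buttons
    return (1...count).map { i in
        let t = Double(i) / Double(count + 1)       // even spacing, excluding the buttons themselves
        let center = buttonA + delta * t
        // The wall's normal points along the travel axis, so the wall itself
        // stands perpendicular to the imaginary line between the buttons.
        return (center: center, normal: axis)
    }
}
```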

One of the more versatile tools for level generation is a custom pathfinding algorithm, which was written for Laser Dance by Mark Schramm, guest developer on the project. This algorithm tries to find paths between the buttons that maximize the distance from furniture and walls, making a safer path for players.

The paths created by this algorithm allow for several laser patterns, like a tunnel of lasers, or placing a laser obstacle in the middle of the player’s path between the buttons.

This level uses pathfinding to spawn a tunnel of lasers that snakes around the furniture in this room.
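
The article doesn’t detail Mark Schramm’s algorithm, so the snippet below is only a generic illustration of the underlying idea: run an ordinary grid-based shortest-path search (A* or Dijkstra), but make cells near walls and furniture expensive so the cheapest path naturally keeps its distance from obstacles. All names and constants are hypothetical.

```swift
import Foundation

/// One cell of a hypothetical navigation grid laid over the room's floor.
struct GridCell: Hashable {
    var x: Int
    var z: Int
}

/// Clearance-aware step cost for a grid-based pathfinder. Cells inside
/// obstacles are impassable; cells closer than `safeRadius` to a wall or
/// furniture get an increasing penalty, steering the path toward open space.
func stepCost(to cell: GridCell,
              clearance: (GridCell) -> Double,     // meters to the nearest wall or furniture
              safeRadius: Double = 0.6) -> Double {
    let c = clearance(cell)
    if c <= 0 { return .infinity }                 // inside an obstacle: impassable
    let base = 1.0                                 // uniform movement cost
    let penalty = max(0, safeRadius - c) * 10.0    // grows as the cell approaches an obstacle
    return base + penalty
}
```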

Room Emulation

The different techniques described above for creating adaptive laser patterns can sometimes lead to unexpected results or bugs in specific room layouts. Additionally, it can be challenging to design levels while trying to keep different types of rooms in mind.

To help with this, I spent much of early development for Laser Dance on building a set of room emulation tools to let me simulate and directly compare what a level will look like between different room layouts.

Rooms are stored in-game as a simple text file containing all wall and furniture positions and dimensions. The emulation tool can take these files, and spawn several rooms next to each other directly in the Unity editor.

You can then swap out different levels, or even just individual laser patterns, and emulate these side by side in various rooms to directly compare them.

A custom tool built in Unity spawns several rooms side by side in an orthographic view, showing how a certain level in Laser Dance would look in different room layouts.
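
The actual file format isn’t public; purely as an illustration of what “a simple text file containing all wall and furniture positions and dimensions” could look like, here is a hypothetical JSON layout and the few lines needed to read it back for emulation. The field names are invented.

```swift
import Foundation

/// Hypothetical layout for a saved room file. The real format used by
/// Laser Dance isn't documented, so this structure is invented for
/// illustration; JSON is used here only for readability.
struct SavedRoom: Codable {
    struct Box: Codable {
        var center: [Double]   // x, y, z in meters
        var size: [Double]     // width, height, depth in meters
    }
    var walls: [Box]
    var furniture: [Box]
}

// Decoding a captured room so an editor tool could spawn it for emulation:
let json = """
{ "walls": [ { "center": [0, 1.25, -2], "size": [4, 2.5, 0.1] } ],
  "furniture": [ { "center": [1, 0.4, -1], "size": [0.8, 0.8, 1.6] } ] }
"""

do {
    let room = try JSONDecoder().decode(SavedRoom.self, from: Data(json.utf8))
    print("Loaded \(room.walls.count) wall(s) and \(room.furniture.count) furniture item(s)")
} catch {
    print("Failed to parse room file: \(error)")
}
```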

Accessibility and Player Emulation

Just as the rooms people play in differ, so will the players themselves. Not everyone may be able to crawl on the floor to dodge lasers, or feel capable of squeezing through a narrow corridor of lasers.

Because of the physical nature of Laser Dance’s gameplay, there will always be a limit to its accessibility. However, to the extent possible, I would still like to try and have the levels adapt to players in the same way they adapt to rooms.

Currently, Laser Dance allows players to set their height, shoulder width, and the minimum height they’re able to crawl under. Levels will try to use these values to adjust certain parameters of how they’re spawned. An example is shown below, where a level would typically expect players to crawl underneath a field of lasers. When the minimum crawl height is adjusted, the pattern adapts to that new value, making the level more forgiving.

Accessibility settings allow players to tailor some of Laser Dance’s levels to their body type and mobility restrictions. This example shows how a level that would have players crawl on the floor can adjust itself for folks with more limited vertical mobility.
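
As a rough illustration of how such a setting could feed into level generation, here is a hedged Swift sketch (invented names and defaults, not the game’s actual code) in which a crawl-under pattern derives its laser height from the player’s stated minimum crawl height.

```swift
/// Hypothetical player-set accessibility values.
struct PlayerProfile {
    var height: Double          // meters
    var shoulderWidth: Double   // meters
    var minCrawlHeight: Double  // lowest gap the player says they can pass under, in meters
}

/// Raise the laser field so it always clears the player's stated crawl height
/// (plus a small safety margin) instead of using a fixed design-time value.
func crawlLaserHeight(for player: PlayerProfile,
                      designHeight: Double = 0.5,
                      margin: Double = 0.1) -> Double {
    max(designHeight, player.minCrawlHeight + margin)
}
```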

These player values can also be emulated in the custom tools I’m building. Different player presets can be swapped out to directly compare how a level may look different between two players.

Laser Dance’s emulation tools allow you to swap out different preset player values to test their effect on the laser patterns. In this example, you can notice how swapping to a more accessible player value preset makes the tunnel of lasers wider.

Data, Testing, and Privacy

A key problem with designing an adaptive game like Laser Dance is that unexpected room layouts and environments might break some of the levels.

To prepare for this during development, the settings include a button players can choose to press to share their room data with me. Using these emulation tools, I can then try to reproduce their issue and work to resolve it.

Playtesters can press a button in the settings to share their room layout. This allows for local reproduction of potential issues they may have seen, using the emulation tools mentioned above.

This of course should raise some privacy concerns, as players are essentially sharing parts of their home layout with me. From a developer’s standpoint it has a clear benefit to the design and quality-control process, but as consumers of MR we should also take an active interest in what personal data developers have access to and how it is used.

Personally, I think it’s important that sharing sensitive data like this requires active consent of the player each time it is shared – hence the button that needs to be actively pressed in the settings. Clear communication on why this data is needed and how it will be used is also important, which is a big part of my motivation for writing this article.

When it comes to MR platforms, an active discussion on data privacy is important too. We can’t always assume sensitive room data will be used in good faith by all developers, so as players we should expect clear communication and clear limitations from platforms regarding how apps can access and use this type of sensitive data, and stay vigilant on how and why certain apps may request access to this data.

Do You Need to Build Custom Tools?

Is building a handful of custom tools a requirement for developing adaptive Mixed Reality? Luckily the answer to that is: probably not.

We’re already seeing Meta and Apple come out with mixed reality emulation tools of their own, letting developers test their apps in a simulated virtual environment, even without a headset. These tools are likely to only get better and more robust in time.

There is still merit to building custom tools in some cases, since they will give you the most flexibility to test against your specific requirements. Being able to emulate and compare between multiple rooms or player profiles at the same time in Laser Dance is a good example of this.

– – — – –

Development of Laser Dance is still in full swing. My hope is that I’ll end up with a fun game that can also serve as an introduction to mixed reality for newcomers to the medium. Though it took some time to build out these emulation tools, they will hopefully both enable and speed up the level design process to help achieve this goal.

If you would like to help with the development of the game, please consider signing up for playtesting!




Apple Expands Vision Pro Developer Labs to Two New Cities

Apple is adding two new locations to its Vision Pro ‘Developer Labs’ so devs can get their hands on the headset before it launches early next year.

It might not feel like it but 2024 will be here before we know it, and Apple has recently said it’s on track to launch Vision Pro “early” next year.

To get developers’ hands on Vision Pro before it launches, Apple has a handful of ‘Developer Labs’ where developers can go to check out the device and get feedback on their apps. Today the company announced it’s opening two more locations: New York City, USA and Sydney, Australia.

Even with the two new locations, Vision Pro Developer Labs are still pretty sparse, but here’s the full list to date:

  • Cupertino, CA, USA
  • London, England, UK
  • Munich, Germany
  • Shanghai, China
  • Tokyo, Japan
  • New York City, USA
  • Sydney, Australia
  • Singapore

Apple is also offering developers ‘compatibility evaluations’ where the company will test third-party Vision Pro apps and provide feedback. The company is also giving select developers access to Vision Pro development kits.

Vision Pro is Apple’s first-ever XR headset and it’s sure going to shake up the industry one way or another, perhaps starting with the way the company is approaching ‘social’ in XR.



Meta Reveals New Prototype VR Headsets Focused on Retinal Resolution and Light Field Passthrough

Meta unveiled two new VR headset prototypes that showcase more progress in the fight to solve some persistent technical challenges facing VR today. Presenting at SIGGRAPH 2023, Meta is demonstrating a headset with retinal resolution combined with varifocal optics, and another headset with advanced light field passthrough capabilities.

Butterscotch Varifocal Prototype

In a developer blog post, Meta showed off a varifocal research prototype that demonstrates a VR display system which provides “visual clarity that can closely match the capabilities of the human eye,” says Meta Optical Scientist Yang Zhao. The so-called ‘Butterscotch Varifocal’ prototype provides retinal resolution of up to 56 pixels per degree (PPD), which is sufficient for 20/20 visual acuity, researchers say.

Since its displays are also varifocal, it can support focal depths from 0 to 4 diopters (i.e. optical infinity to 25 cm), matching what researchers say are “the dynamics of eye accommodation with at least 10 diopter/s peak velocity and 100 diopter/s² acceleration.” The pulsing motors below control the displays’ focal distance in an effort to match the human eye.

Varifocal headsets represent a solution to the vergence-accommodation conflict (VAC) which has plagued standard VR headsets, the most advanced consumer headsets included. Varifocal headsets not only include the same standard support for the vergence reflex (when eyes converge on objects to form a stereo image), but also the accommodation reflex (when the lens of the eye changes shape to focus light at different depths). Without support for accommodation, VR displays can cause eye strain, make it difficult to focus on close imagery, and may even limit visual immersion.

Check out the through-the-lens video below to see how Butterscotch’s varifocal system works:

Using LCD panels readily available on the market, Butterscotch manages its 20/20 retinal display by reducing the field of view (FOV) to 50 degrees, smaller than Quest 2’s ~89 degree FOV.

Although Butterscotch’s varifocal abilities are similar to the company’s prior Half Dome prototypes, the company says Butterscotch is “solely focused on showcasing the experience of retinal resolution in VR—but not necessarily with hardware technologies that are ultimately appropriate for the consumer.”

“In contrast, our work on Half Dome 1 through 3 focused on miniaturizing varifocal in a fully practical manner, albeit with lower-resolution optics and displays more similar to today’s consumer headsets,” explains Display Systems Research Director Douglas Lanman. “Our work on Half Dome prototypes continues, but we’re pausing to exhibit Butterscotch Varifocal to show why we remain so committed to varifocal and delivering better visual acuity and comfort in VR headsets. We want our community to experience varifocal for themselves and join in pushing this technology forward.”

Flamera Lightfield Passthrough Prototype

Another important side of making XR more immersive is undoubtedly the headset’s passthrough capabilities, like you might see on Quest Pro or the upcoming Apple Vision Pro. The decidedly bug-eyed design of Meta’s Flamera research prototype is looking for a better way to create more realistic passthrough by using light fields.

Research Scientist Grace Kuo wearing the Flamera research prototype | Image courtesy Meta

In standard headsets, cameras are typically placed a few inches from where your eyes actually sit, capturing a different view than what you’d see if you weren’t wearing a headset. While there’s a lot of distortion and placement correction going on in standard headsets of today, you’ll probably still notice a ton of visual artifacts as the software tries to correctly resolve and render different depths of field.

“To address this challenge, we brainstormed optical architectures that could directly capture the same rays of light that you’d see with your bare eyes,” says Meta Research Scientist Grace Kuo. “By starting our headset design from scratch instead of modifying an existing design, we ended up with a camera that looks quite unique but can enable better passthrough image quality and lower latency.”

Check out the quick explainer below to see how Flamera’s ingenious capture methods work:

Now, here’s a comparison between an unobstructed view and Flamera’s light field capture, showing off some pretty compelling results:

As research prototypes, there’s no indication when we can expect these technologies to come to consumer headsets. Still, it’s clear that Meta is adamant about showing off just how far ahead it is in tackling some of the persistent issues in headsets today—something you probably won’t see from the patently black box that is Apple.

You can read more about Butterscotch and Flamera in their respective research papers, which are being presented at SIGGRAPH 2023, taking place August 6th – 10th in Los Angeles. Click here for the Butterscotch Varifocal abstract and Flamera full paper.



Sony Details PSVR 2 Prototypes from Conception to Production

Sony released a peek into the prototyping stages that led to PSVR 2, showing off a number of test units for both the headset and controllers.

In an extensive interview on the PS blog, PSVR 2’s Product Manager Yasuo Takahashi reveals the development process behind Sony’s latest VR headset.

Takahashi reveals that detailed discussions on the company’s next-gen PSVR began in earnest after the launch of the original in 2016. From there, the team started prototyping various technologies for PSVR 2 in early 2017.

Below is a condensed version of the interview, including all provided photos. If you want to read the full article, click here.

Challenges of Design & Optimization

Maintaining a light and compact design while implementing new features was a challenge, Takahashi says, requiring the teams to work closely to produce detailed technical estimates and optimize the design.

Prototype for testing inside-out tracking cameras with evaluation board | Image courtesy Sony

While comfort was a significant focus during the development process, the initial prototype focused on evaluating functionality rather than weight.

All of that top bulk is dedicated to inside-out camera evaluation boards which would eventually be shrunk down to an SoC embedded within the headset.

Room-scale & Eye-tracking Tech

Various prototypes were created and tested before integration, including both inside-out and outside-in tracking methods. Of course, we know inside-out tracking was eventually the winner, but it’s interesting to note the company was at one point still considering an outside-in approach, similar to the original PSVR.

Eye-tracking tech was also explored as a new UI feature in addition to foveated rendering, which allows developers to push the boundaries of PS5’s VR rendering capabilities and serve up higher-fidelity visuals in games.

Testing and optimizing eye tracking took time, considering different eye colors and accommodating players wearing glasses.

Eye-tracking evaluation prototype 2 | Image courtesy Sony

Comfort & Design

The development team assessed comfort and wearability, evaluating numerous configurations based on the headset’s expected weight. The team put a lot of thought into the materials and shape to make the headset feel lightweight while maintaining strength.

A cool ‘skeleton’ prototype shows all of the pieces of the puzzle together, also showing the headset’s halo strap, which like the original PSVR, keeps the bulk of the weight off the user’s forehead. This one should definitely get a spot on the museum shelves (or maybe a fun mid-generation release?).

The ‘skeleton’ prototype | Image courtesy Sony

Headset haptics were also added as a new feature based on the idea of using the rumble motor from the DualShock 4 wireless controller.

PSVR 2 Sense Controllers

The PSVR 2 Sense controllers were developed in parallel with the headset, starting discussions in 2016 and prototyping in 2017.

Features like haptic feedback, adaptive triggers, and finger-touch detection were early additions, although the team was still sussing out tracking. Notice the Move-style tracking sphere on the tip of an early prototype.

Prototype 1 | Image courtesy Sony

The final shape of the Sense controller was achieved through extensive prototyping and user testing to ensure a comfortable fit and optimized center of gravity.

Here you can see a number of IR tracking marker configurations that were tested before the team settled on the production model’s current form.

While Sony is undoubtedly sitting on a lot more prototypes than this—they began prototyping when the original PSVR had been in the wild for less than a year—it’s an interesting look at how Takahashi’s team eventually settled on the current form and function of what will likely be PS5’s only VR headset for years to come.

If you’re interested to learn more, check out the full interview with Takahashi.



Meta Introduces ‘Super Resolution’ Feature for Improved Quest Visuals

Meta today introduced a new developer feature called Super Resolution that’s designed to improve the look of VR apps and games on Quest. The company says the new feature offers better quality upscaling at similar costs as previous techniques.

Meta today announced the new Super Resolution feature for developers on the company’s XR developer blog. Available for apps built on the Quest V55 update and later, Super Resolution is a new upscaling method for applications that aren’t already rendering at the screen’s display resolution (as many do in order to meet performance requirements).

“Super Resolution is a VR-optimized edge-aware scaling and sharpening algorithm built upon Snapdragon Game Super Resolution with Meta Quest-specific performance optimizations developed in collaboration with the Qualcomm Graphics Team,” the company says.

Meta further explains that, by default, apps are scaled up to the headset’s display resolution with bilinear scaling, which is fast but often introduces blurring in the process. Super Resolution is presented as an alternative that can produce better upscaling results with low performance costs.

“Super Resolution is a single-pass spatial upscaling and sharpening technique optimized to run on Meta Quest devices. It uses edge- and contrast-aware filtering to preserve and enhance details in the foveal region while minimizing halos and artifacts.”

Upscaling using bilinear (left), Normal Sharpening (center), and Super Resolution (right). The new technique prevents blur without introducing as much aliasing. | Image courtesy Meta

Unlike the recent improvements to CPU and GPU power on Quest headsets, Super Resolution isn’t an automatic benefit to all applications; developers will need to opt-in to the feature, and even then, Meta warns that benefits from the feature will need to be assessed on an app-by-app basis.

“The exact GPU cost of Super Resolution is content-dependent, as Super Resolution devotes more computation to regions of the image with fine detail. The cost of enabling Super Resolution over the default bilinear filtering is lower for content containing primarily regions of solid colors or smooth gradients when compared to content with highly detailed images or objects,” the company explains.

Developers can implement Super Resolution into Quest apps on V55+ immediately, and those using Quest Link (AKA Oculus Link) for PC VR content can also enable the sharpening feature by using the Oculus Debug Tool and setting the Link Sharpening option to Quality.



Vision Pro Dev Kit Applications Will Open in July

Apple says it will give developers the opportunity to apply for Vision Pro dev kits starting sometime in July.

In addition to releasing a first round of developer tools last week, including a software ‘Simulator’ of Vision Pro, Apple also wants to give developers a chance to get their hands on the headset itself.

The company indicates that applications for a Vision Pro development kit will open starting in July, and developers will be able to find details here when the time comes.

There’s no telling how many of the development kits the company plans to send out, or exactly when they will start shipping, but given Apple’s culture of extreme secrecy you can bet selected developers will be locked down with strict NDAs regarding their use of the device.

The Vision Pro developer kit isn’t the only way developers will be able to test their apps on a real headset.

Developers will also be able to apply to attend ‘Vision Pro developer labs’:

Apply for the opportunity to attend an Apple Vision Pro developer lab, where you can experience your visionOS, iPadOS, and iOS apps running on Apple Vision Pro. With direct support from Apple, you’ll be able to test and optimize your apps and games, so they’ll be ready when Apple Vision Pro is available to customers. Labs will be available in six locations worldwide: Cupertino, London, Munich, Shanghai, Singapore, and Tokyo.

Our understanding is that applications for the developer labs will also open in July.

Additionally, developers will be able to request that their app be reviewed by Apple itself on visionOS, though this is restricted to existing iPhone and iPad apps, rather than newly created apps for visionOS:

If you currently have an iPad or iPhone app on the App Store, we can help you test it on Apple Vision Pro. Request a compatibility evaluation from App Review to get a report on your app or game’s appearance and how it behaves in visionOS.

Vision Pro isn’t planned to ship until early 2024, but Apple wants to have third-party apps ready and waiting for when that time comes.



Apple Releases Vision Pro Development Tools and Headset Emulator

Apple has released new and updated tools for developers to begin building XR apps on Apple Vision Pro.

Apple Vision Pro isn’t due out until early 2024, but the company wants developers to get a jump-start on building apps for the new headset.

To that end, the company announced today it has released the visionOS SDK, updated Xcode, Simulator, and Reality Composer Pro, which developers can access at the visionOS developer website.

While some of the tools will be familiar to Apple developers, tools like Simulator and Reality Composer Pro are newly released for the headset.

Simulator is the Apple Vision Pro emulator, which aims to give developers a way to test their apps before having their hands on the headset. The tool effectively acts as a software version of Apple Vision Pro, allowing developers to see how their apps will render and act on the headset.

Reality Composer Pro is aimed at making it easy for developers to build interactive scenes with 3D models, sounds, and textures. From what we understand, it’s sort of like an easier (albeit less capable) alternative to Unity. However, developers who already know or aren’t afraid to learn a full-blown game engine can also use Unity to build visionOS apps.

Image courtesy Apple

In addition to the release of the visionOS SDK today, Apple says it’s still on track to open a handful of ‘Developer Labs’ around the world where developers can get their hands on the headset and test their apps. The company also says developers will be able to apply to receive Apple Vision Pro development kits next month.



A Concise Beginner’s Guide to Apple Vision Pro Design & Development

Apple Vision Pro has brought new ideas to the table about how XR apps should be designed, controlled, and built. In this Guest Article, Sterling Crispin offers up a concise guide for what first-time XR developers should keep in mind as they approach app development for Apple Vision Pro.

Guest Article by Sterling Crispin

Sterling Crispin is an artist and software engineer with a decade of experience in the spatial computing industry. His work has spanned between product design and the R&D of new technologies at companies like Apple, Snap Inc, and various other tech startups working on face computers.

Editor’s Note:  The author would like to remind readers that he is not an Apple representative; this info is personal opinion and does not contain non-public information. Additionally, more info on Vision Pro development can be found in Apple’s WWDC23 videos (select Filter → visionOS).

Ahead is my advice for designing and developing products for Vision Pro. This article includes a basic overview of the platform, tools, porting apps, general product design, prototyping, perceptual design, business advice, and more.

Overview

Apps on visionOS are organized into ‘scenes’, which are Windows, Volumes, and Spaces.

Windows are a spatial version of what you’d see on a normal computer. They’re bounded rectangles of content that users surround themselves with. These may be windows from different apps or multiple windows from one app.

Volumes are things like 3D objects, or small interactive scenes. Like a 3D map, or small game that floats in front of you rather than being fully immersive.

Spaces are fully immersive experiences where only one app is visible. That could be full of many Windows and Volumes from your app. Or like VR games where the system goes away and it’s all fully immersive content that surrounds you. You can think of visionOS itself like a Shared Space where apps coexist together and you have less control. Whereas Full Spaces give you the most control and immersiveness, but don’t coexist with other apps. Spaces have immersion styles: mixed, progressive, and full. Which defines how much or little of the real world you want the user to see.
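
As a concrete anchor for the three scene types, here is a minimal sketch of a visionOS app entry point declaring a Window, a Volume, and an Immersive Space with selectable immersion styles. The scene ids and placeholder views are invented for the example.

```swift
import SwiftUI

// Placeholder views standing in for real app content.
struct ModelPlaceholderView: View { var body: some View { Text("3D content goes here") } }
struct ImmersiveSceneView: View { var body: some View { Text("Immersive content goes here") } }

@main
struct ExampleApp: App {
    @State private var immersionStyle: ImmersionStyle = .mixed

    var body: some Scene {
        // A Window: a bounded 2D panel that lives in the Shared Space.
        WindowGroup(id: "main") {
            Text("Hello, visionOS")
        }

        // A Volume: a bounded 3D region for models or small interactive scenes.
        WindowGroup(id: "model-volume") {
            ModelPlaceholderView()
        }
        .windowStyle(.volumetric)

        // A Space: a fully immersive scene where only this app is visible.
        ImmersiveSpace(id: "immersive") {
            ImmersiveSceneView()
        }
        .immersionStyle(selection: $immersionStyle, in: .mixed, .progressive, .full)
    }
}
```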

User Input

Users can look at the UI and pinch like the Apple Vision Pro demo videos show. But you can also reach out and tap on windows directly, sort of like it’s actually a floating iPad. Or use a bluetooth trackpad or video game controller. You can also look and speak in search bars. There’s also a Dwell Control for eyes-only input, but that’s really an accessibility feature. For a simple dev approach, your app can just use events like a TapGesture. In this case, you won’t need to worry about where these events originate from.
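
For example, a plain SwiftUI tap handler is already input-agnostic; the sketch below (a trivial counter, not from any Apple sample) reacts the same way to look-and-pinch, a direct touch on the window, or a trackpad click.

```swift
import SwiftUI

/// Minimal sketch of input-agnostic interaction: the same tap handler fires
/// regardless of which input method produced the tap.
struct CounterView: View {
    @State private var taps = 0

    var body: some View {
        Text("Tapped \(taps) times")
            .font(.largeTitle)
            .padding()
            .onTapGesture {
                // The app only sees a tap; it doesn't need to know whether it
                // came from gaze-and-pinch, direct touch, or a trackpad.
                taps += 1
            }
    }
}
```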

Spatial Audio

Vision Pro has an advanced spatial audio system that makes sounds seem like they’re really in the room by considering the size and materials in your room. Using subtle sounds for UI interaction and taking advantage of sound design for immersive experiences is going to be really important. Make sure to take this topic seriously.

Development

If you want to build something that works between Vision Pro, iPad, and iOS, you’ll be operating within the Apple dev ecosystem, using tools like XCode and SwiftUI. However, if your goal is to create a fully immersive VR experience for Vision Pro that also works on other headsets like Meta’s Quest or PlayStation VR, you have to use Unity.

Apple Tools

For Apple’s ecosystem, you’ll use SwiftUI to create the UI the user sees and the overall content of your app. RealityKit is the 3D rendering engine that handles materials, 3D objects, and light simulations. You’ll use ARKit for advanced scene understanding, like if you want someone to throw virtual darts and have them collide with their real wall, or do advanced things with hand tracking. But those rich AR features are only available in Full Spaces. There’s also Reality Composer Pro which is a 3D content editor that lets you drag things around a 3D scene and make media rich Spaces or Volumes. It’s like diet-Unity that’s built specifically for this development stack.
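
To make that stack concrete, here is a minimal sketch of RealityKit content hosted in SwiftUI via RealityView. Generating a sphere in code keeps the example self-contained; in practice you’d more likely load a scene authored in Reality Composer Pro.

```swift
import SwiftUI
import RealityKit

/// A minimal sketch of RealityKit inside SwiftUI on visionOS: RealityView
/// hosts the 3D content, and RealityKit supplies the mesh and material.
struct SphereVolumeView: View {
    var body: some View {
        RealityView { content in
            let sphere = ModelEntity(
                mesh: .generateSphere(radius: 0.1),
                materials: [SimpleMaterial(color: .cyan, isMetallic: true)]
            )
            sphere.position = [0, 0, 0]   // meters, centered in the volume
            content.add(sphere)
        }
    }
}
```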

One cool thing with Reality Composer is that it’s already full of assets, materials, and animations. That helps developers who aren’t artists build something quickly and should help to create a more unified look and feel to everything built with the tool. Pros and cons to that product decision, but overall it should be helpful.

Existing iOS Apps

If you’re bringing an iPad or iOS app over, it will probably work unmodified as a Window in the Shared Space. If your app supports both iPad and iPhone, the headset will use the iPad version.

To customize your existing iOS app to take better advantage of the headset, you can use the Ornament API to make little floating islands of UI in front of, or beside, your app to make it feel more spatial. Ironically, if your app is using a lot of ARKit features, you’ll likely need to ‘reimagine’ it significantly to work on Vision Pro, as ARKit has been upgraded a lot for the headset.
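
As a rough sketch of what an ornament looks like in code, the snippet below anchors a small control strip to the bottom of a window. Treat the exact modifier signature shown here (attachmentAnchor: .scene(.bottom)) as an assumption recalled from Apple’s visionOS SDK; it may differ slightly between SDK versions.

```swift
import SwiftUI

/// Sketch of an ornament: a floating strip of controls attached to the
/// bottom edge of a window, outside the window's own bounds.
struct DocumentView: View {
    var body: some View {
        Text("Document content")
            .padding(40)
            .ornament(attachmentAnchor: .scene(.bottom)) {
                HStack {
                    Button("Previous") { /* page back */ }
                    Button("Next") { /* page forward */ }
                }
                .padding()
                .glassBackgroundEffect()   // the standard visionOS "glass" backing
            }
    }
}
```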

If you’re excited about building something new for Vision Pro, my personal opinion is that you should prioritize how your app will provide value across iPad and iOS too. Otherwise you’re losing out on hundreds of millions of users.

Unity

You can build to Vision Pro with the Unity game engine, which is a massive topic. Again, you need to use Unity if you’re building to Vision Pro as well as a Meta headset like the Quest or PSVR 2.

Unity supports building Bounded Volumes for the Shared Space which exist alongside native Vision Pro content. And Unbounded Volumes, for immersive content that may leverage advanced AR features. Finally you can also build more VR-like apps which give you more control over rendering but seem to lack support for ARKit scene understanding like plane detection. The Volume approach gives RealityKit more control over rendering, so you have to use Unity’s PolySpatial tool to convert materials, shaders, and other features.

Unity support for Vision Pro includes tons of interactions you’d expect to see in VR, like teleporting to a new location or picking up and throwing virtual objects.

Product Design

You could just make an iPad-like app that shows up as a floating window, use the default interactions, and call it a day. But like I said above, content can exist in a wide spectrum of immersion, locations, and use a wide range of inputs. So the combinatorial range of possibilities can be overwhelming.

If you haven’t spent 100 hours in VR, get a Quest 2 or 3 as soon as possible and try everything. It doesn’t matter if you’re a designer, or product manager, or a CEO, you need to get a Quest and spend 100 hours in VR to begin to understand the language of spatial apps.

I highly recommend checking out Hand Physics Lab as a starting point and overview for understanding direct interactions. There are a lot of subtle things they do which imbue virtual objects with a sense of physicality. And the YouTube VR app that was released in 2019 looks and feels pretty similar to a basic visionOS app, so it’s worth checking out.

Keep a diary of what works and what doesn’t.

Ask yourself: ‘What app designs are comfortable, or cause fatigue?’, ‘What apps have the fastest time-to-fun or value?’, ‘What’s confusing and what’s intuitive?’, ‘What experiences would you even bother doing more than once?’ Be brutally honest. Learn from what’s been tried as much as possible.

General Design Advice

I strongly recommend the IDEO style design thinking process, it works for spatial computing too. You should absolutely try it out if you’re unfamiliar. There’s Design Kit with resources and this video which, while dated, is a great example of the process.

The road to spatial computing is a graveyard of utopian ideas that failed. People tend to spend a very long time building grand solutions for the imaginary problems of imaginary users. It sounds obvious, but instead you should try to build something as fast as possible that fills a real human need, and then iteratively improve from there.

Continue on Page 2: Spatial Formats and Interaction »



Apple Vision Pro Will Have an ‘Avatar Webcam’, Automatically Integrating with Popular Video Chat Apps

In addition to offering immersive experiences, Apple says that Vision Pro will be able to run most iPad and iOS apps out of the box with no changes. For video chat apps like Zoom, Messenger, Discord, and others, the company says that an ‘avatar webcam’ will be supplied to apps, making them automatically able to handle video calls between the headset and other devices.

Apple says that on day one, all suitable iOS and iPad OS apps will be available on the headset’s App Store. According to the company, “most apps don’t need any changes at all,” and the majority should run on the headset right out of the box. Developers will be able to opt-out from having their apps on the headset if they’d like.

For video conferencing apps like Zoom, Messenger, Discord, and Google Meet, which expect access to the front camera of an iPhone or iPad, Apple has done something clever for Vision Pro.

Instead of a live camera view, Vision Pro provides a view of the headset’s computer-generated avatar of the user (which Apple calls a ‘Persona’). That means that video chat apps that are built according to Apple’s existing guidelines should work on Vision Pro without any changes to how the app handles camera input.

How Apple Vision Pro ‘Persona’ avatars are represented | Image courtesy Apple

Personas use the headset’s front cameras to scan the user’s face to create a model, then the model is animated according to head, eye, and hand inputs tracked by the headset.

Image courtesy Apple

Apple confirmed as much in a WWDC developer session called Enhance your iPad and iPhone apps for the Shared Space. The company also confirmed that apps asking for access to a rear-facing camera (ie: a photography app) on Apple Vision Pro will get only black frames with a ‘no camera’ symbol. This alerts the user that there’s no rear-facing camera available, but also means that iOS and iPad apps will continue to run without errors, even when they expect to see a rear-facing camera.

There’s potentially other reasons that video chat apps like Zoom, Messenger, or Discord might not work with Apple Vision Pro right out of the box, but at least as far as camera handling goes, it should be easy for developers to get video chats up and running using a view of the user’s Persona.

It’s even possible that ‘AR face filters’ in apps like Snapchat and Messenger will work correctly with the user’s Apple Vision Pro avatar, with the app being none the wiser that it’s actually looking at a computer-generated avatar rather than a real person.

Image courtesy Apple

In another WWDC session, the company explained more about how iOS and iPad apps behave on Apple Vision Pro without modification.

Developers can expect up to two inputs from the headset (the user can pinch each hand as its own input), meaning any apps expecting two-finger gestures (like pinch-zoom) should work just fine, but three fingers or more won’t be possible from the headset. As for apps that require location information, Apple says the headset can provide an approximate location via Wi-Fi, or a specific location shared via the user’s iPhone.

Unfortunately, existing ARKit apps won’t work out of the box on Apple Vision Pro. Developers will need to use a newly upgraded ARKit (and other tools) to make their apps ready for the headset. This is covered in the WWDC session Evolve your ARKit app for spatial experiences.



Apple’s Computer Vision Tool for Developers Now Tracks Dogs & Cats

Would reality really be complete without our beloved four-legged friends? Certainly not. Luckily the latest update to Apple’s ‘Vision’ framework—which gives developers a bunch of useful computer vision tools for iOS and iPad apps—includes the ability to identify and track the skeletal position of dogs and cats.

At Apple’s annual WWDC the company posted a session introducing developers to the new animal tracking capabilities in the Vision developer tool, and explained that the system can work on videos in real-time and on photos.

The system, which is also capable of tracking the skeletal position of people, gives developers six tracked ‘joint groups’ to work with, which collectively describe the position of the animal’s body.

Image courtesy Apple

Tracked joint groups include:

  • Head: Ears, Eyes, Nose
  • Front Legs: Right leg, Left leg
  • Hind Legs: Right rear leg, Left rear leg
  • Tail: Tail start, Tail middle, Tail end
  • Trunk (neck)
  • All (contains all tracked points representing a complete skeletal pose)

Yes, you read that right, the system has ‘tail tracking’ and ‘ear tracking’ so your dog’s tail wags and floppy ears won’t be missed.

The system supports up to two animals in the scene at one time and, in addition to tracking their position, can also tell a cat from a dog… just in case you have trouble with that.

Image courtesy Apple
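
For developers who want to try it, a minimal sketch of the API looks roughly like the following. The request and joint-group names (VNDetectAnimalBodyPoseRequest, .all) are recalled from Apple’s WWDC23 material, so verify the exact spellings against the current SDK.

```swift
import Vision

/// Sketch: detect animal body poses in a still image and print the joints
/// that were recognized with reasonable confidence.
func detectPetPose(in imageURL: URL) throws {
    let request = VNDetectAnimalBodyPoseRequest()
    let handler = VNImageRequestHandler(url: imageURL)
    try handler.perform([request])

    for observation in request.results ?? [] {
        // Each observation is one detected animal (up to two per frame).
        let joints = try observation.recognizedPoints(.all)
        for (name, point) in joints where point.confidence > 0.3 {
            print("\(name.rawValue): \(point.location)")   // normalized image coordinates
        }
    }
}
```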

Despite the similarity in name to the Vision Pro headset, it isn’t yet clear if Apple will expose the ‘Vision’ computer vision framework to developers of the headset, but it may well be the same foundation that allows the device to identify people in the room around you and fade them into the virtual view so you can talk to them.

That may have also been a reason for building out this animal tracking system in the first place—so you don’t trip over Fido when you’re dancing around the room in your new Vision Pro headset—though we haven’t been able to confirm that the system will work with pets just yet.



Apple to Open Locations for Devs to Test Vision Pro This Summer, SDK This Month

Ahead of the Apple Vision Pro’s release in ‘early 2024’, the company says it will open several centers in a handful of locations around the world, giving some developers a chance to test the headset before it’s released to the public.

It’s clear that developers will need time to start building Apple Vision Pro apps ahead of its launch, and it’s also clear that Apple doesn’t have heaps of headsets on hand for developers to start working with right away. In an effort to give developers the earliest possible chance to test their immersive apps, the company says it plans to open ‘Apple Vision Pro Developer Labs’ in a handful of locations around the world.

Starting this Summer, the Apple Vision Pro Developer Labs will open in London, Munich, Shanghai, Singapore, Tokyo, and Cupertino.

Apple also says developers will be able to submit a request to have their apps tested on Vision Pro, with testing and feedback being done remotely by Apple.

Image courtesy Apple

Of course, developers still need new tools to build for the headset in the first place. Apple says devs can expect a visionOS SDK and updated versions of Reality Composer and Xcode by the end of June to support development on the headset. That will be accompanied by new Human Interface Guidelines to help developers follow best practices for spatial apps on Vision Pro.

Additionally, Apple says it will make available a Vision Pro Simulator, an emulator that allows developers to see how their apps would look through the headset.

Developers can find more info when it’s ready at Apple’s developer website. Closer to launch Apple says Vision Pro will be available for the public to test in stores.



Croquet for Unity: A New Era for Multiplayer Development With “No Netcode” Solution

Croquet, the multiplayer platform for web and gaming, which took home the WebXR Platform of the Year award at this year’s Polys WebXR Awards, recently announced Croquet for Unity.

Croquet for Unity is an innovative JavaScript multiplayer framework for Unity – a platform for creating interactive, real-time 3D content – that simplifies development by eliminating multiplayer code and server setup. It connects developers with the distinct global architecture of the Croquet Multiplayer Network. The framework was demonstrated at GDC last week, while early access beta is arriving in April 2023.

Effortless Networking for Developers

Croquet for Unity eliminates the need for developers to write and maintain networking code. Because it employs Croquet’s Synchronized Computation Architecture, server-side programming and traditional servers become unnecessary.

Users connect through the Croquet Multiplayer Network, which consists of Reflectors—stateless microservers located across four continents—that guarantee smooth and uniform experiences for gamers.

Synchronizing Computation for Flawless Multiplayer

At its essence, Croquet focuses on synchronizing not only the state but also its progression over time. By harmonizing computation, Croquet eliminates the need to transmit the outcomes of intricate computations like physics or AI.

It also eliminates the necessity for particular data structures or sync indicators for designated objects. As a result, crafting multiplayer code becomes akin to creating single-player code, with the full game simulation executing on-device.

Shared Virtual Computers for Perfect Sync

A shared virtual computer runs identically on all clients, providing perfect synchronization and giving each player a unique perspective. Lightweight reflectors can be positioned at the edge of the cloud or in a 5G network’s MEC, offering lower latency than older architectures.

In addition, synchronized calculations performed on each client will replace traditional server computations, resulting in reduced bandwidth and improved latency.

Unprecedented Shared Multiplayer Simulations

Croquet not only facilitates multiplayer development but also enables previously unfeasible shared multiplayer simulations. Examples include real-time interactive physics as a fundamental game feature, fully reproduced non-player character behaviors, and sophisticated interactions between players while the game is live.

Due to bandwidth limits and intrinsic complexity, traditional networks are incapable of supporting these simulations.

“Innately Multiplayer” Games With No Netcode

“Multiplayer games are the most important and fastest-growing part of the gaming market. But building and maintaining multiplayer games is still just too hard,” said David A. Smith, founder and CTO of Croquet, in a press release shared with ARPost. “Croquet takes the netcode out of creating multiplayer games. When we say, ‘innately multiplayer,’ we mean games are multiuser automatically from the first line of code and not as an afterthought writing networking code to make it multiplayer.”

Croquet’s goal is to simplify developing multiplayer games, making it as easy as building single-player games. By removing netcode creation and administration, developers can concentrate on improving player experiences while benefiting from reduced overall creation and distribution costs, a speedier time to market, and enhanced player satisfaction.

Opening Doors for Indie Developers

Croquet for Unity is created for a wide range of gaming developers, but it is especially advantageous for small, independent developers, who often find it harder to create multiplayer games because they lack in-house networking and backend expertise.

Secure Your Spot on the Croquet for Unity Beta Waitlist

Developers can sign up for the Beta Waitlist to access the Croquet for Unity beta, launching in April. The Croquet for Unity Package will be available in the Unity Asset Store for free upon commercial release, requiring a Croquet gaming or enterprise subscription and developer API key for global Croquet Multiplayer Network access.
