Apple Vision Pro Development


Oculus Founder Explains What Apple Got Right & Wrong on Vision Pro

Apple Vision Pro is about to set a lot of expectations in the industry about what's 'right' and 'wrong' in mixed reality, something the fruit company prefers to call spatial computing. Oculus founder Palmer Luckey weighed in with his thoughts, and coming from one of the main figures who kicked off today's VR revolution, that means something.

Speaking to Peter Diamandis in a nearly two-hour-long podcast, Luckey delved into many areas of his work over the years, touching on his role at his defense company Anduril, his part in kickstarting the modern era of VR, and basically everything under the Sun that the tech entrepreneur is doing, or thinking about, when it comes to augmented and virtual reality.

Undoubtedly the hottest of hot button issues is whether Apple is doing mixed reality ‘right’ as a newcomer to the space. Luckey is mostly positive about Vision Pro, saying it’s patently Apple.

“I think there were things that I would do differently if I were Apple,” Luckey tells Diamandis. “They did basically everything right—they didn’t do anything terrible. I mean, I think Apple is going after the exact right segment of the market that Apple should be going after.”

Luckey maintains that if Apple went after the low end of the market, it would be “a mistake,” saying the Cupertino tech giant is taking “the exact approach that I had always wanted Apple to take, and really the approach that Oculus had been taking in the early years.”

Apple is admittedly going at XR with little regard for affordability, but that’s not the sticking point you might think it would be. To him, the $3,500 headset packs the best components for the premium segment, including “the highest possible resolution, the highest quality possible displays, the best possible ergonomics.”

In fact, Apple’s first-gen device shouldn’t be about affordability at this point, Luckey says. It’s about “inspiring lust in a much larger group of people, who, as I dreamed all those years ago, see VR as something they desperately want before it becomes something they can afford.”


In the world of component configurations, there's very little that catches Luckey off guard, although Vision Pro's tethered battery 'puck' was a choice that surprised the Oculus founder a little bit. Still, when it comes to offloading weight from the user's head, Luckey says shipping a battery puck was the "right way to do things."

“I was a big advocate of [external pucks] in Oculus, but unfortunately it was a battle that I lost in my waning years, and [Oculus] went all-in on putting all the batteries, all the processing in the actual headset itself. And not just in the headset, but in the front of the headset itself, which hugely increases the weight of the front of the device, the asymmetric torque load… it’s not a good decision.”

One direction Apple is taking that Luckey isn’t a fan of: controllers, or rather, the lack thereof. Vision Pro is set to ship without any sort of VR motion controller, which means developers will need to target hand and eye-tracking as the primary input methods.

“It’s no secret that I’m a big fan of VR input, and I think that’s probably one of the things I would have done differently than Apple. On the other hand, they have a plan for VR input that goes beyond just the finger [click] inputs. They’re taking a focused marketing approach, but I think they have a broader vision for the future than everything just being eyes and fingers.”

Luckey supports the company’s decision to split the headset into a puck and head-worn device not only for Vision Pro in the near term, but also for future iterations of the device, which will likely need more batteries, processing, and antennas. Setting those expectations now of a split configuration could help Apple move lighter and thinner on head-worn components, and never even deal with the problems of balancing the girth and weight seen in the all-in-one, standalone headsets of today.

In the end, whether the average person will wear such things in the future will ultimately come down to clever marketing, Luckey maintains, as it’s entirely possible to slim down to thinner form factors, but devices may not be nearly as functional at sizes smaller than “chunky sunglasses”. To Luckey, companies like Apple have their work cut out for them when it comes to normalizing these AR/VR headsets of the near future, and Apple will most definitely be seeding their devices on the heads of “the right celebrities, the right influencers” in the meantime.

You can check out the full 15-minute clip where Luckey talks about his thoughts on Apple Vision Pro below:



A Concise Beginner’s Guide to Apple Vision Pro Design & Development

Apple Vision Pro has brought new ideas to the table about how XR apps should be designed, controlled, and built. In this Guest Article, Sterling Crispin offers up a concise guide for what first-time XR developers should keep in mind as they approach app development for Apple Vision Pro.

Guest Article by Sterling Crispin

Sterling Crispin is an artist and software engineer with a decade of experience in the spatial computing industry. His work has spanned between product design and the R&D of new technologies at companies like Apple, Snap Inc, and various other tech startups working on face computers.

Editor’s Note: The author would like to remind readers that he is not an Apple representative; this info is personal opinion and does not contain non-public information. Additionally, more info on Vision Pro development can be found in Apple’s WWDC23 videos (select Filter → visionOS).

Ahead is my advice for designing and developing products for Vision Pro. This article includes a basic overview of the platform, tools, porting apps, general product design, prototyping, perceptual design, business advice, and more.

Overview

Apps on visionOS are organized into ‘scenes’, which are Windows, Volumes, and Spaces.

Windows are a spatial version of what you’d see on a normal computer. They’re bounded rectangles of content that users surround themselves with. These may be windows from different apps or multiple windows from one app.

Volumes contain things like 3D objects or small interactive scenes, such as a 3D map or a small game that floats in front of you rather than being fully immersive.

Spaces are fully immersive experiences where only one app is visible. That could be a Space full of many Windows and Volumes from your app, or a VR game where the system goes away and fully immersive content surrounds you. You can think of visionOS itself as a Shared Space where apps coexist together and you have less control, whereas a Full Space gives you the most control and immersiveness but doesn’t coexist with other apps. Spaces have immersion styles: mixed, progressive, and full, which define how much, or how little, of the real world the user sees.
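The three scene types map directly onto SwiftUI scene declarations. As a rough sketch (the app, view, and id names here are placeholders of my own, not from Apple's docs), a visionOS app might declare one of each:

```swift
import SwiftUI

@main
struct ExampleApp: App {
    var body: some Scene {
        // A Window: a bounded 2D panel in the Shared Space
        WindowGroup(id: "main") {
            ContentView()
        }

        // A Volume: a bounded 3D region, sized in real-world units
        WindowGroup(id: "globe") {
            GlobeView()
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.5, height: 0.5, depth: 0.5, in: .meters)

        // A Full Space, here allowing all three immersion styles
        ImmersiveSpace(id: "immersive") {
            ImmersiveView()
        }
        .immersionStyle(selection: .constant(.mixed), in: .mixed, .progressive, .full)
    }
}
```

The `immersionStyle` modifier is where the mixed/progressive/full choice described above gets declared; the system transitions the user into the Space when the app opens it.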

User Input

Users can look at the UI and pinch, like the Apple Vision Pro demo videos show. But you can also reach out and tap on windows directly, sort of like it’s actually a floating iPad. Or use a Bluetooth trackpad or video game controller. You can also look at a search bar and speak. There’s also Dwell Control for eyes-only input, but that’s really an accessibility feature. For a simple dev approach, your app can just respond to events like a TapGesture; in that case, you won’t need to worry about where those events originate.
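To illustrate that last point, here is a minimal sketch (the view and property names are my own) of a view that reacts to a tap without caring whether the tap came from a gaze-and-pinch, a direct touch, or a trackpad click:

```swift
import SwiftUI

struct TapCounterView: View {
    @State private var taps = 0

    var body: some View {
        Text("Tapped \(taps) times")
            .padding()
            // The same handler fires regardless of which input
            // method (gaze-pinch, direct touch, pointer) produced
            // the tap -- the app never needs to know which.
            .onTapGesture {
                taps += 1
            }
    }
}
```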

Spatial Audio

Vision Pro has an advanced spatial audio system that makes sounds seem like they’re really in the room by taking into account the size and materials of your room. Using subtle sounds for UI interaction and taking advantage of sound design for immersive experiences is going to be really important. Make sure to take this topic seriously.

Development

If you want to build something that works between Vision Pro, iPad, and iOS, you’ll be operating within the Apple dev ecosystem, using tools like Xcode and SwiftUI. However, if your goal is to create a fully immersive VR experience for Vision Pro that also works on other headsets like Meta’s Quest or PlayStation VR, you have to use Unity.

Apple Tools

For Apple’s ecosystem, you’ll use SwiftUI to create the UI the user sees and the overall content of your app. RealityKit is the 3D rendering engine that handles materials, 3D objects, and light simulations. You’ll use ARKit for advanced scene understanding, like if you want someone to throw virtual darts and have them collide with their real wall, or do advanced things with hand tracking. But those rich AR features are only available in Full Spaces. There’s also Reality Composer Pro, a 3D content editor that lets you drag things around a 3D scene and make media-rich Spaces or Volumes. It’s like diet-Unity that’s built specifically for this development stack.
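As a rough illustration of how the first two pieces fit together (the view name and values here are placeholders), a SwiftUI view can host RealityKit content through a RealityView, with SwiftUI owning the window and RealityKit rendering the 3D content inside it:

```swift
import SwiftUI
import RealityKit

struct SphereView: View {
    var body: some View {
        // RealityView bridges SwiftUI and RealityKit: the closure
        // builds the 3D scene that RealityKit then renders.
        RealityView { content in
            let sphere = ModelEntity(
                mesh: .generateSphere(radius: 0.1),
                materials: [SimpleMaterial(color: .blue, isMetallic: true)]
            )
            content.add(sphere)
        }
    }
}
```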

One cool thing with Reality Composer is that it’s already full of assets, materials, and animations. That helps developers who aren’t artists build something quickly and should help to create a more unified look and feel to everything built with the tool. Pros and cons to that product decision, but overall it should be helpful.

Existing iOS Apps

If you’re bringing an iPad or iOS app over, it will probably work unmodified as a Window in the Shared Space. If your app supports both iPad and iPhone, the headset will use the iPad version.

To customize your existing iOS app to take better advantage of the headset, you can use the Ornament API to make little floating islands of UI in front of, or beside, your app, to make it feel more spatial. Ironically, if your app is using a lot of ARKit features, you’ll likely need to ‘reimagine’ it significantly to work on Vision Pro, as ARKit has been upgraded a lot for the headset.
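As a small sketch of what that might look like (the names and layout are my own, not from Apple's docs), an ornament hangs a bit of secondary UI just off the bottom edge of a window:

```swift
import SwiftUI

struct OrnamentedView: View {
    var body: some View {
        Text("Main content")
            // The ornament floats outside the window's bounds,
            // anchored to the bottom edge of the scene.
            .ornament(attachmentAnchor: .scene(.bottom)) {
                HStack {
                    Button("Back") { }
                    Button("Forward") { }
                }
                .padding()
                .glassBackgroundEffect()
            }
    }
}
```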

If you’re excited about building something new for Vision Pro, my personal opinion is that you should prioritize how your app will provide value across iPad and iOS too. Otherwise you’re losing out on hundreds of millions of users.

Unity

You can build to Vision Pro with the Unity game engine, which is a massive topic. Again, you need to use Unity if you’re building to Vision Pro as well as a Meta headset like the Quest or PSVR 2.

Unity supports building Bounded Volumes for the Shared Space, which exist alongside native Vision Pro content, and Unbounded Volumes for immersive content that may leverage advanced AR features. Finally, you can also build more VR-like apps, which give you more control over rendering but seem to lack support for ARKit scene understanding like plane detection. The Volume approach gives RealityKit more control over rendering, so you have to use Unity’s PolySpatial tool to convert materials, shaders, and other features.

Unity support for Vision Pro includes tons of interactions you’d expect to see in VR, like teleporting to a new location or picking up and throwing virtual objects.

Product Design

You could just make an iPad-like app that shows up as a floating window, use the default interactions, and call it a day. But like I said above, content can exist in a wide spectrum of immersion, locations, and use a wide range of inputs. So the combinatorial range of possibilities can be overwhelming.

If you haven’t spent 100 hours in VR, get a Quest 2 or 3 as soon as possible and try everything. It doesn’t matter if you’re a designer, or product manager, or a CEO, you need to get a Quest and spend 100 hours in VR to begin to understand the language of spatial apps.

I highly recommend checking out Hand Physics Lab as a starting point and overview for understanding direct interactions. There are a lot of subtle things it does which imbue virtual objects with a sense of physicality. And the YouTube VR app released in 2019 looks and feels pretty similar to a basic visionOS app; it’s worth checking out.

Keep a diary of what works and what doesn’t.

Ask yourself: ‘What app designs are comfortable, or cause fatigue?’, ‘What apps have the fastest time-to-fun or value?’, ‘What’s confusing and what’s intuitive?’, ‘What experiences would you even bother doing more than once?’ Be brutally honest. Learn from what’s been tried as much as possible.

General Design Advice

I strongly recommend the IDEO-style design thinking process; it works for spatial computing too. You should absolutely try it out if you’re unfamiliar. There’s the Design Kit with resources, and this video which, while dated, is a great example of the process.

The road to spatial computing is a graveyard of utopian ideas that failed. People tend to spend a very long time building grand solutions for the imaginary problems of imaginary users. It sounds obvious, but instead you should try to build something as fast as possible that fills a real human need, and then iteratively improve from there.

