

Vision Pro M5 review: It’s time for Apple to make some tough choices


A state of the union from someone who actually sort of uses the thing.

The M5 Vision Pro with the Dual Knit Band. Credit: Samuel Axon

With the recent releases of visionOS 26 and newly refreshed Vision Pro hardware, it’s an ideal time to check in on Apple’s Vision Pro headset—a device I was simultaneously amazed and disappointed by when it launched in early 2024.

I still like the Vision Pro, but I can tell it’s hanging on by a thread. Content is light, developer support is tepid, and while Apple has taken action to improve both, it’s not enough, and I’m concerned it might be too late.

When I got a Vision Pro, I used it a lot: I watched movies on planes and in hotel rooms, I walked around my house placing application windows and testing out weird new ways of working. I tried all the neat games and educational apps, and I watched all the immersive videos I could get ahold of. I even tried my hand at developing my own applications for it.

As the months went on, though, I used it less and less. The novelty wore off, and as cool as it remained, practicality beat coolness. By the time Apple sent me the newer model a couple of weeks ago, I had only put the original one on a few times in the prior couple of months. I had mostly stopped using it at home, but I still took it on trips as an entertainment device for hotel rooms now and then.

That’s not an uncommon story. You even see it in the subreddit for Vision Pro owners, which ought to be the home of the device’s most dedicated fans. Even there, people say, “This is really cool, but I have to go out of my way to keep using it.”

Perhaps it would have been easier to bake it into my day-to-day habits if developer and content creator support had been more robust, a classic chicken-and-egg problem.

After a few weeks of using the new Vision Pro hardware refresh daily, it’s clear to me that the platform needs a bigger rethink. As a fan of the device, I’m concerned it won’t get that, because all the rumors point to Apple pouring its future resources into smart glasses, which, to me, are a completely different product category.

What changed in the new model?

For many users, the most notable change here will be something you can buy separately (albeit at great expense) for the old model: a new headband that better balances the device’s weight on your head, making it more comfortable to wear for long sessions.

Dubbed the Dual Knit Band, it comes with an ingeniously simple adjustment knob that can be used to tighten or loosen either the band that goes across the back of your head (similar to the old band) or the one that wraps around the top.

It’s well-designed, and it will probably make the Vision Pro easier to use for many people who found the old model to be too uncomfortable—even though this model is slightly heavier than its predecessor.

The band fit is adjusted with this knob. You can turn it to loosen or tighten one strap, then pull it out and turn it again to adjust the other. Credit: Samuel Axon

I’m one of the lucky few who never had any discomfort problems with the Vision Pro, but I know a bunch of folks who said the pressure the device put on their foreheads was unbearable. That’s exactly what this new band remedies, so it’s nice to see.

The M5 chip offers more than just speed

Whereas the first Vision Pro had Apple’s M2 chip—which was already a little behind the times when it launched—the new one gets the M5. It’s much faster, especially for graphics-processing and machine-learning tasks. We’ve written a lot about the M5 in our articles on other Apple products if you’re interested in learning more about it.

Functionally, this means a lot of little things are a bit faster, like launching certain applications or generating a Persona avatar. I’ll be frank: I didn’t notice any difference that significantly impacted the user experience. I’m not saying I couldn’t tell it was faster sometimes. I’m just saying it wasn’t faster in a way that’s meaningful enough to change any attitudes about the device.

It’s most noticeable with games—both native mixed-reality Vision Pro titles and the iPad versions of demanding games that you can run on a virtual display on the device. Demanding 3D games look and run nicer in many cases. The M5 also supports more recent graphics advancements like ray tracing and mesh shading, though very few games support them, even among the iPad versions.

All this is to say that while I always welcome performance improvements, they are definitely not enough to convince an M2 Vision Pro owner to upgrade, and they won’t tip things over for anyone who has been on the fence about buying one of these things.

The main perk of the new chip is improved efficiency, which is the driving force behind modestly increased battery life. When I first took the M2 Vision Pro on a plane, I tried watching 2021’s Dune. I made it through the movie, but just barely; the battery ran out during the closing credits. It’s not a short movie, but there are longer ones.

Now, the new headset can easily get another 30 to 60 minutes, depending on what you’re doing, which finally puts it in “watch any movie you want” territory.
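For a rough sense of what that extra headroom buys, here’s a sketch. It assumes the original battery lasted roughly the length of 2021’s Dune (about 155 minutes, per the anecdote above) and uses approximate theatrical runtimes; the specific titles are just illustrations.

```python
# Rough check of which long movies now fit on a single charge, assuming the
# original battery lasted roughly the runtime of Dune (2021), ~155 minutes,
# and the M5 model adds 30-60 minutes on top of that.
BASELINE_MIN = 155  # approximate original battery life, in minutes

# Approximate theatrical runtimes, in minutes (illustrative examples)
movies = {
    "Dune (2021)": 155,
    "Oppenheimer": 180,
    "Avengers: Endgame": 181,
}

for extra in (30, 60):
    budget = BASELINE_MIN + extra
    fits = [title for title, runtime in movies.items() if runtime <= budget]
    print(f"+{extra} min (budget {budget} min): {fits}")
```

Even at the low end of the improvement, three-hour epics land inside the budget, which is what moves the device into that “any movie” territory.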

Given how short battery life was in the original version, even a modest bump like that makes a big difference. That bump, the marginally increased field of view (about 10 percent), and the new 120 Hz maximum refresh rate for passthrough are the best things about the new hardware. These are nice-to-haves, but they’re not transformational by any means.

We already knew the Vision Pro offered excellent hardware (even if it’s overkill for most users), but the platform’s appeal is really driven by software. Unfortunately, this is where things are running behind expectations.

For content, it’s quality over quantity

When the first Vision Pro launched, I was bullish about the promise of the platform—but a lot of that was contingent on a strong content cadence and third-party developer support.

And as I’ve written since, the content cadence for the first year was a disappointment. Whereas I expected weekly episodes of Apple’s Immersive Videos in the TV app, those short videos arrived with gaps of several months. There’s an enormous wealth of great immersive content outside of Apple’s walled garden, but Apple didn’t seem interested in making that easily accessible to Vision Pro owners. Third-party apps did some of that work, but they lagged behind those on other platforms.

The first-party content cadence picked up after the first year, though. Plus, Apple introduced the Spatial Gallery, a built-in app that aggregates immersive 3D photos and the like. It’s almost TikTok-like in that it lets you scroll through short-form content that leverages what makes the device unique, and it’s exactly the sort of thing that the platform so badly needed at launch.

The Spatial Gallery is sort of like a horizontally scrolling TikTok for 3D photos and video. Credit: Samuel Axon

The content that is there—whether in the TV app or the Spatial Gallery—is fantastic. It’s beautifully, professionally produced stuff that really leans on the hardware. For example, there is an autobiographical film focused on U2’s Bono that does some inventive things with the format that I had never seen or even imagined before.

Bono, of course, isn’t everybody’s favorite, but if you can stomach the film’s bloviating, it’s worth watching just with an eye to what a spatial video production can or should be.

I still think there’s significant room to grow, but the content situation is better than ever. It’s not enough to keep you entertained for hours a day, but it’s enough to make putting on the headset for a bit once a week or so worth it. That wasn’t there a year ago.

The software support situation is in a similar state.

App support is mostly frozen in the year 2024

Many of us have a suite of go-to apps that are foundational to our individual approaches to daily productivity. For me, primarily a macOS user, they are:

  • Firefox
  • Spark
  • Todoist
  • Obsidian
  • Raycast
  • Slack
  • Visual Studio Code
  • Claude
  • 1Password

As you can see, I don’t use most of Apple’s built-in apps—no Safari, no Mail, no Reminders, no Passwords, no Notes… no Spotlight, even. All that may be atypical, but it has never been a problem on macOS, nor has it been on iOS for a few years now.

Impressively, almost all of these are available on visionOS—but only because it can run iPad apps as flat, virtual windows. Firefox, Spark, Todoist, Obsidian, Slack, 1Password, and even Raycast are all available as supported iPad apps, but surprisingly, Claude isn’t, even though there is a Claude app for iPads. (ChatGPT’s iPad app works, though.) VS Code isn’t available, of course, but I wasn’t expecting it to be.

Not a single one of these applications has a true visionOS app. That’s too bad, because I can think of lots of neat things spatial computing versions could do. Imagine browsing your Obsidian graph in augmented reality! Alas, I can only dream.

You can tell the native apps from the iPad ones: The iPad ones have rectangular icons nested within circles, whereas the native apps fill the whole circle. Credit: Samuel Axon

If you’re not as much of a productivity-software geek as I am and you use Apple’s built-in apps, things look a little better. But surprisingly, there are still a few apps you would imagine would have really cool spatial computing features—like Apple Maps—that don’t. Maps, too, is just an iPad app.

Even if you set productivity aside and focus on entertainment, there are still frustrating gaps. Almost two years later, there is still no Netflix or YouTube app. There are decent-enough third-party options for YouTube, but you have to watch Netflix in a browser, which is lower-quality than in a native app and looks horrible on one of the Vision Pro’s big virtual screens.

To be clear, there is a modest trickle of interesting spatial app experiences coming in—most of them games, educational apps, or cool one-off ideas that are fun to check out for a few minutes.

All this is to say that nothing has really changed since February 2024. There was an influx of apps at launch that included a small number of show-stoppers (mostly educational apps), but the rest ranged from “basically the iPad app but with one or two throwaway tech-demo-style spatial features you won’t try more than once” to “basically the iPad app but a little more native-feeling” to “literally just the iPad app.” As far as support from popular, cross-platform apps, it’s mostly the same list today as it was then.

Its killer app is that it’s a killer monitor

Even though Apple hasn’t made a big leap forward in developer support, it has made big strides in making the Vision Pro a nifty companion to the Mac.

From the start, it has had a feature that lets you simply look at a Mac’s built-in display, tap your fingers, and launch a large, resizable virtual monitor. I have my own big, multi-monitor setup at home, but I have used the Vision Pro this way sometimes when traveling.

I had some complaints at the start, though. It could only do one monitor, and that monitor was limited to 60 Hz and a standard widescreen resolution. That’s better than just using a 14-inch MacBook Pro screen, but it’s a far cry from the sort of high-end setup a $3,500 price tag suggests. Furthermore, it didn’t allow you to switch audio between the two devices.

Thanks to both software and hardware updates, that has all changed. visionOS now supports three different monitor sizes: the standard widescreen aspect ratio, a wider one that resembles a standard ultra-wide monitor, and a gigantic, ultra-ultra-wide wrap-around display that I can assure you will leave no one wanting for desktop space. It looks great. Problem solved! Likewise, it will now transfer your Mac audio to the Vision Pro or its Bluetooth headphones automatically.

All of that works not just on the new Vision Pro, but also on the M2 model. The new M5 model exclusively addresses the last of my complaints: You can now achieve higher refresh rates for that virtual monitor than 60 Hz. Apple says it goes “up to 120 Hz,” but there’s no available tool for measuring exactly where it’s landing. Still, I’m happy to see any improvement here.

This is the standard width for the Mac monitor feature… Credit: Samuel Axon

Through a series of updates, Apple has turned a neat proof-of-concept feature into something that is genuinely valuable—especially for folks who like ultra-wide or multi-monitor setups but have to travel a lot (like myself) or who just don’t want to invest in the display hardware at home.

You can also play your Mac games on this monitor. I tried playing No Man’s Sky and Cyberpunk 2077 on it with a controller, and it was a fantastic experience.

This, alongside spatial video and watching movies, is the Vision Pro’s current killer app and one of the main areas where Apple has clearly put a lot of effort into improving the platform.

Stop trying to make Personas happen

Strangely, another area where Apple has invested quite a bit to make things better is in the Vision Pro’s usefulness as a communications and meetings device. Personas—the 3D avatars of yourself that you create for Zoom calls and the like—were absolutely terrible when the M2 Vision Pro came out.

There is also EyeSight, which uses your Persona to show a simulacrum of your eyes to people around you in the real world, letting them know you are aware of your surroundings and even allowing them to follow your gaze. I understand the thought behind this feature—Apple doesn’t want mixed reality to be socially isolating—but it sometimes puts your eyes in the wrong place, it’s kind of hard to see, and it honestly seems like a waste of expensive hardware.

Primarily via software updates, I’m pleased to report that Personas are drastically improved. Mine now actually looks like me, and it moves more naturally, too.

I joined a FaceTime call with Apple reps where they showed me how Personas float and emote around each other, and how we could look at the same files and assets together. It was indisputably cool and way better than before, thanks to the improved Personas.

I can’t say as much for EyeSight, which looks the same. It’s hard for me to fathom that Apple has put multiple sensors and screens on this thing to support this feature.

In my view, dropping EyeSight would be the single best thing Apple could do for this headset. Most people don’t like it, and most people don’t want it, yet there is no question that its inclusion adds a not-insignificant amount to both the price and the weight, the product’s two biggest barriers to adoption.

Likewise, Personas are theoretically cool, and it is a novel and fun experience to join a FaceTime call with people and see how it works and what you could do. But it’s just that: a novel experience. Once you’ve done it, you’ll never feel the need to do it again. I can barely imagine anyone who would rather show up to a call as a Persona than take the headset off for 30 minutes to dial in on their computer.

Much of this headset is dedicated to this idea that it can be a device that connects you with others, but maintaining that priority is simply the wrong decision. Mixed reality is isolating, and Apple is treating that like a problem to be solved, but I consider that part of its appeal.

If this headset were capable of out-in-the-world AR applications, I would not feel that way, but the Vision Pro doesn’t support any application that would involve taking it outside the home into public spaces. A lot of the cool, theoretical AR uses I can think of would involve that, but still no dice here.

The metaverse (it’s telling that this is the first time I’ve typed that word in at least a year) already exists: It’s on our phones, in Instagram and TikTok and WeChat and Fortnite. It doesn’t need to be invented, and it doesn’t need a new, clever approach to finally make it take off. It has already been invented. It’s already in orbit.

Like the iPad and the Apple Watch before it, the Vision Pro needs to stop trying to be a general-purpose device and instead needs to lean into what makes it special.

In doing so, it will become a better user experience, and it will get lighter and cheaper, too. There’s real potential there. Unfortunately, Apple may not go that route if leaks and insider reports are to be believed.

There’s still a ways to go, so hopefully this isn’t a dead end

The M5 Vision Pro was the first of four planned new releases in the product line, according to generally reliable industry analyst Ming-Chi Kuo. Next up, he predicted, would be a full Vision Pro 2 release with a redesign, and a Vision Air, a cheaper, lighter alternative. Those would all precede true smart glasses many years down the road.

I liked that plan: keep the full-featured Vision Pro for folks who want the most premium mixed reality experience possible (but maybe drop EyeSight), and launch a cheaper version to compete more directly with headsets like Meta’s Quest line or the newly announced Steam Frame VR headset from Valve, along with planned competitors from Google, Samsung, and others.

True augmented reality glasses are an amazing dream, but serious optics and user-experience problems remain to be solved before those glasses can truly replace the smartphone, as Tim Cook once predicted they would.

All that said, it looks like that plan has been called into question. A Bloomberg report in October claimed that Apple CEO Tim Cook had told employees that the company was redirecting resources from future passthrough HMD products to accelerate work on smart glasses.

Let’s be real: It’s always going to be a once-in-a-while device, not a daily driver. For many people, that would be fine if it cost $1,000. At $3,500, it’s still a nonstarter for most consumers.

I believe there is room for this product in the marketplace. I still think it’s amazing. It’s not going to be as big as the iPhone, or probably even the iPad, but it has already found a small audience that could grow significantly if the price and weight could come down. Removing all the hardware related to Personas and EyeSight would help with that.

I hope Apple keeps working on it. When Apple released the Apple Watch, it wasn’t entirely clear what its niche would be in users’ lives. The answer (health and fitness) became crystal clear over time, and the other ambitions of the device faded away while the company began building on top of what was working best.

You see Apple doing that a little bit with the expanded Mac spatial display functionality. That can be the start of an intriguing journey. But writers have a somewhat crass phrase: “kill your darlings.” It means that you need to be clear-eyed about your work and unsentimentally cut anything that’s not working, even if you personally love it—even if it was the main thing that got you excited about starting the project in the first place.

It’s past time for Apple to start killing some darlings with the Vision Pro, but I truly hope it doesn’t go too far and kill the whole platform.


Samuel Axon is the editorial lead for tech and gaming coverage at Ars Technica. He covers AI, software development, gaming, entertainment, and mixed reality. He has been writing about gaming and technology for nearly two decades at Engadget, PC World, Mashable, Vice, Polygon, Wired, and others. He previously ran a marketing and PR agency in the gaming industry, led editorial for the TV network CBS, and worked on social media marketing strategy for Samsung Mobile at the creative agency SPCSHP. He also is an independent software and game developer for iOS, Windows, and other platforms, and he is a graduate of DePaul University, where he studied interactive media and software development.



Plex’s crackdown on free remote streaming access starts this week

Plex has previously emphasized its need to keep up with “rising costs,” which include providing support for many different devices and codecs. It has also said that it needs money to implement new features, including an integration with Common Sense Media, a new “bespoke server management app” for managing server users, and “an open and documented API for server integrations,” including custom metadata agents, per a March blog post.

In January 2024, TechCrunch reported that Plex was nearing profitability and raised $40 million in funding (Plex raised a $50 million growth equity round in 2021). Theoretically, the new remote access rules can also increase subscription revenue and help Plex’s backers see returns on their investments.

However, Plex’s evolution could alienate long-time users who have relied on Plex as a media server for years and those who aren’t interested in subscriptions, FAST (free ad-supported streaming TV) channels, or renting movies. Plex is unlikely to give up on its streaming business, though. In 2023, Scott Hancock, Plex’s then-VP of marketing, said that since 2022, more people had been using Plex’s online streaming service than its media server features. For people seeking software packages more squarely focused on media hosting, Plex alternatives like Jellyfin increasingly look attractive.



GPU prices are coming to earth just as RAM costs shoot into the stratosphere

It’s not just PC builders

PC and phone manufacturers—and makers of components that use memory chips, like GPUs—mostly haven’t hiked prices yet. These companies buy components in large quantities, and they typically do so ahead of time, dulling the impact of the increases in the short term. The kinds of price increases we see, and which costs are passed on to consumers, will vary from company to company.

Bloomberg reports that Lenovo is “stockpiling memory and other critical components” to get it through 2026 without issues and that the company “will aim to avoid passing on rising costs to its customers in the current quarter.” Apple may also be in a good position to weather the shortage; analysts at Morgan Stanley and Bernstein Research believe that Apple has already laid claim to the RAM that it needs and that its healthy profit margins will allow it to absorb the increases better than most.

Framework, on the other hand, a smaller company best known for its repairable and upgradeable laptop designs, says “it is likely we will need to increase memory pricing soon” to reflect price increases from its suppliers. The company has also stopped selling standalone RAM kits in its online store in an effort to fight scalpers who are trying to capitalize on the shortages.

Tom’s Hardware reports that AMD has told its partners that it expects to raise GPU prices by about 10 percent starting next year and that Nvidia may have canceled a planned RTX 50-series Super launch entirely because of shortages and price increases. (The main draw of this Super refresh, according to the rumor mill, would have been a bump from 2GB GDDR7 chips to 3GB chips, boosting memory capacities across the lineup by 50 percent.)
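The 50 percent figure follows from how GPU memory is laid out: each GDDR7 chip occupies a 32-bit channel, so a card’s chip count is fixed by its memory bus width, and swapping 2GB chips for 3GB chips raises capacity proportionally with no board redesign. A quick sketch, using hypothetical bus widths for illustration rather than confirmed Super specs:

```python
# Each GDDR7 chip sits on a 32-bit channel, so chip count = bus width / 32.
# Moving from 2GB to 3GB chips lifts capacity by 50% on the same bus.
def capacity_gb(bus_width_bits: int, gb_per_chip: int) -> int:
    """Total VRAM for a card with the given bus width and per-chip density."""
    chips = bus_width_bits // 32
    return chips * gb_per_chip

# Hypothetical example configurations (not confirmed Super specs):
for name, bus in [("256-bit card", 256), ("192-bit card", 192)]:
    old, new = capacity_gb(bus, 2), capacity_gb(bus, 3)
    print(f"{name}: {old}GB -> {new}GB (+{(new - old) / old:.0%})")
```

The same math explains why rumored capacity bumps track per-chip density rather than chip count: widening the bus would mean a new GPU die, while denser chips drop into the existing design.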



Arduino’s new terms of service worries hobbyists ahead of Qualcomm acquisition

“The Qualcomm acquisition doesn’t modify how user data is handled or how we apply our open-source principles,” the Arduino blog says.

Arduino’s blog didn’t discuss the company’s new terms around patents, which state:

User will use the Site and the Platform in accordance with these Terms and for the sole purposes of correctly using the Services. Specifically, User undertakes not to: … “use the Platform, Site, or Services to identify or provide evidence to support any potential patent infringement claim against Arduino, its Affiliates, or any of Arduino’s or Arduino’s Affiliates’ suppliers and/or direct or indirect customers.”

“No open-source company puts language in their ToS banning users from identifying potential patent issues. Why was this added, and who requested it?” Fried and Torrone said.

Arduino’s new terms include similar language around user-generated content that its ToS has had for years. The current terms say that users grant Arduino the:

non-exclusive, royalty free, transferable, sub-licensable, perpetual, irrevocable, to the maximum extent allowed by applicable law … right to use the Content published and/or updated on the Platform as well as to distribute, reproduce, modify, adapt, translate, publish and make publicly visible all material, including software, libraries, text contents, images, videos, comments, text, audio, software, libraries, or other data (collectively, “Content”) that User publishes, uploads, or otherwise makes available to Arduino throughout the world using any means and for any purpose, including the use of any username or nickname specified in relation to the Content.

“The new language is still broad enough to republish, monetize, and route user content into any future Qualcomm pipeline forever,” Torrone told Ars. He thinks Arduino’s new terms should have clarified Arduino’s intent, narrowed the term’s scope, or explained “why this must be irrevocable and transferable at a corporate level.”

In its blog, Arduino said that the new ToS “clarifies that the content you choose to publish on the Arduino platform remains yours and can be used to enable features you’ve requested, such as cloud services and collaboration tools.”

As Qualcomm works toward completing its Arduino acquisition, there appears to be more work ahead for the smartphone processor and modem vendor to convince makers that Arduino’s open source and privacy principles will be upheld. While the Arduino IDE and its source code will stay on GitHub per the AGPL-3.0 Open-Source License, some users remain worried about Arduino’s future under Qualcomm.



Science-centric streaming service Curiosity Stream is an AI-licensing firm now

We all know streaming services’ usual tricks for making more money: get more subscribers, charge those subscribers more money, and sell ads. But science streaming service Curiosity Stream is taking a new route that could reshape how streaming companies, especially niche options, try to survive.

Discovery Channel founder John Hendricks launched Curiosity Stream in 2015. The streaming service costs $40 per year, and it doesn’t have commercials.

The streaming business has grown to include the Curiosity Channel TV channel. CuriosityStream Inc. also makes money through original programming and its Curiosity University educational programming. The firm posted its first positive net income in its fiscal Q1 2025, after about a decade in business.

With its focus on science, history, research, and education, Curiosity Stream will always be a smaller player compared to other streaming services. As of March 2023, Curiosity Stream had 23 million subscribers, a paltry user base compared to Netflix’s 301.6 million (as of January 2025).

Still, in an extremely competitive market, Curiosity Stream’s revenue increased 41 percent year over year in its Q3 2025 earnings announced this month. This was largely due to the licensing of Curiosity Stream’s original programming to train large language models (LLMs).

“Looking at our year-to-date numbers, licensing generated $23.4 million through September, which … is already over half of what our subscription business generated for all of 2024,” Phillip Hayden, Curiosity Stream’s CFO, said during a call with investors this month.

Thus far, Curiosity Stream has completed 18 AI-related fulfillments “across video, audio, and code assets” with nine partners, an October announcement said.

The company expects to make more revenue from IP licensing deals with AI companies than it does from subscriptions by 2027, “possibly earlier,” CEO Clint Stinchcomb said during the earnings call.



HP and Dell disable HEVC support built into their laptops’ CPUs

The OEMs’ disabling of codec hardware also comes as associated costs for the international video compression standard are set to increase in January, as licensing administrator Access Advance announced in July. Per a breakdown from patent pool administrator VIA Licensing Alliance, US royalty rates for HEVC on volumes over 100,001 units are increasing from $0.20 per unit to $0.24. To put that into perspective, in Q3 2025, HP sold 15,002,000 laptops and desktops, and Dell sold 10,166,000, per Gartner.
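To make the scale of that 4-cent increase concrete, here’s a back-of-envelope sketch. It assumes, unrealistically, that every shipped unit pays the marginal over-100,001-unit rate (real tiered schedules charge different rates on earlier tiers), so treat the figures as rough upper bounds rather than actual royalty bills:

```python
# Back-of-envelope: incremental HEVC royalty cost from the US rate increase.
# Simplifying assumption: every unit pays the over-100,001-unit marginal rate;
# real tiered license schedules apply different rates to the earlier tiers.
OLD_RATE_CENTS = 20  # $0.20 per unit
NEW_RATE_CENTS = 24  # $0.24 per unit

def added_royalty_dollars(units: int) -> float:
    """Extra cost in dollars from the rate bump, computed in integer cents."""
    return units * (NEW_RATE_CENTS - OLD_RATE_CENTS) / 100

hp_units = 15_002_000    # HP's Q3 2025 PC shipments, per Gartner
dell_units = 10_166_000  # Dell's Q3 2025 PC shipments, per Gartner

print(f"HP:   ${added_royalty_dollars(hp_units):,.0f} per quarter")
print(f"Dell: ${added_royalty_dollars(dell_units):,.0f} per quarter")
```

Under that simplifying assumption, the bump works out to roughly $600,000 per quarter for HP and about $407,000 for Dell: small per machine, but a recurring line item at these volumes.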

Last year, NAS company Synology announced that it was ending support for HEVC, as well as H.264/AVC and VC-1, transcoding on its DiskStation Manager and BeeStation OS platforms, saying that “support for video codecs is widespread on end devices, such as smartphones, tablets, computers, and smart TVs.”

“This update reduces unnecessary resource usage on the server and significantly improves media processing efficiency. The optimization is particularly effective in high-user environments compared to traditional server-side processing,” the announcement said.

Despite the growing costs and complications with HEVC licenses and workarounds, breaking features that have been widely available for years will likely lead to confusion and frustration.

“This is pretty ridiculous, given these systems are $800+ a machine, are part of a ‘Pro’ line (jabs at branding names are warranted – HEVC is used professionally), and more applications these days outside of Netflix and streaming TV are getting around to adopting HEVC,” a Redditor wrote.



Microsoft makes Zork I, II, and III open source under MIT License

Zork, the classic text-based adventure game of incalculable influence, has been made available under the MIT License, along with the sequels Zork II and Zork III.

The move to take these Zork games open source comes as the result of the shared work of the Xbox and Activision teams along with Microsoft’s Open Source Programs Office (OSPO). Parent company Microsoft owns the intellectual property for the franchise.

Only the code itself has been made open source. Ancillary items like commercial packaging and marketing assets and materials remain proprietary, as do related trademarks and brands.

“Rather than creating new repositories, we’re contributing directly to history. In collaboration with Jason Scott, the well-known digital archivist of Internet Archive fame, we have officially submitted upstream pull requests to the historical source repositories of Zork I, Zork II, and Zork III. Those pull requests add a clear MIT LICENSE and formally document the open-source grant,” says the announcement co-written by Stacy Haffner (director of the OSPO at Microsoft) and Scott Hanselman (VP of Developer Community at the company).

Microsoft gained control of the Zork IP when it acquired Activision in 2022; Activision had come to own it when it acquired original publisher Infocom in the late ’80s. There was an attempt to sell Zork publishing rights directly to Microsoft even earlier in the ’80s, as founder Bill Gates was a big Zork fan, but it fell through, so it’s funny that it eventually ended up in the same place.

To be clear, this is not the first time the original Zork source code has been available to the general public. Scott uploaded it to GitHub in 2019, but the license situation was unresolved, and Activision or Microsoft could have issued a takedown request had they wished to.

Now that’s obviously not at risk of happening anymore.

Microsoft makes Zork I, II, and III open source under MIT License Read More »

the-eu-made-apple-adopt-new-wi-fi-standards,-and-now-android-can-support-airdrop

The EU made Apple adopt new Wi-Fi standards, and now Android can support AirDrop

Last year, Apple finally added support for Rich Communication Services (RCS) texting to its platforms, improving consistency, reliability, and security when exchanging green-bubble texts between the competing iPhone and Android ecosystems. Today, Google is announcing another small step forward in interoperability, pointing to a slightly less annoying future for friend groups or households where not everyone owns an iPhone.

Google has updated Android’s Quick Share feature to support Apple’s AirDrop, which allows users of Apple devices to share files directly using a local peer-to-peer Wi-Fi connection. Apple devices with AirDrop enabled and set to “everyone for 10 minutes” mode will show up in the Quick Share device list just like another Android phone would, and Android devices that support this new Quick Share version will also show up in the AirDrop menu.

Google will only support this feature on the Pixel 10 series, at least to start. The company is “looking forward to improving the experience and expanding it to more Android devices,” but it didn’t announce anything about a timeline or any hardware or software requirements. Quick Share also won’t work with AirDrop devices working in the default “contacts only” mode, though Google “[welcomes] the opportunity to work with Apple to enable ‘Contacts Only’ mode in the future.” (Reading between the lines: Google and Apple are not currently working together to enable this, and Google confirmed to The Verge that Apple hadn’t been involved in this at all.)

Google notes that, as with AirDrop, files shared via Quick Share are transferred directly between devices, without being sent to either company’s servers first.

Google shared a little more information in a separate post about Quick Share’s security, crediting Android’s use of the memory-safe Rust programming language with making secure file sharing between platforms possible.

“Its compiler enforces strict ownership and borrowing rules at compile time, which guarantees memory safety,” writes Google VP of Platforms Security and Privacy Dave Kleidermacher. “Rust removes entire classes of memory-related bugs. This means our implementation is inherently resilient against attackers attempting to use maliciously crafted data packets to exploit memory errors.”
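Kleidermacher’s point is easiest to see in code. Below is a minimal, hypothetical sketch (not Google’s actual Quick Share implementation) of parsing a length-prefixed payload from an untrusted packet in safe Rust: slice access is bounds-checked, so a maliciously inflated length field surfaces as a `None` value instead of an out-of-bounds read.

```rust
// Hypothetical sketch: parse a packet whose first byte declares the
// payload length. Safe Rust makes the classic buffer over-read
// unrepresentable -- malformed input is rejected, not read past.
fn parse_payload(packet: &[u8]) -> Option<&[u8]> {
    // split_first() returns None on an empty packet.
    let (&len, rest) = packet.split_first()?;
    // get() returns None if the claimed length exceeds the data present,
    // where a C-style memcpy of `len` bytes would read past the buffer.
    rest.get(..len as usize)
}

fn main() {
    // Well-formed packet: declared length 3, followed by 3 bytes.
    assert_eq!(
        parse_payload(&[3, 0xAA, 0xBB, 0xCC]),
        Some(&[0xAA, 0xBB, 0xCC][..])
    );
    // Malicious packet claims 200 bytes but carries only 2: safely rejected.
    assert_eq!(parse_payload(&[200, 0x01, 0x02]), None);
    println!("ok");
}
```

A C parser that trusted the declared length would read past the buffer; the safe-Rust equivalent simply rejects the packet.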

The EU made Apple adopt new Wi-Fi standards, and now Android can support AirDrop Read More »

flying-with-whales:-drones-are-remaking-marine-mammal-research

Flying with whales: Drones are remaking marine mammal research

In 2010, the Deepwater Horizon oil rig exploded in the Gulf of Mexico, causing one of the largest marine oil spills ever. In the aftermath of the disaster, whale scientist Iain Kerr traveled to the area to study how the spill had affected sperm whales, aiming specialized darts at the animals to collect pencil eraser-sized tissue samples.

It wasn’t going well. Each time his boat approached a whale surfacing for air, the animal vanished beneath the waves before he could reach it. “I felt like I was playing Whac-A-Mole,” he says.

As darkness fell, a whale dove in front of Kerr and covered him in whale snot. That unpleasant experience gave Kerr, who works at the conservation group Ocean Alliance, an idea: What if he could collect that same snot by somehow flying over the whale? Researchers can glean much information from whale snot, including the animal’s DNA sequence, its sex, whether it is pregnant, and the makeup of its microbiome.

After many experiments, Kerr’s idea turned into what is today known as the SnotBot: a drone fitted with six petri dishes that collect a whale’s snot by flying over the animal as it surfaces and exhales through its blowhole. Today, drones like this are deployed to gather snot all over the world, and not just from sperm whales: They’re also collecting this scientifically valuable mucus from other species, such as blue whales and dolphins. “I would say drones have changed my life,” says Kerr.

S’not just mucus

Gathering snot is one of many ways that drones are being used to study whales. In the past 10 to 15 years, drone technology has made great strides, becoming affordable and easy to use. This has been a boon for researchers. Scientists “are finding applications for drones in virtually every aspect of marine mammal research,” says Joshua Stewart, an ecologist at the Marine Mammal Institute at Oregon State University.

Flying with whales: Drones are remaking marine mammal research Read More »

testing-shows-apple-n1-wi-fi-chip-improves-on-older-broadcom-chips-in-every-way

Testing shows Apple N1 Wi-Fi chip improves on older Broadcom chips in every way

This year’s newest iPhones included one momentous change that marked a new phase in the evolution of Apple Silicon: the Apple N1, Apple’s first in-house chip made to handle local wireless connections. The N1 supports Wi-Fi 7, Bluetooth 6, and the Thread smart home communication protocol, and it replaces the third-party wireless chips (mostly made by Broadcom) that Apple used in older iPhones.

Apple claimed that the N1 would enable more reliable connectivity for local communication features like AirPlay and AirDrop but didn’t say anything about how users could expect it to perform. But Ookla, the folks behind the Speedtest app and website, have analyzed about five weeks’ worth of users’ testing data to get an idea of how the iPhone 17 lineup stacks up to the iPhone 16, as well as Android phones with Wi-Fi chips from Qualcomm, MediaTek, and others.

While the N1 isn’t at the top of the charts, Ookla says Apple’s Wi-Fi chip “delivered higher download and upload speeds on Wi-Fi compared to the iPhone 16 across every studied percentile and virtually every region.” The median download speed for the iPhone 17 series was 329.56Mbps, compared to 236.46Mbps for the iPhone 16; the upload speed also jumped from 73.68Mbps to 103.26Mbps.
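For context, those medians work out to gains of roughly 39 percent for downloads and 40 percent for uploads, as a quick calculation shows:

```rust
// Relative improvement implied by Ookla's median Wi-Fi speeds
// for the iPhone 16 vs. the iPhone 17 lineup (Mbps).
fn pct_gain(old: f64, new: f64) -> f64 {
    (new - old) / old * 100.0
}

fn main() {
    println!("download: +{:.1}%", pct_gain(236.46, 329.56)); // ~39% faster
    println!("upload:   +{:.1}%", pct_gain(73.68, 103.26)); // ~40% faster
}
```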

Ookla noted that the N1’s gains were largest in the bottom 10th percentile of test results, “implying Apple’s custom silicon lifts the floor more than the ceiling.” The iPhone 17 also didn’t top Ookla’s global performance charts: the Pixel 10 Pro series slightly edges out the iPhone 17 in download speed, while the Xiaomi 15T Pro, with MediaTek Wi-Fi silicon, posted better upload speeds.

Testing shows Apple N1 Wi-Fi chip improves on older Broadcom chips in every way Read More »

celebrated-game-developer-rebecca-heineman-dies-at-age-62

Celebrated game developer Rebecca Heineman dies at age 62

From champion to advocate

During her later career, Heineman served as a mentor and advisor to many, never shy about celebrating her past as a game developer during the golden age of the home computer.

Her mentoring skills became doubly important when she publicly came out as transgender in 2003. She became a vocal advocate for LGBTQ+ representation in gaming and served on the board of directors for GLAAD. Earlier this year, she received the Gayming Icon Award from Gayming Magazine.

Andrew Borman, who serves as director of digital preservation at The Strong National Museum of Play in Rochester, New York, told Ars Technica that her influence reached well beyond electronic entertainment. “Her legacy goes beyond her groundbreaking work in video games,” he told Ars. “She was a fierce advocate for LGBTQ rights and an inspiration to people around the world, including myself.”

The front cover of Dragon Wars on the Commodore 64, released in 1989. Credit: MobyGames

In the Netflix documentary series High Score, Heineman explained her early connection to video games. “It allowed me to be myself,” she said. “It allowed me to play as female.”

“I think her legend grew as she got older, in part because of her openness and approachability,” journalist Ernie Smith told Ars. “As the culture of gaming grew into an online culture of people ready to dig into the past, she remained a part of it in a big way, where her war stories helped fill in the lore about gaming’s formative eras.”

Celebrated to the end

Heineman was diagnosed with adenocarcinoma in October 2025 after experiencing shortness of breath at the PAX game convention. After diagnostic testing, doctors found cancer in her lungs and liver. That same month, she launched a GoFundMe campaign to help with medical costs. The campaign quickly surpassed its $75,000 goal, raising more than $157,000 from fans, friends, and industry colleagues.

Celebrated game developer Rebecca Heineman dies at age 62 Read More »

microsoft-tries-to-head-off-the-“novel-security-risks”-of-windows-11-ai-agents

Microsoft tries to head off the “novel security risks” of Windows 11 AI agents

Microsoft has been adding AI features to Windows 11 for years, but things have recently entered a new phase, with both generative and so-called “agentic” AI features working their way deeper into the bedrock of the operating system. A new build of Windows 11 released to Windows Insider Program testers yesterday includes an “experimental agentic features” toggle in Settings to support a feature called Copilot Actions, and Microsoft has published a support article explaining just how those “experimental agentic features” will work.

If you’re not familiar, “agentic” is a buzzword that Microsoft has used repeatedly to describe its future ambitions for Windows 11—in plainer language, these agents are meant to accomplish assigned tasks in the background, allowing the user’s attention to be turned elsewhere. Microsoft says it wants agents to be capable of “everyday tasks like organizing files, scheduling meetings, or sending emails,” and that Copilot Actions should give you “an active digital collaborator that can carry out complex tasks for you to enhance efficiency and productivity.”

But like other kinds of AI, these agents can be prone to errors and confabulations and will often proceed as if they know what they’re doing even when they don’t. They also present, in Microsoft’s own words, “novel security risks,” mostly related to what can happen if an attacker is able to give instructions to one of these agents. As a result, Microsoft’s implementation walks a tightrope between giving these agents access to your files and cordoning them off from the rest of the system.

Possible risks and attempted fixes

For now, these “experimental agentic features” are optional, only available in early test builds of Windows 11, and off by default. Credit: Microsoft

For example, AI agents running on a PC will be given their own user accounts separate from your personal account, ensuring that they don’t have permission to change everything on the system and giving them their own “desktop” to work in that won’t interfere with what’s on your screen. Users need to approve requests for their data, and “all actions of an agent are observable and distinguishable from those taken by a user.” Microsoft also says agents need to be able to produce logs of their activities and “should provide a means to supervise their activities,” including showing users a list of actions they’ll take to accomplish a multi-step task.

Microsoft tries to head off the “novel security risks” of Windows 11 AI agents Read More »