Features


We put the new pocket-size vinyl format to the test—with mixed results


Is that a record in your pocket?

It’s a fun new format, but finding a place in the market may be challenging.

A 4-inch Tiny Vinyl record. Credit: Chris Foresman


We recently looked at Tiny Vinyl, a new miniature vinyl single format developed through a collaboration between a toy industry veteran and the world’s largest vinyl record manufacturer. The 4-inch singles are pressed in a process nearly identical to the one used for standard 12-inch LPs or 7-inch singles, except everything is smaller. They have a standard-size spindle hole and play at 33⅓ RPM, and they hold up to four minutes of music per side.

Several smaller bands, like The Band Loula and Rainbow Kitten Surprise, and some industry veterans like Blake Shelton and Melissa Etheridge, have already experimented with the format. But Tiny Vinyl partnered with US retail giant Target for its big coming-out party this fall, with 44 exclusive titles launching through the end of this year.

Tiny Vinyl supplied a few promotional copies of releases from former America’s Got Talent finalist Grace VanderWaal, The Band Loula, country pop stars Florida Georgia Line, and jazz legends the Vince Guaraldi Trio so I could get a first-hand look at how the records actually play. I tested these titles as well as several others I picked up at retail, playing them on an Audio Technica LP-120 direct drive manual turntable connected to a Yamaha S-301 integrated amplifier and playing through a pair of vintage Klipsch kg4 speakers.

I also played them on a Crosley portable suitcase-style turntable, and for fun, I tried them on the miniature RSD3 turntable made for 3-inch singles to see what’s possible with a variety of hardware.

Tiny Vinyl releases cover several genres, including hip-hop, rock, country, pop, indie, and show tunes. Credit: Chris Foresman

Automatic turntables need not apply

First and foremost, I’ll note that the 4-inch diameter is essentially the same size as the label on a standard 12-inch LP. So any sort of automatic turntable won’t really work for 4-inch vinyl; most can only set the stylus down at the 12-inch or 7-inch positions, and even if they could reach farther in, the automatic return would kick in before the stylus reached the grooves where the music starts. Some automatic turntables can be switched to a manual mode; those that can’t simply cannot play Tiny Vinyl records.

But if you have a turntable with a fully manual tonearm—a category that spans DJ-style direct-drive turntables as well as audiophile belt-drive models like those from Fluance, U-Turn, or Pro-Ject—you’re in luck. The tonearm can be placed anywhere on these records, and it will track the grooves well.

Lining up the stylus can be a challenge with such small records, but once it’s in place, the stylus on my LP120—a nude elliptical—tracked well. I also tried a few listens with a standard conical stylus since that’s what’s most common on low- and mid-range turntables. The elliptical stylus tracked slightly better in my experience; higher-end styli may track the extremely fine grooves even better but would probably be overkill, given that the physical limitations of the format introduce some distortion, which would likely be more apparent with such gear.

While Tiny Vinyl will probably appeal most to pop music fans, I played a variety of music styles, including rock, country, dance pop, hip-hop, jazz, and even showtunes. The main sonic difference I noted when a direct comparison was available was that the Tiny Vinyl version of a track tended to sound quieter than the same track playing on a 12-inch LP at the same volume setting on the amplifier.

This Kacey Musgraves Tiny Vinyl includes songs from her album Deeper Well. Credit: Chris Foresman

It’s not unusual for different records to be mastered at different volumes; making the overall sound quieter means smaller modulations in the groove, so the grooves can be placed closer together. This is true of any album with a side running longer than about 22 minutes, and it’s especially important for fitting four minutes of music on each side of a Tiny Vinyl. (This is also why the last song or two on many LP sides tend to be quieter or slower songs; it’s easier for these songs to sound good at the center of the record, where linear tracking speed decreases.)
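As a rough illustration of that last point (these are my own back-of-the-envelope numbers using approximate groove radii, not measurements of these pressings), the linear speed of the groove passing under the stylus scales with its radius at a fixed 33⅓ RPM:

```python
# Illustrative only: approximate linear groove speed at 33 1/3 RPM.
# The radii below are rough assumptions, not measurements of actual pressings.
import math

RPM = 100 / 3              # 33 1/3 revolutions per minute
rev_per_sec = RPM / 60

radii_inches = {
    "12-inch LP, outermost groove": 5.75,
    "12-inch LP, innermost groove": 2.4,
    "4-inch Tiny Vinyl, outermost groove": 1.9,
}

for label, r_in in radii_inches.items():
    r_m = r_in * 0.0254                        # inches to meters
    speed = 2 * math.pi * r_m * rev_per_sec    # linear speed in meters per second
    print(f"{label}: ~{speed * 100:.0f} cm/s")
```

By these rough numbers, a Tiny Vinyl’s outermost groove already moves past the stylus more slowly than a full-size LP’s innermost groove, so the entire record lives in the low-speed zone where mastering compromises matter most.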

That said, most of the songs I listened to tended to have a slight but audible increase in distortion as the grooves approached the physical limits of alignment for the stylus. This was usually only perceptible in the last several seconds of a song, which more discerning listeners would likely find objectionable. But sound quality overall is still comparable to typical vinyl records. It won’t compare to the most exacting pressings from the likes of Mobile Fidelity Labs, for instance, but then again, the sort of audiophile who would pay for the equipment to get the most out of such records probably won’t buy Tiny Vinyl in the first place, except perhaps as a conversation piece.

I also tried playing the Tiny Vinyl records on a Crosley suitcase-style turntable since it has a manual tonearm. The model I have on hand has an Audio Technica AT3600L cartridge and stereo speakers, so it’s a bit nicer than the entry-level Cruiser models you’ll typically find at malls or department stores. But these are extremely popular first turntables for a lot of young people, so it seemed reasonable to consider how Tiny Vinyl sounds on these devices.

Unfortunately, I couldn’t play Tiny Vinyl on this turntable. Despite the manual tonearm and an option to turn off the platter’s automatic start and stop, the Crosley’s platter is designed for 7-inch and 12-inch records—the Tiny Vinyl I tried wouldn’t even spin on the turntable without adding a slipmat of some kind.

Once I got it spinning, though, the tonearm simply would not track beyond the first couple of grooves before hitting some physical limitation of its gimbal. Since many suitcase-style turntables share designs and parts, I suspect this would be a problem for most of the Crosley, Victrola, or other brands you might find at a big-box retailer.

Some releases really take advantage of the extra real estate of the gatefold jacket and printed inner sleeve. Credit: Chris Foresman

Additionally, I compared the Tiny Vinyl version of the classic track “Linus and Lucy” from A Charlie Brown Christmas with a 2012 pressing of the full album, as well as with the 2019 3-inch version played using an adapter, all on the LP-120, to give readers the best comparison across formats.

Again, the LP version of the seminal soundtrack from A Charlie Brown Christmas sounded bright and noticeably louder than its 4-inch counterpart. No major surprises here. And of course, the LP includes the entire soundtrack, so if you’re a big fan of the film or the kind of contemplative, piano-based jazz that Vince Guaraldi is famous for, you’ll probably spring for the full album.

The 3-inch version of “Linus and Lucy” unsurprisingly sounds fairly comparable to the Tiny Vinyl version, with a much quieter playback at the same amplifier settings. But it also sounds a lot noisier, likely due to the differences in materials used in manufacturing.

Though 3-inch records can play on standard turntables, as I did here, they’re designed to go hand-in-hand with one of the many Crosley RSD3 variants released in the last five years, or on the Crosley Mini Cruiser turntable. If you manage to pick up an original 8ban player, you could get the original lo-fi, “noisy analog” sound that Bandai had intended as well. That’s really part of the 3-inch vinyl aesthetic.

Newer 3-inch vinyl singles are coming with a standard spindle hole, which makes them easier to play on standard turntables. It also means there are now adapters that slip over the RSD3’s tiny spindle to fit these larger holes, so you can technically put a 4-inch single on that turntable. But due to the design of the tonearm and its rest, the stylus won’t swing out to the edge of a Tiny Vinyl record; instead, you can only start playback at grooves around the 3-inch mark. It’s a little unfortunate because it would otherwise be fun to play these miniature singles on hardware that is a little more right-sized ergonomically.

Big stack of tiny records. Credit: Chris Foresman

Four-inch Tiny Vinyl singles, on the other hand, are intended to be played on standard turntables, and they do that fairly well as long as you can manually place the tonearm and nothing physically prevents it from tracking the grooves. I didn’t expect the sound to compare to a quality 12-inch pressing, and it doesn’t. But it still sounds good. And especially if your available space is at a premium, you might consider a Tiny Vinyl with the most well-known and popular tracks from a certain album or artist (like these songs from A Charlie Brown Christmas) over a full album that may cost upward of $35.

Fun for casual listeners, not for audiophiles

Overall, Tiny Vinyl still offers much of the visceral experience of playing standard vinyl records—the cover art, the liner notes, handling the record as you place it on the turntable—just in miniature. The cost is less than a typical LP, and the weight is significantly less, so there are definite benefits for casual listeners. On the other hand, serious collectors will gravitate toward 12-inch albums and—perhaps less so—7-inch singles. Ironically, the casual listeners the format would most likely appeal to are the least likely to have the equipment to play it. That will limit Tiny Vinyl’s mass-market appeal outside of just being a cool thing to put on the shelf that technically could be played on a turntable.

The Good:

  • Small enough to easily fit in a jacket pocket or the like
  • Uses fewer resources to make and ship
  • With the gatefold jacket, printed inner sleeve, and color vinyl options, these look as cool as most full-size albums
  • Plays fine on manual turntables

The Bad:

  • Sound quality is (unsurprisingly) compromised
  • Price isn’t lower than typical 7-inch singles

The Ugly:

  • Won’t work on automatic-only turntables, like the popular AT-LP60 series or the ubiquitous suitcase-style models that are often an inexpensive “first” turntable for many



Vision Pro M5 review: It’s time for Apple to make some tough choices


A state of the union from someone who actually sort of uses the thing.

The M5 Vision Pro with the Dual Knit Band. Credit: Samuel Axon

With the recent releases of visionOS 26 and newly refreshed Vision Pro hardware, it’s an ideal time to check in on Apple’s Vision Pro headset—a device I was simultaneously amazed and disappointed by when it launched in early 2024.

I still like the Vision Pro, but I can tell it’s hanging on by a thread. Content is light, developer support is tepid, and while Apple has taken action to improve both, it’s not enough, and I’m concerned it might be too late.

When I got a Vision Pro, I used it a lot: I watched movies on planes and in hotel rooms, I walked around my house placing application windows and testing out weird new ways of working. I tried all the neat games and educational apps, and I watched all the immersive videos I could get ahold of. I even tried my hand at developing my own applications for it.

As the months went on, though, I used it less and less. The novelty wore off, and as cool as it remained, practicality beat coolness. By the time Apple sent me the newer model a couple of weeks ago, I had only put the original one on a few times in the prior couple of months. I had mostly stopped using it at home, but I still took it on trips as an entertainment device for hotel rooms now and then.

That’s not an uncommon story. You even see it in the subreddit for Vision Pro owners, which ought to be the home of the device’s most dedicated fans. Even there, people say, “This is really cool, but I have to go out of my way to keep using it.”

Perhaps it would have been easier to bake it into my day-to-day habits if developer and content creator support had been more robust, a classic chicken-and-egg problem.

After a few weeks of using the new Vision Pro hardware refresh daily, it’s clear to me that the platform needs a bigger rethink. As a fan of the device, I’m concerned it won’t get that, because all the rumors point to Apple pouring its future resources into smart glasses, which, to me, are a completely different product category.

What changed in the new model?

For many users, the most notable change here will be something you can buy separately (albeit at great expense) for the old model: a new headband that better balances the device’s weight on your head, making it more comfortable to wear for long sessions.

Dubbed the Dual Knit Band, it comes with an ingeniously simple adjustment knob that can be used to tighten or loosen either the band that goes across the back of your head (similar to the old band) or the one that wraps around the top.

It’s well-designed, and it will probably make the Vision Pro easier to use for many people who found the old model to be too uncomfortable—even though this model is slightly heavier than its predecessor.

The band fit is adjusted with this knob. You can turn it to loosen or tighten one strap, then pull it out and turn it again to adjust the other. Credit: Samuel Axon

I’m one of the lucky few who never had any discomfort problems with the Vision Pro, but I know a bunch of folks who said the pressure the device put on their foreheads was unbearable. That’s exactly what this new band remedies, so it’s nice to see.

The M5 chip offers more than just speed

Whereas the first Vision Pro had Apple’s M2 chip—which was already a little behind the times when it launched—the new one gets the M5. It’s much faster, especially for graphics-processing and machine-learning tasks. We’ve written a lot about the M5 in our articles on other Apple products if you’re interested in learning more about it.

Functionally, this means a lot of little things are a bit faster, like launching certain applications or generating a Persona avatar. I’ll be frank: I didn’t notice any difference that significantly impacted the user experience. I’m not saying I couldn’t tell it was faster sometimes. I’m just saying it wasn’t faster in a way that’s meaningful enough to change any attitudes about the device.

It’s most noticeable with games—both native mixed-reality Vision Pro titles and the iPad versions of demanding games that you can run on a virtual display on the device. Demanding 3D games look and run nicer in many cases. The M5 also supports more recent graphics advancements like ray tracing and mesh shading, though very few games take advantage of them, even among the iPad versions.

All this is to say that while I always welcome performance improvements, they are definitely not enough to convince an M2 Vision Pro owner to upgrade, and they won’t tip things over for anyone who has been on the fence about buying one of these things.

The main perk of the new chip is improved efficiency, which is the driving force behind modestly increased battery life. When I first took the M2 Vision Pro on a plane, I tried watching 2021’s Dune. I made it through the movie, but just barely; the battery ran out during the closing credits. It’s not a short movie, but there are longer ones.

Now, the new headset can easily get another 30 or 60 minutes, depending on what you’re doing, which finally puts it in “watch any movie you want” territory.

Given how short battery life was in the original version, even a modest bump like that makes a big difference. That, the marginally increased field of view (about 10 percent), and the new 120 Hz maximum refresh rate for passthrough are the best things about the new hardware. These are nice-to-haves, but they’re not transformational by any means.

We already knew the Vision Pro offered excellent hardware (even if it’s overkill for most users), but the platform’s appeal is really driven by software. Unfortunately, this is where things are running behind expectations.

For content, it’s quality over quantity

When the first Vision Pro launched, I was bullish about the promise of the platform—but a lot of that was contingent on a strong content cadence and third-party developer support.

And as I’ve written since, the content cadence for the first year was a disappointment. Whereas I expected weekly episodes of Apple’s Immersive Videos in the TV app, those short videos arrived with gaps of several months. There’s an enormous wealth of great immersive content outside of Apple’s walled garden, but Apple didn’t seem interested in making that easily accessible to Vision Pro owners. Third-party apps did some of that work, but they lagged behind those on other platforms.

The first-party content cadence picked up after the first year, though. Plus, Apple introduced the Spatial Gallery, a built-in app that aggregates immersive 3D photos and the like. It’s almost TikTok-like in that it lets you scroll through short-form content that leverages what makes the device unique, and it’s exactly the sort of thing that the platform so badly needed at launch.

The Spatial Gallery is sort of like a horizontally scrolling TikTok for 3D photos and video. Credit: Samuel Axon

The content that is there—whether in the TV app or the Spatial Gallery—is fantastic. It’s beautifully, professionally produced stuff that really leans on the hardware. For example, there is an autobiographical film focused on U2’s Bono that does some inventive things with the format that I had never seen or even imagined before.

Bono, of course, isn’t everybody’s favorite, but if you can stomach the film’s bloviating, it’s worth watching just with an eye to what a spatial video production can or should be.

I still think there’s significant room to grow, but the content situation is better than ever. It’s not enough to keep you entertained for hours a day, but it’s enough to make putting on the headset for a bit once a week or so worth it. That wasn’t there a year ago.

The software support situation is in a similar state.

App support is mostly frozen in the year 2024

Many of us have a suite of go-to apps that are foundational to our individual approaches to daily productivity. For me, primarily a macOS user, they are:

  • Firefox
  • Spark
  • Todoist
  • Obsidian
  • Raycast
  • Slack
  • Visual Studio Code
  • Claude
  • 1Password

As you can see, I don’t use most of Apple’s built-in apps—no Safari, no Mail, no Reminders, no Passwords, no Notes… no Spotlight, even. All that may be atypical, but it has never been a problem on macOS, nor has it been on iOS for a few years now.

Impressively, almost all of these are available on visionOS—but only because it can run iPad apps as flat, virtual windows. Firefox, Spark, Todoist, Obsidian, Slack, 1Password, and even Raycast are all available as supported iPad apps, but surprisingly, Claude isn’t, even though there is a Claude app for iPads. (ChatGPT’s iPad app works, though.) VS Code isn’t available, of course, but I wasn’t expecting it to be.

Not a single one of these applications has a true visionOS app. That’s too bad, because I can think of lots of neat things spatial computing versions could do. Imagine browsing your Obsidian graph in augmented reality! Alas, I can only dream.

You can tell the native apps from the iPad ones: The iPad ones have rectangular icons nested within circles, whereas the native apps fill the whole circle. Credit: Samuel Axon

If you’re not as big a productivity software geek as I am and you use Apple’s built-in apps, things look a little better. But surprisingly, there are still a few apps you’d expect to have really cool spatial computing features—like Apple Maps—that don’t. Maps, too, is just an iPad app.

Even if you set productivity aside and focus on entertainment, there are still frustrating gaps. Almost two years later, there is still no Netflix or YouTube app. There are decent-enough third-party options for YouTube, but you have to watch Netflix in a browser, which is lower-quality than in a native app and looks horrible on one of the Vision Pro’s big virtual screens.

To be clear, there is a modest trickle of interesting spatial app experiences coming in—most of them games, educational apps, or cool one-off ideas that are fun to check out for a few minutes.

All this is to say that nothing has really changed since February 2024. There was an influx of apps at launch that included a small number of show-stoppers (mostly educational apps), but the rest ranged from “basically the iPad app but with one or two throwaway tech-demo-style spatial features you won’t try more than once” to “basically the iPad app but a little more native-feeling” to “literally just the iPad app.” As far as support from popular, cross-platform apps, it’s mostly the same list today as it was then.

Its killer app is that it’s a killer monitor

Even though Apple hasn’t made a big leap forward in developer support, it has made big strides in making the Vision Pro a nifty companion to the Mac.

From the start, it has had a feature that lets you simply look at a Mac’s built-in display, tap your fingers, and launch a large, resizable virtual monitor. I have my own big, multi-monitor setup at home, but I have used the Vision Pro this way sometimes when traveling.

I had some complaints at the start, though. It could only do one monitor, and that monitor was limited to 60 Hz and a standard widescreen resolution. That’s better than just using a 14-inch MacBook Pro screen, but it’s a far cry from the sort of high-end setup a $3,500 price tag suggests. Furthermore, it didn’t allow you to switch audio between the two devices.

Thanks to both software and hardware updates, that has all changed. visionOS now supports three different monitor sizes: the standard widescreen aspect ratio, a wider one that resembles a standard ultra-wide monitor, and a gigantic, ultra-ultra-wide wrap-around display that I can assure you will leave no one wanting for desktop space. It looks great. Problem solved! Likewise, it will now transfer your Mac audio to the Vision Pro or its Bluetooth headphones automatically.

All of that works not just on the new Vision Pro, but also on the M2 model. The new M5 model addresses the last of my complaints, and it’s the one improvement exclusive to the new hardware: You can now achieve refresh rates higher than 60 Hz for that virtual monitor. Apple says it goes “up to 120 Hz,” but there’s no available tool for measuring exactly where it’s landing. Still, I’m happy to see any improvement here.

This is the standard width for the Mac monitor feature… Credit: Samuel Axon

Through a series of updates, Apple has turned a neat proof-of-concept feature into something that is genuinely valuable—especially for folks who like ultra-wide or multi-monitor setups but have to travel a lot (like myself) or who just don’t want to invest in the display hardware at home.

You can also play your Mac games on this monitor. I tried playing No Man’s Sky and Cyberpunk 2077 on it with a controller, and it was a fantastic experience.

This, alongside spatial video and watching movies, is the Vision Pro’s current killer app and one of the main areas where Apple has clearly put a lot of effort into improving the platform.

Stop trying to make Personas happen

Strangely, another area where Apple has invested quite a bit to make things better is in the Vision Pro’s usefulness as a communications and meetings device. Personas—the 3D avatars of yourself that you create for Zoom calls and the like—were absolutely terrible when the M2 Vision Pro came out.

There is also EyeSight, which uses your Persona to show a simulacrum of your eyes to people around you in the real world, letting them know you are aware of your surroundings and even allowing them to follow your gaze. I understand the thought behind this feature—Apple doesn’t want mixed reality to be socially isolating—but it sometimes puts your eyes in the wrong place, it’s kind of hard to see, and it honestly seems like a waste of expensive hardware.

I’m pleased to report that, primarily via software updates, Personas are drastically improved. Mine now actually looks like me, and it moves more naturally, too.

I joined a FaceTime call with Apple reps where they showed me how Personas float and emote around each other, and how we could look at the same files and assets together. It was indisputably cool and way better than before, thanks to the improved Personas.

I can’t say as much for EyeSight, which looks the same. It’s hard for me to fathom that Apple has put multiple sensors and screens on this thing to support this feature.

In my view, dropping EyeSight would be the single best thing Apple could do for this headset. Most people don’t like it, and most people don’t want it, yet there is no question that its inclusion adds a not-insignificant amount to both the price and the weight, the product’s two biggest barriers to adoption.

Likewise, Personas are theoretically cool, and it is a novel and fun experience to join a FaceTime call with people and see how it works and what you could do. But it’s just that: a novel experience. Once you’ve done it, you’ll never feel the need to do it again. I can barely imagine anyone who would rather show up to a call as a Persona than take the headset off for 30 minutes to dial in on their computer.

Much of this headset is dedicated to this idea that it can be a device that connects you with others, but maintaining that priority is simply the wrong decision. Mixed reality is isolating, and Apple is treating that like a problem to be solved, but I consider that part of its appeal.

If this headset were capable of out-in-the-world AR applications, I would not feel that way, but the Vision Pro doesn’t support any application that would involve taking it outside the home into public spaces. A lot of the cool, theoretical AR uses I can think of would involve that, but still no dice here.

The metaverse (it’s telling that this is the first time I’ve typed that word in at least a year) already exists: It’s on our phones, in Instagram and TikTok and WeChat and Fortnite. It doesn’t need to be invented, and it doesn’t need a new, clever approach to finally make it take off. It has already been invented. It’s already in orbit.

Like the iPad and the Apple Watch before it, the Vision Pro needs to stop trying to be a general-purpose device and instead needs to lean into what makes it special.

In doing so, it will become a better user experience, and it will get lighter and cheaper, too. There’s real potential there. Unfortunately, Apple may not go that route if leaks and insider reports are to be believed.

There’s still a ways to go, so hopefully this isn’t a dead end

The M5 Vision Pro was the first of four planned new releases in the product line, according to generally reliable industry analyst Ming-Chi Kuo. Next up, he predicted, would be a full Vision Pro 2 release with a redesign, and a Vision Air, a cheaper, lighter alternative. Those would all precede true smart glasses many years down the road.

I liked that plan: keep the full-featured Vision Pro for folks who want the most premium mixed reality experience possible (but maybe drop EyeSight), and launch a cheaper version to compete more directly with headsets like Meta’s Quest line, Valve’s newly announced Steam Frame VR headset, and planned competitors from Google, Samsung, and others.

True augmented reality glasses are an amazing dream, but there are serious optics and user-experience problems that we’re still a ways from solving before such glasses can truly replace the smartphone, as Tim Cook once predicted they would.

All that said, it looks like that plan has been called into question. A Bloomberg report in October claimed that Apple CEO Tim Cook had told employees that the company was redirecting resources from future passthrough HMD products to accelerate work on smart glasses.

Let’s be real: It’s always going to be a once-in-a-while device, not a daily driver. For many people, that would be fine if it cost $1,000. At $3,500, it’s still a nonstarter for most consumers.

I believe there is room for this product in the marketplace. I still think it’s amazing. It’s not going to be as big as the iPhone, or probably even the iPad, but it has already found a small audience that could grow significantly if the price and weight could come down. Removing all the hardware related to Personas and EyeSight would help with that.

I hope Apple keeps working on it. When Apple released the Apple Watch, it wasn’t entirely clear what its niche would be in users’ lives. The answer (health and fitness) became crystal clear over time, and the other ambitions of the device faded away while the company began building on top of what was working best.

You see Apple doing that a little bit with the expanded Mac spatial display functionality. That can be the start of an intriguing journey. But writers have a somewhat crass phrase: “kill your darlings.” It means that you need to be clear-eyed about your work and unsentimentally cut anything that’s not working, even if you personally love it—even if it was the main thing that got you excited about starting the project in the first place.

It’s past time for Apple to start killing some darlings with the Vision Pro, but I truly hope it doesn’t go too far and kill the whole platform.


Samuel Axon is the editorial lead for tech and gaming coverage at Ars Technica. He covers AI, software development, gaming, entertainment, and mixed reality. He has been writing about gaming and technology for nearly two decades at Engadget, PC World, Mashable, Vice, Polygon, Wired, and others. He previously ran a marketing and PR agency in the gaming industry, led editorial for the TV network CBS, and worked on social media marketing strategy for Samsung Mobile at the creative agency SPCSHP. He also is an independent software and game developer for iOS, Windows, and other platforms, and he is a graduate of DePaul University, where he studied interactive media and software development.



“Go generate a bridge and jump off it”: How video pros are navigating AI


I talked with nine creators about economic pressures and fan backlash.

Credit: Aurich Lawson | Getty Images


In 2016, the legendary Japanese filmmaker Hayao Miyazaki was shown a bizarre AI-generated video of a misshapen human body crawling across a floor.

Miyazaki declared himself “utterly disgusted” by the technology demo, which he considered an “insult to life itself.”

“If you really want to make creepy stuff, you can go ahead and do it,” Miyazaki said. “I would never wish to incorporate this technology into my work at all.”

Many fans interpreted Miyazaki’s remarks as rejecting AI-generated video in general. So they didn’t like it when, in October 2024, filmmaker PJ Accetturo used AI tools to create a fake trailer for a live-action version of Miyazaki’s animated classic Princess Mononoke. The trailer earned him 22 million views on X. It also earned him hundreds of insults and death threats.

“Go generate a bridge and jump off of it,” said one of the funnier retorts. Another urged Accetturo to “throw your computer in a river and beg God’s forgiveness.”

Someone tweeted that Miyazaki “should be allowed to legally hunt and kill this man for sport.”

PJ Accetturo is a director and founder of Genre AI, an AI ad agency. Credit: PJ Accetturo

The development of AI image and video generation models has been controversial, to say the least. Artists have accused AI companies of stealing their work to build tools that put people out of a job. Using AI tools openly is stigmatized in many circles, as Accetturo learned the hard way.

But as these models have improved, they have sped up workflows and afforded new opportunities for artistic expression. Artists without AI expertise might soon find themselves losing work.

Over the last few weeks, I’ve spoken to nine actors, directors, and creators about how they are navigating these tricky waters. Here’s what they told me.

Actors have emerged as a powerful force against AI. In 2023, SAG-AFTRA, the Hollywood actors’ union, had its longest-ever strike, partly to establish more protections for actors against AI replicas.

Actors have lobbied to regulate AI in their industry and beyond. One actor I talked with, Erik Passoja, has testified before the California Legislature in favor of several bills, including for greater protections against pornographic deepfakes. SAG-AFTRA endorsed SB 1047, an AI safety bill regulating frontier models. The union also organized against the proposed moratorium on state AI bills.

A recent flashpoint came in September, when Deadline Hollywood reported that talent agencies were interested in signing “AI actress” Tilly Norwood.

Actors weren’t happy. Emily Blunt told Variety, “This is really, really scary. Come on agencies, don’t do that.”

Natasha Lyonne, star of Russian Doll, posted on an Instagram Story: “Any talent agency that engages in this should be boycotted by all guilds. Deeply misguided & totally disturbed.”

The backlash was partly specific to Tilly Norwood—Lyonne is no AI skeptic, having cofounded an AI studio—but it also reflects a set of concerns around AI common to many in Hollywood and beyond.

Here’s how SAG-AFTRA explained its position:

Tilly Norwood is not an actor, it’s a character generated by a computer program that was trained on the work of countless professional performers — without permission or compensation. It has no life experience to draw from, no emotion and, from what we’ve seen, audiences aren’t interested in watching computer-generated content untethered from the human experience. It doesn’t solve any “problem” — it creates the problem of using stolen performances to put actors out of work, jeopardizing performer livelihoods and devaluing human artistry.

This statement reflects three broad criticisms that come up over and over in discussions of AI art:

Content theft: Most leading AI video models have been trained on broad swathes of the Internet, including images and films made by artists. In many cases, companies have not asked artists for permission to use this content, nor compensated them. Courts are still working out whether this is fair use under copyright law. But many people I talked to consider AI companies’ training efforts to be theft of artists’ work.

Job loss: If AI tools can make passable video quickly or drastically speed up editing tasks, that potentially takes jobs away from actors or film editors. While past technological advancements have also eliminated jobs—the adoption of digital cameras drastically reduced the number of people cutting physical film—AI could have an even broader impact.

Artistic quality: A lot of people told me they just didn’t think AI-generated content could ever be good art. Tess Dinerstein stars in vertical dramas—episodic programs optimized for viewing on smartphones. She told me that AI is “missing that sort of human connection that you have when you go to a movie theater and you’re sobbing your eyes out because your favorite actor is talking about their dead mom.”

The concern about theft is potentially solvable by changing how models are trained. Around the time Accetturo released the “Princess Mononoke” trailer, he called for generative AI tools to be “ethically trained on licensed datasets.”

Some companies have moved in this direction. For instance, independent filmmaker Gille Klabin told me he “feels pretty good” using Adobe products because the company trains its AI models on stock images that it pays royalties for.

But the other two issues—job losses and artistic integrity—will be harder to finesse. Many creators—and fans—believe that AI-generated content misses the fundamental point of art, which is about creating an emotional connection between creators and viewers.

But while that point is compelling in theory, the details can be tricky.

Dinerstein, the vertical drama actress, told me that she’s “not fundamentally against AI”—she admits “it provides a lot of resources to filmmakers” in specialized editing tasks—but she takes a hard stance against it on social media.

“It’s hard to ever explain gray areas on social media,” she said, and she doesn’t want to “come off as hypocritical.”

Even though she doesn’t think that AI poses a risk to her job—“people want to see what I’m up to”—she does fear people (both fans and vertical drama studios) making an AI representation of her without her permission. And she has found it easiest to just say, “You know what? Don’t involve me in AI.”

Others see it as a much broader issue. Actress Susan Spano told me it was “an issue for humans, not just actors.”

“This is a world of humans and animals,” she said. “Interaction with humans is what makes it fun. I mean, do we want a world of robots?”

It’s relatively easy for actors to take a firm stance against AI because they inherently do their work in the physical world. But things are more complicated for other Hollywood creatives, such as directors, writers, and film editors. AI tools can genuinely make them more productive, and they’re at risk of losing work if they don’t stay on the cutting edge.

So the non-actors I talked to took a range of approaches to AI. Some still reject it. Others have used the tools reluctantly and tried to keep their heads down. Still others have openly embraced the technology.

Kavan Cardoza is a director and AI filmmaker. Credit: Phantom X

Take Kavan Cardoza, for example. He worked as a music video director and photographer for close to a decade before getting his break into filmmaking with AI.

After the image model Midjourney was first released in 2022, Cardoza started playing around with image generation and later video generation. Eventually, he “started making a bunch of fake movie trailers” for existing movies and franchises. In December 2024, he made a fan film in the Batman universe that “exploded on the Internet,” before Warner Bros. took it down for copyright infringement.

Cardoza acknowledges that he re-created actors in former Batman movies “without their permission.” But he insists he wasn’t “trying to be malicious or whatever. It was truly just a fan film.”

Whereas Accetturo received death threats, the response to Cardoza’s fan film was quite positive.

“Every other major studio started contacting me,” Cardoza said. He set up an AI studio, Phantom X, with several of his close friends. Phantom X started by making ads (where AI video is catching on quickest), but Cardoza wanted to focus back on films.

In June, Cardoza made a short film called Echo Hunter, a blend of Blade Runner and The Matrix. Some shots look clearly AI-generated, but Cardoza used motion-capture technology from Runway to put the faces of real actors into his AI-generated world. Overall, the piece pretty much hangs together.

Cardoza wanted to work with real actors because their artistic choices can help elevate the script he’s written: “There’s a lot more levels of creativity to it.” But he needed SAG-AFTRA’s approval to make a film that blends AI techniques with the likenesses of SAG-AFTRA actors. To get it, he had to promise not to reuse the actors’ likenesses in other films.

In Cardoza’s view, AI is “giving voices to creators that otherwise never would have had the voice.”

But Cardoza isn’t wedded to AI. When an interviewer asked him whether he’d make a non-AI film if required to, he responded, “Oh, 100 percent.” Cardoza added that if he had the budget to do it now, “I’d probably still shoot it all live action.”

He acknowledged to me that there will be losers in the transition—“there’s always going to be changes”—but he compares the rise of AI with past technological developments in filmmaking, like the rise of visual effects. This created new jobs making visual effects digitally, but reduced jobs making elaborate physical sets.

Cardoza expressed interest in reducing the amount of job loss. In another interview, Cardoza said that for his film project, “we want to make sure we include as many people as possible,” not just actors, but sound designers, script editors, and other specialized roles.

But he believes that eventually, AI will get good enough to do everyone’s job. “Like I say with tech, it’s never about if, it’s just when.”

Accetturo’s entry into AI was similar. He told me that he worked for 15 years as a filmmaker, “mostly as a commercial director and former documentary director.” During the pandemic, he “raised millions” for an animated TV series, but it got caught up in development hell.

AI gave him a new chance at success. Over the summer of 2024, he started playing around with AI video tools. He realized that he was in the sweet spot to take advantage of AI: experienced enough to make something good, but not so established that he was risking his reputation. After Google released Veo 3 in May, Accetturo released a fake medicine ad that went viral. His studio now produces ads for prominent companies like Oracle and Popeyes.

Accetturo says the backlash against him has subsided: “It truly is nothing compared to what it was.” And he says he’s committed to working on AI: “Everyone understands that it’s the future.”

Between the anti- and pro-AI extremes, there are a lot of editors and artists quietly using AI tools without disclosing it. Unsurprisingly, it’s difficult to find people who will speak about this on the record.

“A lot of people want plausible deniability right now,” according to Ryan Hayden, a Hollywood talent agent. “There is backlash about it.”

But if editors don’t use AI tools, they risk becoming obsolete. Hayden says that he knows a lot of people in the editing field trying to master AI because “there’s gonna be a massive cut” in the total number of editors. Those who know AI might survive.

As one comedy writer involved in an AI project told Wired, “We wanted to be at the table and not on the menu.”

Clandestine AI usage extends into the upper reaches of the industry. Hayden knows an editor who works with a major director who has directed $100 million films. “He’s already using AI, sometimes without people knowing.”

Some artists feel morally conflicted but don’t think they can effectively resist. Vinny Dellay, a storyboard artist who has worked on Marvel films and Super Bowl ads, released a video detailing his views on the ethics of using AI as a working artist. Dellay said that he agrees that “AI being trained off of art found on the Internet without getting permission from the artist, it may not be fair, it may not be honest.” But refusing to use AI products won’t stop their general adoption. Believing otherwise is “just being delusional.”

Instead, Dellay said that the right course is to “adapt like cockroaches after a nuclear war.” If they’re lucky, using AI in storyboarding workflows might even “let a storyboard artist pump out twice the boards in half the time without questioning all your life’s choices at 3 am.”

Gille Klabin is an independent writer, director, and visual effects artist. Credit: Gille Klabin

Gille Klabin is an indie director and filmmaker currently working on a feature called Weekend at the End of the World.

As an independent filmmaker, Klabin can’t afford to hire many people. There are many labor-intensive tasks—like making a pitch deck for his film—that he’d otherwise have to do himself. An AI tool “essentially just liberates us to get more done and have more time back in our life.”

But he’s careful to stick to his own moral lines. Any time he mentioned using an AI tool during our interview, he’d explain why he thought that was an appropriate choice. He said he was fine with AI use “as long as you’re using it ethically in the sense that you’re not copying somebody’s work and using it for your own.”

Drawing these lines can be difficult, however. Hayden, the talent agent, told me that as AI tools make low-budget films look better, it gets harder to make high-budget films, which employ the most people at the highest wage levels.

If anything, Klabin’s AI uptake is limited more by the current capabilities of AI models. Klabin is an experienced visual effects artist, and he finds AI products to generally be “not really good enough to be used in a final project.”

He gave me a concrete example. Rotoscoping is a process in which you trace out the subject of a shot so you can edit the background independently. It’s very labor-intensive—one has to edit every frame individually—so Klabin has tried using Runway’s AI-driven rotoscoping. While it can make for a decent first pass, the result is just too messy to use in a final product.

Klabin sent me this GIF of a series of rotoscoped frames from his upcoming movie. While the model does a decent job of identifying the people in the frame, its boundaries aren’t consistent from frame to frame. The result is noisy.
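For readers who haven’t worked with rotoscoping, here is a minimal sketch (my own illustration, not Klabin’s or Runway’s actual pipeline) of what those per-frame masks are used for: once every frame has a mask isolating the subject, the background can be edited on its own and the untouched subject composited back over it. The file layout and the darkening step are hypothetical.

```python
# A minimal, hypothetical sketch of compositing with per-frame rotoscope masks.
# Assumes frames/ and masks/ already contain matching PNGs; how the masks were
# produced (hand-traced or AI-generated) is outside this sketch.
import glob
import cv2
import numpy as np

frame_paths = sorted(glob.glob("frames/*.png"))
mask_paths = sorted(glob.glob("masks/*.png"))

for frame_path, mask_path in zip(frame_paths, mask_paths):
    frame = cv2.imread(frame_path)                      # original frame (BGR)
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)  # white = subject
    subject = mask > 127                                 # boolean subject mask

    # Edit the background independently; here, simply darken it.
    background = (frame * 0.4).astype(np.uint8)

    # Keep original pixels where the subject is; use the edited background elsewhere.
    composite = np.where(subject[..., None], frame, background)
    cv2.imwrite(frame_path.replace("frames/", "output/"), composite)
```

Because the mask is applied frame by frame, any jitter in its boundary shows up directly in the composite, which is exactly the noisiness visible in Klabin’s example.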

Current AI tools are full of these small glitches, so Klabin only uses them for tasks that audiences don’t see (like creating a movie pitch deck) or in contexts where he can clean up the result afterward.

Stephen Robles reviews Apple products on YouTube and other platforms. He uses AI in some parts of the editing process, such as removing silences or transcribing audio, but doesn’t see it as disruptive to his career.

Stephen Robles is a YouTuber, podcaster, and creator covering tech, particularly Apple. Credit: Stephen Robles

“I am betting on the audience wanting to trust creators, wanting to see authenticity,” he told me. AI video tools don’t really help him with that and can’t replace the reputation he’s sought to build.

Recently, he experimented with using ChatGPT to edit a video thumbnail (the image used to advertise a video). He got a couple of negative reactions about his use of AI, so he said he “might slow down a little bit” with that experimentation.

Robles didn’t seem as concerned about AI models stealing from creators like him. When I asked him about how he felt about Google training on his data, he told me that “YouTube provides me enough benefit that I don’t think too much about that.”

Professional thumbnail artist Antioch Hwang has a similarly pragmatic view toward using AI. Some channels he works with have audiences that are “very sensitive to AI images.” Even using “an AI upscaler to fix up the edges” can provoke strong negative reactions. For those channels, he’s “very wary” about using AI.

Antioch Hwang is a YouTube thumbnail artist. Credit: Antioch Creative

But for most channels he works for, he’s fine using AI, at least for technical tasks. “I think there’s now been a big shift in the public perception of these AI image generation tools,” he told me. “People are now welcoming them into their workflow.”

He’s still careful with his AI use, though, because he thinks that having human artistry helps in the YouTube ecosystem. “If everyone has all the [AI] tools, then how do you really stand out?” he said.

Recently, top creators have started using more rough-looking thumbnails for their videos. AI has made polished thumbnails too easy to create, so top creators are using what Hwang would call “poorly made thumbnails” to help videos stand out.

Hwang told me something surprising: even as AI makes it easier for creators to make thumbnails themselves, business has never been better for thumbnail artists, even at the lower end. He said that demand has soared because “AI as a whole has lowered the barriers for content creation, and now there’s more creators flooding in.”

Still, Hwang doesn’t expect the good times to last forever. “I don’t see AI completely taking over for the next three-ish years. That’s my estimated timeline.”

Everyone I talked to had different answers to when—if ever—AI would meaningfully disrupt their part of the industry.

Some, like Hwang, were pessimistic. Actor Erik Passoja told me he thought the big movie studios—like Warner Bros. or Paramount—would be gone in three to five years.

But others were more optimistic. Tess Dinerstein, the vertical drama actor, said, “I don’t think that verticals are ever going to go fully AI.” Even if it becomes technologically feasible, she argued, “that just doesn’t seem to be what the people want.”

Gille Klabin, the independent filmmaker, thought there would always be a place for high-quality human films. If someone’s work is “fundamentally derivative,” then they are at risk. But he thinks the best human-created work will still stand out. “I don’t know how AI could possibly replace the borderline divine element of consciousness,” he said.

The people who were most bullish on AI were, if anything, the least optimistic about their own career prospects. “I think at a certain point it won’t matter,” Kavan Cardoza told me. “It’ll be that anyone on the planet can just type in some sentences” to generate full, high-quality videos.

This might explain why Accetturo has become something of an AI evangelist; his newsletter tries to teach other filmmakers how to adapt to the coming AI revolution.

AI “is a tsunami that is gonna wipe out everyone,” he told me. “So I’m handing out surfboards—teaching people how to surf. Do with it what you will.”

Kai Williams is a reporter for Understanding AI, a Substack newsletter founded by Ars Technica alum Timothy B. Lee. His work is supported by a Tarbell Fellowship. Subscribe to Understanding AI to get more from Tim and Kai.



Stoke Space goes for broke to solve the only launch problem that “moves the needle”


“Does the world really need a 151st rocket company?”

Stoke Space’s full-flow staged combustion is tested in Central Washington in 2024. Credit: Stoke Space


LAUNCH COMPLEX 14, Cape Canaveral, Fla.—The platform atop the hulking steel tower offered a sweeping view of Florida’s rich, sandy coastline and brilliant blue waves beyond. Yet as captivating as the vista might be for an aspiring rocket magnate like Andy Lapsa, it also had to be a little intimidating.

To his right, at Launch Complex 13 next door, a recently returned Falcon 9 booster stood on a landing pad. SpaceX has landed more than 500 large orbital rockets. And next to SpaceX sprawled the launch site operated by Blue Origin. Its massive New Glenn rocket is also reusable, and founder Jeff Bezos has invested tens of billions of dollars into the venture.

Looking to the left, Lapsa saw a graveyard of sorts for commercial startups. Launch Complex 15 was leased to a promising startup, ABL Space, two years ago. After two failed launches, ABL Space pivoted away from commercial launch. Just beyond lies Launch Complex 16, where Relativity Space aims to launch. The company has already burned through $1.7 billion in its efforts to reach orbit. Had billionaire Eric Schmidt not stepped in earlier this year, Relativity would have gone bankrupt.

Andy Lapsa may be a brainy rocket scientist, but he is not a billionaire. Far from it.

“When you start a company like this, you have no idea how far you’re going to be able to make it, you know?” he admitted.

Lapsa and another aerospace engineer, Tom Feldman, founded Stoke Space a little more than five years ago. Both had worked the better part of a decade at Blue Origin and decided they wanted to make their mark on the industry. It was not an easy choice to start a rocket company at a time when there were dozens of other entrants in the field.

Andy Lapsa speaks at the Space Economy Summit in November 2025. Credit: The Economist Group

“It was a huge question in my head: Does the world really need a 151st rocket company?” he said. “And in order for me to say yes to that question, I had to very systematically go through all the other players, thinking about the economics of launch, about the business plan, about the evolution of these companies over time. It was very non-intuitive to me to start another launch company.”

So why did he do it?

I traveled to Florida in November to answer this question and to see if the world’s 151st rocket company had any chance of success.

Launch Complex 14

It takes a long time to build a launch site. Probably longer than you might think.

Lapsa and Feldman spent much of 2020 working on the basic design of a rocket that would eventually be named Nova and deciding whether they could build a business around it. In December of that year, they closed their seed round of funding, raising $9.1 million. After this, finding somewhere to launch from became a priority.

They zeroed in on Cape Canaveral because it’s where the majority of US launch companies and customers are, as well as the talent to assemble and launch rockets. They learned in 2021 that the US Space Force was planning to lease an old pad, Space Launch Complex 14, to a commercial company. This was not just a good location to launch from; it was truly a historic location—John Glenn launched into orbit from here in 1962 aboard the Friendship 7 spacecraft. It was retired in 1967 and designated a National Historic Landmark.

But in recent years, the Space Force has sought to support the flourishing US commercial space industry, and it offered up Launch Complex 14 for lease. The competition opened in 2021, and Stoke Space won the lease a year later. Then began the long and arduous process of conducting an Environmental Assessment. It took nearly two years, and it was not until October 20, 2024, that Stoke was allowed to break ground.

None of the structures on the site were usable; aside from the historic blockhouse dating to the Mercury program, everything had to be demolished and cleared before work could begin.

As we walked the large ring encompassing the site, Lapsa explained that all of the tanks and major hardware needed to support a Nova launch were now on site. There is a large launch tower, as well as a launch mount upon which the rocket will be stood up. The company has mostly turned toward integrating all of the ground infrastructure and wiring up the site. A nearby building to assemble rockets and process payloads is well underway.

Lapsa seemed mostly relieved. “A year ago, this was my biggest concern,” he said.

He need not have worried. A few months before the company completed its environmental permitting, a tall, lanky, thickly bearded engineer named Jonathan Lund hired on. A Stanford graduate who got his start with the US Army Corps of Engineers, Lund worked at SpaceX during the second half of the 2010s, helping to lead the reconstruction of one launch pad, the crew tower project at Launch Complex 39A, and a pad at Vandenberg Space Force Base. He also worked on multiple landing sites for the Falcon 9 rocket. Lund arrived to lead the development of Stoke’s site.

This is Lund’s fifth launch pad. Each one presents different challenges. In Florida, for example, the water table lies only a few feet below the ground. But for most rockets, including Nova, a large trench must be dug to allow flames from the rocket engines to be carried away from the vehicle at ignition and liftoff. As we stood in this massive flame diverter, there were a few indications of water seeping in.

Still, the company recently completed a major milestone by testing the water suppression system, which dampens the energy of a rocket at liftoff to protect the launch pad. Essentially, the plume from the rocket’s engines flows downward where it meets a sheet of water, turning it into steam. This creates an insulating barrier of sorts.

Water suppression test at LC-14 complete. ✅ Flowed the diverter and rain birds in a “launch like” scenario. pic.twitter.com/rs1lEloPul

— Stoke Space (@stoke_space) October 21, 2025

The water comes from large pipes running down the flame diverter, each of which has hundreds of holes not unlike a garden sprinkler hose. Lund said the pipes and the frame they rest on were built near where we stood.

“We fabricated these pieces on site, at the north end of the flame trench,” Lund explained. “Then we built this frame in Cocoa Beach and shipped it in four different sections and assembled it on site. Then we set the frame on the ramp, put together this surface (with the pipes), and then Egyptian-style we slide it down the ramp right into position. We used some old-school methods, but simple sometimes works best. Nothing fancy.”

At this point, Lapsa interrupted. “I was pretty nervous,” he said. “The way you’re describing this sounded good on a PowerPoint. But I wasn’t sure it actually would work.”

But it did.

Waiting on Nova

So if the pad is rounding into shape, how’s that rocket coming?

It sounds like Stoke Space is doing the right things. Earlier this year, the company shipped a full-scale version of its second stage to its test site at Moses Lake in central Washington. There, it underwent qualification testing, during which the vehicle is loaded with cryogenic fuels on multiple occasions, pressurized, and put through other exercises. Lapsa said that testing went well.

The company also built a stubby version of its first stage. The tanks and domes had full-size diameters, but the stage was not its full height. That vehicle also underwent qualification testing and passed.

The company has begun building flight hardware for the first Nova rocket. The vehicle’s software is maturing. Work is well underway on the development of an automated flight termination system. “Having a team that’s been through this cycle many times, it’s something we started putting attention on very early,” Lapsa said. “It’s on a good path as well.”

And yet the final, frenetic months leading to a debut launch are crunch time for any rocket company: the first assembly of the full vehicle, the first test-firing of it all. Things will inevitably go wrong. The question is how bad the problems will be.

For as long as I’ve known Lapsa, he has been cagey about launch dates for Stoke. This is smart because in reality, no one knows. And seasoned industry people (and journalists) know that projected launch dates for new rockets are squishy. The most precise thing Lapsa will say is that Stoke is targeting “next year” for Nova’s debut.

The company has a customer for the first flight. If all goes well, its first mission will sail to the asteroid belt. Asteroid mining startup AstroForge has signed on for Nova 1.

Stoke Space isn’t shooting for the Moon. It’s shooting for something roughly a thousand times farther.

Too good to believe it’s true?

Stoke Space is far from the first company to start with grand ambitions. And when rocket startups think too big, it can be their undoing.

A little more than a decade ago, Firefly Space Systems in Texas based the design of its Alpha rocket on an aerospike engine, a technology that had never been flown to space before. Although this was theoretically a more efficient engine design, it also brought more technical risk and proved a bridge too far. By 2017, the company was bankrupt. When Ukrainian investor Max Polyakov rescued Firefly later that year, he demanded that Alpha have a more conventional rocket engine design.

Around the same time that Firefly struggled with its aerospike engine, another launch company, Relativity Space, announced its intent to 3D-print the entirety of its rockets. The company finally launched its Terran 1 rocket after eight years. But it struggled with additively manufacturing rockets. Relativity was on the brink of bankruptcy before a former Google executive, Eric Schmidt, stepped in to rescue the company financially. Relativity is now focused on a traditionally manufactured rocket, the Terran R.

Stoke Space’s Hopper 2 takes to the skies in September 2023 in Moses Lake, Washington. Credit: Stoke Space

So what to make of Stoke Space, which has an utterly novel design for its second stage? The stage is powered by a ring of 24 thrusters that together form a single engine named Andromeda. Stoke has also eschewed a tile-based heat shield to protect the vehicle during atmospheric reentry in favor of a regeneratively cooled design.

In this, there are echoes of Firefly, Relativity, and other companies with grand plans that had to be abandoned in favor of simpler designs to avoid financial ruin. After all, it’s hard enough to reach orbit with a conventional rocket.

But the company has already done a lot of testing of this design. Its first iteration of Andromeda even completed a hop test back in 2023.

“Andromeda is wildly new,” Lapsa said. “But the question of can it work, in my opinion, is a resounding yes.”

The engineering team had all manner of questions when designing Andromeda several years ago. How will all of those thrusters and their plumbing interact with one another? Will there be feedback? Is the heat shield idea practical?

“Those are the kind of unknowns that we knew we were walking into from an engineering perspective,” Lapsa said. “We knew there should be an answer in there, but we didn’t know exactly what it would be. It’s very hard to model all that stuff in the transient. So you just had to get after it, and do it, and we were able to do that. So can it work? Absolutely yes. Will it work out of the box? That’s a different question.”

First stage, too

Stoke’s ambitions did not stop with the upper stage. Early on, Lapsa, Feldman, and the small engineering team also decided to develop a full-flow staged combustion engine. This, Lapsa acknowledges, was a “risky” decision for the company. But it was a necessary one, he believes.

Full-flow staged combustion engines had been tested before this decade but had never flown. From an engineering standpoint, they are significantly more complex than a traditional staged combustion engine: both the oxidizer and the fuel, which begin as cryogenic liquids, arrive in the combustion chamber in a fully gaseous state. This interaction between two gases is more efficient and produces less wear and tear on the turbines within the engine.

“You want to get the highest efficiency you can without driving the turbine temperature to a place where you have a short lifetime,” Lapsa said. “Full-flow is the right answer for that. If you do anything else, it’s a distraction.”

Stoke Space successfully tests its advanced full-flow staged combustion rocket engine, designed to power the Nova launch vehicle’s first stage. Credit: Stoke Space

It was also massively unproven. When Stoke Space was founded in 2020, no full-flow staged combustion engine had ever gotten close to space. SpaceX was developing the Raptor engine using the technology, but it would not make its first “spaceflight” until the spring of 2023 on the Super Heavy rocket that powers Starship. Multiple Raptors failed shortly after ignition.

But for a company choosing full reusability of its rocket, as SpaceX sought to do with Starship, there ultimately is no choice.

“Anything you build for full and rapid reuse needs to find margin somewhere in the system,” Lapsa said. “And really that’s fuel efficiency. It makes fuel efficiency a very strong, very important driver.”

In June 2024, Stoke Space announced it had just completed a successful hot fire test of its full-flow staged combustion engine for Nova’s first stage. The propulsion team had, Lapsa said at the time, “worked tirelessly” to reach that point.

Not just another launch company?

Stoke Space got to the party late. After SpaceX’s success with the first Falcon 9 in 2010, a wave of new entrants joined the field over the next decade. They drew in billions of dollars in venture capital funding, and some were starting to go public at huge valuations through mergers with special purpose acquisition companies. But by 2020, the market seemed saturated. The gold rush for new launch companies was nearing the cops-arrive-to-bust-up-the-festivities stage.

Every new company seemed to have its own spin on how to conquer low-Earth orbit.

“There were a lot of other business plans being proposed and tried,” Lapsa said. “There were low-cost, mass-produced disposable rockets. There were rockets under the wings of aircraft. There were rocket engine companies that were going to sell to 150 launch companies. All of those ideas raised big money and deserve to be considered. The question is, which one is the winner in the end?”

And that’s the question he was trying to answer in his own mind. He was in his 30s. He had a family. And he was looking to commit his best years, professionally, to solving a major launch problem.

“What’s the thing that fundamentally moves the needle on what’s out there already today?” he said. “The only thing, in my opinion, is rapid reuse. And once you get it, the economics are so powerful that nothing else matters. That’s the thing I couldn’t get out of my head. That’s the only problem I wanted to work on, and so we started a company in order to work on it.”

Stoke was one of many launch companies five years ago. But in the years since, the field has narrowed considerably. Some promising companies, such as Virgin Orbit and ABL Space, launched a few times and folded. Others never made it to the launch pad. Today, by my count, there are fewer than 10 serious commercial launch companies in the United States, Stoke among them. The capital markets seem convinced. In October, Stoke announced a massive $510 million Series D funding round. That is a lot of money at a time when raising funding for launch companies is challenging.

So Stoke has the money it needs. It has a team of sharp engineers and capable technicians. It has a launch pad and qualified hardware. That’s all good because this is the point in the journey for a launch startup where things start to get very, very difficult.

Photo of Eric Berger

Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.


us-spy-satellites-built-by-spacex-send-signals-in-the-“wrong-direction”

US spy satellites built by SpaceX send signals in the “wrong direction”


Spy satellites emit surprising signals

It seems the US didn’t coordinate Starshield’s unusual spectrum use with other countries.

Image of a Starshield satellite from SpaceX’s website. Credit: SpaceX

About 170 Starshield satellites built by SpaceX for the US government’s National Reconnaissance Office (NRO) have been sending signals in the wrong direction, a satellite researcher found.

The SpaceX-built spy satellites are helping the NRO greatly expand its satellite surveillance capabilities, but the purpose of these signals is unknown. The signals are sent from space to Earth in a frequency band that’s allocated internationally for Earth-to-space and space-to-space transmissions.

There have been no public complaints of interference caused by the surprising Starshield emissions. But the researcher who found them says they highlight a troubling lack of transparency in how the US government manages the use of spectrum and a failure to coordinate spectrum usage with other countries.

Scott Tilley, an engineering technologist and amateur radio astronomer in British Columbia, discovered the signals in late September or early October while working on another project. He found them in various parts of the 2025–2110 MHz band, and from his location, he was able to confirm that 170 satellites were emitting the signals over Canada, the United States, and Mexico. Given the global nature of the Starshield constellation, the signals may be emitted over other countries as well.

“This particular band is allocated by the ITU [International Telecommunication Union], the United States, and Canada primarily as an uplink band to spacecraft on orbit—in other words, things in space, so satellite receivers will be listening on these frequencies,” Tilley told Ars. “If you’ve got a loud constellation of signals blasting away on the same frequencies, it has the potential to interfere with the reception of ground station signals being directed at satellites on orbit.”

In the US, users of the 2025–2110 MHz portion of the S-Band include NASA and the National Oceanic and Atmospheric Administration (NOAA), as well as nongovernmental users like TV news broadcasters that have vehicles equipped with satellite dishes to broadcast from remote locations.

Experts told Ars that the NRO likely coordinated with the US National Telecommunications and Information Administration (NTIA) to ensure that signals wouldn’t interfere with other spectrum users. A decision to allow the emissions wouldn’t necessarily be made public, they said. But conflicts with other governments are still possible, especially if the signals are found to interfere with users of the frequencies in other countries.

Surprising signals

Scott Tilley and his antennas. Credit: Scott Tilley

Tilley previously made headlines in 2018 when he located a satellite that NASA had lost contact with in 2005. For his new discovery, Tilley published data and a technical paper describing the “strong wideband S-band emissions,” and his work was featured by NPR on October 17.

Tilley’s technical paper said emissions were detected from 170 satellites out of the 193 known Starshield satellites. Emissions have since been detected from one more satellite, making it 171 out of 193, he told Ars. “The apparent downlink use of an uplink-allocated band, if confirmed by authorities, warrants prompt technical and regulatory review to assess interference risk and ensure compliance” with ITU regulations, Tilley’s paper said.

Tilley said he uses a mix of omnidirectional antennas and dish antennas at his home to receive signals, along with “software-defined radios and quite a bit of proprietary software I’ve written or open source software that I use for analysis work.” The signals did not stop when the paper was published. Tilley said the emissions are powerful enough to be received by “relatively small ground stations.”

Tilley’s paper said that Starshield satellites emit signals with a width of 9 MHz and signal-to-noise ratios (SNRs) of 10 to 15 decibels. “A 10 dB SNR means the received signal power is ten times greater than the noise power in the same bandwidth,” while “20 dB means one hundred times,” Tilley told Ars.
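
For readers who want to translate those decibel figures themselves, here is a minimal sketch of the standard conversion between an SNR in decibels and a signal-to-noise power ratio. The function names and sample values below are ours for illustration; they are not taken from Tilley’s paper or tooling.

import math

def db_to_power_ratio(snr_db: float) -> float:
    # An SNR of x dB corresponds to a signal-to-noise power ratio of 10^(x/10).
    return 10 ** (snr_db / 10)

def power_ratio_to_db(ratio: float) -> float:
    # Inverse conversion: express a power ratio in decibels.
    return 10 * math.log10(ratio)

for snr_db in (10, 15, 20):
    ratio = db_to_power_ratio(snr_db)
    print(f"{snr_db} dB SNR -> signal power is about {ratio:.0f} times the noise power")

# Prints roughly 10, 32, and 100, matching the 10 dB and 20 dB examples quoted above.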

Other Starshield signals that were 4 or 5 MHz wide “have been observed to change frequency from day to day with SNR exceeding 20dB,” his paper said. “Also observed from time to time are other weaker wide signals from 2025–2110 MHz [that] may be artifacts or actual intentional emissions.”

The 2025–2110 MHz band is used by NASA for science missions and by other countries for similar missions, Tilley noted. “Any other radio activity that’s occurring on this band is intentionally limited to avoid causing disruption to its primary purpose,” he said.

The band is used for some fully terrestrial, non-space purposes. Mobile service is allowed in 2025–2110 MHz, but ITU rules say that “administrations shall not introduce high-density mobile systems” in these frequencies. The band is also licensed in the US for non-federal terrestrial services, including the Broadcast Auxiliary Service, Cable Television Relay Service, and Local Television Transmission Service.

While Earth-based systems using the band, such as TV links from mobile studios, have legal protection against interference, Tilley noted that “they normally use highly directional and local signals to link a field crew with a studio… they’re not aimed into space but at a terrestrial target with a very directional antenna.” A trade group representing the US broadcast industry told Ars that it hasn’t observed any interference from Starshield satellites.

“There without anybody knowing it”

Spectrum consultant Rick Reaser told Ars that Starshield’s space-to-Earth transmissions likely haven’t caused any interference problems. “You would not see this unless you were looking for it, or if it turns out that your receiver looks for everything, which most receivers aren’t going to do,” he said.

Reaser said it appears that “whatever they’re doing, they’ve come up with a way to sort of be there without anybody knowing it,” or at least until Tilley noticed the signals.

“But then the question is, can somebody prove that that’s caused a problem?” Reaser said. Other systems using the same spectrum in the correct direction probably aren’t pointed directly at the Starshield satellites, he said.

Reaser’s extensive government experience includes managing spectrum for the Defense Department, negotiating a spectrum-sharing agreement with the European Union, and overseeing the development of new signals for GPS. Reaser said that Tilley’s findings are interesting because the signals would be hard to discover.

“It is being used in the wrong direction, if they’re coming in downlink, that’s supposed to be an uplink,” Reaser said. As for what the signals are being used for, Reaser said he doesn’t know. “It could be communication, it could be all sorts of things,” he said.

Tilley’s paper said the “results raise questions about frequency-allocation compliance and the broader need for transparent coordination among governmental, commercial, and scientific stakeholders.” He argues that international coordination is becoming more important because of the ongoing deployment of large constellations of satellites that could cause harmful interference.

“Cooperative disclosure—without compromising legitimate security interests—will be essential to balance national capability with the shared responsibility of preserving an orderly and predictable radio environment,” his paper said. “The findings presented here are offered in that spirit: not as accusation, but as a public-interest disclosure grounded in reproducible measurement and open analysis. The data, techniques, and references provided enable independent verification by qualified parties without requiring access to proprietary or classified information.”

While Tilley doesn’t know exactly what the emissions are for, his paper said the “signal characteristics—strong, coherent, and highly predictable carriers from a large constellation—create the technical conditions under which opportunistic or deliberate PNT exploitation could occur.”

PNT refers to Positioning, Navigation, and Timing applications. “While it is not suggested that the system was designed for that role, the combination of wideband data channels and persistent carrier tones in a globally distributed or even regionally operated network represents a practical foundation for such use, either by friendly forces in contested environments or by third parties seeking situational awareness,” the paper said.

Emissions may have been approved in secret

Tilley told us that a few Starshield satellites launched just recently, in late September, have not emitted signals while moving toward their final orbits. He said this suggests the emissions are for an “operational payload” and not merely for telemetry, tracking, and control (TT&C).

“This could mean that [the newest satellites] don’t have this payload or that the emissions are not part of TT&C and may begin once these satellites achieve their place within the constellation,” Tilley told Ars. “If these emissions are TT&C, you would expect them to be active especially during the early phases of the mission, when the satellites are actively being tested and moved into position within the constellation.”

Whatever they’re for, Reaser said the emissions were likely approved by the NTIA and that the agency would likely have consulted with the Federal Communications Commission. For federal spectrum use, these kinds of decisions aren’t necessarily made public, he said.

“NRO would have to coordinate that through the NTIA to make sure they didn’t have an interference problem,” Reaser said. “And by the way, this happens a lot. People figure out a way [to transmit] on what they call a non-interference basis, and that’s probably how they got this approved. They say, ‘listen, if somebody reports interference, then you have to shut down.’”

Tilley said it’s clear that “persistent S-band emissions are occurring in the 2025–2110 MHz range without formal ITU coordination.” Claims that the downlink use was approved by the NTIA in a non-public decision “underscore, rather than resolve, the transparency problem,” he told Ars.

An NTIA spokesperson declined to comment. The NRO and FCC did not provide any comment in response to requests from Ars.

SpaceX just “a contractor for the US government”

Randall Berry, a Northwestern University professor of electrical and computer engineering, agreed with Reaser that it’s likely the NTIA approved the downlink use of the band and that this decision was not made public. Getting NTIA clearance is “the proper way this should be done,” he said.

“It would be surprising if NTIA was not aware, as Starshield is a government-operated system,” Berry told Ars. While NASA and other agencies use the band for Earth-to-space transmissions, “they may have been able to show that the Starshield space-to-Earth signals do not create harmful interference with these Earth-to-space signals,” he said.

There is another potential explanation that is less likely but more sinister. Berry said it’s possible that “SpaceX did not make this known to NTIA when the system was cleared for federal use.” Berry said this would be “surprising and potentially problematic.”

SpaceX rendering of a Starshield satellite. Credit: SpaceX

Tilley doesn’t think SpaceX is responsible for the emissions. While Starshield relies on technology built for the commercial Starlink broadband system of low Earth orbit satellites, Elon Musk’s space company made the Starshield satellites in its role as a contractor for the US government.

“I think [SpaceX is] just operating as a contractor for the US government,” Tilley said. “They built a satellite to the government specs provided for them and launched it for them. And from what I understand, the National Reconnaissance Office is the operator.”

SpaceX did not respond to a request for comment.

TV broadcasters conduct interference analysis

TV broadcasters with news trucks that use the same frequencies “protect their band vigorously” and would have reported interference if it was affecting their transmissions, Reaser said. This type of spectrum use is known as Electronic News Gathering (ENG).

The National Association of Broadcasters told Ars that it “has been closely tracking recent reports concerning satellite downlink operation in the 2025–2110 MHz frequency band… While it’s not clear that satellite downlink operations are authorized by international treaty in this range, such operations are uncommon, and we are not aware of any interference complaints related to downlink use.”

The NAB investigated after Tilley’s report. “When the Tilley report first surfaced, NAB conducted an interference analysis—based on some assumptions given that Starshield’s operating parameters have not been publicly disclosed,” the group told us. “That analysis found that interference with ENG systems is unlikely. We believe the proposed downlink operations are likely compatible with broadcaster use of the band, though coordination issues with the International Telecommunication Union (ITU) could still arise.”

Tilley said that a finding of interference being unlikely “addresses only performance, not legality… coordination conducted only within US domestic channels does not meet international requirements under the ITU Radio Regulations. This deployment is not one or two satellites, it is a distributed constellation of hundreds of objects with potential global implications.”

Canada agency: No coordination with ITU or US

When contacted by Ars, an ITU spokesperson said the agency is “unable to provide any comment or additional information on the specific matter referenced.” The ITU said that interference concerns “can be formally raised by national administrations” and that the ITU’s Radio Regulations Board “carefully examines the specifics of the case and determines the most appropriate course of action to address it in line with ITU procedures.”

The Canadian Space Agency (CSA) told Ars that its “missions operating within the frequency band have not yet identified any instances of interference that negatively impact their operations and can be attributed to the referenced emissions.” The CSA indicated that there hasn’t been any coordination with the ITU or the US over the new emissions.

“To date, no coordination process has been initiated for the satellite network in question,” the CSA told Ars. “Coordination of satellite networks is carried out through the International Telecommunication Union (ITU) Radio Regulation, with Innovation, Science and Economic Development Canada (ISED) serving as the responsible national authority.”

The European Space Agency also uses the 2025–2110 MHz band for TT&C. We contacted the agency but did not receive any comment.

The lack of coordination “remains the central issue,” Tilley told Ars. “This band is globally allocated for Earth-to-space uplinks and limited space-to-space use, not continuous space-to-Earth transmissions.”

NASA needs protection from interference

An NTIA spectrum-use report updated in 2015 said NASA “operates earth stations in this band for tracking and command of manned and unmanned Earth-orbiting satellites and space vehicles either for Earth-to-space links for satellites in all types of orbits or through space-to-space links using the Tracking Data and Relay Satellite System (TDRSS). These earth stations control ninety domestic and international space missions including the Space Shuttle, the Hubble Space Telescope, and the International Space Station.”

Additionally, the NOAA “operates earth stations in this band to control the Geostationary Operational Environmental Satellite (GOES) and Polar Operational Environmental Satellite (POES) meteorological satellite systems,” which collect data used by the National Weather Service. We contacted NASA and NOAA, but neither agency provided comment to Ars.

NASA’s use of the band has increased in recent years. The NTIA told the FCC in 2021 that 2025–2110 MHz is “heavily used today and require[s] extensive coordination even among federal users.” The band “has seen dramatically increased demand for federal use as federal operations have shifted from federal bands that were repurposed to accommodate new commercial wireless broadband operations.”

A 2021 NASA memo included in the filing said that NASA would only support commercial launch providers using the band if their use was limited to sending commands to launch vehicles for recovery and retrieval purposes. Even with that limit, commercial launch providers would cause “significant interference” for existing federal operations in the band if the commercial use isn’t coordinated through the NTIA, the memo said.

“NASA makes extensive use of this band (i.e., currently 382 assignments) for both transmissions from earth stations supporting NASA spacecraft (Earth-to-space) and transmissions from NASA’s Tracking and Data Relay Satellite System (TDRSS) to user spacecraft (space-to-space), both of which are critical to NASA operations,” the memo said.

In 2024, the FCC issued an order allowing non-federal space launch operations to use the 2025–2110 MHz band on a secondary basis. The allocation is “limited to space launch telecommand transmissions and will require commercial space launch providers to coordinate with non-Federal terrestrial licensees… and NTIA,” the FCC order said.

International non-interference rules

While US agencies may not object to the Starshield emissions, that doesn’t guarantee there will be no trouble with other countries. Article 4.4 of ITU regulations says that member nations may not assign frequencies that conflict with the Table of Frequency Allocations “except on the express condition that such a station, when using such a frequency assignment, shall not cause harmful interference to, and shall not claim protection from harmful interference caused by, a station operating in accordance with the provisions.”

Reaser said that under Article 4.4, entities that are caught interfering with other spectrum users are “supposed to shut down.” But if the Starshield users were accused of interference, they would probably “open negotiations with the offended party” instead of immediately stopping the emissions, he said.

“My guess is they were allowed to operate on a non-interference basis and if there is an interference issue, they’d have to go figure a way to resolve them,” he said.

Tilley told Ars that Article 4.4 allows for non-interference use domestically but “is not a blank check for continuous, global downlinks from a constellation.” In that case, “international coordination duties still apply,” he said.

Tilley pointed out that under the Convention on Registration of Objects Launched into Outer Space, states must report the general function of a space object. “Objects believed to be part of the Starshield constellation have been registered with UNOOSA [United Nations Office for Outer Space Affairs] under the broad description: ‘Spacecraft engaged in practical applications and uses of space technology such as weather or communications,’” his paper said.

Tilley told Ars that a vague description such as this “may satisfy the letter of filing requirements, but it contradicts the spirit” of international agreements. He contends that filings should at least state whether a satellite is for military purposes.

“The real risk is that we are no longer dealing with one or two satellites but with massive constellations that, by their very design, are global in scope,” he told Ars. “Unilateral use of space and spectrum affects every nation. As the examples of US and Chinese behavior illustrate, we are beginning from uncertain ground when it comes to large, militarily oriented mega-constellations, and, at the very least, this trend distorts the intent and spirit of international law.”

China’s constellation

Tilley said he has tracked China’s Guowang constellation and its use of “spectrum within the 1250–1300 MHz range, which is not allocated for space-to-Earth communications.” China, he said, “filed advance notice and coordination requests with the ITU for this spectrum but was not granted protection for its non-compliant use. As a result, later Chinese filings notifying and completing due diligence with the ITU omit this spectrum, yet the satellites are using it over other nations. This shows that the Chinese government consulted internationally and proceeded anyway, while the US government simply did not consult at all.”

By contrast, Canada submitted “an unusual level of detail” to the ITU for its military satellite Sapphire and coordinated fully with the ITU, he said.

Tilley said he reported his findings on Starshield emissions “directly to various western space agencies and the Canadian government’s spectrum management regulators” at the ISED.

“The Canadian government has acknowledged my report, and it has been disseminated within their departments, according to a senior ISED director’s response to me,” Tilley said, adding that he is continuing to collaborate “with other researchers to assist in the gathering of more data on the scope and impact of these emissions.”

The ISED told Ars that it “takes any reports of interference seriously and is not aware of any instances or complaints in these bands. As a general practice, complaints of potential interference are investigated to determine both the cause and possible resolutions. If it is determined that the source of interference is not Canadian, ISED works with its regulatory counterparts in the relevant administration to resolve the issue. ISED has well-established working arrangements with counterparts in other countries to address frequency coordination or interference matters.”

Accidental discovery

Antennas used by Scott Tilley. Credit: Scott Tilley

Tilley’s discovery of Starshield signals happened because of “a clumsy move at the keyboard,” he told NPR. “I was resetting some stuff, and then all of a sudden, I’m looking at the wrong antenna, the wrong band,” he said.

People using the spectrum for Earth-to-space transmissions generally wouldn’t have any reason to listen for transmissions on the same frequencies, Tilley told Ars. Satellites using 2025–2110 MHz for Earth-to-space transmissions have their downlink operations on other frequencies, he said.

“The whole reason why I publicly revealed this rather than just quietly sit on it is to alert spacecraft operators that don’t normally listen on this band… that they should perform risk assessments and assess whether their missions have suffered any interference or could suffer interference and be prepared to deal with that,” he said.

A spacecraft operator may not know “a satellite is receiving interference unless the satellite is refusing to communicate with them or asking for the ground station to repeat the message over and over again,” Tilley said. “Unless they specifically have a reason to look or it becomes particularly onerous for them, they may not immediately realize what’s going on. It’s not like they’re sitting there watching the spectrum to see unusual signals that could interfere with the spacecraft.”

While NPR paraphrased Tilley as saying that the transmissions could be “designed to hide Starshield’s operations,” he told Ars that this characterization is “maybe a bit strongly worded.”

“It’s certainly an unusual place to put something. I don’t want to speculate about what the real intentions are, but it certainly could raise a question in one’s mind as to why they would choose to emit there. We really don’t know and probably never will know,” Tilley told us.

How amateurs track Starshield

After finding the signals, Tilley determined they were being sent by Starshield satellites by consulting data collected by amateurs on the constellation. SpaceX launches the satellites into what Tilley called classified orbits, but the space company distributes some information that can be used to track their locations.

For safety reasons, SpaceX publishes “a notice to airmen and sailors that they’re going to be dropping boosters and debris in hazard areas… amateurs use those to determine the orbital plane the launch is going to go into,” Tilley said. “Once we know that, we just basically wait for optical windows when the lighting is good, and then we’re able to pick up the objects and start tracking them and then start cataloguing them and generating orbits. A group of us around the world do that. And over the last year and a half or so since they started launching the bulk of this constellation, the amateurs have amassed [a] considerable body of orbital data on this constellation.”

After accidentally discovering the emissions, Tilley said he used open source software to “compare the Doppler signal I was receiving to the orbital elements… and immediately started coming back with hits to Starshield and nothing else.” He said this means that “the tens of thousands of other objects in orbit didn’t match the radio Doppler characteristics that these objects have.”
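
The matching Tilley describes can be illustrated with a toy version of the calculation: predict the Doppler shift a ground station should see for a candidate satellite and compare it against the measured carrier frequency. The sketch below is our own simplified illustration, not Tilley’s software; the carrier frequency, range samples, and helper names are made up for the example, and a real implementation would propagate published orbital elements with an SGP4 library rather than using hand-entered ranges.

# Toy Doppler-matching sketch with hypothetical inputs.
# Idea: the catalogued object whose orbit best predicts the observed
# frequency-versus-time curve is the likely source of the signal.

C_KM_S = 299_792.458  # speed of light in km/s

def predicted_frequencies(carrier_hz, ranges_km, step_s):
    # Approximate radial velocity by finite differences over the range samples.
    # A receding satellite (positive radial velocity) lowers the received frequency.
    freqs = []
    for i in range(1, len(ranges_km)):
        radial_velocity = (ranges_km[i] - ranges_km[i - 1]) / step_s  # km/s
        freqs.append(carrier_hz * (1.0 - radial_velocity / C_KM_S))
    return freqs

def rms_residual_hz(observed_hz, predicted_hz):
    # Root-mean-square difference between measured and predicted frequencies.
    diffs = [(o - p) ** 2 for o, p in zip(observed_hz, predicted_hz)]
    return (sum(diffs) / len(diffs)) ** 0.5

carrier = 2.05e9  # an assumed 2050 MHz carrier inside the 2025-2110 MHz band
ranges = [2200.0, 2100.5, 2001.5, 1903.0, 1805.2]  # made-up ranges (km), one per second
predicted = predicted_frequencies(carrier, ranges, step_s=1.0)
observed = [f + 40.0 for f in predicted]  # pretend measurements with a 40 Hz offset
print(f"RMS residual: {rms_residual_hz(observed, predicted):.1f} Hz")

# A small residual for one catalogued object, and large residuals for every other
# object, is what ties a received signal to a specific satellite.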

Tilley is still keeping an eye on the transmissions. He told us that “I’m continuing to hear the signals, record them, and monitor developments within the constellation.”

Photo of Jon Brodkin

Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.


what-if-the-aliens-come-and-we-just-can’t-communicate?

What if the aliens come and we just can’t communicate?


Ars chats with particle physicist Daniel Whiteson about his new book Do Aliens Speak Physics?

Science fiction has long speculated about the possibility of first contact with an alien species from a distant world and how we might be able to communicate with them. But what if we simply don’t have enough common ground for that to even be possible? An alien species is bound to be biologically very different, and their language will be shaped by their home environment, broader culture, and even how they perceive the universe. They might not even share the same math and physics. These and other fascinating questions are the focus of an entertaining new book, Do Aliens Speak Physics? And Other Questions About Science and the Nature of Reality.

Co-author Daniel Whiteson is a particle physicist at the University of California, Irvine, who has worked on the ATLAS collaboration at CERN’s Large Hadron Collider. He’s also a gifted science communicator who previously co-authored two books with cartoonist Jorge Cham of PhD Comics fame: 2018’s We Have No Idea and 2021’s Frequently Asked Questions About the Universe. (The pair also co-hosted a podcast from 2018 to 2024, Daniel and Jorge Explain the Universe.) This time around, cartoonist Andy Warner provided the illustrations, and Whiteson and Warner charmingly dedicate their book to “all the alien scientists we have yet to meet.”

Whiteson has long been interested in the philosophy of physics. “I’m not the kind of physicist who’s like, ‘whatever, let’s just measure stuff,’” he told Ars. “The thing that always excited me about physics was this implicit promise that we were doing something universal, that we were learning things that were true on other planets. But the more I learned, the more concerned I became that this might have been oversold. None are fundamental, and we don’t understand why anything emerges. Can we separate the human lens from the thing we’re looking at? We don’t know in the end how much that lens is distorting what we see or defining what we’re looking at. So that was the fundamental question I always wanted to explore.”

Whiteson initially pitched his book idea to his 14-year-old son, who inherited his father’s interest in science. But his son thought Whiteson’s planned discussion of how physics might not be universal was, well, boring. “When you ask for notes, you’ve got to listen to them,” said Whiteson. “So I came back and said, ‘Well, what about a book about when aliens arrive? Will their science be similar to ours? Can we collaborate together?’” That pitch won the teen’s enthusiastic approval: same ideas, but couched in the context of alien contact to make them more concrete.

As for Warner’s involvement as illustrator, “I cold-emailed him and said, ‘Want to write a book about aliens? You get to draw lots of weird aliens,’” said Whiteson. Who could resist that offer?

Ars caught up with Whiteson to learn more.

Cartoon exploring possible outcomes of first alien contact. Credit: Andy Warner

Ars Technica: You open each chapter with fictional hypothetical scenarios. Why?

Daniel Whiteson: I sent an early version of the book to my friend, [biologist] Matt Giorgianni, who appears in cartoon form in the book. He said, “The book is great, but it’s abstract. Why don’t you write out a hypothetical concrete scenario for each chapter to show us what it would be like and how we would stumble.”

All great ideas seem obvious once you hear them. I’ve always been a huge science fiction fan. It’s thoughtful and creative and exploratory about the way the universe could be and might be. So I jumped at the opportunity to write a little bit of science fiction. Each one was challenging because you have to come up with a specific example that illustrates the concepts in that chapter, the issues that you might run into, but also be believable and interesting. We went through a lot of drafts. But I had a lot of fun writing them.

Ars Technica: The Voyager Golden Record is perhaps the best known example of humans attempting to communicate with an alien species, spearheaded by the late Carl Sagan, among others. But what are the odds that, despite our best efforts, any aliens will ever be able to decipher our “message in a bottle”?

Daniel Whiteson: I did an informal experiment where I printed out a picture of the Pioneer plaque and showed it to a bunch of grad students who were young enough to not have seen it before. This is Sagan’s audience: biological humans, same brain, same culture, physics grad students—none of them had any idea what any of that was supposed to refer to. NASA gave him two weeks to come up with that design. I don’t know that I would’ve done any better. It’s easy to criticize.

Those folks, they were doing their best. They were trying to step away from our culture. They didn’t use English, they didn’t even use mathematical symbols. They understood that those things are arbitrary, and they were reaching for something they hoped was going to be universal. But in the end, nothing can be universal because language is always symbols, and the choice of those symbols is arbitrary and cultural. It’s impossible to choose a symbol that can only be interpreted in one way.

Fundamentally, the book is trying to poke at our assumptions. It’s so inspiring to me that the history of physics is littered with times when we have had to abandon an assumption that we clung to desperately, until we were shown otherwise with enough data. So we’ve got to be really open-minded about whether these assumptions hold true, whether it’s required to do science, to be technological, or whether there is even a single explanation for reality. We could be very well surprised by what we discover.

Ars Technica: It’s often assumed that math and physics are the closest thing we have to a universal language. You challenge that assumption, probing such questions as “What does it even mean to ‘count’?”

Daniel Whiteson: At an initial glance, you’re like, well, of course mathematics is required, and of course numbers are universal. But then you dig into it and you start to realize there are fuzzy issues here. So many of the assumptions that underlie our interpretation of what we learned about physics are that way. I had this experience that’s probably very common among physics undergrads in quantum mechanics, learning about those calculations where you see nine decimal places in the theory and nine decimal places in the experiment, and you go, “Whoa, this isn’t just some calculational tool. This is how the universe decides what happens to a particle.”

I literally had that moment. I’m not a religious person, but it was almost spiritual. For many years, I believed that deeply, and I thought it was obvious. But to research this book, I read Science Without Numbers by Hartry Field. I was lucky—here at Irvine, we happen to have an amazing logic and philosophy of science department, and those folks really helped me digest [his ideas] and understand how you can pull yourself away from things like having a number line. It turns out you don’t need numbers; you can just think about relationships. It was really eye-opening, both how essential mathematics seems to human science and how obvious it is that we’re making a bunch of assumptions that we don’t know how to justify.

There are dotted lines humans have drawn because they make sense to us. You don’t have to go to plasma swirls and atmospheres to imagine a scenario where aliens might not have the same differentiation between their identities, me and you, here’s where I end, and here’s where you begin. That’s complicated for aliens that are plasma swirls, but also it’s not even very well-defined for us. How do I define the edge of my body? Is it where my skin ends? What about the dead skin, the hair, my personal space? There’s no obvious definition. We just have a cultural sense for “this is me, this is not me.” In the end, that’s a philosophical choice.

Cartoon of an alien emerging from a spaceship. Credit: Andy Warner

Ars Technica: You raise another interesting question in the book: Would aliens even need physics theory or a deeper understanding of how the universe works? Perhaps they could invent, say, warp drive through trial and error. You suggest that our theory is more like a narrative framework. It’s the story we tell, and that is very much prone to bias.

Daniel Whiteson: Absolutely. And not just bias, but emotion and curiosity. We put energy into certain things because we think they’re important. Physicists spend our lives on this because we think, among the many things we could spend our time on, this is an important question. That’s an emotional choice. That’s a personal subjective thing.

Other people find my work dead boring and would hate to have my life, and I would feel the same way about theirs. And that’s awesome. I’m glad that not everybody wants to be a particle physicist or a biologist or an economist. We have a diversity of curiosity, which we all benefit from. People have an intuitive feel for certain things. There’s something in their minds that naturally understands how the system works and reflects it, and they probably can’t explain it. They might not be able to design a better car, but they can drive the heck out of it.

This is maybe the biggest stumbling block for people who are just starting to think about this for the first time. “Obviously aliens are scientific.” Well, how do we know? What do we mean by scientific? That concept has evolved over time, and is it really required? I felt that that was a big-picture thing that people could wrap their minds around but also a shock to the system. They were already a little bit off-kilter and might realize that some things they assumed must be true maybe not have to be.

Ars Technica: You cite the 2016 film Arrival as an example of first contact and the challenge of figuring out how to communicate with an alien species. They had to learn each other’s cultural context before they had any chance of figuring out what their respective symbols meant. 

Daniel Whiteson:  I think that is really crucial. Again, how you choose to represent your ideas is, in a sense, arbitrary. So if you’re on the other side of that, and you have to go from symbols to ideas, you have to know something about how they made those choices in order to reverse-engineer a message, in order to figure out what it means. How do you know if you’ve done it correctly? Say we get a message from aliens and we spend years of supercomputer time cranking on it. Something we rely on for decoding human messages is that you can tell when you’ve done it right because you have something that makes sense.

Cartoon. Credit: Andy Warner

How do we know when we’ve done that for an alien text? How do you know the difference between nonsense and things that don’t yet make sense to you? It’s essentially impossible. I spoke to some philosophers of language who convinced me that if we get a message from aliens, it might be literally impossible to decode it without their help, without their context. There aren’t enough examples of messages from aliens. We have the “Wow!” signal—who knows what that means? Maybe it’s a great example of getting a message and having no idea how to decode it.

As in many places in the book, I turned to human history because it’s the one example we have. I was shocked to discover how many human languages we haven’t decoded. Not to mention, we haven’t decoded whale. We know whales are talking to each other. Maybe they’re talking to us and we can’t decode it. The lesson, again and again, is culture.

With the Egyptians, the only reason we were able to decode hieroglyphics is because we had the Rosetta Stone, but that still took 20 years there. How does it take 20 years when you have examples and you know what it’s supposed to say? And the answer is that we made cultural assumptions. We assumed pictograms reflected the ideas in the pictures. If there’s a bird in it, it’s about birds. And we were just wrong. That’s maybe why we’ve been struggling with Etruscan and other languages we’ve never decoded.

Even when we do have a lot of culture in common, it’s very, very tricky. So the idea that we would get a message from space and have no cultural clues at all and somehow be able to decode it—I think that only works in the scenario where aliens have been listening to us for a long time and they’re essentially writing to us in something like our language. I’m a big fan of SETI. They host these conferences where they listen to philosophers and linguists and anthropologists. I don’t mean to say that they’ve thought too narrowly, but I think it’s not widely enough appreciated how difficult it is to decode another language with no culture in common.

Ars Technica: You devote a chapter to the possibility that aliens might be able to, say, “taste” electrons. That drives home your point that physiologically, they will probably be different, biologically they will be different, and their experiences are likely to be different. So their notion of culture will also be different.

Daniel Whiteson: Something I really wanted to get across was that perception determines your sense of intuition, which defines in many ways the answers you find acceptable. I read Ed Yong’s amazing book, An Immense World, about animal perception, the diversity of animal experience or animal interaction with the environment. What is it like to be an octopus with a distributed mind? What is it like to be a bird that senses magnetic fields? If you’re an alien and you have very different perceptions, you could also have a different intuitive language in which to understand the universe. What is a photon? Is it a particle? Is it a wave?  Maybe we just struggle with the concept because our intuitive language is limited.

For an alien that can see photons in superposition, this is boring. They could be bored by things we’re fascinated by, or we could explain our theories to them and they could be unsatisfied because to them, it doesn’t translate into their intuitive language. So even though we’ve transcended our biological limitations in many ways, we can detect gravitational waves and infrared photons, we’re still, I think, shaped by those original biological senses that frame our experience, the models we build. What it’s like to be a human in the world could be very, very different from what it’s like to be an alien in the world.

Ars Technica:  Your very last line reads, “Our theories may reveal the patterns of our thoughts as much as the patterns of nature.” Why did you choose to end your book on that note?

Daniel Whiteson: That’s a message to myself. When I started this journey, I thought physics is universal. That’s what makes it beautiful. That implies that if the aliens show up and they do physics in a very different way, it would be a crushing disappointment. Not only is what we’ve learned just another human science, but also it means maybe we can’t benefit from their work or we can’t have that fantasy of a galactic scientific conference. But that actually might be the best-case scenario.

I mean, the reason that you want to meet aliens is the reason you go traveling. You want to expand your horizons and learn new things. Imagine how boring it would be if you traveled around the world to some new country and they just had all the same food. It might be comforting in some way, but also boring. It’s so much more interesting to find out that they have spicy fish soup for breakfast. That’s the moment when you learn not just about what to have for breakfast but about yourself and your boundaries.

So if aliens show up and they’re so weirdly different in a way we can’t possibly imagine—that would be deeply revealing. It’ll give us a chance to separate the reality from the human lens. It’s not just questions of the universe. We are interesting, and we should learn about our own biases. I anticipate that when the aliens do come, their culture and their minds and their science will be so alien, it’ll be a real challenge to make any kind of connection. But if we do, I think we’ll learn a lot, not just about the universe but about ourselves.

Photo of Jennifer Ouellette

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.


you-won’t-believe-the-excuses-lawyers-have-after-getting-busted-for-using-ai

You won’t believe the excuses lawyers have after getting busted for using AI


I got hacked; I lost my login; it was a rough draft; toggling windows is hard.

Credit: Aurich Lawson | Getty Images

Amid what one judge called an “epidemic” of fake AI-generated case citations bogging down courts, some common excuses are emerging from lawyers hoping to dodge the most severe sanctions for filings deemed misleading.

Using a database compiled by French lawyer and AI researcher Damien Charlotin, Ars reviewed 23 cases where lawyers were sanctioned for AI hallucinations. In many, judges noted that the simplest path to avoiding or diminishing sanctions was to admit the AI use as soon as it was detected, act humble, self-report the error to relevant legal associations, and voluntarily take classes on AI and law. But not every lawyer takes the path of least resistance, Ars’ review found, with many instead offering excuses that no judge found credible. Some even lied about their AI use, judges concluded.

Since 2023—when fake AI citations started being publicized—the most popular excuse has been that the lawyer didn’t know AI was used to draft a filing.

Sometimes that means arguing that you didn’t realize you were using AI, as in the case of a California lawyer who got stung by Google’s AI Overviews, which he claimed he mistook for typical Google search results. Most often, lawyers using this excuse tend to blame an underling, but clients have been blamed, too. A Texas lawyer was sanctioned this month after deflecting so much that the court eventually had to put his client on the stand, once he revealed that she had played a significant role in drafting the errant filing.

“Is your client an attorney?” the court asked.

“No, not at all your Honor, just was essentially helping me with the theories of the case,” the lawyer said.

Another popular dodge comes from lawyers who feign ignorance that chatbots are prone to hallucinating facts.

Recent cases suggest this excuse may be mutating into variants. Last month, a sanctioned Oklahoma lawyer admitted that he didn’t expect ChatGPT to add new citations when all he asked the bot to do was “make his writing more persuasive.” And in September, a California lawyer got in a similar bind—and was sanctioned a whopping $10,000, a fine the judge called “conservative.” That lawyer had asked ChatGPT to “enhance” his briefs, “then ran the ‘enhanced’ briefs through other AI platforms to check for errors,” neglecting to ever read the “enhanced” briefs.

Neither of those tired old excuses holds much weight today, especially in courts that have drawn up guidance to address AI hallucinations. But rather than quickly acknowledge their missteps, as courts are begging lawyers to do, several lawyers appear to have gotten desperate. Ars found a bunch blaming common tech issues for their fake citations.

When in doubt, blame hackers?

For an extreme case, look to a New York City civil court, where a lawyer, Innocent Chinweze, first admitted to using Microsoft Copilot to draft an errant filing, then bizarrely pivoted to claim that the AI citations were due to malware found on his computer.

Chinweze said he had created a draft with correct citations but then got hacked, allowing bad actors “unauthorized remote access” to supposedly add the errors in his filing.

The judge was skeptical, describing the excuse as an “incredible and unsupported statement,” particularly since there was no evidence that the prior draft ever existed. Instead, Chinweze asked to bring in an expert to testify that the hack had occurred, requesting that the sanctions proceedings be adjourned until the court could weigh the expert’s analysis.

The judge, Kimon C. Thermos, didn’t have to weigh this argument, however, because after the court broke for lunch, the lawyer once again “dramatically” changed his position.

“He no longer wished to adjourn for an expert to testify regarding malware or unauthorized access to his computer,” Thermos wrote in an order issuing sanctions. “He retreated” to “his original position that he used Copilot to aid in his research and didn’t realize that it could generate fake cases.”

Possibly more galling to Thermos than the lawyer’s weird malware argument, though, was a document that Chinweze filed on the day of his sanctions hearing. That document included multiple summaries preceded by this text, the judge noted:

Some case metadata and case summaries were written with the help of AI, which can produce inaccuracies. You should read the full case before relying on it for legal research purposes.

Thermos admonished Chinweze for continuing to use AI recklessly. He blasted the filing as “an incoherent document that is eighty-eight pages long, has no structure, contains the full text of most of the cases cited,” and “shows distinct indications that parts of the discussion/analysis of the cited cases were written by artificial intelligence.”

Ultimately, Thermos ordered Chinweze to pay $1,000, the most typical fine lawyers received in the cases Ars reviewed. The judge then took an extra non-monetary step to sanction Chinweze, referring the lawyer to a grievance committee, “given that his misconduct was substantial and seriously implicated his honesty, trustworthiness, and fitness to practice law.”

Ars could not immediately reach Chinweze for comment.

Toggling windows on a laptop is hard

In Alabama, an attorney named James A. Johnson made an “embarrassing mistake,” he said, primarily because toggling windows on a laptop is hard, US District Judge Terry F. Moorer noted in an October order on sanctions.

Johnson explained that he had accidentally used an AI tool that he didn’t realize could hallucinate. It happened while he was “at an out-of-state hospital attending to the care of a family member recovering from surgery.” He rushed to draft the filing, he said, because he got a notice that his client’s conference had suddenly been “moved up on the court’s schedule.”

“Under time pressure and difficult personal circumstance,” Johnson explained, he decided against using Fastcase, a research tool provided by the Alabama State Bar, to research the filing. Working on his laptop, he opted instead to use “a Microsoft Word plug-in called Ghostwriter Legal” because “it appeared automatically in the sidebar of Word while Fastcase required opening a separate browser to access through the Alabama State Bar website.”

To Johnson, it felt “tedious to toggle back and forth between programs on [his] laptop with the touchpad,” and that meant he “unfortunately fell victim to the allure of a new program that was open and available.”

Moorer seemed unimpressed by Johnson’s claim that he understood tools like ChatGPT were unreliable but didn’t expect the same from other AI legal tools—particularly since “information from Ghostwriter Legal made it clear that it used ChatGPT as its default AI program,” Moorer wrote.

The lawyer’s client was similarly put off, deciding to drop Johnson on the spot, even though that risked “a significant delay of trial.” Moorer noted that Johnson seemed shaken by his client’s abrupt decision, evidenced by “his look of shock, dismay, and display of emotion.”

Moorer further noted that Johnson seemingly let AI do his homework while working on behalf of the government, and that switching to a new lawyer could eat up even more public money. As the judge noted, “public funds for appointed counsel are not a bottomless well and are limited resource.”

“It has become clear that basic reprimands and small fines are not sufficient to deter this type of misconduct because if it were, we would not be here,” Moorer concluded.

Ruling that Johnson’s reliance on AI was “tantamount to bad faith,” Moorer imposed a $5,000 fine. The judge also would have “considered potential disqualification, but that was rendered moot” since Johnson’s client had already dismissed him.

Asked for comment, Johnson told Ars that “the court made plainly erroneous findings of fact and the sanctions are on appeal.”

Plagued by login issues

As a lawyer in Georgia tells it, sometimes fake AI citations may be filed because a lawyer accidentally filed a rough draft instead of the final version.

Other lawyers claim they turn to AI as needed when they have trouble accessing legal tools like Westlaw or LexisNexis.

For example, in Iowa, a lawyer told an appeals court that she regretted relying on “secondary AI-driven research tools” after experiencing “login issues with her Westlaw subscription.” Although the court was “sympathetic to issues with technology, such as login issues,” the lawyer was sanctioned, primarily because she only admitted to using AI after the court ordered her to explain her mistakes. In her case, however, she got to choose between paying a minimal $150 fine or attending “two hours of legal ethics training particular to AI.”

Less sympathetic was a lawyer who got caught lying about the AI tool she blamed for inaccuracies, a Louisiana case suggested. In that case, a judge demanded to see the research history after a lawyer claimed that AI hallucinations came from “using Westlaw Precision, an AI-assisted research tool, rather than Westlaw’s standalone legal database.”

It turned out that the lawyer had outsourced the research, relying on a “currently suspended” lawyer’s AI citations, and had only “assumed” the lawyer’s mistakes were from Westlaw’s AI tool. It’s unclear what tool was actually used by the suspended lawyer, who likely lost access to a Westlaw login, but the judge ordered a $1,000 penalty after the lawyer who signed the filing “agreed that Westlaw did not generate the fabricated citations.”

Judge warned of “serial hallucinators”

Another lawyer, William T. Panichi in Illinois, has been sanctioned at least three times, Ars’ review found.

In response to his initial penalties ordered in July, he admitted to being tempted by AI while he was “between research software.”

In that case, the court was frustrated to find that the lawyer had contradicted himself, and it ordered more severe sanctions as a result.

Panichi “simultaneously admitted to using AI to generate the briefs, not doing any of his own independent research, and even that he ‘barely did any personal work [him]self on this appeal,’” the court order said, while also defending charging a higher fee—supposedly because this case “was out of the ordinary in terms of time spent” and his office “did some exceptional work” getting information.

The court deemed this AI misuse so bad that Panichi was ordered to disgorge a “payment of $6,925.62 that he received” in addition to a $1,000 penalty.

“If I’m lucky enough to be able to continue practicing before the appellate court, I’m not going to do it again,” Panichi told the court in July, just before getting hit with two more rounds of sanctions in August.

Panichi did not immediately respond to Ars’ request for comment.

When AI-generated hallucinations are found, penalties are often paid to the court, the other parties’ lawyers, or both, depending on whose time and resources were wasted fact-checking fake cases.

Lawyers seem more likely to argue against paying sanctions to the other parties’ attorneys, hoping to keep sanctions as low as possible. One lawyer even argued that “it only takes 7.6 seconds, not hours, to type citations into LexisNexis or Westlaw,” while seemingly neglecting the fact that she did not take those precious seconds to check her own citations.

The judge in the case, Nancy Miller, was clear that “such statements display an astounding lack of awareness of counsel’s obligations,” noting that “the responsibility for correcting erroneous and fake citations never shifts to opposing counsel or the court, even if they are the first to notice the errors.”

“The duty to mitigate the harms caused by such errors remains with the signor,” Miller said. “The sooner such errors are properly corrected, either by withdrawing or amending and supplementing the offending pleadings, the less time is wasted by everyone involved, and fewer costs are incurred.”

Texas US District Judge Marina Garcia Marmolejo agreed, explaining that even more time is wasted determining how other judges have responded to fake AI-generated citations.

“At one of the busiest court dockets in the nation, there are scant resources to spare ferreting out erroneous AI citations in the first place, let alone surveying the burgeoning caselaw on this subject,” she said.

At least one Florida court was “shocked, shocked” to find that a lawyer was refusing to pay what the other party’s attorneys said they were owed after misusing AI. The lawyer in that case, James Martin Paul, asked to pay less than a quarter of the fees and costs owed, arguing that Charlotin’s database showed he might otherwise owe penalties that “would be the largest sanctions paid out for the use of AI generative case law to date.”

But caving to Paul’s arguments “would only benefit serial hallucinators,” the Florida court found. Ultimately, Paul was sanctioned more than $85,000 for what the court said was “far more egregious” conduct than other offenders in the database, chastising him for “repeated, abusive, bad-faith conduct that cannot be recognized as legitimate legal practice and must be deterred.”

Paul did not immediately respond to Ars’ request to comment.

Michael B. Slade, a US bankruptcy judge in Illinois, seems to be done weighing excuses, calling on all lawyers to stop taking AI shortcuts that are burdening courts.

“At this point, to be blunt, any lawyer unaware that using generative AI platforms to do legal research is playing with fire is living in a cloud,” Slade wrote.

This story was updated on November 11 to clarify a judge’s comments on misuse of public funds.

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

You won’t believe the excuses lawyers have after getting busted for using AI Read More »

nasa-is-kind-of-a-mess:-here-are-the-top-priorities-for-a-new-administrator

NASA is kind of a mess: Here are the top priorities for a new administrator


“He inevitably will have to make tough calls.”

Jared Isaacman, right, led the crew of Polaris Dawn, which performed the first private spacewalk. Credit: Polaris Dawn

Jared Isaacman, right, led the crew of Polaris Dawn, which performed the first private spacewalk. Credit: Polaris Dawn

After a long summer and fall of uncertainty, private astronaut Jared Isaacman has been renominated to lead NASA, and there appears to be momentum behind getting him confirmed quickly as the space agency’s 15th administrator. It is possible, although far from a lock, that the Senate could finalize his nomination before the end of this year.

It cannot happen soon enough.

The National Aeronautics and Space Administration is, to put it bluntly, kind of a mess. This is not meant to disparage the many fine people who work at NASA. But years of neglect, changing priorities, mismanagement, creeping bureaucracy, meeting bloat, and other factors have taken their toll. NASA is still capable of doing great things. It still inspires. But it needs a fresh start.

“Jared has already garnered tremendous support from nearly everyone in the space community,” said Lori Garver, who served as NASA’s deputy administrator under President Obama. “This should give him a tail wind as he inevitably will have to make tough calls.”

Garver worked for a Democratic administration, and it’s notable that Isaacman has admirers from across the political spectrum, from left-leaning space advocates to right-wing influencers. A decade and a half ago, Garver led efforts to get NASA to more fully embrace commercial space. In some ways, Isaacman will seek to further this legacy, and Garver knows all too well how difficult it is to change the sprawling space agency and beat back entrenched contractors.

“Expectations are high, yet the challenge of marrying outsized goals to greatly reduced budget guidance from his administration remains,” Garver said. “It will be difficult to deliver on accelerating Artemis, transitioning to commercial LEO destinations, starting a serious nuclear electric propulsion program for Mars transportation, and attracting non-government funding for science missions. He’s coming in with a lot of support, which he will need in the current divisive political environment.”

Here’s a rundown of some of the challenges Isaacman must overcome to be a successful administrator.

A shrunken NASA

At the beginning of this year, the civil servant workforce at the space agency numbered about 18,000 people. NASA said that about 3,870 employees exited this year under various deferred resignation, early retirement, or buyout programs. After subtracting another 500 employees who left through normal attrition, NASA’s headcount will be down by 20 to 25 percent by the end of this year.

The question is how impactful these losses are. A number of the departures were from senior positions, leaving important divisions—such as Astrophysics—with acting directors and interim people in key positions. Some people who left were nearing retirement, and this may ultimately benefit the space agency by allowing younger people to bring new energy to the mission.

Yet there are very real concerns about NASA’s ability to retain its best people. As the commercial space industry grows around some of the agency’s key centers in Alabama, Florida, and Texas, companies cherry-pick the best NASA engineers by offering higher salaries and stock options. Those engineers, in turn, know which of the most promising people at the local field centers to hire next.

This brain drain diminishes the engineering excellence at NASA. Can Isaacman do more with less?

Very low morale

Isaacman also arrives after what has essentially been a lost year for NASA.

Imagine you’re a NASA employee. You came to the agency to lead exploration of the Solar System and beyond. Then the second Trump administration shows up and demands widespread workforce cuts. The White House subsequently also proposes a 25 percent hit to the space agency’s budget and draconian cuts for NASA’s science programs.

Then, to cap off the spring of 2025, Isaacman’s nomination was pulled for purely political reasons. Not everyone at NASA liked Isaacman. There was genuine concern that he would shake things up and rattle cages. But Isaacman was also perceived as young, dynamic, and well-liked by the broader space community. He genuinely wanted to see NASA succeed. And then—poof—he’s gone. This only exacerbated uncertainty about the agency’s future.

Interim NASA Administrator Sean Duffy provides remarks at a briefing prior to the Crew 11 launch in August.

Credit: NASA

Interim NASA Administrator Sean Duffy provides remarks at a briefing prior to the Crew 11 launch in August. Credit: NASA

Isaacman’s de-nomination was followed by the appointment of Sean Duffy, a former reality TV star serving as the Secretary of Transportation, to lead NASA on an interim basis. Duffy was a wild card, but it soon became clear he saw NASA as a vehicle to further his political career. And even if Duffy had been focused on solutions, he knew little about space and already had a full-time job leading the Department of Transportation. NASA employees are not fools. They saw this and understood this move’s implications.

Finally, in a coup de grâce, the government shut down on October 1. The majority of NASA’s civil servant workforce has been sitting at home for six weeks, not getting paid, not exploring, and wondering just what the hell they’re doing working for NASA.

Arte-miss?

As NASA has struggled this year, China has made demonstrable progress in its lunar program. It is now probable that China’s Lanyue lander will put humans on the lunar surface by or before the year 2030, likely beating NASA in its return to the Moon with the Artemis Program.

NASA’s lunar program was created during the first Trump administration, but then-NASA leader Jim Bridenstine was unable to secure enough funding (remember the whole Pell Grant fiasco?) before he left office in early 2021. This left NASA without the resources it needed to build a management team to lead the program and support key elements, including a lander and lunar spacesuits.

These problems more or less persisted under President Joe Biden and his NASA administrator, Bill Nelson. From 2021 to 2024, the leaders of NASA essentially said everything was fine and that a lunar landing by 2026 was on track. When reporters, including me, pressed the leaders of the Artemis Program on that timeline, we were effectively shouted down.

For example, in January 2024, I pressed NASA’s chief of deep space exploration, Jim Free, about the non-viability of a 2026 human landing date.

“It’s interesting because we have 11 people in industry on here that have signed contracts to meet those dates,” Free replied during a teleconference, which included representatives from SpaceX, Axiom, and the other companies. “So from my perspective, the people in industry are here today saying we support it. We’ve signed contracts to those dates on the government side based on the technical details that they’ve given us, that our technical teams have come forward with.”

A shorter version of that might be: “Shut up, we know what we’re doing.”

NASA has already delayed the lunar landing officially to 2027. And no one believes that date is real. One of Isaacman’s first jobs will be to conduct an honest assessment of where the Artemis Program truly is and to rapidly take steps to get it on track. I think we can be confident he will do so with eyes wide open.

Human Landing System

So what will he do about this? The biggest challenge involves the Human Landing System (HLS), a necessary component to get humans to the surface from lunar orbit and back.

Ars explored how NASA found itself in this predicament in a long article published in early October. As for what to do now, NASA basically has two realistic options going forward. It can light a fire under SpaceX to prioritize the HLS component of its Starship program, possibly with a simplified architecture. Or it can work with Blue Origin to develop a human landing system using its Blue Moon Mk. 1 lander (originally intended for cargo) and a modified Mk. 1 lander for ascent. (Blue says it is game.) Beyond that, there is no hardware in work that could possibly accommodate a landing before 2030.

Duffy initially blustered about American capabilities. Repeatedly, he said, “We are going to beat the Chinese to the Moon.” It sounded good, but it underlined his inexperience with spaceflight because it was just not true.

Less than a month ago, Duffy changed his tune. He blamed SpaceX and its Starship vehicle for delays to Artemis, and he said he was “opening up” the lander competition. The problem is that Duffy’s solution was to raise the prospect of a “government option” lunar lander. He had been having discussions with Lockheed Martin, Northrop Grumman, and others about the possibility of issuing a cost-plus contract to build a smaller lunar lander in 30 months.

An artist’s illustration of multiple Starships on the lunar surface, with a Moon base in the background.

Credit: SpaceX

An artist’s illustration of multiple Starships on the lunar surface, with a Moon base in the background. Credit: SpaceX

Duffy should have known that this timeline was completely unrealistic. Moreover, a rapidly built lunar lander (think five years, at a bare minimum) would likely cost on the order of $20 billion, money NASA does not have. But no one in his inner circle, including Amit Kshatriya, NASA’s associate administrator, was telling him that. They were encouraging him.

Isaacman is not going to be snowed under by this kind of (preposterous) proposal. Most likely, he will push SpaceX to prioritize HLS and be eager to work with Blue Origin to develop a human lander based on Mk. 1 technology.

His first call as administrator may well be to Blue Origin founder Jeff Bezos.

Commercial LEO Destinations

Another looming problem involves commercial space stations in low-Earth orbit, which are supposed to be flying before the end of 2030 when the International Space Station is due to be retired.

There is much uncertainty over whether the primary companies involved in this effort—be it for financial, technical, regulatory, or other reasons—will be able to launch and test space stations by 2030 in order to allow NASA to maintain a continuous presence in low-Earth orbit. The main contractors are Axiom Space, Voyager Technologies, Blue Origin, and Vast Space.

This is one area in which Duffy took action. In August, he signed a document that implemented major changes to the Commercial LEO Destinations program. One of the biggest shifts was a lowering of the minimum requirements. Instead of fully operational stations, the new directive required only the capability to support four astronauts for 1-month increments in low-Earth orbit.

However, it is unclear whether Duffy fully understood what he was signing, because there was immediate pushback. Moreover, prior to the government shutdown, there was a lot of discussion about ripping up the directive and reverting to the old rules for commercial space stations. Everyone in the industry is scratching their heads about what comes next.

In the meantime, the space station companies are trying to raise funds, design stations for uncertain requirements, and prepare for competition for the next phase of NASA awards. This program needs more funding, clarity, and urgency for it to be successful.

Earth science

In recent days, there has been some excellent reporting about the fate of Earth science at NASA, which is part of the space agency’s core mission. Space.com published a long feature article about the Trump administration’s efforts to undermine Maryland’s Goddard Space Flight Center, which is NASA’s oldest field center.

Goddard houses the largest Earth science workforce at the agency, and its study of climate change is at odds with the policy positions of the Trump administration and many members of a Republican-controlled Congress. The result has been steep funding cuts, canceled missions, and closed buildings.

One of Isaacman’s most challenging jobs will be to balance support for Earth science while also placating an administration that frankly does not want to publish reports about how human activity is warming the planet.

In remarks on the social media site X, Isaacman recently said he wanted to expand commercial partnerships to science missions. “Better to have 10 x $100 million missions and a few fail than a single overdue and costly $1B+ mission,” he wrote. Isaacman said NASA should also buy more Earth data from providers like Planet and BlackSky, which already have satellites in orbit.

“Why build bespoke satellites at greater cost and delay when you could pay for the data as needed from existing providers?” he asked.

Planetary science

Another area of concern is planetary science. When one picks apart Trump’s budget priorities, there are two clear and disturbing trends.

The first is that there are no significant planetary science missions in the pipeline after the ambitious Dragonfly mission, which is scheduled to launch to Titan in July 2028. It is difficult to escape the reality that this administration is not prioritizing any mission that launches after Trump leaves office in January 2029, which leaves the post-Dragonfly pipeline nearly empty.

Another major concern is the fate of the famed Jet Propulsion Laboratory in California. The lab laid off 550 people last month, which followed previous cuts. The center director, Laurie Leshin, stepped down on June 1. With the Mars Sample Return mission on hold, and quite possibly canceled, the future of NASA’s premier planetary science mission center is cloudy.

A view of the control room at NASA’s Jet Propulsion Laboratory in California.

Credit: NASA

A view of the control room at NASA’s Jet Propulsion Laboratory in California. Credit: NASA

Isaacman has said he has never “remotely suggested” that NASA could do without the Jet Propulsion Laboratory.

“Personally, I have publicly defended programs like the Chandra X-ray Observatory, offered to fund a Hubble reboost mission, and anything suggesting that I am anti-science or want to outsource that responsibility is simply untrue,” he wrote on X.

That is likely true, but charting a bright course for the future of planetary science, on a limited budget, will be a major challenge for the new administrator.

New initiatives

All of the above concerns NASA’s existing challenges. But Isaacman will certainly want to make his own mark. This is likely to involve a spaceflight technology he considers to be the missing link in charting a course for humans to explore the Solar System beyond the Moon: nuclear electric propulsion.

As he explained to Ars earlier this year, Isaacman’s signature issue was going to be a full-bore push into nuclear electric propulsion.

“We would have gone right to a 100-kilowatt test vehicle that we would send somewhere inspiring with some great cameras,” he said. “Then we are going right to megawatt class, inside of four years, something you could dock a human-rated spaceship to, or drag a telescope to a Lagrange point and then return, big stuff like that. The goal was to get America underway in space on nuclear power.”

Another key element of this plan is that it would give some of NASA’s field centers, including Marshall Space Flight Center, important work to do after the seemingly inevitable cancellation of the Space Launch System rocket.

Standing up new programs, and battling against existing programs that have strong backing in Congress and industry, will require all of the diplomatic skill and force of personality Isaacman can muster.

We will soon find out if he has the right stuff.

Photo of Eric Berger

Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.

NASA is kind of a mess: Here are the top priorities for a new administrator Read More »

mark-zuckerberg’s-illegal-school-drove-his-neighbors-crazy

Mark Zuckerberg’s illegal school drove his neighbors crazy


Neighbors complained about noise, security guards, and hordes of traffic.

An entrance to Mark Zuckerberg’s compound in Palo Alto, California. Credit: Loren Elliott/Redux

The Crescent Park neighborhood of Palo Alto, California, has some of the best real estate in the country, with a charming hodgepodge of homes ranging in style from Tudor revival to modern farmhouse and contemporary Mediterranean. It also has a gigantic compound that is home to Mark Zuckerberg, his wife Priscilla Chan, and their daughters Maxima, August, and Aurelia. Their land has expanded to include 11 previously separate properties, five of which are connected by at least one property line.

The Zuckerberg compound’s expansion first became a concern for Crescent Park neighbors as early as 2016, due to fears that his purchases were driving up the market. Then, about five years later, neighbors noticed that a school appeared to be operating out of the Zuckerberg compound. This would be illegal under the area’s residential zoning code without a permit. They began a crusade to shut it down that did not end until summer 2025.

WIRED obtained 1,665 pages of documents about the neighborhood dispute—including 311 records, legal filings, construction plans, and emails—through a public record request filed to the Palo Alto Department of Planning and Development Services. (Mentions of “Zuckerberg” or “the Zuckerbergs” appear to have been redacted. However, neighbors and separate public records confirm that the property in question belongs to the family. The names of the neighbors who were in touch with the city were also redacted.)

The documents reveal that the school may have been operating as early as 2021 without a permit to operate in the city of Palo Alto. As many as 30 students might have enrolled, according to observations from neighbors. These documents also reveal a wider problem: For almost a decade, the Zuckerbergs’ neighbors have been complaining to the city about noisy construction work, the intrusive presence of private security, and the hordes of staffers and business associates causing traffic and taking up street parking.

Over time, neighbors became fed up with what they argued was the city’s lack of action, particularly with respect to the school. Some believed that the delay was because of preferential treatment to the Zuckerbergs. “We find it quite remarkable that you are working so hard to meet the needs of a single billionaire family while keeping the rest of the neighborhood in the dark,” reads one email sent to the city’s Planning and Development Services Department in February. “Just as you have not earned our trust, this property owner has broken many promises over the years, and any solution which depends on good faith behavioral changes from them is a failure from the beginning.”

Palo Alto spokesperson Meghan Horrigan-Taylor told WIRED that the city “enforces zoning, building, and life safety rules consistently, without regard to who owns a property.” She also disputed the claim that neighbors were kept in the dark, saying that the city’s approvals of construction projects at the Zuckerberg properties “were processed the same way they are for any property owner.” She added that, though some neighbors told the city they believe the Zuckerbergs received “special treatment,” that is not accurate.

“Staff met with residents, conducted site visits, and provided updates by phone and email while engaging the owner’s representative to address concerns,” Horrigan-Taylor said. “These actions were measured and appropriate to abate the unpermitted use and responsive to neighborhood issues within the limits of local and state law.”

According to The New York Times, which first reported on the school’s existence, it was called “Bicken Ben School” and shared a name with one of the Zuckerbergs’ chickens. The listing for Bicken Ben School, or BBS for short, in a California Department of Education directory claims the school opened on October 5, 2022. This, however, is the year after neighbors claim to have first seen it operating. It’s also two and a half years after Sara Berge—the school’s point of contact, per documents WIRED obtained from the state via public record request—claims to have started her role as “head of school” for a “Montessori pod” at a “private family office” according to her LinkedIn profile, which WIRED viewed in September and October. Berge did not respond to a request to comment.

Between 2022 and 2025, according to the documents Bicken Ben filed to the state, the school grew from nine to 14 students ranging from 5 to 10 years old. Neighbors, however, estimated that they observed 15 to 30 students. Berge similarly claimed on her LinkedIn profile to have overseen “25 children” in her job. In a June 2025 job listing for “BBS,” the school had a “current enrollment of 35–40 students and plans for continued growth,” which the listing says includes a middle school.

In order for the Zuckerbergs to run a private school on their land, which is in a residential zone, they need a “conditional use” permit from the city. However, based on the documents WIRED obtained, and Palo Alto’s public database of planning applications, the Zuckerbergs do not appear to have ever applied for or received this permit.

Per emails obtained by WIRED, Palo Alto authorities told a lawyer working with the Zuckerbergs in March 2025 that the family had to shut down the school on its compound by June 30. A state directory lists BBS, the abbreviation for Bicken Ben School, as having operated until August 18, and three of Zuckerberg’s neighbors—who all requested anonymity due to the high-profile nature of the family—confirmed to WIRED in late September that they had not seen or heard students being dropped off and picked up on weekdays in recent weeks.

However, Zuckerberg family spokesperson Brian Baker tells WIRED that the school didn’t close, per se. It simply moved. It’s not clear where it is now located, or whether the school is operating under a different name.

In response to a detailed request for comment, Baker provided WIRED with an emailed statement on behalf of the Zuckerbergs. “Mark, Priscilla and their children have made Palo Alto their home for more than a decade,” he said. “They value being members of the community and have taken a number of steps above and beyond any local requirements to avoid disruption in the neighborhood.”

“Serious and untenable”

By the fall of 2024, Zuckerberg’s neighbors were at their breaking point. At some point in mid-2024, according to an email from then mayor Greer Stone, a group of neighbors had met with Stone to air their grievances about the Zuckerberg compound and the illegal school they claimed it was operating. They didn’t arrive at an immediate resolution.

In the years prior, the city had received several rounds of complaints about the Zuckerberg compound. Complaints for the address of the school were filed to 311, the nationwide number for reporting local non-emergency issues, in February 2019, September 2021, January 2022, and April 2023. They all alleged that the property was operating illegally under city code. All were closed by the planning department, which found no rule violations. An unknown number of additional complaints, mentioned in emails among city workers, were also made between 2020 and 2024—presumably delivered via phone calls, in person, or to city departments not included in WIRED’s public record request.

In December 2020, building inspection manager Korwyn Peck wrote to code enforcement officer Brian Reynolds about an inspection he attempted to conduct around the Zuckerberg compound, in response to several noise and traffic complaints from neighbors. He described that several men in SUVs had gathered to watch him, and a tense conversation with one of them had ensued. “This appears to be a site that we will need to pay attention to,” Peck wrote to Reynolds.

“We have all been accused of ‘not caring,’ which of course is not true,” Peck added. “It does appear, however, with the activity I observed tonight, that we are dealing with more than four simple dwellings. This appears to be more than a homeowner with a security fetish.”

In a September 11, 2024, email to Jonathan Lait, Palo Alto’s director of planning and development services and Palo Alto city attorney Molly Stump, one of Zuckerberg’s neighbors alleged that since 2021, “despite numerous neighborhood complaints” to the city of Palo Alto, including “multiple code violation reports,” the school had continued to grow. They claimed that a garage at the property had been converted into another classroom, and that an increasing number of children were arriving each day. Lait and Stump did not respond to a request to comment.

“The addition of daily traffic from the teachers and parents at the school has only exacerbated an already difficult situation,” they said in the email, noting that the neighborhood has been dealing with an “untenable traffic” situation for more than eight years.

They asked the city to conduct a formal investigation into the school on Zuckerberg’s property, adding that their neighbors are also “extremely concerned” about the school, and “are willing to provide eyewitness accounts in support of this complaint.”

Over the next week, another neighbor forwarded this note to all six Palo Alto city council members, as well as then mayor Stone. One of these emails described the situation as “serious” and “untenable.”

“We believe the investigation should be swift and should yield a cease and desist order,” the neighbor wrote.

Lait responded to the neighbor who sent the original complaint on October 15, claiming that he’d had an “initial call” with a “representative” of the property owners and that he was directing the city’s code enforcement staff to reexamine the property.

On December 11, 2024, the neighbor claimed that since one of their fellow neighbors had spoken to a Zuckerberg representative, and the representative had allegedly admitted that there was a school on the property, “it seems like an open and shut case.”

“Our hope is that there is an equal process in place for all residents of Palo Alto regardless of wealth or stature,” the neighbor wrote. “It is hard to imagine that this kind of behavior would be ignored in any other circumstance.”

That same day, Lait told Christine Wade, a partner at SSL Law Firm—who, in an August 2024 email thread, said she was “still working with” the Zuckerberg family—that the Zuckerbergs lacked the required permit to run a school in a residential zone.

“Based on our review of local and state law, we believe this use constitutes a private school use in a residential zone requiring a conditional use permit,” Lait wrote in an email to Wade. “We also have not found any state preemptions that would exclude a use like this from local zoning requirements.” Lait added that a “next step,” if a permit was not obtained, would be sending a cease and desist to the property owner.

According to several emails, Wade, Lait, and Mark Legaspi, CEO of the Zuckerberg family office called West 10, went on to arrange an in-person meeting at City Hall on January 9. (This is the first time that the current name of the Zuckerberg family office, West 10, has been publicly disclosed. The office was previously called West Street.) Although WIRED did not obtain notes from the meeting, Lait informed the neighbor on January 10 that he had told the Zuckerbergs’ “representative” that the school would need to shut down if it didn’t get a conditional use permit or apply for that specific permit.

Lait added that the representative would clarify what the family planned to do in about a week; however, he noted that if the school were to close, the city may give the school a “transition period” to wind things down. Wade did not respond to a request for comment.

“At a minimum, give us extended breaks”

There was another increasingly heated conversation happening behind the scenes. On February 3 of this year, at least one neighbor met with Jordan Fox, an employee of West 10.

It’s unclear exactly what happened at this meeting, or if the neighbor who sent the September 11 complaint was in attendance. But a day after the meeting with Fox, two additional neighbors added their names to the September 11 complaint, per an email to Lait.

On February 12, a neighbor began an email chain with Fox. This email was forwarded to Planning Department officials two months later. The neighbor, who seemingly attended the meeting, said they had “connected” with fellow neighbors “to review and revise” an earlier list of 14 requests that had reportedly been submitted to the Zuckerbergs at some previous point. The note does not specify the contents of this original list of requests, but the neighbor claimed that 15 of the 19 people who originally contributed to it had also contributed to the revised list.

The email notes that the Zuckerbergs had been “a part of our neighborhood for many years,” and that they “hope that this message will start an open and respectful dialogue,” built upon the “premise of how we all wish to be treated as neighbors.”

“Our top requests are to minimize future disruption to the neighborhood and proactively manage the impact of the many people who are affiliated with you,” the email says. This includes restricting parking by “security guards, contractors, staff, teachers, landscapers, visitors, etc.” In the event of major demolitions, concrete pours, or large parties, the email asks for advance notice, and for dedicated efforts to “monitor and mitigate noise.”

The email also asks the Zuckerbergs to, “ideally stop—but at a minimum give us extended breaks from—the acquisition, demolition and construction cycle to let the neighborhood recover from the last eight years of disruption.”

At this point, the email requests that the family “abide by both the letter and the spirit of Palo Alto” by complying with city code about residential buildings.

Specifically, it asks the Zuckerbergs to get a use permit for the compound’s school and to hold “a public hearing for transparency.” It also asks the family to not expand its compound any further. “We hope this will help us get back the quiet, attractive residential neighborhood that we all loved so much when we chose to move here.”

In a follow-up on March 4, Fox acknowledged the “unusual” effects that come with being neighbors with Mark Zuckerberg and his family.

“I recognize and understand that the nature of our residence is unique given the profile and visibility of the family,” she wrote. “I hope that as we continue to grow our relationship with you over time, you will increasingly enjoy the benefits of our proximity—e.g., enhanced safety and security, shared improvements, and increased property values.”

Fox said that the Zuckerbergs instituted “a revised parking policy late last year” that should address their concerns, and promised to double down on efforts to give advanced notice about construction, parties, and other potential disruptions.

However, Fox did not directly address the unpermitted school and other nonresidential activities happening at the compound. She acknowledged that the compound has “residential support staff” including “childcare, culinary, personal assistants, property management, and security,” but said that they have “policies in place to minimize their impact on the neighborhood.”

It’s unclear if the neighbor responded to Fox.

“You have not earned our trust”

While these conversations were happening between Fox and Zuckerberg’s neighbors, Lait and others at the city Planning Department were scrambling to find a solution for the neighbor who complained on September 11, and a few other neighbors who endorsed the complaint in September and February.

Starting in February, one of these neighbors took the lead on following up with Lait. They asked him for an update on February 11 and heard back a few days later. He didn’t have any major updates, but after conversations with the family’s representatives, he said he was exploring whether a “subset of children” could continue to come to the school sometimes for “ancillary” uses.

“I also believe a more nuanced solution is warranted in this case,” Lait added. Ideally, such a solution would respond to the neighbors’ complaints, but allow the Zuckerbergs to “reasonably be authorized by the zoning code.”

The neighbor wasn’t thrilled. The next day, they replied and called the city’s plan “unsatisfactory.”

“The city’s ‘nuanced solution’ in dealing with this serial violator has led to the current predicament,” they said (referring to the nuanced solution Lait mentioned in his last email).

Horrigan-Taylor, the Palo Alto spokesperson, told WIRED that Lait’s mention of a “nuanced” solution referred to “resolving, to the extent permissible by law, neighborhood impacts and otherwise permitted use established by state law and local zoning.”

“Would I, or any other homeowner, be given the courtesy of a ‘nuanced solution’ if we were in violation of city code for over four years?” they added.

“Please know that you have not earned our trust and that we will take every opportunity to hold the city accountable if your solution satisfies a single [redacted] property owner over the interests of an entire neighborhood,” they continued.

“If you somehow craft a ‘nuanced solution’ based on promises,” the neighbor said, “the city will no doubt once again simply disappear and the damage to the neighborhood will continue.”

Lait did not respond right away. The neighbor followed up on March 13, asking if he had “reconsidered” his plan to offer a “‘nuanced solution’ for resolution of these ongoing issues by a serial code violator.” They asked when the neighborhood could “expect relief from the almost decade long disruptions.”

Behind the scenes, Zuckerberg’s lawyers were fighting to make sure the school could continue to operate. In a document dated March 14, Wade argues that she believed the activities at “the Property” “represent an appropriate residential use based on established state law as well as constitutional principles.”

Wade said that “the Family” was in the process of obtaining a “Large Family Daycare” license for the property, which is legal for a cohort of 14 or fewer children all under the age of 10.

“We consistently remind our vendors, guests, etc. to minimize noise, not loiter anywhere other than within the Family properties, and to keep areas clean,” Wade added in the letter. Wade also attached an adjusted lease corresponding with the address of the illicit school, which promises that the property will be used for only one purpose. The exact purpose is redacted.

On March 25, Lait told the neighbor that the city’s June 30 deadline for the Zuckerbergs to shut down the school had not changed. However, the family’s representative said that they were pursuing a daycare license. These licenses are granted by the state, not the city of Palo Alto.

The subtext of this email was that if the state gave them a daycare licence, there wasn’t much the city could do. Horrigan-Taylor confirmed with WIRED that “state licensed large family day care homes” do not require city approval, adding that the city also “does not regulate homeschooling.”

“Thanks for this rather surprising information,” the neighbor replied about a week later. “We have repeatedly presented ideas to the family over the past 8 years with very little to show for it, so from our perspective, we need to understand the city’s willingness to act or not to act.”

Baker told WIRED that the Zuckerbergs never ended up applying for a daycare license, a claim that corresponds with California’s public registry of daycare centers. (There are only two registered daycare centers in Palo Alto, and neither belongs to the Zuckerbergs. The Zuckerbergs’ oldest child, Maxima, will also turn 10 in December and consequently age out of any daycare legally operating in California.)

Horrigan-Taylor said that a representative for the Zuckerbergs told the city that the family wanted to move the school to “another location where private schools are permitted by right.”

In a school administrator job listing posted to the Association Montessori International website in July 2022 for “BBS,” Bicken Ben head of school Berge claims that the school had four distinct locations, and that applicants must be prepared to travel six to eight weeks per year. The June 2025 job listing also says that the “year-round” school spans “across multiple campuses,” but the main location of the job is listed as Palo Alto. It’s unclear where the other sites are located.

Most of the Zuckerbergs’ neighbors did not respond to WIRED’s request for comment. However, the ones that did clearly indicated that they would not be forgetting the Bicken Ben saga, or the past decade of disruption, anytime soon.

“Frankly I’m not sure what’s going on,” one neighbor said, when reached by WIRED via landline. “Except for noise and construction debris.”

This story originally appeared on wired.com.

Photo of WIRED

Wired.com is your essential daily guide to what’s next, delivering the most original and complete take you’ll find anywhere on innovation’s impact on technology, science, business and culture.

Mark Zuckerberg’s illegal school drove his neighbors crazy Read More »

how-to-declutter,-quiet-down,-and-take-the-ai-out-of-windows-11-25h2

How to declutter, quiet down, and take the AI out of Windows 11 25H2


A new major Windows 11 release means a new guide for cleaning up the OS.

Credit: Aurich Lawson | Getty Images

Credit: Aurich Lawson | Getty Images

It’s that time of year again—temperatures are dropping, leaves are changing color, and Microsoft is gradually rolling out another major yearly update to Windows 11.

The Windows 11 25H2 update is relatively minor compared to last year’s 24H2 update (the “25” here is a reference to the year the update was released, while the “H2” denotes that it was released in the second half of the year, a vestigial suffix from when Microsoft would release two major Windows updates per year). The 24H2 update came with some major under-the-hood overhauls of core Windows components and significant performance improvements for the Arm version; 25H2 is largely 24H2, but with a rolled-over version number to keep it in line with Microsoft’s timeline for security updates and tech support.

But Microsoft’s continuous update cadence for Windows 11 means that even the 24H2 version as it currently exists isn’t the same one Microsoft released a year ago.

To keep things current, we’ve combed through our Windows cleanup guide, updating it for the current build of Windows 11 25H2 (26200.7019) to help anyone who needs a fresh Windows install or who is finally updating from Windows 10 now that Microsoft is winding down support for it. We’ll outline dozens of individual steps you can take to clean up a “clean install” of Windows 11, which has taken an especially user-hostile turn toward advertising and pushing the use of other Microsoft products.

As before, this is not a guide about creating an extremely stripped-down, telemetry-free version of Windows; we stick to the things that Microsoft officially supports turning off and removing. There are plenty of experimental hacks and scripts that take it a few steps farther, and/or automate some of the steps we outline here—NTDev’s Tiny11 project is one—but removing built-in Windows components can cause unexpected compatibility and security problems, and Tiny11 has historically had issues with basic table-stakes stuff like “installing security updates.”

These guides capture moments in time, and regular monthly Windows patches, app updates downloaded through the Microsoft Store, and other factors all can and will cause small variations from our directions. You may also see apps or drivers specific to your PC’s manufacturer. This guide also doesn’t cover the additional bloatware that may come out of the box with a new PC, starting instead with a freshly installed copy of Windows from a USB drive.

Table of Contents

Starting with Setup: Avoiding Microsoft account sign-in

The most contentious part of Windows 11’s setup process relative to earlier Windows versions is that it mandates a Microsoft account sign-in, with none of the readily apparent “limited account” fallbacks that existed in Windows 10. As of Windows 11 22H2, that’s true of both the Home and Pro editions.

There are two reasons I can think of not to sign in with a Microsoft account. The first is that you want nothing to do with a Microsoft account, thank you very much. Signing in also makes Windows bombard you with more Microsoft 365, OneDrive, and Game Pass subscription upsells, since adding a subscription to an account that already exists is easy; Windows setup will offer each of them if you sign in first.

The second—which describes my situation—is that you do use a Microsoft account because it offers some handy benefits, like automated encryption of your local drive (having those encryption keys tied to my account has saved me a couple of times) or syncing of browser info and some preferences. But you don’t want to sign in at setup, either because you don’t want to be bothered with the extra upsells or because you prefer your user folder to be located at “C:\Users\Andrew” rather than at a folder named after a truncated version of your Microsoft account address.

Regardless of your reasoning, if you don’t want to bother with sign-in at setup, you have a few different options:

Use the command line

During Windows 11 Setup, after selecting a language and keyboard layout but before connecting to a network, hit Shift+F10 to open the command prompt (depending on your keyboard, you may also need to hit the Fn key before pressing F10). Type OOBE\BYPASSNRO, hit Enter, and wait for the PC to reboot.

When it comes back, click “I don’t have Internet” on the network setup screen, and you’ll have recovered the option to use “limited setup” (aka a local account) again, like older versions of Windows 10 and 11 offered.
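Condensed, the whole bypass amounts to one keystroke and one command—nothing new here, just the steps above gathered in one place:

  Shift+F10         (at the language/keyboard screen; some laptops need Fn+Shift+F10)
  OOBE\BYPASSNRO    (typed at the setup command prompt; the PC reboots back into setup)

After the reboot, pick “I don’t have Internet” on the network screen to get the limited/local account option back.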

This option has been removed from some Windows 11 testing builds, but it still works as of this writing in 25H2. We may see this option removed in a future update to Windows.

For Windows 11 Pro

For Windows 11 Pro users, there’s a command-line-free workaround you can take advantage of.

Proceed through the Windows 11 setup as you normally would, including connecting to a network and allowing the system to check for updates. Eventually, you’ll be asked whether you’re setting your PC up for personal use or for “work or school.”

Select the “work or school” option, then “sign-in options,” at which point you’ll finally be given a button that says “domain join instead.” Click this to indicate you’re planning to join the PC to a corporate domain (even though you aren’t), and you’ll see the normal workflow for creating a “limited” local account.

The downside is that you’re starting your relationship with your new Windows install by lying to it. But hey, if you’re using the AI features, your computer is probably going to lie to you, too. It all balances out.

Using the Rufus tool

Credit: Andrew Cunningham

The Rufus tool can streamline a few of the more popular tweaks and workarounds for Windows 11 install media. Rufus is a venerable open source app for creating bootable USB media for both Windows and Linux. If you find yourself doing a lot of Windows 11 installs and don’t want to deal with Microsoft accounts, Rufus lets you tweak the install media itself so that the “limited setup” options always appear, no matter which edition of Windows you’re using.

To start, grab Rufus and then a fresh Windows 11 ISO file from Microsoft. You’ll also want an 8GB or larger USB drive; I’d recommend a 16GB or larger drive that supports USB 3.0 speeds, both to make things go a little faster and to leave yourself extra room for drivers, app installers, and anything else you might want when setting up a new PC for the first time. (I also like this SanDisk drive that has a USB-C connector on one end and a USB-A connector on the other to ensure compatibility with all kinds of PCs.)

Fire up Rufus, select your USB drive and the Windows ISO, and hit Start to copy over all of the Windows files. After you hit Start, you’ll be asked if you want to disable some system requirements checks, remove the Microsoft account requirement, or turn off all the data collection settings that Windows asks you about the first time you set it up. What you do here is up to you; I usually turn off the sign-in requirement, but disabling the Secure Boot and TPM checks doesn’t stop those features from working once Windows is installed and running.

The rest of Windows 11 setup

The main thing I do here, other than declining any and all Microsoft 365 or Game Pass offers, is turn all the toggles on the privacy settings screen to “no.” This covers location services, the Find My Device feature, and four toggles that collectively send a small pile of usage and browsing data to Microsoft that it uses “to enhance your Microsoft experiences.” Pro tip: Use the Tab key and spacebar to quickly toggle these without clicking or scrolling.

Of these, I can imagine enabling Find My Device if you’re worried about theft or location services if you want Windows and apps to be able to access your location. But I tend not to send any extra telemetry or browsing data other than the basics (the only exception being on machines I enroll in the Windows Insider Preview program for testing, since Microsoft requires you to send more detailed usage data from those machines to help it test its beta software). If you want to change any of these settings after setup, they’re all in the Settings app under Privacy & Security.

If you have signed in with a Microsoft account during setup, you can expect to see several additional setup screens that aren’t offered when you’re signing in with a local account, including attempts to sell Microsoft 365, OneDrive, and Xbox Game Pass subscriptions. Accept or decline these offers as desired.

Cleaning up Windows 11

Reboot once this is done, and you’ll be at the Windows desktop. Start by installing any drivers you need, plus Windows updates.

When you first connect to the Internet, Windows may decide to automatically pull down a few extraneous third-party apps and app shortcuts, things like Spotify or Grammarly—this happened consistently in most of the Windows 11 installs I’ve done over the years, though it generally hasn’t happened on the 24H2 and 25H2 PCs I’ve set up.

Open the Start menu and right-click each of the apps you don’t want, then choose to unpin its icon and/or uninstall it. Some of these third-party apps are just stubs that won’t actually be installed on your computer until you try to run them, so removing them directly from the Start menu will get rid of them entirely.

Right-clicking and uninstalling the unwanted apps that are pinned to the Start menu is the fastest (and, for some, the only) way to get rid of them. Credit: Andrew Cunningham

The other apps and services included in a fresh Windows install generally at least have the excuse of being first-party software, though their usefulness will be highly user-specific: Xbox, the new Outlook app, Clipchamp, and LinkedIn are the ones that stand out, plus the ad-driven free-to-play version of the Solitaire suite that replaced the simple built-in version during the Windows 8 era.

Rather than tell you what I remove, I’ll tell you everything that can be removed from the Installed Apps section of the Settings app (also quickly accessible by right-clicking the Start button in the taskbar). You can make your own decisions here; I generally leave the in-box versions of classic Windows apps like Sound Recorder and Calculator while removing things I don’t use, like To Do or Clipchamp.

This list should be current for a fresh, fully updated install of Windows 11 25H2, at least in the US, but it doesn’t include any apps that might be specific to your hardware, like audio or GPU settings apps. Some individual apps may or may not appear as part of your Windows install. (If you’d rather script the cleanup than click through Settings, see the sketch after this list.)

  • Calculator
  • Camera
  • Clock (may also appear as Windows Clock)
  • Copilot
  • Family
  • Feedback Hub
  • Game Assist
  • Media Player
  • Microsoft 365 Copilot
  • Microsoft Clipchamp
  • Microsoft OneDrive: Removing this, if you don’t use it, should also get rid of notifications about OneDrive and turning on Windows Backup.
  • Microsoft Teams
  • Microsoft To Do
  • News
  • Notepad
  • Outlook for Windows
  • Paint
  • Photos
  • Power Automate
  • Quick Assist
  • Remote Desktop Connection
  • Snipping Tool
  • Solitaire & Casual Games
  • Sound Recorder
  • Sticky Notes
  • Terminal
  • Weather
  • Web Media Extensions
  • Xbox
  • Xbox Live
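
If you do want to script the cleanup—say, because you set up machines regularly—the same removals can be done with PowerShell’s Get-AppxPackage and Remove-AppxPackage cmdlets. Below is a minimal Python sketch that wraps them; the wildcard package names are illustrative assumptions (exact names vary by build and region), and it defaults to a dry run so you can review what would be removed before anything is deleted.

```python
# A minimal sketch: remove a handful of in-box apps for the current user by
# shelling out to PowerShell's Get-AppxPackage/Remove-AppxPackage cmdlets.
# The wildcard patterns below are illustrative guesses at package names and
# may not match every Windows build; review the dry-run output before you
# let the script delete anything.
import subprocess

# Adjust to taste after reviewing `Get-AppxPackage | Select Name` on your own machine.
UNWANTED_PATTERNS = ["*Clipchamp*", "*Todos*", "*BingNews*", "*SolitaireCollection*"]

def remove_appx(pattern: str, dry_run: bool = True) -> None:
    """List (and optionally remove) Appx packages whose names match `pattern`."""
    list_cmd = f"Get-AppxPackage -Name '{pattern}' | Select-Object -ExpandProperty Name"
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", list_cmd],
        capture_output=True, text=True, check=False,
    )
    matches = [line.strip() for line in result.stdout.splitlines() if line.strip()]
    for name in matches:
        print(("Would remove: " if dry_run else "Removing: ") + name)
        if not dry_run:
            subprocess.run(
                ["powershell", "-NoProfile", "-Command",
                 f"Get-AppxPackage -Name '{name}' | Remove-AppxPackage"],
                check=False,
            )

if __name__ == "__main__":
    for p in UNWANTED_PATTERNS:
        remove_appx(p, dry_run=True)  # flip to False once you're happy with the list
```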

In Windows 11 23H2, Microsoft moved almost all of Windows’ non-removable apps to a System Components section, where they can be configured but not removed; this is where things like Phone Link, the Microsoft Store, Dev Home, and the Game Bar have ended up. The exception is Edge and its associated updater and WebView components; these are not removable, but they aren’t listed as “system components” for some reason, either.

Start, Search, Taskbar, and lock screen decluttering

Microsoft has been on a yearslong crusade against unused space in the Start menu and taskbar, which means there’s plenty here to turn off.

  • Right-click an empty space on the desktop, click Personalize, and click any of the other built-in Windows themes to turn off the Windows Spotlight dynamic wallpapers and the “Learn about this picture” icon.
  • Right-click the Taskbar and click Taskbar settings. I usually disable the Widgets board; you can leave this if you want to keep the little local weather icon in the lower-left corner of your screen, but this space is also sometimes used to present junky news articles from the Microsoft Start service.
    • If you want to keep Widgets enabled but clean it up a bit, open the Widgets menu, click the Settings gear in the top-right corner, scroll to “Show or hide feeds,” and turn the feed off. This will keep the weather, local sports scores, stocks, and a few other widgets, but it will get rid of the spammy news articles.
  • Also in the Taskbar settings, I usually change the Search field to “search icon only” to get rid of the picture in the search field and reduce the amount of space it takes up. Toggle the different settings until you find one you like.
  • Open Settings > Privacy & Security > Recommendations & offers and disable “Personalized offers,” “Improve Start and search results,” “Show notifications in Settings,” “Recommendations and offers in Settings,” and “Advertising ID” (some of these may already be turned off). These settings mostly either send data to Microsoft or clutter up the Settings app with various recommendations and ads. (If you want to flip the Advertising ID switch from a script, there’s a registry sketch at the end of this section.)
  • Open Settings > Privacy & Security > Diagnostics & feedback, scroll down to “Feedback frequency,” and select “Never” to turn off all notifications requesting feedback about various Windows features.
  • Open Settings > Privacy & Security, click Search and disable “Show search highlights.” This cleans up the Search menu quite a bit, focusing it on searches you’ve done yourself and locally installed apps.

  • Open Settings > Personalization > Lock screen. Under “Personalize your lock screen,” switch from “Windows spotlight” to either Picture or Slideshow to use local images for your lock screen, and then uncheck the “get fun facts, tips, tricks, and more” box that appears. This will hide the other text boxes and clickable elements that Windows automatically adds to the lock screen in Spotlight mode. Under “Lock screen status,” select “none” to hide the weather widget and other stocks and news widgets from your lock screen.
  • If you own a newer Windows PC with a dedicated Copilot key, you can navigate to Settings > Personalization > Text input and scroll down to remap the key. Unfortunately, its usefulness is still limited—you can reassign it to the Search function or to the built-in Microsoft 365 app, but by default, Windows doesn’t give you the option to reassign it to open any old app.

Credit: Andrew Cunningham

By default, the Start menu will occasionally make “helpful” suggestions about third-party Microsoft Store apps to grab. These can and should be turned off.

  • Open Settings > Personalization > Start. Turn off “Show recommendations for tips, shortcuts, new apps, and more.” This will disable a feature where Microsoft Store apps you haven’t installed can show up in Recommendations along with your other files. You can also decide whether you want to be able to see more pinned apps or more recent/recommended apps and files on the Start menu, depending on what you find more useful.
  • On the same page, disable “show account-related notifications” to reduce the number of reminders and upsell notifications you see related to your Microsoft account.

Credit: Andrew Cunningham

  • Open Settings > System > Notifications, scroll down, and expand the additional settings section. Uncheck all three boxes here, which should get rid of all the “finish setting up your PC” prompts, among other things.
  • Also feel free to disable notifications from any specific apps you don’t want to hear from.
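
Most of these toggles you’ll only ever flip once in the Settings app, but they’re generally backed by per-user registry values, which is handy if you want to reapply or audit one from a script. Here’s a minimal Python sketch for the Advertising ID toggle mentioned above; it assumes the switch maps to the AdvertisingInfo\Enabled value under HKCU, which is how recent Windows builds have exposed it, and the Settings app remains the supported way to change it.

```python
# A minimal sketch: turn off the per-user Advertising ID via the registry.
# Assumes the Settings toggle maps to HKCU\...\AdvertisingInfo\Enabled (0 = off),
# which is how recent Windows builds have exposed it; if in doubt, use the
# Settings app under Privacy & Security instead.
import winreg

ADVERTISING_KEY = r"Software\Microsoft\Windows\CurrentVersion\AdvertisingInfo"

def disable_advertising_id() -> None:
    """Set the Advertising ID 'Enabled' value to 0 for the current user."""
    with winreg.CreateKeyEx(
        winreg.HKEY_CURRENT_USER, ADVERTISING_KEY, 0, winreg.KEY_SET_VALUE
    ) as key:
        winreg.SetValueEx(key, "Enabled", 0, winreg.REG_DWORD, 0)

def advertising_id_enabled() -> bool:
    """Report whether the Advertising ID value is currently set to enabled."""
    try:
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, ADVERTISING_KEY) as key:
            value, _ = winreg.QueryValueEx(key, "Enabled")
            return bool(value)
    except FileNotFoundError:
        return True  # no value present; the toggle typically defaults to on

if __name__ == "__main__":
    disable_advertising_id()
    print("Advertising ID enabled:", advertising_id_enabled())
```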

In-app AI features

Microsoft has steadily been adding image and text generation capabilities to some of the bedrock in-box Windows apps, from Paint and Photos to Notepad.

Exactly which AI features you’re offered will depend on whether you’ve signed in with a Microsoft account and on whether you’re using a Copilot+ PC with access to more AI features that are executed locally on your PC rather than in the cloud (more on those in a minute).

But the short version is that it’s usually not possible to turn off or remove these AI features without uninstalling the entire app. Apps like Notepad and Edge do have toggles for shutting off Copilot and other related features, but no such toggles exist in Paint, for example.

Even if you can find some Registry key or another backdoor way to shut these things off, there’s no guarantee the settings will stick as these apps are updated; it’s probably easier to just try to ignore any AI features within these apps that you don’t plan to use.

Removing Recall, and other extra steps for Copilot+ PCs

So far, everything we’ve covered has been applicable to any PC that can run Windows 11. But new PCs with the Copilot+ branding—anything with a Qualcomm Snapdragon X chip or with certain Intel Core Ultra or AMD Ryzen AI CPUs—get extra features that other Windows 11 PCs don’t have. Given that these are their own unique subclass of PCs, it’s worth exploring what’s included and what can be turned off.

Removing Recall is possible, though it’s done through a relatively obscure legacy UI rather than the Settings app. Credit: Andrew Cunningham

One Copilot+ feature that can be fully removed, in part because of the backlash it initially caused, is the data-scraping Recall feature. Recall won’t be enabled on your Copilot+ system unless you’re signed in with a Microsoft account and you explicitly opt in. But if fully removing the feature gives you extra peace of mind, then by all means, remove it.

  • If you just want to make sure Recall isn’t active, navigate to Settings > Privacy & security > Recall & snapshots. This is where you adjust Recall’s settings and verify whether it’s turned on or off.
  • To fully remove Recall, open Settings > System > Optional Features, scroll down to the bottom of this screen, and click More Windows features. This will open the old “Turn Windows features on or off” Control Panel applet used to turn on or remove some legacy or power-user-centric components, like old versions of the .NET Framework or Hyper-V. The list is arranged alphabetically; scroll to the Recall entry, uncheck it, click OK, and reboot when prompted to complete the removal. (A command-line alternative is sketched after this list.)
  • In Settings > Privacy & security > Click to Do, you’ll also find a toggle to disable Click to Do, a Copilot+ feature that takes a screenshot of your desktop and tries to make recommendations or suggest actions you might perform (copying and pasting text or an image, for example).
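
If you’d rather handle the Recall removal from the command line than hunt through the legacy applet, DISM can disable the same optional feature from an elevated prompt. Here’s a minimal Python sketch that wraps the command; it assumes the feature is named “Recall” (confirm with dism /online /get-features on your own machine) and that you run it as administrator.

```python
# A minimal sketch: remove the Recall optional feature with DISM instead of the
# "Turn Windows features on or off" applet. Assumes the feature is named
# "Recall" (check `dism /online /get-features` first) and that the script is
# run from an elevated prompt; a reboot is usually required afterward.
import subprocess

def disable_recall() -> int:
    """Run DISM to disable the Recall optional feature; returns the exit code."""
    cmd = [
        "dism", "/online", "/disable-feature",
        "/featurename:Recall", "/norestart",
    ]
    result = subprocess.run(cmd, check=False)
    return result.returncode

if __name__ == "__main__":
    code = disable_recall()
    print("DISM exited with code", code)
```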

Apps like Paint or Photos may also prompt you to install an extension for AI-powered image generation from the Microsoft Store. This extension—which weighs in at well over a gigabyte as of this writing—is not installed by default. If you have installed it, you can remove it by opening Settings > Apps > Installed apps and removing “ImageCreationHostApp.”

Bonus: Cleaning up Microsoft Edge

I use Edge out of pragmatism rather than love—“the speed, compatibility, and extensions ecosystem of Chrome, backed by the resources of a large company that isn’t Google” is still a decent pitch. But Edge has become steadily less appealing as Microsoft has begun pushing its own services more aggressively and stuffing the browser with AI features. In a vacuum, Firefox aligns better with what I want from a browser, but it just doesn’t respond well to my normal tab-monster habits despite several earnest attempts to switch—things bog down and RAM runs out. I’ve also had mixed experiences with the less-prominent Chromium clones, like Opera, Vivaldi, and Brave. So Edge it is, at least for now.

The main problem with Edge on a new install of Windows is that even more than Windows, it exists in a universe where no one would ever want to switch search engines or shut off any of Microsoft’s “value-added features” except by accident. Case in point: Signing in with a Microsoft account will happily sync your bookmarks, extensions, and many kinds of personal data. But many settings for search engine changes or for opting out of Microsoft services do not sync between systems and require a fresh setup each time.

Below are the Edge settings I change to maximize the browser’s usefulness (and usable screen space) while minimizing annoying distractions; it involves turning off most of the stuff Microsoft has added to the Chromium version of Edge since it entered public preview many years ago. Here’s a list of things to tweak, whether you sign in with a Microsoft account or not.

  • On the Start page when you first open the browser, hit the Settings gear in the upper-right corner. Turn off “Quick links” (or if you leave them on, turn off “Show sponsored links”) and then turn off “show content.” Whether you leave the custom background or the weather widget is up to you.
  • Click the “your privacy choices” link at the bottom of the menu and turn off the “share my data with third parties for personalized ads” toggle.

Microsoft has shuffled some of these settings around over the last year, but the browser is still full of toggles we prefer to keep turned off. Credit: Andrew Cunningham

  • In the Edge UI, click the ellipsis icon near the upper-right corner of the screen and click Settings.
  • Click Profiles in the left Settings sidebar. Click Microsoft Rewards, and then turn it off.
  • Click Privacy, Search, & Services in the Settings sidebar.
    • In Tracking prevention, I set tracking prevention to “strict,” though if you use some other kind of content blocker, this may be redundant; it can also occasionally trigger “it looks like you’re using an ad blocker” pop-ups from sites even if you aren’t running one.
    • In Privacy, if they’re enabled, disable the toggles under “Optional diagnostic data,” “Help improve Microsoft products,” and “Allow Microsoft to save your browsing activity.”
    • In Search and connected experiences, disable the “Suggest similar sites when a website can’t be found,” “Save time and money with Shopping in Microsoft Edge,” and “Organize your tabs” toggles.
      • If you want to switch from Bing, click “Address bar and search” and switch to your preferred engine, whether that’s Google, DuckDuckGo, or something else. Then click “Search suggestions and filters” and disable “Show me search and site suggestions using my typed characters.”

These settings retain basic spellcheck without any of the AI-related additions. Credit: Andrew Cunningham

  • Click Appearance in the left-hand Settings sidebar, and scroll down to Copilot and sidebar.
    • Turn the sidebar off, and turn off the “Personalize my top sites in customize sidebar” and “Allow sidebar apps to show notifications” toggles.
    • Click Copilot under App specific settings. Turn off “Show Copilot button on the toolbar.” Then, back in the Copilot and sidebar settings, turn off the “Show sidebar button” toggle that has just appeared.
  • Click Languages in the left-hand navigation. Disable “Use Copilot for writing on the web.” Turn off “use text prediction” if you want to prevent things you type from being sent to Microsoft, and switch the spellchecker from Microsoft Editor to Basic. (I don’t actually mind Microsoft Editor, but it’s one more thing to switch off if you’re trying to minimize the amount of data Edge sends back to the company.)

Windows-as-a-nuisance

The most time-consuming part of installing a fresh, direct-from-Microsoft copy of Windows XP or Windows 7 was usually reinstalling all the apps you wanted to run on your PC, from your preferred browser to Office, Adobe Reader, Photoshop, and the VLC player. You still need to do all of that in a new Windows 11 installation. But now more than ever, most people will want to go through the OS and turn off a bunch of stuff to make the day-to-day experience of using the operating system less annoying.

That’s more relevant now that Microsoft has formally ended support for Windows 10. Yes, Windows 10 users can get an extra year of security updates relatively easily, but many who have been putting off the Windows 11 upgrade will be taking the plunge this year.

The settings changes we’ve recommended here may not fix everything, but they can at least give you some peace, shoving Microsoft into the background and allowing you to do what you want with your PC without as much hassle. Ideally, Microsoft would insist on respectful, user-friendly defaults itself. But until that happens, these changes are the best you can do.


Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.



Internet Archive’s legal fights are over, but its founder mourns what was lost


“We survived, but it wiped out the library,” Internet Archive’s founder says.

Internet Archive founder Brewster Kahle celebrates 1 trillion web pages on stage with staff. Credit: via the Internet Archive

Last month, the Internet Archive’s Wayback Machine archived its trillionth webpage, and the nonprofit invited its more than 1,200 library partners and 800,000 daily users to join a celebration of the moment. To honor “three decades of safeguarding the world’s online heritage,” the city of San Francisco declared October 22 to be “Internet Archive Day.” The Archive was also recently designated a federal depository library by Sen. Alex Padilla (D-Calif.), who proclaimed the organization a “perfect fit” to expand “access to federal government publications amid an increasingly digital landscape.”

The Internet Archive might sound like a thriving organization, but it only recently emerged from years of bruising copyright battles that threatened to bankrupt the beloved library project. In the end, the fight led to more than 500,000 books being removed from the Archive’s “Open Library.”

“We survived,” Internet Archive founder Brewster Kahle told Ars. “But it wiped out the Library.”

An Internet Archive spokesperson confirmed to Ars that the archive currently faces no major lawsuits and no active threats to its collections. Kahle thinks “the world became stupider” when the Open Library was gutted—but he’s moving forward with new ideas.

History of the Internet Archive

Kahle has been striving since 1996 to transform the Internet Archive into a digital Library of Alexandria—but “with a better fire protection plan,” joked Kyle Courtney, a copyright lawyer and librarian who leads the nonprofit eBook Study Group, which helps states update laws to protect libraries.

When the Wayback Machine was born in 2001 as a way to take snapshots of the web, Kahle told The New York Times that building free archives was “worth it.” He was also excited that the Wayback Machine had drawn renewed media attention to libraries.

At the time, law professor Lawrence Lessig predicted that the Internet Archive would face copyright battles, but he also believed that the Wayback Machine would change the way the public understood copyright fights.

“We finally have a clear and tangible example of what’s at stake,” Lessig told the Times. He insisted that Kahle was “defining the public domain” online, which would allow Internet users to see “how easy and important” the Wayback Machine “would be in keeping us sane and honest about where we’ve been and where we’re going.”

Kahle suggested that IA’s legal battles weren’t with creators or publishers so much as with large media companies that he thinks aren’t “satisfied with the restriction you get from copyright.”

“They want that and more,” Kahle said, pointing to e-book licenses that expire as proof that libraries increasingly aren’t allowed to own their collections. He also suspects that such companies wanted the Wayback Machine dead—but the Wayback Machine has survived and proved itself to be a unique and useful resource.

The Internet Archive also began archiving—and then lending—e-books. For a decade, the Archive had loaned out individual e-books to one user at a time without triggering any lawsuits. That changed when IA decided to temporarily lift the cap on loans from its Open Library project to create a “National Emergency Library” as libraries across the world shut down during the early days of the COVID-19 pandemic. The project eventually grew to 1.4 million titles.

But lifting the lending restrictions also brought more scrutiny from copyright holders, who eventually sued the Archive. Litigation went on for years. In 2024, IA lost its final appeal in a lawsuit brought by book publishers over the Archive’s Open Library project, which used a novel e-book lending model to bypass publishers’ licensing fees and checkout limitations. Damages could have topped $400 million, but publishers ultimately announced a “confidential agreement on a monetary payment” that did not bankrupt the Archive.

Litigation has continued, though. More recently, the Archive settled another suit over its Great 78 Project after music publishers sought damages of up to $700 million. A settlement in that case, reached last month, was similarly confidential. In both cases, IA’s experts challenged publishers’ estimates of their losses as massively inflated.

For Internet Archive fans, a group that includes longtime Internet users, researchers, students, historians, lawyers, and the US government, the end of the lawsuits brought a sigh of relief. The Archive can continue—but it can’t run one of its major programs in the same way.

What the Internet Archive lost

To Kahle, the suits have been an immense setback to IA’s mission.

Publishers had argued that the Open Library’s lending harmed the e-book market, but IA says its vision for the project was not to frustrate e-book sales (which it denies its library does) but to make it easier for researchers to reference e-books by allowing Wikipedia to link to book scans. Wikipedia has long been one of the most visited websites in the world, and the Archive wanted to deepen its authority as a research tool.

“One of the real purposes of libraries is not just access to information by borrowing a book that you might buy in a bookstore,” Kahle said. “In fact, that’s actually the minority. Usually, you’re comparing and contrasting things. You’re quoting. You’re checking. You’re standing on the shoulders of giants.”

Meredith Rose, senior policy counsel for Public Knowledge, told Ars that the Internet Archive’s Wikipedia enhancements could have served to surface information that’s often buried in books, giving researchers a streamlined path to source accurate information online.

But Kahle said the lawsuits against IA showed that “massive multibillion-dollar media conglomerates” have their own interests in controlling the flow of information. “That’s what they really succeeded at—to make sure that Wikipedia readers don’t get access to books,” Kahle said.

At the heart of the Open Library lawsuit was publishers’ market for e-book licenses, which libraries complain provide only temporary access for a limited number of patrons and cost substantially more than the acquisition of physical books. Some states are crafting laws to restrict e-book licensing, with the aim of preserving library functions.

“We don’t want libraries to become Hulu or Netflix,” said Courtney of the eBook Study Group, with libraries posting warnings to patrons like “last day to check out this book, August 31st, then it goes away forever.”

He, like Kahle, is concerned that libraries will become unable to fulfill their longtime role—preserving culture and providing equal access to knowledge. Remote access, Courtney noted, benefits people who can’t easily get to libraries, like the elderly, people with disabilities, rural communities, and foreign-deployed troops.

Before the Internet Archive cases, libraries had won some important legal fights, according to Brandon Butler, a copyright lawyer and executive director of Re:Create, a coalition of “libraries, civil libertarians, online rights advocates, start-ups, consumers, and technology companies” that is “dedicated to balanced copyright and a free and open Internet.”

But the Internet Archive’s e-book fight didn’t set back libraries, Butler said, because the loss didn’t reverse any prior court wins. Instead, IA had been “exploring another frontier” beyond the Google Books ruling, which deemed Google’s searchable book excerpts a transformative fair use, hoping that linking to books from Wikipedia would also be deemed fair use. But IA “hit the edge” of what courts would allow, Butler said.

IA basically asked, “Could fair use go this much farther?” Butler said. “And the courts said, ‘No, this is as far as you go.’”

To Kahle, the cards feel stacked against the Internet Archive, with courts, lawmakers, and lobbyists backing corporations seeking “hyper levels of control.” He said IA has always served as a research library—an online destination where people can cross-reference texts and verify facts, just like perusing books at a local library.

“We’re just trying to be a library,” Kahle said. “A library in a traditional sense. And it’s getting hard.”

Fears of big fines may delay digitization projects

President Donald Trump’s cuts to the federal Institute of Museum and Library Services have put America’s public libraries at risk, and reduced funding will continue to challenge libraries in the coming years, the American Library Association has warned. Butler has also suggested that under-resourced libraries may delay digitization efforts for preservation purposes if they worry that publishers may threaten costly litigation.

He told Ars he thinks courts are getting it right on recent fair use rulings. But he noted that libraries have fewer resources for legal fights because copyright law “has this provision that says, well, if you’re a copyright holder, you really don’t have to prove that you suffered any harm at all.”

“You can just elect [to receive] a massive payout based purely on the fact that you hold a copyright and somebody infringed,” Butler said. “And that’s really unique. Almost no other country in the world has that sort of a system.”

So while companies like AI firms may be able to afford legal fights with rights holders, libraries must be careful, even when they launch projects that seem “completely harmless and innocuous,” Butler said. Consider the Internet Archive’s Great 78 Project, which digitized 400,000 old shellac records, known as 78s, that were originally pressed from 1898 to the 1950s.

“The idea that somebody’s going to stream a 78 of an Elvis song instead of firing it up on their $10-a-month Spotify subscription is silly, right?” Butler said. “It doesn’t pass the laugh test, but given the scale of the project—and multiply that by the statutory damages—and that makes this an extremely dangerous project all of a sudden.”

Butler suggested that statutory damages could disrupt the balance that ensures the public has access to knowledge, creators get paid, and human creativity thrives, as AI advances and libraries’ growth potentially stalls.

“It sets the risk so high that it may force deals in situations where it would be better if people relied on fair use. Or it may scare people from trying new things because of the stakes of a copyright lawsuit,” Butler said.

Courtney, who co-wrote a whitepaper detailing the legal basis for different forms of “controlled digital lending” like the Open Library project uses, suggested that Kahle may be the person who’s best prepared to push the envelope on copyright.

When asked how the Internet Archive managed to avoid financial ruin, Courtney said it survived “only because their leader” is “very smart and capable.” Of all the “flavors” of controlled digital lending (CDL) that his paper outlined, Kahle’s methodology for the Open Library Project was the most “revolutionary,” Courtney said.

Importantly, IA’s loss did not doom other kinds of CDL that other archives use, he noted, nor did it prevent libraries from trying new things.

“Fair use is a case-by-case determination” that will be made as urgent preservation needs arise, Courtney told Ars, and “libraries have a ton of stuff that aren’t going to make the jump to digital unless we digitize them. No one will have access to them.”

What’s next for the Internet Archive?

The lawsuits haven’t dampened Kahle’s resolve to expand IA’s digitization efforts, though. Moving forward, the group will be growing a project called Democracy’s Library, which is “a free, open, online compendium of government research and publications from around the world” that will be conveniently linked in Wikipedia articles to help researchers discover them.

The Archive is also collecting as many physical materials as possible to help preserve knowledge, even as “the library system is largely contracting,” Kahle said. He noted that libraries historically tend to grow in societies that prioritize education and decline in societies where power is being concentrated, and he’s worried about where the US is headed. That makes it hard to predict if IA—or any library project—will be supported in the long term.

With governments globally partnering with the biggest tech companies to try to win the artificial intelligence race, critics have warned of threats to US democracy, while the White House has escalated its attack on libraries, universities, and science over the past year.

Meanwhile, AI firms face dozens of lawsuits from creators and publishers, which Kahle thinks only the biggest tech companies can likely afford to outlast. The momentum behind AI risks giving corporations even more control over information, Kahle said, and it’s uncertain if archives dedicated to preserving the public memory will survive attacks from multiple fronts.

“Societies that are [growing] are the ones that need to educate people” and therefore promote libraries, Kahle said. But when societies are “going down,” such as in times of war, conflict, and social upheaval, libraries “tend to get destroyed by the powerful. It used to be king and church, and it’s now corporations and governments.” (He recommended The Library: A Fragile History as a must-read to understand the challenges libraries have always faced.)

Kahle told Ars he’s not “black and white” on AI, and he even sees some potential for AI to enhance library services.

He’s more concerned that libraries in the US are losing support and may soon cease to perform classic functions that have always benefited civilizations—like buying books from small publishers and local authors, supporting intellectual endeavors, and partnering with other libraries to expand access to diverse collections.

To prevent these cultural and intellectual losses, he plans to position IA as a refuge for displaced collections, with hopes to digitize as much as possible while defending the early dream that the Internet could equalize access to information and supercharge progress.

“We want everyone [to be] a reader,” Kahle said, and that means “we want lots of publishers, we want lots of vendors, booksellers, lots of libraries.”

But, he asked, “Are we going that way? No.”

To turn things around, Kahle suggested that copyright laws be “re-architected” to ensure “we have a game with many winners”—where authors, publishers, and booksellers get paid, library missions are respected, and progress thrives. Then society can figure out “what do we do with this new set of AI tools” to keep the engine of human creativity humming.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



“Unexpectedly, a deer briefly entered the family room”: Living with Gemini Home


60 percent of the time, it works every time

Gemini for Home unleashes gen AI on your Nest camera footage, but it gets a lot wrong.

The Google Home app has Gemini integration for paying customers. Credit: Ryan Whitwam

You just can’t ignore the effects of the generative AI boom.

Even if you don’t go looking for AI bots, they’re being integrated into virtually every product and service. And for what? There’s a lot of hand-wavey chatter about agentic this and AGI that, but what can “gen AI” do for you right now? Gemini for Home is Google’s latest attempt to make this technology useful, integrating Gemini with the smart home devices people already have. Anyone paying for extended video history in the Home app is about to get a heaping helping of AI, including daily summaries, AI-labeled notifications, and more.

Given the supposed power of AI models like Gemini, recognizing events in a couple of videos and answering questions about them doesn’t seem like a bridge too far. And yet Gemini for Home has demonstrated a tenuous grasp of the truth, which can lead to some disquieting interactions, like periodic warnings of home invasion, both human and animal.

It can do some neat things, but is it worth the price—and the headaches?

Does your smart home need a premium AI subscription?

Simply using the Google Home app to control your devices does not turn your smart home over to Gemini. This is part of Google’s higher-tier paid service, which comes with extended camera history and Gemini features for $20 per month. That subscription pipes your video into a Gemini AI model that generates summaries for notifications, as well as a “Daily Brief” that offers a rundown of everything that happened on a given day. The cheaper $10 plan provides less video history and no AI-assisted summaries or notifications. Both plans enable Gemini Live on smart speakers.

According to Google, it doesn’t send all of your video to Gemini. That would be a huge waste of compute cycles, so Gemini only sees (and summarizes) event clips. Those summaries are then distilled at the end of the day to create the Daily Brief, which usually results in a rather boring list of people entering and leaving rooms, dropping off packages, and so on.

Importantly, the Gemini model powering this experience doesn’t handle audio—it only processes the visual elements of videos and does not integrate sound from your recordings. So unusual noises or conversations captured by your cameras will not be searchable or reflected in AI summaries. This may be intentional to ensure your conversations are not regurgitated by an AI.

Gemini smart home plans. Credit: Google

Paying for Google’s AI-infused subscription also adds Ask Home, a conversational chatbot that can answer questions about what has happened in your home based on the status of smart home devices and your video footage. You can ask questions about events, retrieve video clips, and create automations.

There are definitely some issues with Gemini’s understanding of video, but Ask Home is quite good at creating automations. It was possible to set up automations in the old Home app, but the updated AI is able to piece together automations based on your natural language request. Perhaps thanks to the limited set of possible automation elements, the AI gets this right most of the time. Ask Home is also usually able to dig up past event clips, as long as you are specific about what you want.

The Advanced plan for Gemini Home keeps your videos for 60 days, so you can only query the robot on clips from that time period. Google also says it does not retain any of that video for training. The only instance in which Google will use security camera footage for training is if you choose to “lend” it to Google via an obscure option in the Home app. Google says it will keep these videos for up to 18 months or until you revoke access. However, your interactions with Gemini (like your typed prompts and ratings of outputs) are used to refine the model.

The unexpected deer

Every generative AI bot makes the occasional mistake, but you probably won’t notice most of them. When the AI hallucinates about your daily life, however, it’s more noticeable. There’s no reason Google should be confused by my smart home setup, which features a couple of outdoor cameras and one indoor camera—all Nest-branded with all the default AI features enabled—to keep an eye on my dogs. So the AI is seeing a lot of dogs lounging around and staring out the window. One would hope that it could reliably summarize something so straightforward.

One may be disappointed, though.

In my first Daily Brief, I was fascinated to see that Google spotted some indoor wildlife. “Unexpectedly, a deer briefly entered the family room,” Gemini said.

Dogs and deer are pretty much the same thing, right? Credit: Ryan Whitwam

Gemini does deserve some credit for recognizing that the appearance of a deer in the family room would be unexpected. But the “deer” was, naturally, a dog. This was not a one-time occurrence, either. Gemini sometimes identifies my dogs correctly, but many event clips and summaries still tell me about the notable but brief appearance of deer around the house and yard.

This deer situation serves as a keen reminder that this new type of AI doesn’t “think,” although the industry’s use of that term to describe simulated reasoning could lead you to believe otherwise. A person looking at this video wouldn’t even entertain the possibility that they were seeing a deer after they’ve already seen the dogs loping around in other videos. Gemini doesn’t have that base of common sense, though. If the tokens say deer, it’s a deer. I will say, though, Gemini is great at recognizing car models and brand logos. Make of that what you will.

The animal mix-up is not ideal, but it’s not a major hurdle to usability. I didn’t seriously entertain the possibility that a deer had wandered into the house, and it’s a little funny the way the daily report continues to express amazement that wildlife is invading. It’s a pretty harmless screw-up.

“Overall identification accuracy depends on several factors, including the visual details available in the camera clip for Gemini to process,” explains a Google spokesperson. “As a large language model, Gemini can sometimes make inferential mistakes, which leads to these misidentifications, such as confusing your dog with a cat or deer.”

Google also says that you can tune the AI by correcting it when it screws up. This works sometimes, but the system still doesn’t truly understand anything—that’s beyond the capabilities of a generative AI model. After I told Gemini that it was seeing dogs rather than deer, it reported wildlife less often. However, it doesn’t seem to trust me all the time, and it will still occasionally report the appearance of a deer that is “probably” just a dog.

A perfect fit for spooky season

Gemini’s smart home hallucinations also have a less comedic side. When Gemini mislabels an event clip, you can end up with some pretty distressing alerts. Imagine that you’re out and about when your Gemini assistant hits you with a notification telling you, “A person was seen in the family room.”

A person roaming around the house you believed to be empty? That’s alarming. Is it an intruder, a hallucination, a ghost? So naturally, you check the camera feed to find… nothing. An Ars Technica investigation confirms AI cannot detect ghosts. So a ghost in the machine?

Oops, we made you think someone broke into your house. Credit: Ryan Whitwam

On several occasions, I’ve seen Gemini mistake dogs and totally empty rooms (or maybe a shadow?) for a person. It may be alarming at first, but after a few false positives, you grow to distrust the robot. Now, even if Gemini correctly identified a random person in the house, I’d probably ignore it. Unfortunately, this is the only notification experience for Gemini Home Advanced.

“You cannot turn off the AI description while keeping the base notification,” a Google spokesperson told me. They noted, however, that you can disable person alerts in the app. Those are enabled when you turn on Google’s familiar faces detection.

Gemini often twists reality just a bit instead of creating it from whole cloth. A person holding anything in the backyard is doing yardwork. One person anywhere, doing anything, becomes several people. A dog toy becomes a cat lying in the sun. A couple of birds become a raccoon. Gemini likes to ignore things, too, like denying there was a package delivery even when there’s a video tagged as “person delivers package.”

Gemini still refused to admit it was wrong. Credit: Ryan Whitwam

At the end of the day, Gemini is labeling most clips correctly and therefore produces mostly accurate, if sometimes unhelpful, notifications. The problem is the flip side of “mostly,” which is still a lot of mistakes. Some of these mistakes compel you to check your cameras—at least, before you grow weary of Gemini’s confabulations. Instead of saving time and keeping you apprised of what’s happening at home, it wastes your time. For this thing to be useful, inferential errors cannot be a daily occurrence.

Learning as it goes

Google says its goal is to make Gemini for Home better for everyone. The team is “investing heavily in improving accurate identification” to cut down on erroneous notifications. The company also believes that having people add custom instructions is a critical piece of the puzzle. Maybe in the future, Gemini for Home will be more honest, but it currently takes a lot of hand-holding to move it in the right direction.

With careful tuning, you can indeed address some of Gemini for Home’s flights of fancy. I see fewer deer identifications after tinkering, and a couple of custom instructions have made the Home Brief waste less space telling me when people walk into and out of rooms that don’t exist. But I still don’t know how to prompt my way out of Gemini seeing people in an empty room.

Gemini AI features work on all Nest cams, but the new 2025 models are “designed for Gemini.” Credit: Ryan Whitwam

Despite its intention to improve Gemini for Home, Google is releasing a product that just doesn’t work very well out of the box, and it misbehaves in ways that are genuinely off-putting. Security cameras shouldn’t lie about seeing intruders, nor should they tell me I’m lying when they fail to recognize an event. The Ask Home bot has the standard disclaimer recommending that you verify what the AI says. You have to take that warning seriously with Gemini for Home.

At launch, it’s hard to justify paying for the $20 Advanced Gemini subscription. If you’re already paying because you want the 60-day event history, you’re stuck with the AI notifications. You can ignore the existence of Daily Brief, though. Stepping down to the $10 per month subscription gets you just 30 days of event history with the old non-generative notifications and event labeling. Maybe that’s the smarter smart home bet right now.

Gemini for Home is currently available to those who opted into early access in the Home app. So you can avoid Gemini for the time being, but it’s only a matter of time before Google flips the switch for everyone.

Hopefully it works better by then.


Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.
