Features

10M people watched a YouTuber shim a lock; the lock company sued him. Bad idea.


It’s still legal to pick locks, even when you swing your legs.

“Opening locks” might not sound like scintillating social media content, but Trevor McNally has turned lock-busting into online gold. A former US Marine Staff Sergeant, McNally today has more than 7 million followers and has amassed more than 2 billion views just by showing how easy it is to open many common locks by slapping, picking, or shimming them.

This does not always endear him to the companies that make the locks.

On March 3, 2025, a Florida lock company called Proven Industries released a social media promo video just begging for the McNally treatment. The video was called, somewhat improbably, “YOU GUYS KEEP SAYING YOU CAN EASILY BREAK OFF OUR LATCH PIN LOCK.” In it, an enthusiastic man in a ball cap says he will “prove a lot of you haters wrong.” He then goes hard at Proven’s $130 model 651 trailer hitch lock with a sledgehammer, bolt cutters, and a crowbar.

Naturally, the lock hangs tough.

An Instagram user brought the lock to McNally’s attention by commenting, “Let’s introduce it to the @mcnallyofficial poke.” Someone from Proven responded, saying that McNally only likes “the cheap locks lol because they are easy and fast.” Proven locks were said to be made of sterner stuff.

But on April 3, McNally posted a saucy little video to social media platforms. In it, he watches the Proven promo video while swinging his legs and drinking a Juicy Juice. He then hops down from his seat, goes over to a Proven trailer hitch lock, and opens it in a matter of seconds using nothing but a shim cut from a can of Liquid Death. He says nothing during the entire video, which has been viewed nearly 10 million times on YouTube alone.

Though the company had practically begged people to attempt exactly this, Proven Industries owner Ron Lee contacted McNally on Instagram. “Just wanted to say thanks and be prepared!” he wrote. McNally took this as a threat.

(Oddly enough, Proven’s own homepage features a video in which the company trashes competing locks and shows just how easy it is to defeat them. And its news pages contain articles and videos on “The Hidden Flaws of Master Locks” and other brands. Why it got so upset about McNally’s video is unclear.)

The next day, Lee texted McNally’s wife. The text was apparently Lee’s attempt to de-escalate things; he says he thought the number belonged to McNally, and the message itself was unobjectionable. But after the “be prepared!” notice of the day before, and given that Lee already knew how to contact him on Instagram, McNally saw the text as a way “to intimidate me and my family.” That feeling was cemented when McNally found out that Lee was a triple felon—and that in one case, Lee had hired someone “to throw a brick through the window of his ex-wife.”

Concerned about losing business, Lee kept trying to shut McNally down. Proven posted a “response video” on April 6 and engaged with numerous social media commenters, telling them that things were “going to get really personal” for McNally. Proven employees alleged publicly that McNally was deceiving people about all the prep work he had done to make a “perfectly cut out” shim. Without extensive experience, long prep work, and precise measurements, it was said, Proven’s locks were in little danger of being opened by rogue actors trying to steal your RV.

“Sucks to see how many people take everything they see online for face value,” one Proven employee wrote. “Sounds like a bunch of liberals lol.”

Proven also had its lawyers file “multiple” DMCA takedown notices against the McNally video, claiming that its use of Proven’s promo video was copyright infringement.

McNally didn’t bow to the pressure, though, instead uploading several more videos showing him opening Proven locks. In one of them, he takes aim at Proven’s claims about his prep work by retrieving a new lock from an Amazon delivery kiosk, taking it outside—and popping it in seconds using a shim he cuts right on camera, with no measurements, from an aluminum can.

On May 1, Proven filed a federal lawsuit against McNally in the Middle District of Florida, charging him with a huge array of offenses: (1) copyright infringement, (2) defamation by implication, (3) false advertising, (4) violating the Florida Deceptive and Unfair Trade Practices Act, (5) tortious interference with business relationships, (6) unjust enrichment, (7) civil conspiracy, and (8) trade libel. Remarkably, the claims stemmed from a video that all sides admit was accurate and in which McNally himself said nothing.

Screenshot of a social media exchange. In retrospect, this was probably not a great idea.

Don’t mock me, bro

How can you defame someone without even speaking? Proven claimed “defamation by implication,” arguing that the whole setup of McNally’s videos was unfair to the company and its product. McNally does not show his prep work, which (Proven argued) conveys to the public the false idea that Proven’s locks are easy to bypass. While the shimming does work, Proven argued that it would be difficult for an untrained user to perform.

But what Proven really, really didn’t like was being mocked. McNally’s decision to drink—and shake!—a juice box on video comes up in court papers a mind-boggling number of times. Here’s a sample:

McNally appears swinging his legs and sipping from an apple juice box, conveying to the purchasing public that bypassing Plaintiff’s lock is simple, trivial, and even comical…

…showing McNally drinking from, and shaking, a juice box, all while swinging his legs, and displaying the Proven Video on a mobile device…

The tone, posture, and use of the juice box prop and childish leg swinging that McNally orchestrated in the McNally Video was intentional to diminish the perceived seriousness of Proven Industries…

The use of juvenile imagery, such as sipping from a juice box while casually applying the shim, reinforces the misleading impression that the lock is inherently insecure and marketed deceptively…

The video then abruptly shifts to Defendant in a childlike persona, sipping from a juice box and casually applying a shim to the lock…

In the end, Proven argued that the McNally video was “for commercial entertainment and mockery,” produced for the purpose of “humiliating Plaintiff.” McNally, it was said, “will not stop until he destroys Proven’s reputation.” Justice was needed. Expensive, litigious justice.

But the proverbially level-headed horde of Internet users does not always love it when companies file thermonuclear lawsuits against critics. Sometimes, in fact, the level-headed horde disregards everything taught by that fount of judicial knowledge, The People’s Court, and they take the law into their own hands.

Proven was soon the target of McNally fans. The company says it was “forced to disable comments on posts and product videos due to an influx of mocking and misleading replies furthering the false narrative that McNally conveyed to the viewers.” The company’s customer service department received such an “influx of bogus customer service tickets… that it is experiencing difficulty responding to legitimate tickets.”

Screenshot of a social media post from Proven Industries. Proven was quite proud of its lawsuit… at first.

Someone posted Lee’s personal phone number to the comment section of a McNally video, which soon led to “a continuous stream of harassing phone calls and text messages from unknown numbers at all hours of the day and night,” which included “profanity, threats, and racially charged language.”

Lest this seem like mere high spirits and hijinks, Lee’s partner and his mother both “received harassing messages through Facebook Messenger,” while other messages targeted Lee’s son, saying things like “I would kill your f—ing n—– child” and calling him a “racemixing pussy.”

This is clearly terrible behavior; it also has no obvious connection to McNally, who did not direct or condone the harassment. As for Lee’s phone number, McNally said that he had nothing to do with posting it and wrote that “it is my understanding that the phone number at issue is publicly available on the Better Business Bureau website and can be obtained through a simple Google search.”

And this, with both sides palpably angry at each other, is how things stood on June 13 at 9:09 am, when the case got a hearing in front of the Honorable Mary Scriven, an extremely feisty federal judge in Tampa. Proven had demanded a preliminary injunction that would stop McNally from sharing his videos while the case progressed, but Proven had issues right from the opening gavel:

LAWYER 1: Austin Nowacki on behalf of Proven industries.

THE COURT: I’m sorry. What is your name?

LAWYER 1: Austin Nowacki.

THE COURT: I thought you said Austin No Idea.

LAWYER 2: That’s Austin Nowacki.

THE COURT: All right.

When Proven’s lead lawyer introduced a colleague who would lead that morning’s arguments, the judge snapped, “Okay. Then you have a seat and let her speak.”

Things went on this way for some time, as the judge wondered, “Did the plaintiff bring a lock and a beer can?” (The plaintiff did not.) She appeared to be quite disappointed when it was clear there would be no live shimming demonstration in the courtroom.

Then it was on to the actual arguments. Proven argued that the 15 seconds of its 90-second promo video used by McNally were not fair use, that McNally had defamed the company by implication, and that shimming its locks was actually quite difficult. Under questioning, however, one of Proven’s employees admitted that he had been able to duplicate McNally’s technique, leading to the question from McNally’s lawyer: “When you did it yourself, did it occur to you for one moment that maybe the best thing to do, instead of file a lawsuit, was to fix [the lock]?”

At the end of several hours of wrangling, the judge stepped in, saying that she “declines to grant the preliminary injunction motion.” For her to do so, Proven would have to show that it was likely to win at trial, among other things; it had not.

As for the big copyright infringement claim, of which Proven had made so much hay, the judge reached a pretty obvious finding: You’re allowed to quote snippets of copyrighted videos in order to critique them.

“The purpose and character of the use to which Mr. McNally put the alleged infringed work is transformative, artistic, and a critique,” said the judge. “He is in his own way challenging and critiquing Proven’s video by the use of his own video.”

As for the amount used, it was “substantial enough but no more than is necessary to make the point that he is trying to critique Proven’s video, and I think that’s fair game and a nominative fair use circumstance.”

While Proven might convince her otherwise after a full trial, “the copyright claim fails as a basis for a demand for preliminary injunctive relief.”

As for “tortious interference” and “defamation by implication,” the judge was similarly unimpressed.

“The fact that you might have a repeat customer who is dissuaded to buy your product due to a criticism of the product is not the type of business relationship the tortious interference with business relationship concept is intended to apply,” she said.

In the end, the judge said she would see the case through to its end, if that was really what everyone wanted, but “I will pray that you all come to a resolution of the case that doesn’t require all of this. This is a capitalist market and people say what they say. As long as it’s not false, they say what they say.”

She gave Proven until July 7 to amend its complaint if it wished.

On July 7, the company dismissed the lawsuit against McNally instead.

Proven also made a highly unusual request: Would the judge please seal almost the entire court record—including the request to seal?

Court records are presumptively public, but Proven complained about a “pattern of intimidation and harassment by individuals influenced by Defendant McNally’s content.” According to the company, a key witness had already backed out of the case, saying, “Is there a way to leave my name and my companies name out of this due to concerns of potential BLOW BACK from McNally or others like him?” Another witness, who did submit a declaration, wondered, “Is this going to be public? My concern is that there may be some backlash from the other side towards my company.”

McNally’s lawyer laid into this seal request, pointing out that the company had shown no concern over these issues until it lost its bid for a preliminary injunction. Indeed, “Proven boasted to its social media followers about how it sued McNally and about how confident it was that it would prevail. Proven even encouraged people to search for the lawsuit.” Now, however, the company “suddenly discover[ed] a need for secrecy.”

The judge has not yet ruled on the request to seal.

Another way

The strange thing about the whole situation is that Proven actually knew how to respond constructively to the first McNally video. Its own response video opened with a bit of humor (the presenter drinks a can of Liquid Death), acknowledged the issue (“we’ve had a little bit of controversy in the last couple days”), and made clear that Proven could handle criticism (“we aren’t afraid of a little bit of feedback”).

The video went on to show how their locks work and provided some context on shimming attacks and their likelihood of real-world use. It ended by showing how users concerned about shimming attacks could choose more expensive but more secure lock cores that should resist the technique.

Quick, professional, non-defensive—a great way to handle controversy.

But it was all blown apart by the company’s angry social media statements, which were unprofessional and defensive, and the litigation, which was spectacularly ill-conceived as a matter of both law and policy. In the end, the case became a classic example of the Streisand Effect, in which the attempt to censor information can instead call attention to it.

Judging from the number of times the lawsuit talks about 1) ridicule and 2) harassment, it seems like the case quickly became a personal one for Proven’s owner and employees, who felt either mocked or threatened. That’s understandable, but being mocked is not illegal and should never have led to a lawsuit or a copyright claim. As for online harassment, it remains a serious and unresolved issue, but launching a personal vendetta—and on pretty flimsy legal grounds—against McNally himself was patently unwise. (Doubly so given that McNally had a huge following and had already responded to DMCA takedowns by creating further videos on the subject; this wasn’t someone who would simply be intimidated by a lawsuit.)

In the end, Proven’s lawsuit likely cost the company serious time and cash—and generated little but bad publicity.

Photo of Nate Anderson



Google has a useful quantum algorithm that outperforms a supercomputer


An approach it calls “quantum echoes” takes 13,000 times longer on a supercomputer.

The work relied on Google’s current-generation quantum hardware, the Willow chip. Credit: Google

A few years back, Google made waves when it claimed that some of its hardware had achieved quantum supremacy, performing operations that would be effectively impossible to simulate on a classical computer. That claim didn’t hold up especially well, as mathematicians later developed methods to help classical computers catch up, leading the company to repeat the work on an improved processor.

While this back-and-forth was unfolding, the field became less focused on quantum supremacy and more on two additional measures of success. The first is quantum utility, in which a quantum computer performs computations that are useful in some practical way. The second is quantum advantage, in which a quantum system completes calculations in a fraction of the time it would take a typical computer. (IBM and a startup called Pasqal have published a useful discussion about what would be required to verifiably demonstrate a quantum advantage.)

Today, Google and a large collection of academic collaborators are publishing a paper describing a computational approach that demonstrates a quantum advantage compared to current algorithms—and may actually help us achieve something useful.

Out of time

Google’s latest effort centers on something it’s calling “quantum echoes.” The approach could be described as a series of operations on the hardware qubits that make up its machine. These qubits hold a single bit of quantum information in a superposition between two values, with probabilities of finding the qubit in one value or the other when it’s measured. Each qubit is entangled with its neighbors, allowing its probability to influence those of all the qubits around it. The operations that allow computation, called gates, are ways of manipulating these probabilities. Most current hardware, including Google’s, performs manipulations on one or two qubits at a time (termed one- and two-qubit gates, respectively).

For quantum echoes, the operations involved performing a set of two-qubit gates, altering the state of the system, and later performing the reverse set of gates. On its own, this would return the system to its original state. But for quantum echoes, Google inserts single-qubit gates performed with a randomized parameter. This alters the state of the system before the reverse operations take place, ensuring that the system won’t return to exactly where it started. That explains the “echoes” portion of the name: You’re sending an imperfect copy back toward where things began, much like an echo involves the imperfect reversal of sound waves.

That’s what the process looks like in terms of operations performed on the quantum hardware. But it’s probably more informative to think of it in terms of a quantum system’s behavior. As Google’s Tim O’Brien explained, “You evolve the system forward in time, then you apply a small butterfly perturbation, and then you evolve the system backward in time.” The forward evolution is the first set of two-qubit gates, the small perturbation is the randomized one-qubit gate, and the second set of two-qubit gates is the equivalent of sending the system backward in time.

Because this is a quantum system, however, strange things happen. “On a quantum computer, these forward and backward evolutions, they interfere with each other,” O’Brien said. One way to think about that interference is in terms of probabilities. The system has multiple paths between its start point and the point of reflection—where it goes from evolving forward in time to evolving backward—and from that reflection point back to a final state. Each of those paths has a probability associated with it. And since we’re talking about quantum mechanics, those paths can interfere with each other, increasing some probabilities at the expense of others. That interference ultimately determines where the system ends up.

(Technically, these are termed “out-of-time-order correlations,” or OTOCs. If you read the Nature paper describing this work, prepare to see that term a lot.)
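To make the sequence concrete, here is a minimal sketch of the forward-perturb-reverse pattern described above, written as a toy NumPy state-vector simulation rather than anything drawn from Google’s code. A fixed random unitary stands in for the layers of two-qubit gates, a randomized single-qubit rotation plays the butterfly perturbation, and the “echo” is read off as the overlap with the starting state; the three-qubit size and all names are illustrative assumptions.

```python
# Toy state-vector model of a "quantum echo": evolve forward, apply a small
# randomized "butterfly" perturbation, evolve backward, and measure how close
# the system lands to where it started. Not Google's actual circuits.
import numpy as np

rng = np.random.default_rng(0)
n_qubits = 3
dim = 2 ** n_qubits

def random_unitary(d, rng):
    """Random unitary via QR decomposition of a complex Gaussian matrix."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def single_qubit_gate_on(gate, target, n):
    """Embed a 2x2 gate acting on `target` into the full n-qubit space."""
    op = np.array([[1.0]])
    for q in range(n):
        op = np.kron(op, gate if q == target else np.eye(2))
    return op

# Fixed "forward in time" evolution (stand-in for the layers of two-qubit gates).
U = random_unitary(dim, rng)
psi0 = np.zeros(dim, dtype=complex)
psi0[0] = 1.0  # start in |000>

echoes = []
for _ in range(2000):
    # Randomized single-qubit "butterfly" perturbation on qubit 0.
    theta = rng.uniform(0, 2 * np.pi)
    rz = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])
    B = single_qubit_gate_on(rz, target=0, n=n_qubits)

    # Forward evolution, perturbation, then the reverse ("backward in time").
    psi = U.conj().T @ (B @ (U @ psi0))
    echoes.append(abs(np.vdot(psi0, psi)) ** 2)  # overlap with the start state

print(f"mean echo overlap over random perturbations: {np.mean(echoes):.3f}")
```

Repeating the loop with many random perturbations, as the real experiment does, is what builds up the probability distribution discussed in the next section.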

Demonstrating advantage

So how do you turn quantum echoes into an algorithm? On its own, a single “echo” can’t tell you much about the system—the probabilities ensure that any two runs might show different behaviors. But if you repeat the operations multiple times, you can begin to understand the details of this quantum interference. And performing the operations on a quantum computer ensures that it’s easy to simply rerun the operations with different random one-qubit gates and get many instances of the initial and final states—and thus a sense of the probability distributions involved.

This is also where Google’s quantum advantage comes from. Everyone involved agrees that the precise behavior of a quantum echo of moderate complexity can be modeled using any leading supercomputer. But doing so is very time-consuming, so repeating those simulations a few times becomes unrealistic. The paper estimates that a measurement that took its quantum computer 2.1 hours to perform would take the Frontier supercomputer approximately 3.2 years. Unless someone devises a far better classical algorithm than what we have today, this represents a pretty solid quantum advantage.
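(For scale: 3.2 years is roughly 28,000 hours, and 28,000 divided by 2.1 is about 13,000, which matches the “13,000 times longer” figure in the subhead above.)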

But is it a useful algorithm? The repeated sampling can act a bit like the Monte Carlo sampling done to explore the behavior of a wide variety of physical systems. Typically, however, we don’t view algorithms as modeling the behavior of the underlying hardware they’re being run on; instead, they’re meant to model some other physical system we’re interested in. That’s where Google’s announcement stands apart from its earlier work—the company believes it has identified an interesting real-world physical system with behaviors that the quantum echoes can help us understand.

That system is a small molecule in a Nuclear Magnetic Resonance (NMR) machine. In a second draft paper being published on the arXiv later today, Google has collaborated with a large collection of NMR experts to explore that use.

From computers to molecules

NMR is based on the fact that the nucleus of every atom has a quantum property called spin. When nuclei are held near to each other, such as when they’re in the same molecule, these spins can influence one another. NMR uses magnetic fields and photons to manipulate these spins and can be used to infer structural details, like how far apart two given atoms are. But as molecules get larger, these spin networks can extend for greater distances and become increasingly complicated to model. So NMR has been limited to focusing on the interactions of relatively nearby spins.

For this work, though, the researchers figured out how to use an NMR machine to create the physical equivalent of a quantum echo in a molecule. The work involved synthesizing the molecule with a specific isotope of carbon (carbon-13) in a known location in the molecule. That isotope could be used as the source of a signal that propagates through the network of spins formed by the molecule’s atoms.

“The OTOC experiment is based on a many-body echo, in which polarization initially localized on a target spin migrates through the spin network, before a Hamiltonian-engineered time-reversal refocuses to the initial state,” the team wrote. “This refocusing is sensitive to perturbations on distant butterfly spins, which allows one to measure the extent of polarization propagation through the spin network.”

Naturally, something this complicated needed a catchy nickname. The team came up with TARDIS, or Time-Accurate Reversal of Dipolar InteractionS. While that name captures the “out of time order” aspect of OTOC, it’s simply a set of control pulses sent to the NMR sample that starts a perturbation of the molecule’s network of nuclear spins. A second set of pulses then reflects an echo back to the source.

The reflections that return are imperfect, with noise coming from two sources. The first is simply imperfections in the control sequence, a limitation of the NMR hardware. But the second is the influence of fluctuations happening in distant atoms along the spin network. These happen at a certain frequency at random, or the researchers could insert a fluctuation by targeting a specific part of the molecule with randomized control signals.

The influence of what’s going on in these distant spins could allow us to use quantum echoes to tease out structural information at greater distances than we currently do with NMR. But to do so, we need an accurate model of how the echoes will propagate through the molecule. And again, that’s difficult to do with classical computations. But it’s very much within the capabilities of quantum computing, which the paper demonstrates.

Where things stand

For now, the team stuck to demonstrations on very simple molecules, making this work mostly a proof of concept. But the researchers are optimistic that there are many ways the system could be used to extract structural information from molecules at distances that are currently unobtainable using NMR. They list a lot of potential upsides that should be explored in the discussion of the paper, and there are plenty of smart people who would love to find new ways of using their NMR machines, so the field is likely to figure out pretty quickly which of these approaches turns out to be practically useful.

The fact that the demonstrations were done with small molecules, however, means that the modeling run on the quantum computer could also have been done on classical hardware (it only required 15 hardware qubits). So Google is claiming both quantum advantage and quantum utility, but not at the same time. The sorts of complex, long-distance interactions that would be out of range of classical simulation are still a bit beyond the reach of the current quantum hardware. O’Brien estimated that the hardware’s fidelity would have to improve by a factor of three or four to model molecules that are beyond classical simulation.

The quantum advantage issue should also be seen as a work in progress. Google has collaborated with enough researchers at enough institutions that there’s unlikely to be a major improvement in algorithms that could allow classical computers to catch up. Until the community as a whole has some time to digest the announcement, though, we shouldn’t take that as a given.

The other issue is verifiability. Some quantum algorithms will produce results that can be easily verified on classical hardware—situations where it’s hard to calculate the right result but easy to confirm a correct answer. Quantum echoes isn’t one of those, so we’ll need another quantum computer to verify the behavior Google has described.

But Google told Ars nothing is up to the task yet. “No other quantum processor currently matches both the error rates and number of qubits of our system, so our quantum computer is the only one capable of doing this at present,” the company said. (For context, Google says that the algorithm was run on up to 65 qubits, but the chip has 105 qubits total.)

There’s a good chance that other companies would disagree with that contention, but it hasn’t been possible to ask them ahead of the paper’s release.

In any case, even if this claim proves controversial, Google’s Michel Devoret, a recent Nobel winner, hinted that we shouldn’t have long to wait for additional ones. “We have other algorithms in the pipeline, so we will hopefully see other interesting quantum algorithms,” Devoret said.

Nature, 2025. DOI: 10.1038/s41586-025-09526-6  (About DOIs).

Photo of John Timmer

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



Should an AI copy of you help decide if you live or die?

“It would combine demographic and clinical variables, documented advance-care-planning data, patient-recorded values and goals, and contextual information about specific decisions,” he said.

“Including textual and conversational data could further increase a model’s ability to learn why preferences arise and change, not just what a patient’s preference was at a single point in time,” Starke said.

Ahmad suggested that future research could focus on validating fairness frameworks in clinical trials, evaluating moral trade-offs through simulations, and exploring how cross-cultural bioethics can be combined with AI designs.

Only then might AI surrogates be ready to be deployed, but only as “decision aids,” Ahmad wrote. Any “contested outputs” should automatically “trigger [an] ethics review,” Ahmad wrote, concluding that “the fairest AI surrogate is one that invites conversation, admits doubt, and leaves room for care.”

“AI will not absolve us”

Ahmad is hoping to test his conceptual models at various UW sites over the next five years, which would offer “some way to quantify how good this technology is,” he said.

“After that, I think there’s a collective decision regarding how as a society we decide to integrate or not integrate something like this,” Ahmad said.

In his paper, he warned against chatbot AI surrogates that could be interpreted as a simulation of the patient, predicting that future models may even speak in patients’ voices and suggesting that the “comfort and familiarity” of such tools might blur “the boundary between assistance and emotional manipulation.”

Starke agreed that more research and “richer conversations” between patients and doctors are needed.

“We should be cautious not to apply AI indiscriminately as a solution in search of a problem,” Starke said. “AI will not absolve us from making difficult ethical decisions, especially decisions concerning life and death.”

Truog, the bioethics expert, told Ars he “could imagine that AI could” one day “provide a surrogate decision maker with some interesting information, and it would be helpful.”

But a “problem with all of these pathways… is that they frame the decision of whether to perform CPR as a binary choice, regardless of context or the circumstances of the cardiac arrest,” Truog’s editorial said. “In the real world, the answer to the question of whether the patient would want to have CPR” when they’ve lost consciousness, “in almost all cases,” is “it depends.”

When Truog thinks about the kinds of situations he could end up in, he knows he wouldn’t just be considering his own values, health, and quality of life. His choice “might depend on what my children thought” or “what the financial consequences would be on the details of what my prognosis would be,” he told Ars.

“I would want my wife or another person that knew me well to be making those decisions,” Truog said. “I wouldn’t want somebody to say, ‘Well, here’s what AI told us about it.’”



Yes, everything online sucks now—but it doesn’t have to


from good to bad to nothing

Ars chats with Cory Doctorow about his new book Enshittification.

We all feel it: Our once-happy digital spaces have become increasingly less user-friendly and more toxic, cluttered with extras nobody asked for and hardly anybody wants. There’s even a word for it: “enshittification,” named 2023 Word of the Year by the American Dialect Society. The term was coined by tech journalist/science fiction author Cory Doctorow, a longtime advocate of digital rights. Doctorow has spun his analysis of what’s been ailing the tech industry into an eminently readable new book, Enshittification: Why Everything Suddenly Got Worse and What To Do About It.

As Doctorow tells it, he was on vacation in Puerto Rico, staying in a remote cabin nestled in a cloud forest with microwave Internet service—i.e., very bad Internet service, since microwave signals struggle to penetrate through clouds. It was a 90-minute drive to town, but when they tried to consult TripAdvisor for good local places to have dinner one night, they couldn’t get the site to load. “All you would get is the little TripAdvisor logo as an SVG filling your whole tab and nothing else,” Doctorow told Ars. “So I tweeted, ‘Has anyone at TripAdvisor ever been on a trip? This is the most enshittified website I’ve ever used.’”

Initially, he just got a few “haha, that’s a funny word” responses. “It was when I married that to this technical critique, at a moment when things were quite visibly bad to a much larger group of people, that made it take off,” Doctorow said. “I didn’t deliberately set out to do it. I bought a million lottery tickets and one of them won the lottery. It only took two decades.”

Yes, people sometimes express regret to him that the term includes a swear word. To which he responds, “You’re welcome to come up with another word. I’ve tried. ‘Platform decay’ just isn’t as good.” (“Encrapification” and “enpoopification” also lack a certain je ne sais quoi.)

In fact, it’s the sweariness that people love about the word. While that also means his book title inevitably gets bleeped on broadcast radio, “The hosts, in my experience, love getting their engineers to creatively bleep it,” said Doctorow. “They find it funny. It’s good radio, it stands out when every fifth word is ‘enbeepification.’”

People generally use “enshittification” colloquially to mean “the degradation in the quality and experience of online platforms over time.” Doctorow’s definition is more specific, encompassing “why an online service gets worse, how that worsening unfolds,” and how this process spreads to other online services, such that everything is getting worse all at once.

For Doctorow, enshittification is a disease with symptoms, a mechanism, and an epidemiology. It has infected everything from Facebook, Twitter, Amazon, and Google, to Airbnb, dating apps, iPhones, and everything in between. “For me, the fact that there were a lot of platforms that were going through this at the same time is one of the most interesting and important factors in the critique,” he said. “It makes this a structural issue and not a series of individual issues.”

It starts with the creation of a new two-sided online product of high quality, initially offered at a loss to attract users—say, Facebook, to pick an obvious example. Once the users are hooked on the product, the vendor moves to the second stage: degrading the product in some way for the benefit of their business customers. This might include selling advertisements, scraping and/or selling user data, or tweaking algorithms to prioritize content the vendor wishes users to see rather than what those users actually want.

This locks in the business customers, who, in turn, invest heavily in that product, such as media companies that started Facebook pages to promote their published content. Once business customers are locked in, the vendor can degrade those services too—i.e., by de-emphasizing news and links away from Facebook—to maximize profits to shareholders. Voila! The product is now enshittified.

The four horsemen of the shitocalypse

Doctorow identifies four key factors that have played a role in ushering in an era that he has dubbed the “Enshittocene.” The first is competition (markets), in which companies are motivated to make good products at affordable prices, with good working conditions, because otherwise customers and workers will go to their competitors.  The second is government regulation, such as antitrust laws that serve to keep corporate consolidation in check, or levying fines for dishonest practices, which makes it unprofitable to cheat.

The third is interoperability: the inherent flexibility of digital tools, which can play a useful adversarial role. “The fact that enshittification can always be reversed with a dis-enshittifiting counter-technology always acted as a brake on the worst impulses of tech companies,” Doctorow writes. Finally, there is labor power; in the case of the tech industry, highly skilled workers were scarce and thus had considerable leverage over employers.

All four factors, when functioning correctly, should serve as constraints to enshittification. However, “One by one each enshittification restraint was eroded until it dissolved, leaving the enshittification impulse unchecked,” Doctorow writes. Any “cure” will require reversing those well-established trends.

But isn’t all this just the nature of capitalism? Doctorow thinks it’s not, arguing that the aforementioned weakening of traditional constraints has resulted in the usual profit-seeking behavior producing very different, enshittified outcomes. “Adam Smith has this famous passage in Wealth of Nations about how it’s not due to the generosity of the baker that we get our bread but to his own self-regard,” said Doctorow. “It’s the fear that you’ll get your bread somewhere else that makes him keep prices low and keep quality high. It’s the fear of his employees leaving that makes him pay them a fair wage. It is the constraints that causes firms to behave better. You don’t have to believe that everything should be a capitalist or a for-profit enterprise to acknowledge that that’s true.”

Our wide-ranging conversation below has been edited for length to highlight the main points of discussion.

Ars Technica: I was intrigued by your choice of framing device, discussing enshittification as a form of contagion. 

Cory Doctorow: I’m on a constant search for different framing devices for these complex arguments. I have talked about enshittification in lots of different ways. That frame was one that resonated with people. I’ve been a blogger for a quarter of a century, and instead of keeping notes to myself, I make notes in public, and I write up what I think is important about something that has entered my mind, for better or for worse. The downside is that you’re constantly getting feedback that can be a little overwhelming. The upside is that you’re constantly getting feedback, and if you pay attention, it tells you where to go next, what to double down on.

Another way of organizing this is the Galaxy Brain meme, where the tiny brain is “Oh, this is because consumers shopped wrong.” The medium brain is “This is because VCs are greedy.” The larger brain is “This is because tech bosses are assholes.” But the biggest brain of all is “This is because policymakers created the policy environment where greed can ruin our lives.” There’s probably never going to be just one way to talk about this stuff that lands with everyone. So I like using a variety of approaches. I suck at being on message. I’m not going to do Enshittification for the Soul and Mornings with Enshittifying Maury. I am restless, and my Myers-Briggs type is ADHD, and I want to have a lot of different ways of talking about this stuff.

Ars Technica: One site that hasn’t (yet) succumbed is Wikipedia. What has protected Wikipedia thus far? 

Cory Doctorow: Wikipedia is an amazing example of what we at the Electronic Frontier Foundation (EFF) call the public interest Internet. Internet Archive is another one. Most of these public interest Internet services start off as one person’s labor of love, and that person ends up being what we affectionately call the benevolent dictator for life. Very few of these projects have seen the benevolent dictator for life say, “Actually, this is too important for one person to run. I cannot be the keeper of the soul of this project. I am prone to self-deception and folly just like every other person. This needs to belong to its community.” Wikipedia is one of them. The founder, my friend Jimmy Wales, woke up one day and said, “No individual should run Wikipedia. It should be a communal effort.”

There’s a much more durable and thick constraint on the decisions of anyone at Wikipedia to do something bad. For example, Jimmy had this idea that you could use AI in Wikipedia to help people make entries and navigate Wikipedia’s policies, which are daunting. The community evaluated his arguments and decided—not in a reactionary way, but in a really thoughtful way—that this was wrong. Jimmy didn’t get his way. It didn’t rule out something in the future, but that’s not happening now. That’s pretty cool.

Wikipedia is not just governed by a board; it’s also structured as a nonprofit. That doesn’t mean that there’s no way it could go bad. But it’s a source of friction against enshittification. Wikipedia has its entire corpus irrevocably licensed as the most open it can be without actually being in the public domain. Even if someone were to capture Wikipedia, there’s limits on what they could do to it.

There’s also a labor constraint in Wikipedia in that there’s very little that the leadership can do without bringing along a critical mass of a large and diffuse body of volunteers. That cuts against the volunteers working in unison—they’re not represented by a union; it’s hard for them to push back with one voice. But because they’re so diffuse and because there’s no paychecks involved, it’s really hard for management to do bad things. So if there are two people vying for the job of running the Wikimedia Foundation and one of them has got nefarious plans and the other doesn’t, the nefarious plan person, if they’re smart, is going to give it up—because if they try to squeeze Wikipedia, the harder they squeeze, the more it will slip through their grasp.

So these are structural defenses against enshittification of Wikipedia. I don’t know that it was in the mechanism design—I think they just got lucky—but it is a template for how to run such a project. It does raise this question: How do you build the community? But if you have a community of volunteers around a project, it’s a model of how to turn that project over to that community.

Ars Technica: Your case studies naturally include the decay of social media, notably Facebook and the social media site formerly known as Twitter. How might newer social media platforms resist the spiral into “platform decay”?

Cory Doctorow: What you want is a foundation in which people on social media face few switching costs. If the social media is interoperable, if it’s federatable, then it’s much harder for management to make decisions that are antithetical to the interests of users. If they do, users can escape. And it sets up an internal dynamic within the firm, where the people who have good ideas don’t get shouted down by the people who have bad but more profitable ideas, because it makes those bad ideas unprofitable. It creates both short and long-term risks to the bottom line.

There has to be a structure that stops their investors from pressurizing them into doing bad things, that stops them from rationalizing their way into complying. I think there’s this pathology where you start a company, you convince 150 of your friends to risk their kids’ college fund and their mortgage working for you. You make millions of users really happy, and your investors come along and say, “You have to destroy the life of 5 percent of your users with some change.” And you’re like, “Well, I guess the right thing to do here is to sacrifice those 5 percent, keep the other 95 percent happy, and live to fight another day, because I’m a good guy. If I quit over this, they’ll just put a bad guy in who’ll wreck things. I keep those 150 people working. Not only that, I’m kind of a martyr because everyone thinks I’m a dick for doing this. No one understands that I have taken the tough decision.”

I think that’s a common pattern among people who, in fact, are quite ethical but are also capable of rationalizing their way into bad things. I am very capable of rationalizing my way into bad things. This is not an indictment of someone’s character. But it’s why, before you go on a diet, you throw away the Oreos. It’s why you bind yourself to what behavioral economists call “Ulysses pacts“: You tie yourself to the mast before you go into the sea of sirens, not because you’re weak but because you’re strong enough now to know that you’ll be weak in the future.

I have what I would call the epistemic humility to say that I don’t know what makes a good social media network, but I do know what makes it so that when they go bad, you’re not stuck there. You and I might want totally different things out of our social media experience, but I think that you should 100 percent have the right to go somewhere else without losing anything. The easier it is for you to go without losing something, the better it is for all of us.

My dream is a social media universe where knowing what network someone is using is just a weird curiosity. It’d be like knowing which cell phone carrier your friend is using when you give them a call. It should just not matter. There might be regional or technical reasons to use one network or another, but it shouldn’t matter to anyone other than the user what network they’re using. A social media platform where it’s always easier for users to leave is much more future-proof and much more effective than trying to design characteristics of good social media.

Ars Technica: How might this work in practice?

Cory Doctorow: I think you just need a protocol. This is [Mike] Masnick’s point: protocols, not products. We don’t need a universal app to make email work. We don’t need a universal app to make the web work. I always think about this in the context of administrable regulation. Making a rule that says your social media network must be good for people to use and must not harm their mental health is impossible. The fact intensivity of determining whether a platform satisfies that rule makes it a non-starter.

Whereas if you were to say, “OK, you have to support an existing federation protocol, like AT Protocol and Mastodon ActivityPub,” both have ways to port identity from one place to another and have messages auto-forward. This is also in RSS. There’s a permanent redirect directive. You do that, you’re in compliance with the regulation.

Or you have to do something that satisfies the functional requirements of the spec. So it’s not “did you make someone sad in a way that was reckless?” That is a very hard question to adjudicate. Did you satisfy these functional requirements? It’s not easy to answer that, but it’s not impossible. If you want to have our users be able to move to your platform, then you just have to support the spec that we’ve come up with, which satisfies these functional requirements.

We don’t have to have just one protocol. We can have multiple ones. Not everything has to connect to everything else, but everyone who wants to connect should be able to connect to everyone else who wants to connect. That’s end-to-end. End-to-end is not “you are required to listen to everything someone wants to tell you.” It’s that willing parties should be connected when they want to be.

Ars Technica: What about security and privacy protocols like GPG and PGP?

Cory Doctorow: There’s this argument that the reason GPG is so hard to use is that it’s intrinsic; you need a closed system to make it work. But also, until pretty recently, GPG was supported by one part-time guy in Germany who got 30,000 euros a year in donations to work on it, and he was supporting 20 million users. He was primarily interested in making sure the system was secure rather than making it usable. If you were to put Big Tech quantities of money behind improving ease of use for GPG, maybe you decide it’s a dead end because it is a 30-year-old attempt to stick a security layer on top of SMTP. Maybe there’s better ways of doing it. But I doubt that we have reached the apex of GPG usability with one part-time volunteer.

I just think there’s plenty of room there. If you have a pretty good project that is run by a large firm and has had billions of dollars put into it, the most advanced technologists and UI experts working on it, and you’ve got another project that has never been funded and has only had one volunteer on it—I would assume that dedicating resources to that second one would produce pretty substantial dividends, whereas the first one is only going to produce these minor tweaks. How much more usable does iOS get with every iteration?

I don’t know if PGP is the right place to start to make privacy, but I do think that if we can create independence of the security layer from the transport layer, which is what PGP is trying to do, then it wouldn’t matter so much that there is end-to-end encryption in Mastodon DMs or in Bluesky DMs. And again, it doesn’t matter whose sim is in your phone, so it just shouldn’t matter which platform you’re using so long as it’s secure and reliably delivered end-to-end.

Ars Technica: These days, I’m almost contractually required to ask about AI. There’s no escaping it. But it’s certainly part of the ongoing enshittification.

Cory Doctorow: I agree. Again, the companies are too big to care. They know you’re locked in, and the things that make enshittification possible—like remote software updating, ongoing analytics of use of devices—they allow for the most annoying AI dysfunction. I call it the fat-finger economy, where you have someone who works in a company on a product team, and their KPI, and therefore their bonus and compensation, is tied to getting you to use AI a certain number of times. So they just look at the analytics for the app and they ask, “What button gets pushed the most often? Let’s move that button somewhere else and make an AI summoning button.”

They’re just gaming a metric. It’s causing significant across-the-board regressions in the quality of the product, and I don’t think it’s justified by people who then discover a new use for the AI. That’s a paternalistic justification. The user doesn’t know what they want until you show it to them: “Oh, if I trick you into using it and you keep using it, then I have actually done you a favor.” I don’t think that’s happening. I don’t think people are like, “Oh, rather than press reply to a message and then type a message, I can instead have this interaction with an AI about how to send someone a message about takeout for dinner tonight.” I think people are like, “That was terrible. I regret having tapped it.” 

The speech-to-text is unusable now. I flatter myself that my spoken and written communication is not statistically average. The things that make it me and that make it worth having, as opposed to just a series of multiple-choice answers, is all the ways in which it diverges from statistical averages. Back when the model was stupider, when it gave up sooner if it didn’t recognize what word it might be and just transcribed what it thought you’d said rather than trying to substitute a more probable word, it was more accurate.  Now, what I’m getting are statistically average words that are meaningless.

That elision of nuance and detail is characteristic of what makes AI products bad. There is a bunch of stuff that AI is good at that I’m excited about, and I think a lot of it is going to survive the bubble popping. But I fear that we’re not planning for that. I fear what we’re doing is taking workers whose jobs are meaningful, replacing them with AIs that can’t do their jobs, and then those AIs are going to go away and we’ll have nothing. That’s my concern.

Ars Technica: You prescribe a “cure” for enshittification, but in such a polarized political environment, do we even have the collective will to implement the necessary policies?

Cory Doctorow: The good news is also the bad news, which is that this doesn’t just affect tech. Take labor power. There are a lot of tech workers who are looking at the way their bosses treat the workers they’re not afraid of—Amazon warehouse workers and drivers, Chinese assembly line manufacturers for iPhones—and realizing, “Oh, wait, when my boss stops being afraid of me, this is how he’s going to treat me.” Mark Zuckerberg stopped going to those all-hands town hall meetings with the engineering staff. He’s not pretending that you are his peers anymore. He doesn’t need to; he’s got a critical mass of unemployed workers he can tap into. I think a lot of Googlers figured this out after the 12,000-person layoffs. Tech workers are realizing they missed an opportunity, that they’re going to have to play catch-up, and that the only way to get there is by solidarity with other kinds of workers.

The same goes for competition. There’s a bunch of people who care about media, who are watching Warner about to swallow Paramount and who are saying, “Oh, this is bad. We need antitrust enforcement here.” When we had a functional antitrust system for the last four years, we saw a bunch of telecoms mergers stopped because once you start enforcing antitrust, it’s like eating Pringles. You just can’t stop. You embolden a lot of people to start thinking about market structure as a source of either good or bad policy. The real thing that happened with [former FTC chair] Lina Khan doing all that merger scrutiny was that people just stopped planning mergers.

There are a lot of people who benefit from this. It’s not just tech workers or tech users; it’s not just media users. Hospital consolidation, pharmaceutical consolidation, has a lot of people who are very concerned about it. Mark Cuban is freaking out about pharmacy benefit manager consolidation and vertical integration with HMOs, as he should be. I don’t think that we’re just asking the anti-enshittification world to carry this weight.

Same with the other factors. The best progress we’ve seen on interoperability has been through right-to-repair. It hasn’t been through people who care about social media interoperability. One of the first really good state-level right-to-repair bills was the one that [Governor] Jared Polis signed in Colorado for powered wheelchairs. Those people have a story that is much more salient to normies.

“What do you mean you spent six months in bed because there’s only two powered wheelchair manufacturers and your chair broke and you weren’t allowed to get it fixed by a third party?” And they’ve slashed their repair department, so it takes six months for someone to show up and fix your chair. So you had bed sores and pneumonia because you couldn’t get your chair fixed. This is bullshit.

So the coalitions are quite large. The thing that all of those forces share—interoperability, labor power, regulation, and competition—is that they’re all downstream of corporate consolidation and wealth inequality. Figuring out how to bring all of those different voices together, that’s how we resolve this. In many ways, the enshittification analysis and remedy are a human factors and security approach to designing an enshittification-resistant Internet. It’s about understanding this as a red team, blue team exercise. How do we challenge the status quo that we have now, and how do we defend the status quo that we want?

Anything that can’t go on forever eventually stops. That is the first law of finance, Stein’s law. We are reaching multiple breaking points, and the question is whether we reach things like breaking points for the climate and for our political system before we reach breaking points for the forces that would rescue those from permanent destruction.

Photo of Jennifer Ouellette

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



Why Signal’s post-quantum makeover is an amazing engineering achievement


COMING TO A PHONE NEAR YOU

New design sets a high standard for post-quantum readiness.

Credit: Aurich Lawson | Getty Images

The encryption protecting communications against criminal and nation-state snooping is under threat. As private industry and governments get closer to building useful quantum computers, the algorithms protecting Bitcoin wallets, encrypted web visits, and other sensitive secrets will be useless. No one doubts the day will come, but as the now-common joke in cryptography circles observes, experts have been forecasting this cryptocalypse will arrive in the next 15 to 30 years for the past 30 years.

The uncertainty has created something of an existential dilemma: Should network architects spend the billions of dollars required to wean themselves off quantum-vulnerable algorithms now, or should they devote their limited security budgets to fighting more immediate threats such as ransomware and espionage attacks? Given the expense and the lack of a clear deadline, it’s little wonder that less than half of all TLS connections made inside the Cloudflare network and only 18 percent of Fortune 500 networks support quantum-resistant TLS connections. It’s all but certain that far fewer organizations support quantum-ready encryption in less prominent protocols.

Triumph of the cypherpunks

One exception to the industry-wide lethargy is the engineering team that designs the Signal Protocol, the open source engine that powers the world’s most robust and resilient form of end-to-end encryption for multiple private chat apps, most notably the Signal Messenger. Eleven days ago, the nonprofit entity that develops the protocol, Signal Messenger LLC, published a 5,900-word write-up describing its latest updates that make Signal fully quantum-resistant.

The complexity and problem-solving required for making the Signal Protocol quantum safe are as daunting as just about any in modern-day engineering. The original Signal Protocol already resembled the inside of a fine Swiss timepiece, with countless gears, wheels, springs, hands, and other parts all interoperating in an intricate way. In less adept hands, mucking about with an instrument as complex as the Signal protocol could have led to shortcuts or unintended consequences that hurt performance, undoing what would otherwise be a perfectly running watch. Yet this latest post-quantum upgrade (the first one came in 2023) is nothing short of a triumph.

“This appears to be a solid, thoughtful improvement to the existing Signal Protocol,” said Brian LaMacchia, a cryptography engineer who oversaw Microsoft’s post-quantum transition from 2015 to 2022 and now works at Farcaster Consulting Group. “As part of this work, Signal has done some interesting optimization under the hood so as to minimize the network performance impact of adding the post-quantum feature.”

Of the multiple hurdles to clear, the most challenging was accounting for the much larger key sizes that quantum-resistant algorithms require. The overhaul here adds protections based on ML-KEM-768, an implementation of the CRYSTALS-Kyber algorithm that was selected in 2022 and formalized last year by the National Institute of Standards and Technology. ML-KEM is short for Module-Lattice-Based Key-Encapsulation Mechanism, but most of the time, cryptographers refer to it simply as KEM.

Ratchets, ping-pong, and asynchrony

Like the elliptic curve Diffie-Hellman (ECDH) protocol that Signal has used since its start, KEM is a key encapsulation mechanism. Also known as a key agreement mechanism, it provides the means for two parties who have never met to securely agree on one or more shared secrets in the presence of an adversary who is monitoring the parties’ connection. RSA, ECDH, and other encapsulation algorithms have long been used to negotiate symmetric keys (almost always AES keys) in protocols including TLS, SSH, and IKE. Unlike ECDH and RSA, however, the much newer KEM is quantum-safe.
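To make the encapsulation flow concrete, here is a minimal sketch of the KEM interface: generate a keypair, encapsulate to the public key, and decapsulate with the private key. It is a toy built on X25519 via the Python cryptography package purely to show the pattern; ML-KEM’s lattice-based internals are entirely different, and this stand-in is not quantum-safe.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def _kdf(shared: bytes) -> bytes:
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"toy-kem").derive(shared)

def kem_keygen():
    sk = X25519PrivateKey.generate()
    return sk, sk.public_key()          # (decapsulation key, encapsulation key)

def kem_encaps(peer_pub):
    eph = X25519PrivateKey.generate()   # fresh ephemeral key per encapsulation
    return eph.public_key(), _kdf(eph.exchange(peer_pub))

def kem_decaps(sk, encapsulated_pub):
    return _kdf(sk.exchange(encapsulated_pub))

# Bob publishes a key; Alice encapsulates to it; both derive the same secret.
bob_sk, bob_pk = kem_keygen()
ciphertext_stand_in, alice_secret = kem_encaps(bob_pk)
assert kem_decaps(bob_sk, ciphertext_stand_in) == alice_secret
```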

Key agreement in a protocol like TLS is relatively straightforward. That’s because devices connecting over TLS negotiate a key over a single handshake that occurs at the beginning of a session. The agreed-upon AES key is then used throughout the session. The Signal Protocol is different. Unlike TLS sessions, Signal sessions are protected by forward secrecy, a cryptographic property that ensures a compromised key used to encrypt a recent set of messages can’t be used to decrypt an earlier set of messages. The protocol also offers Post-Compromise Security, which protects future messages from past key compromises. While a TLS session uses the same key throughout, keys within a Signal session constantly evolve.

To provide these confidentiality guarantees, the Signal Protocol updates secret key material each time a messaging party hits the send button or receives a message, and at other points, such as in graphical indicators that a party is currently typing and in the sending of read receipts. The mechanism that has made this constant key evolution possible over the past decade is what protocol developers call a “double ratchet.” Just as a traditional ratchet allows a gear to rotate in one direction but not in the other, the Signal ratchets allow messaging parties to create new keys based on a combination of preceding and newly agreed-upon secrets. The ratchets work in a single direction: the sending and receiving of future messages. Even if an adversary compromises a newly created secret, messages encrypted using older secrets can’t be decrypted.

The starting point is a handshake that performs three or four ECDH agreements that mix long- and short-term secrets to establish a shared secret. The creation of this “root key” allows the Double Ratchet to begin. Until 2023, this handshake used X3DH; it now uses PQXDH, which makes the initial key agreement quantum-resistant.

The first layer of the Double Ratchet, the Symmetric Ratchet, derives an AES key from the root key and advances it for every message sent. This allows every message to be encrypted with a new secret key. Consequently, if attackers compromise one party’s device, they won’t be able to learn anything about the keys that came earlier. Even then, though, the attackers would still be able to compute the keys used in future messages. That’s where the second, “Diffie-Hellman ratchet” comes in.
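The one-way nature of that symmetric ratchet can be sketched with nothing more than an HMAC-based chain, as below. The constants and structure are illustrative assumptions rather than Signal’s exact key derivation, but they show why the chain only turns one way: deriving the next key from the current one is easy, while recovering an earlier key would require inverting HMAC-SHA256.

```python
import hashlib
import hmac

def symmetric_ratchet_step(chain_key: bytes):
    """Advance the sending chain one click: derive a fresh per-message key and
    the next chain key from the current chain key."""
    message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    next_chain_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return message_key, next_chain_key

chain = hashlib.sha256(b"stand-in root key material").digest()
for _ in range(3):                       # one click per message sent
    msg_key, chain = symmetric_ratchet_step(chain)
```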

The Diffie-Hellman ratchet incorporates a new ECDH public key into each message sent. Take Alice and Bob, the fictional characters often used to explain asymmetric encryption: when Alice sends Bob a message, she creates a new ratchet keypair and computes the ECDH agreement between this key and the last ratchet public key Bob sent. This gives her a new secret, and she knows that once Bob gets her new public key, he will know this secret, too (because, as mentioned earlier, Bob previously sent that other key). With that, Alice can mix the new secret with her old root key to get a new root key and start fresh. The result: Attackers who learn her old secrets won’t be able to tell the difference between her new ratchet keys and random noise.
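One step of that Diffie-Hellman ratchet looks roughly like the sketch below, which uses X25519 from the Python cryptography package. The HKDF label and the way the output is split are simplified assumptions; the point is the mixing of a fresh ECDH secret into the root key, not Signal’s exact derivation.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def dh_ratchet_step(root_key: bytes, my_new_priv, their_latest_pub):
    """Mix a fresh ECDH output into the root key, yielding a new root key and
    a new sending-chain key."""
    dh_out = my_new_priv.exchange(their_latest_pub)
    okm = HKDF(algorithm=hashes.SHA256(), length=64, salt=root_key,
               info=b"dh-ratchet").derive(dh_out)
    return okm[:32], okm[32:]            # (new root key, new chain key)

# Alice replies to Bob: she generates a fresh ratchet key pair and mixes its
# ECDH agreement with Bob's last ratchet public key into her root key.
bob_ratchet = X25519PrivateKey.generate()
alice_new = X25519PrivateKey.generate()
root = b"\x00" * 32                      # stand-in for the current root key
root, sending_chain = dh_ratchet_step(root, alice_new, bob_ratchet.public_key())
```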

The result is what Signal developers describe as “ping-pong” behavior, as the parties to a discussion take turns replacing ratchet key pairs one at a time. The effect: An eavesdropper who compromises one of the parties might recover a current ratchet private key, but soon enough, that private key will be replaced with a new, uncompromised one, and in a way that keeps it free from the prying eyes of the attacker.

The objective of the newly generated keys is to limit the number of messages that can be decrypted if an adversary recovers key material at some point in an ongoing chat. Messages sent prior to and after the compromise will remain off limits.

A major challenge designers of the Signal Protocol face is the need to make the ratchets work in an asynchronous environment. Asynchronous messages occur when parties send or receive them at different times—such as while one is offline and the other is active, or vice versa—without either needing to be present or respond immediately. The entire Signal Protocol must work within this asynchronous environment. What’s more, it must work reliably over unstable networks and networks controlled by adversaries, such as a government that forces a telecom or cloud service to spy on the traffic.

Shor’s algorithm lurking

By all accounts, Signal’s double ratchet design is state-of-the-art. That said, it’s wide open to an inevitable if not immediate threat: quantum computing. That’s because an adversary capable of monitoring traffic passing from two or more messenger users can capture that data and feed it into a quantum computer—once one of sufficient power is viable—and calculate the ephemeral keys generated in the second ratchet.

In classical computing, it’s infeasible, if not impossible, for such an adversary to calculate the key. Like all asymmetric encryption algorithms, ECDH is based on a mathematical one-way function. Also known as trapdoor functions, these problems are trivial to compute in one direction and substantially harder to compute in reverse. In elliptic curve cryptography, this one-way function is based on the discrete logarithm problem. The key parameters are derived from specific points on an elliptic curve over the field of integers modulo some prime P.

On average, an adversary equipped with only a classical computer would spend billions of years guessing integers before arriving at the right ones. A quantum computer, by contrast, would be able to calculate the correct integers in a matter of hours or days. Shor’s algorithm—which runs only on a quantum computer—effectively turns this one-way discrete logarithm function into a two-way one. Shor’s algorithm can similarly make quick work of solving the one-way function that’s the basis for the RSA algorithm.
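The asymmetry is easy to see in a toy setting. The brute-force search below recovers a discrete logarithm over ordinary modular arithmetic rather than an elliptic curve, and it only succeeds because the modulus is tiny; scale the group up to the roughly 252-bit order used by X25519 and the same loop becomes hopeless for classical hardware, while Shor’s algorithm would still finish in polynomial time on a quantum computer.

```python
def brute_force_dlog(g: int, y: int, p: int):
    """Find x such that g**x % p == y by trying every exponent in turn."""
    acc = 1
    for x in range(p):
        if acc == y:
            return x
        acc = acc * g % p
    return None

p, g, secret = 1000003, 2, 123456
y = pow(g, secret, p)             # the "public" value; easy to compute
print(brute_force_dlog(g, y, p))  # prints 123456, feasible only because p is tiny
```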

As noted earlier, the Signal Protocol received its first post-quantum makeover in 2023. This update added PQXDH—a Signal-specific implementation that combined the key agreements from elliptic curves used in X3DH (specifically X25519) and the quantum-safe KEM—in the initial protocol handshake. (X3DH was then put out to pasture as a standalone implementation.)

The move foreclosed the possibility of a quantum attack being able to recover the symmetric key used to start the ratchets, but the ephemeral keys established in the ping-ponging second ratchet remained vulnerable to a quantum attack. Signal’s latest update adds quantum resistance to these keys, ensuring that forward secrecy and post-compromise security are safe from Shor’s algorithm as well.

Even though the ping-ponging keys are vulnerable to future quantum attacks, they are broadly believed to be secure against today’s attacks from classical computers. The Signal Protocol developers didn’t want to remove them or the battle-tested code that produces them. That led to their decision to add quantum resistance by adding a third ratchet. This one uses a quantum-safe KEM to produce new secrets much like the Diffie-Hellman ratchet did before, ensuring quantum-safe, post-compromise security.

Meeting the technical challenges was anything but easy. Elliptic curve keys generated in the X25519 implementation are about 32 bytes long, small enough to be added to each message without creating a burden on already constrained bandwidths or computing resources. An ML-KEM-768 key, by contrast, is more than 1,000 bytes. Additionally, Signal’s design requires sending both an encapsulation key and a ciphertext, making the total size 2272 bytes.

And then there were three

To handle the 71x increase, Signal developers considered a variety of options. One was to send the 2272-byte KEM key less often—say every 50th message or once every week—rather than every message. That idea was nixed because it doesn’t work well in asynchronous or adversarial messaging environments. Signal Protocol developers Graeme Connell and Rolfe Schmidt explained:

Consider the case of “send a key if you haven’t sent one in a week”. If Bob has been offline for 2 weeks, what does Alice do when she wants to send a message? What happens if we can lose messages, and we lose the one in fifty that contains a new key? Or, what happens if there’s an attacker in the middle that wants to stop us from generating new secrets, and can look for messages that are [many] bytes larger than the others and drop them, only allowing keyless messages through?

Another option Signal engineers considered was breaking the 2272-byte key into smaller chunks, say 71 of them at 32 bytes each. Breaking up the KEM key into smaller chunks and putting one in each message sounds like a viable approach at first, but once again, the asynchronous environment of messaging made it unworkable. What happens, for example, when data loss causes one of the chunks to be dropped? The protocol could deal with this scenario by re-sending chunks after all 71 have gone out. But then an adversary monitoring the traffic could simply cause packet 3 to be dropped each time, preventing Alice and Bob from completing the key exchange.

Signal developers ultimately went with a solution built on this multiple-chunks approach, paired with a way to make it robust against lost packets.

Sneaking an elephant through the cat door

To manage the asynchrony challenges, the developers turned to “erasure codes,” a method of breaking up larger data into smaller pieces such that the original can be reconstructed using any sufficiently sized subset of chunks.

Charlie Jacomme, a researcher at INRIA Nancy on the Pesto team who focuses on formal verification and secure messaging, said this design accounts for packet loss by building redundancy into the chunked material. Instead of requiring all x chunks to be successfully received before the key can be reconstructed, the model needs only x-y of them, where y is the acceptable number of packets lost. As long as that threshold is met, the new key can be established even when packet loss occurs.
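A toy version of that idea is a Reed-Solomon-style erasure code over a prime field: treat the k data words as coefficients of a polynomial, send the polynomial’s value at n distinct points, and rebuild the coefficients from any k surviving points using Lagrange interpolation. This sketch only illustrates the “any k of n chunks suffice” property; Signal’s actual encoding and parameters differ.

```python
P = 2**31 - 1  # prime modulus for the toy finite field

def poly_mul(a, b):
    """Multiply two polynomials (coefficient lists, lowest degree first) mod P."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def encode(data, n):
    """Turn k data words into n chunks; any k of them can rebuild the data."""
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(data)) % P)
            for x in range(1, n + 1)]

def decode(chunks, k):
    """Recover the polynomial's k coefficients from any k chunks."""
    xs, ys = zip(*chunks[:k])
    coeffs = [0] * k
    for j in range(k):
        basis, denom = [1], 1
        for m in range(k):
            if m != j:
                basis = poly_mul(basis, [-xs[m] % P, 1])    # multiply by (x - x_m)
                denom = denom * (xs[j] - xs[m]) % P
        scale = ys[j] * pow(denom, P - 2, P) % P            # y_j / denom in GF(P)
        for i, b in enumerate(basis):
            coeffs[i] = (coeffs[i] + scale * b) % P
    return coeffs

data = [101, 202, 303, 404]                     # k = 4 pieces of key material
chunks = encode(data, n=6)                      # send 6 chunks, tolerate 2 losses
survivors = [chunks[0], chunks[2], chunks[3], chunks[5]]   # two packets dropped
assert decode(survivors, k=4) == data
```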

The other part of the design was to split the KEM computations into smaller steps. These KEM computations are distinct from the KEM key material.

As Jacomme explained it:

Essentially, a small part of the public key is enough to start computing and sending a bigger part of the ciphertext, so you can quickly send in parallel the rest of the public key and the beginning of the ciphertext. Essentially, the final computations are equal to the standard, but some stuff was parallelized.

All this in fact plays a role in the end security guarantees, because by optimizing the fact that KEM computations are done faster, you introduce in your key derivation fresh secrets more frequently.

Signal’s post from 10 days ago includes several images that illustrate this design.

While the design solved the asynchronous messaging problem, it created a new complication of its own: This new quantum-safe ratchet advanced so quickly that it couldn’t be kept synchronized with the Diffie-Hellman ratchet. Ultimately, the architects settled on a creative solution. Rather than bolt KEM onto the existing double ratchet, they allowed it to remain more or less the same as it had been. Then they used the new quantum-safe ratchet to implement a parallel secure messaging system.

Now, when the protocol encrypts a message, it sources encryption keys from both the classic Double Ratchet and the new ratchet. It then mixes the two keys together (using a cryptographic key derivation function) to get a new encryption key that has all of the security of the classical Double Ratchet but now has quantum security, too.
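The combining step itself can be as simple as feeding both ratchet outputs through a key derivation function, as in the illustrative snippet below. Signal’s real combiner is specified in its design documents and differs in detail; the point is only that an attacker must recover both inputs to learn the resulting message key.

```python
import hashlib
import hmac

def derive_message_key(classical_key: bytes, pq_key: bytes) -> bytes:
    """Mix the Double Ratchet output with the post-quantum ratchet output."""
    return hmac.new(classical_key, pq_key, hashlib.sha256).digest()
```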

The Signal engineers have given this third ratchet a formal name: the Sparse Post Quantum Ratchet, or SPQR for short. The third ratchet was designed in collaboration with PQShield, AIST, and New York University. The developers presented the erasure-code-based chunking and the high-level Triple Ratchet design at the Eurocrypt 2025 conference. At the Usenix 25 conference, they discussed the six options they considered for adding quantum-safe forward secrecy and post-compromise security and why SPQR and one other stood out. Presentations at the NIST PQC Standardization Conference and the Cryptographic Applications Workshop explain the details of chunking, the design challenges, and how the protocol had to be adapted to use the standardized ML-KEM.

Jacomme further observed:

The final thing interesting for the triple ratchet is that it nicely combines the best of both worlds. Between two users, you have a classical DH-based ratchet going on one side, and fully independently, a KEM-based ratchet is going on. Then, whenever you need to encrypt something, you get a key from both, and mix it up to get the actual encryption key. So, even if one ratchet is fully broken, be it because there is now a quantum computer, or because somebody manages to break either elliptic curves or ML-KEM, or because the implementation of one is flawed, or…, the Signal message will still be protected by the second ratchet. In a sense, this update can be seen, of course simplifying, as doubling the security of the ratchet part of Signal, and is a cool thing even for people that don’t care about quantum computers.

As both Signal and Jacomme noted, users of Signal and other messengers relying on the Signal Protocol need not concern themselves with any of these new designs. To paraphrase a certain device maker, it just works.

In the coming weeks or months, various messaging apps and app versions will be updated to add the triple ratchet. Until then, apps will simply rely on the double ratchet as they always did. Once apps receive the update, they’ll behave exactly as they did before upgrading.

For those who care about the internal workings of their Signal-based apps, though, the architects have documented in great depth the design of this new ratchet and how it behaves. Among other things, the work includes a mathematical proof verifying that the updated Signal protocol provides the claimed security properties.

Outside researchers are applauding the work.

“If the normal encrypted messages we use are cats, then post-quantum ciphertexts are elephants,” Matt Green, a cryptography expert at Johns Hopkins University, wrote in an interview. “So the problem here is to sneak an elephant through a tunnel designed for cats. And that’s an amazing engineering achievement. But it also makes me wish we didn’t have to deal with elephants.”

Photo of Dan Goodin

Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him on Mastodon and Bluesky. Contact him on Signal at DanArs.82.

Why Signal’s post-quantum makeover is an amazing engineering achievement Read More »

“like-putting-on-glasses-for-the-first-time”—how-ai-improves-earthquake-detection

“Like putting on glasses for the first time”—how AI improves earthquake detection


AI is “comically good” at detecting small earthquakes—here’s why that matters.

Credit: Aurich Lawson | Getty Images

On January 1, 2008, at 1:59 am in Calipatria, California, an earthquake happened. You haven’t heard of this earthquake; even if you had been living in Calipatria, you wouldn’t have felt anything. It was magnitude -0.53, about the same amount of shaking as a truck passing by. Still, this earthquake is notable, not because it was large but because it was small—and yet we know about it.

Over the past seven years, AI tools based on computer imaging have almost completely automated one of the fundamental tasks of seismology: detecting earthquakes. What used to be the task of human analysts—and later, simpler computer programs—can now be done automatically and quickly by machine-learning tools.

These machine-learning tools can detect smaller earthquakes than human analysts, especially in noisy environments like cities. Earthquakes give valuable information about the composition of the Earth and what hazards might occur in the future.

“In the best-case scenario, when you adopt these new techniques, even on the same old data, it’s kind of like putting on glasses for the first time, and you can see the leaves on the trees,” said Kyle Bradley, co-author of the Earthquake Insights newsletter.

I talked with several earthquake scientists, and they all agreed that machine-learning methods have replaced humans for the better in these specific tasks.

“It’s really remarkable,” Judith Hubbard, a Cornell University professor and Bradley’s co-author, told me.

Less certain is what comes next. Earthquake detection is a fundamental part of seismology, but there are many other data processing tasks that have yet to be disrupted. The biggest potential impacts, all the way to earthquake forecasting, haven’t materialized yet.

“It really was a revolution,” said Joe Byrnes, a professor at the University of Texas at Dallas. “But the revolution is ongoing.”

When an earthquake happens in one place, the shaking passes through the ground, similar to how sound waves pass through the air. In both cases, it’s possible to draw inferences about the materials the waves pass through.

Imagine tapping a wall to figure out if it’s hollow. Because a solid wall vibrates differently than a hollow wall, you can figure out the structure by sound.

With earthquakes, this same principle holds. Seismic waves pass through different materials (rock, oil, magma, etc.) differently, and scientists use these vibrations to image the Earth’s interior.

The main tool that scientists traditionally use is a seismometer. These record the movement of the Earth in three directions: up–down, north–south, and east–west. If an earthquake happens, seismometers can measure the shaking in that particular location.

An old-fashioned physical seismometer. Today, seismometers record data digitally. Credit: Yamaguchi先生 on Wikimedia CC BY-SA 3.0

Scientists then process raw seismometer information to identify earthquakes.

Earthquakes produce multiple types of shaking, which travel at different speeds. Two types, primary (P) waves and secondary (S) waves, are particularly important, and scientists like to identify the start of each of these phases.

Before good algorithms, earthquake cataloging had to happen by hand. Byrnes said that “traditionally, something like the lab at the United States Geological Survey would have an army of mostly undergraduate students or interns looking at seismograms.”

However, there are only so many earthquakes you can find and classify manually. Creating algorithms to effectively find and process earthquakes has long been a priority in the field—especially since the arrival of computers in the early 1950s.

“The field of seismology historically has always advanced as computing has advanced,” Bradley told me.

There’s a big challenge with traditional algorithms, though: They can’t easily find smaller quakes, especially in noisy environments.

Composite seismogram of common events. Note how each event has a slightly different shape. Credit: EarthScope Consortium CC BY 4.0

As we see in the seismogram above, many different events can cause seismic signals. If a method is too sensitive, it risks falsely detecting events as earthquakes. The problem is especially bad in cities, where the constant hum of traffic and buildings can drown out small earthquakes.

However, earthquakes have a characteristic “shape.” The magnitude 7.7 earthquake above looks quite different from the helicopter landing, for instance.

So one idea scientists had was to make templates from human-labeled datasets. If a new waveform correlates closely with an existing template, it’s almost certainly an earthquake.
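In practice, template matching boils down to sliding a known earthquake waveform along the continuous record and computing a correlation score at every offset. The sketch below does this with a synthetic template and NumPy; real pipelines use three-component data, thousands of templates, and far more careful normalization.

```python
import numpy as np

def normalized_cross_correlation(record, template):
    """Slide the template over the record and return the correlation
    coefficient at each offset (values near 1 mean a close waveform match)."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    scores = np.empty(len(record) - n + 1)
    for i in range(len(scores)):
        window = record[i:i + n]
        scores[i] = np.sum(t * (window - window.mean()) / (window.std() + 1e-12))
    return scores

rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 8 * np.pi, 200)) * np.hanning(200)  # stand-in quake shape
record = rng.normal(0, 0.3, 5000)        # noisy continuous seismogram
record[1200:1400] += template            # bury one copy of the event in the noise
scores = normalized_cross_correlation(record, template)
print(np.where(scores > 0.7)[0])         # offsets near 1200 flag the detection
```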

Template matching works very well if you have enough human-labeled examples. In 2019, Zach Ross’ lab at Caltech used template matching to find 10 times as many earthquakes in Southern California as had previously been known, including the earthquake at the start of this story. Almost all of the new 1.6 million quakes they found were very small, magnitude 1 and below.

If you don’t have an extensive pre-existing dataset of templates, however, you can’t easily apply template matching. That isn’t a problem in Southern California—which already had a basically complete record of earthquakes down to magnitude 1.7—but it’s a challenge elsewhere.

Also, template matching is computationally expensive. Creating a Southern California quake dataset using template matching took 200 Nvidia P100 GPUs running for days on end.

There had to be a better way.

AI detection models solve all of these problems:

  • They are faster than template matching.

  • Because AI detection models are very small (around 350,000 parameters, compared to billions in LLMs like GPT-4), they can be run on consumer CPUs.

  • AI models generalize well to regions not represented in the original dataset.

As an added bonus, AI models can give better information about when the different types of earthquake shaking arrive. Timing the arrivals of the two most important waves—P and S waves—is called phase picking. It allows scientists to draw inferences about the structure of the quake. AI models can do this alongside earthquake detection.

The basic task of earthquake detection (and phase picking) looks like this:

Cropped figure from Earthquake Transformer—an attentive deep-learning model for simultaneous earthquake detection and phase picking. Credit: Nature Communications

The first three rows represent different directions of vibration (east–west, north–south, and up–down respectively). Given these three dimensions of vibration, can we determine if an earthquake occurred, and if so, when it started?

We want to detect the initial P wave, which arrives directly from the site of the earthquake. But this can be tricky because echoes of the P wave may get reflected off other rock layers and arrive later, making the waveform more complicated.

Ideally, then, our model outputs three things at every time step in the sample:

  1. The probability that an earthquake is occurring at that moment.

  2. The probability that the first P wave arrives at that moment.

  3. The probability that the first S wave arrives at that moment.

We see all three outputs in the fourth row: the detection in green, the P wave arrival in blue, and the S wave arrival in red. (There are two earthquakes in this sample.)

To train an AI model, scientists take large amounts of labeled data, like what’s above, and do supervised training. I’ll describe one of the most used models: Earthquake Transformer, which was developed around 2020 by a Stanford University team led by S. Mostafa Mousavi, who later became a Harvard professor.

Like many earthquake detection models, Earthquake Transformer adapts ideas from image classification. Readers may be familiar with AlexNet, a famous image-recognition model that kicked off the deep-learning boom in 2012.

AlexNet used convolutions, a neural network architecture that’s based on the idea that pixels that are physically close together are more likely to be related. The first convolutional layer of AlexNet broke an image down into small chunks—11 pixels on a side—and classified each chunk based on the presence of simple features like edges or gradients.

The next layer took the first layer’s classifications as input and checked for higher-level concepts such as textures or simple shapes.

Each convolutional layer analyzed a larger portion of the image and operated at a higher level of abstraction. By the final layers, the network was looking at the entire image and identifying objects like “mushroom” and “container ship.”

Images are two-dimensional, so AlexNet is based on two-dimensional convolutions. By contrast, seismograph data is one-dimensional, so Earthquake Transformer uses one-dimensional convolutions over the time dimension. The first layer analyzes vibration data in 0.1-second chunks, while later layers identify patterns over progressively longer time periods.

It’s difficult to say what exact patterns the earthquake model is picking out, but we can analogize this to a hypothetical audio transcription model using one-dimensional convolutions. That model might first identify consonants, then syllables, then words, then sentences over increasing time scales.

Earthquake Transformer converts raw waveform data into a collection of high-level representations that indicate the likelihood of earthquakes and other seismologically significant events. This is followed by a series of deconvolution layers that pinpoint exactly when an earthquake—and its all-important P and S waves—occurred.

The model also uses an attention layer at its midpoint to mix information between different parts of the time series. The attention mechanism is most famous in large language models, where it helps pass information between words. It plays a similar role in seismographic detection. Earthquake seismograms have a general structure: P waves followed by S waves followed by other types of shaking. So if a segment looks like the start of a P wave, the attention mechanism helps the model check that it fits into a broader earthquake pattern.

All of the Earthquake Transformer’s components are standard designs from the neural network literature. Other successful detection models, like PhaseNet, are even simpler. PhaseNet uses only one-dimensional convolutions to pick the arrival times of earthquake waves. There are no attention layers.
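A stripped-down picker in that style fits in a few dozen lines of PyTorch: a one-dimensional convolutional encoder that downsamples the three-component waveform, and a transposed-convolution decoder that maps it back to three per-sample probabilities (earthquake, P arrival, S arrival). This is an untrained illustration of the architecture family, not the published PhaseNet or Earthquake Transformer models.

```python
import torch
import torch.nn as nn

class TinyPicker(nn.Module):
    """Minimal 1D conv encoder-decoder: 3-component waveform in,
    3 per-sample probability traces out."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(32, 16, kernel_size=7, stride=2,
                               padding=3, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, 3, kernel_size=7, stride=2,
                               padding=3, output_padding=1),
        )

    def forward(self, x):              # x: (batch, 3 channels, samples)
        z = self.encoder(x)            # downsample: local waveform features
        logits = self.decoder(z)       # upsample back to one value per sample
        return torch.sigmoid(logits)   # rows: quake prob, P-arrival prob, S-arrival prob

waveform = torch.randn(1, 3, 6000)     # 60 seconds of 100 Hz data, three components
probs = TinyPicker()(waveform)
print(probs.shape)                     # torch.Size([1, 3, 6000])
```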

Generally, there hasn’t been “much need to invent new architectures for seismology,” according to Byrnes. The techniques derived from image processing have been sufficient.

What made these generic architectures work so well then? Data. Lots of it.

Ars has previously reported on how the introduction of ImageNet, an image recognition benchmark, helped spark the deep learning boom. Large, publicly available earthquake datasets have played a similar role in seismology.

Earthquake Transformer was trained using the Stanford Earthquake Dataset (STEAD), which contains 1.2 million human-labeled segments of seismogram data from around the world. (The paper for STEAD explicitly mentions ImageNet as an inspiration). Other models, like PhaseNet, were also trained on hundreds of thousands or millions of labeled segments.

All recorded earthquakes in the Stanford Earthquake Dataset. Credit: IEEE (CC BY 4.0)

The combination of the data and the architecture just works. The current models are “comically good” at identifying and classifying earthquakes, according to Byrnes. Typically, machine-learning methods find 10 or more times the quakes that were previously identified in an area. You can see this directly in an Italian earthquake catalog:

From Machine learning and earthquake forecasting—next steps by Beroza et al. Credit: Nature Communications (CC-BY 4.0)

AI tools won’t necessarily detect more earthquakes than template matching. But AI-based techniques are much less compute- and labor-intensive, making them more accessible to the average research project and easier to apply in regions around the world.

All in all, these machine-learning models are so good that they’ve almost completely supplanted traditional methods for detecting and phase-picking earthquakes, especially for smaller magnitudes.

The holy grail of earthquake science is earthquake prediction. For instance, scientists know that a large quake will happen near Seattle but have little ability to know whether it will happen tomorrow or in a hundred years. It would be helpful if we could predict earthquakes precisely enough to allow people in affected areas to evacuate.

You might think AI tools would help predict earthquakes, but that doesn’t seem to have happened yet.

The applications are more technical and less flashy, said Cornell’s Judith Hubbard.

Better AI models have given seismologists much more comprehensive earthquake catalogs, which have unlocked “a lot of different techniques,” Bradley said.

One of the coolest applications is in understanding and imaging volcanoes. Volcanic activity produces a large number of small earthquakes, whose locations help scientists understand the structure of the magma system. In a 2022 paper, John Wilding and co-authors used a large AI-generated earthquake catalog to create this incredible image of the structure of the Hawaiian volcanic system.

Each dot represents an individual earthquake. Credit: Wilding et al., The magmatic web beneath Hawai‘i.

They provided direct evidence of a previously hypothesized magma connection between the deep Pāhala sill complex and Mauna Loa’s shallow volcanic structure. You can see this in the image via the arrow labeled “Pāhala-Mauna Loa seismicity band.” The authors were also able to resolve the Pāhala sill complex into discrete sheets of magma. This level of detail could potentially facilitate better real-time monitoring of earthquakes and more accurate eruption forecasting.

Another promising area is lowering the cost of dealing with huge datasets. Distributed Acoustic Sensing (DAS) is a powerful technique that uses fiber-optic cables to measure seismic activity across the entire length of the cable. A single DAS array can produce “hundreds of gigabytes of data” a day, according to Jiaxuan Li, a professor at the University of Houston. That much data can produce extremely high-resolution datasets—enough to pick out individual footsteps.

AI tools make it possible to very accurately time earthquakes in DAS data. Before the introduction of AI techniques for phase picking in DAS data, Li and some of his collaborators attempted to use traditional techniques. While these “work roughly,” they weren’t accurate enough for their downstream analysis. Without AI, much of his work would have been “much harder,” he told me.

Li is also optimistic that AI tools will be able to help him isolate “new types of signals” in the rich DAS data in the future.

Not all AI techniques have paid off

As in many other scientific fields, seismologists face some pressure to adopt AI methods, whether or not they are relevant to their research.

“The schools want you to put the word AI in front of everything,” Byrnes said. “It’s a little out of control.”

This can lead to papers that are technically sound but practically useless. Hubbard and Bradley told me that they’ve seen a lot of papers based on AI techniques that “reveal a fundamental misunderstanding of how earthquakes work.”

They pointed out that graduate students can feel pressure to specialize in AI methods at the cost of learning less about the fundamentals of the scientific field. They fear that if this type of AI-driven research becomes entrenched, older methods will get “out-competed by a kind of meaninglessness.”

While these are real issues, and ones Understanding AI has reported on before, I don’t think they detract from the success of AI earthquake detection. In the last five years, an AI-based workflow has almost completely replaced one of the fundamental tasks in seismology for the better.

That’s pretty cool.

Kai Williams is a reporter for Understanding AI, a Substack newsletter founded by Ars Technica alum Timothy B. Lee. His work is supported by a Tarbell Fellowship. Subscribe to Understanding AI to get more from Tim and Kai.

“Like putting on glasses for the first time”—how AI improves earthquake detection Read More »

we’re-about-to-find-many-more-interstellar-interlopers—here’s-how-to-visit-one

We’re about to find many more interstellar interlopers—here’s how to visit one


“You don’t have to claim that they’re aliens to make these exciting.”

The Hubble Space Telescope captured this image of the interstellar comet 3I/ATLAS on July 21, when the comet was 277 million miles from Earth. Hubble shows that the comet has a teardrop-shaped cocoon of dust coming off its solid, icy nucleus. Credit: NASA, ESA, David Jewitt (UCLA); Image Processing: Joseph DePasquale (STScI)

A few days ago, an inscrutable interstellar interloper made its closest approach to Mars, where a fleet of international spacecraft seeks to unravel the red planet’s ancient mysteries.

Several of the probes encircling Mars took a break from their usual activities and turned their cameras toward space to catch a glimpse of an object named 3I/ATLAS, a rogue comet that arrived in our Solar System from interstellar space and is now barreling toward perihelion—its closest approach to the Sun—at the end of this month.

This is the third interstellar object astronomers have detected within our Solar System, following 1I/ʻOumuamua and 2I/Borisov, discovered in 2017 and 2019, respectively. Scientists think interstellar objects routinely transit among the planets, but telescopes have only recently had the ability to find one. For example, the telescope that discovered ʻOumuamua only came online in 2010.

Detectable but still unreachable

Astronomers first reported observations of 3I/ATLAS on July 1, just four months before the object reaches its deepest penetration into the Solar System. Unfortunately for astronomers, the particulars of its trajectory will bring it to perihelion when the Earth is on the opposite side of the Sun. The nearest 3I/ATLAS will come to Earth is about 170 million miles (270 million kilometers) in December, eliminating any chance for high-resolution imaging. The viewing geometry also means the Sun’s glare will block all direct views of the comet from Earth until next month.

The James Webb Space Telescope observed interstellar comet 3I/ATLAS on August 6 with its Near-Infrared Spectrograph instrument. Credit: NASA/James Webb Space Telescope

Because of that, the closest approach by any active spacecraft happened Friday, when 3I/ATLAS passed less than 20 million miles (30 million kilometers) from Mars. NASA’s Perseverance rover and Mars Reconnaissance Orbiter were expected to make observations of 3I/ATLAS, along with Europe’s Mars Express and ExoMars Trace Gas Orbiter missions.

The best views of the object so far have been captured by the James Webb Space Telescope and the Hubble Space Telescope, positioned much closer to Earth. Those observations helped astronomers narrow down the object’s size, but the estimates remain imprecise. Based on Hubble’s images, the icy core of 3I/ATLAS is somewhere between the size of the Empire State Building and something a little larger than Central Park.

That may be the most we’ll ever know about the dimensions of 3I/ATLAS. The spacecraft at Mars lack the exquisite imaging sensitivity of Webb and Hubble, so don’t expect spectacular views from Friday’s observations. But scientists hope to get a better handle on the cloud of gas and dust surrounding the object, giving it the appearance of a comet. Spectroscopic observations have shown the coma around 3I/ATLAS contains water vapor and an unusually strong signature of carbon dioxide extending out nearly a half-million miles.

On Tuesday, the European Space Agency released the first grainy images of 3I/ATLAS captured at Mars. The best views will come from HiRISE, the high-resolution telescopic camera on NASA’s Mars Reconnaissance Orbiter. The images from NASA won’t be released until after the end of the ongoing federal government shutdown, according to a member of the HiRISE team.

Europe’s ExoMars Trace Gas Orbiter turned its eyes toward interstellar comet 3I/ATLAS as it passed close to Mars on Friday, October 3. The comet’s coma is visible as a fuzzy blob surrounding its nucleus, which was not resolved by the spacecraft’s camera. Credit: ESA/TGO/CaSSIS

Studies of 3I/ATLAS suggest it was probably kicked out of another star system, perhaps by an encounter with a giant planet. Comets in our Solar System sometimes get ejected into the Milky Way galaxy when they come too close to Jupiter. 3I/ATLAS then roamed the galaxy for billions of years before arriving in the Sun’s galactic neighborhood.

The rogue comet is now gaining speed as gravity pulls it toward perihelion, when it will max out at a relative velocity of 152,000 mph (68 kilometers per second), much too fast to be bound into a closed orbit around the Sun. Instead, the comet will catapult back into the galaxy, never to be seen again.

We need to talk about aliens

Anyone who studies planetary formation would relish the opportunity to get a close-up look at an interstellar object. Sending a mission to one would undoubtedly yield a scientific payoff. There’s a good chance that many of these interlopers have been around longer than our own 4.5 billion-year-old Solar System.

One study from the University of Oxford suggests that 3I/ATLAS came from the “thick disk” of the Milky Way, which is home to a dense population of ancient stars. This origin story would mean the comet is probably more than 7 billion years old, holding clues about cosmic history that are simply inaccessible among the planets, comets, and asteroids that formed with the birth of the Sun.

This is enough reason to mount a mission to explore one of these objects, scientists said. It doesn’t need justification from unfounded theories that 3I/ATLAS might be an artifact of alien technology, as proposed by Harvard University astrophysicist Avi Loeb. The scientific consensus is that the object is of natural origin.

Loeb shared a similar theory about the first interstellar object found wandering through our Solar System. His statements have sparked questions in popular media about why the world’s space agencies don’t send a probe to actually visit one. Loeb himself proposed redirecting NASA’s Juno spacecraft in orbit around Jupiter on a mission to fly by 3I/ATLAS, and his writings prompted at least one member of Congress to write a letter to NASA to “rejuvenate” the Juno mission by breaking out of Jupiter’s orbit and taking aim at 3I/ATLAS for a close-up inspection.

The problem is that Juno simply doesn’t have enough fuel to reach the comet, and its main engine is broken. In fact, the total boost required to send Juno from Jupiter to 3I/ATLAS (roughly 5,800 mph or 2.6 kilometers per second) would surpass the fuel capacity of most interplanetary probes.

Ars asked Scott Bolton, lead scientist on the Juno mission, and he confirmed that the spacecraft lacks the oomph required for the kind of maneuvers proposed by Loeb. “We had no role in that paper,” Bolton told Ars. “He assumed propellant that we don’t really have.”

Avi Loeb, a Harvard University astrophysicist. Credit: Anibal Martel/Anadolu Agency via Getty Images

So Loeb’s exercise was moot, but his talk of aliens has garnered public attention. Loeb appeared on the conservative network Newsmax last week to discuss his theory of 3I/ATLAS alongside Rep. Tim Burchett (R-Tenn.). Predictably, conspiracy theories abounded. But as of Tuesday, the segment has 1.2 million views on YouTube. Maybe it’s a good thing that people who approve government budgets, especially those without a preexisting interest in NASA, are eager to learn more about the Universe. We will leave it to the reader to draw their own conclusions on that matter.

Loeb’s calculations also help illustrate the difficulty of pulling off a mission to an interstellar object. So far, we’ve only known about an incoming interstellar intruder a few months before it comes closest to Earth. That’s not to mention the enormous speeds at which these objects move through the Solar System. It’s just not feasible to build a spacecraft and launch it on such short notice.

Now, some scientists are working on ways to overcome these limitations.

So you’re saying there’s a chance?

One of these people is Colin Snodgrass, an astronomer and planetary scientist at the University of Edinburgh. A few years ago, he helped propose to the European Space Agency a mission concept that would have very likely been laughed out of the room a generation ago. Snodgrass and his team wanted a commitment from ESA of up to $175 million (150 million euros) to launch a mission with no idea of where it would go.

ESA officials called Snodgrass in 2019 to say the agency would fund his mission, named Comet Interceptor, for launch in the late 2020s. The goal of the mission is to perform the first detailed observations of a long-period comet. So far, spacecraft have only visited short-period comets that routinely dip into the inner part of the Solar System.

A long-period comet is an icy visitor from the farthest reaches of the Solar System that has spent little time getting blasted by the Sun’s heat and radiation, freezing its physical and chemical properties much as they were billions of years ago.

Long-period comets are typically discovered a year or two before coming near the Sun, still not enough time to develop a mission from scratch. With Comet Interceptor, ESA will launch a probe to loiter in space a million miles from Earth, wait for the right comet to come along, then fire its engines to pursue it.

Odds are good that the right comet will come from within the Solar System. “That is the point of the mission,” Snodgrass told Ars.

ESA’s Comet Interceptor will be the first mission to visit a comet coming directly from the outer reaches of the Sun’s realm, carrying material untouched since the dawn of the Solar System. Credit: European Space Agency

But if astronomers detect an interstellar object coming toward us on the right trajectory, there’s a chance Comet Interceptor could reach it.

“I think that the entire science team would agree, if we get really lucky and there’s an interstellar object that we could reach, then to hell with the normal plan, let’s go and do this,” Snodgrass said. “It’s an opportunity you couldn’t just leave sitting there.”

But, he added, it’s “very unlikely” that an interstellar object will be in the right place at the right time. “Although everyone’s always very excited about the possibility, and we’re excited about the possibility, we kind of try and keep the expectations to a realistic level.”

For example, if Comet Interceptor were in space today, there’s no way it could reach 3I/ATLAS. “It’s an unfortunate one,” Snodgrass said. “Its closest point to the Sun, it reaches that on the other side of the Sun from where the Earth is. Just bad timing.” If an interceptor were parked somewhere else in the Solar System, it might be able to get itself in position for an encounter with 3I/ATLAS. “There’s only so much fuel aboard,” Snodgrass said. “There’s only so fast we can go.”

It’s even harder to send a spacecraft to encounter an interstellar object than it is to visit one of the Solar System’s homegrown long-period comets. The calculation of whether Comet Interceptor could reach one of these galactic visitors boils down to where it’s heading and when astronomers discover it.

Snodgrass is part of a team using big telescopes to observe 3I/ATLAS from a distance. “As it’s getting closer to the Sun, it is getting brighter,” he said in an interview.

“You don’t have to claim that they’re aliens to make these exciting,” Snodgrass said. “They’re interesting because they are a bit of another solar system that you can actually feasibly get an up-close view of, even the sort of telescopic views we’re getting now.”

Colin Snodgrass, a professor at the University of Edinburgh, leads the Comet Interceptor science team. Credit: University of Edinburgh

Comets and asteroids are the linchpins for understanding the formation of the Solar System. These modest worlds are the leftover building blocks from the debris that coalesced into the planets. Today, direct observations have only allowed scientists to study the history of one planetary system. An interstellar comet would grow the sample size to two.

Still, Snodgrass said his team prefers to keep their energy focused on reaching a comet originating from the frontier of our own Solar System. “We’re not going to let a very lovely Solar System comet go by, waiting to see ‘what if there’s an interstellar thing?'” he said.

Snodgrass sees Comet Interceptor as a proof of concept for scientists to propose a future mission specially designed to travel to an interstellar object. “You need to figure out how do you build the souped-up version that could really get to an interstellar object? I think that’s five or 10 years away, but [it’s] entirely realistic.”

An American answer

Scientists in the United States are working on just such a proposal. A team from the Southwest Research Institute completed a concept study showing how a mission could fly by one of these interstellar visitors. What’s more, the US scientists say their proposed mission could have actually reached 3I/ATLAS had it already been in space.

The American concept is similar to Europe’s Comet Interceptor in that it will park a spacecraft somewhere in deep space and wait for the right target to come along. The study was led by Alan Stern, the chief scientist on NASA’s New Horizons mission that flew by Pluto a decade ago. “These new kinds of objects offer humankind the first feasible opportunity to closely explore bodies formed in other star systems,” he said.

An animation of the trajectory of 3I/ATLAS through the inner Solar System. Credit: NASA/JPL

It’s impossible with current technology to send a spacecraft to match orbits and rendezvous with a high-speed interstellar comet. “We don’t have to catch it,” Stern recently told Ars. “We just have to cross its orbit. So it does carry a fair amount of fuel in order to get out of Earth’s orbit and onto the comet’s path to cross that path.”

Stern said his team developed a cost estimate for such a mission, and while he didn’t disclose the exact number, he said it would fall under NASA’s cost cap for a Discovery-class mission. The Discovery program is a line of planetary science missions that NASA selects through periodic competitions within the science community. The cost cap for NASA’s next Discovery competition is expected to be $800 million, not including the launch vehicle.

A mission to encounter an interstellar comet requires no new technologies, Stern said. Hopes for such a mission are bolstered by the activation of the US-funded Vera Rubin Observatory, a state-of-the-art facility high in the mountains of Chile set to begin deep surveys of the entire southern sky later this year. Stern predicts Rubin will discover “one or two” interstellar objects per year. The new observatory should be able to detect the faint light from incoming interstellar bodies sooner, providing missions with more advance warning.

“If we put a spacecraft like this in space for a few years, while it’s waiting, there should be five or 10 to choose from,” he said.

Alan Stern speaks onstage during Day 1 of TechCrunch Disrupt SF 2018 in San Francisco. Credit: Photo by Kimberly White/Getty Images for TechCrunch

Winning NASA funding for a mission like Stern’s concept will not be easy. It must compete with dozens of other proposals, and NASA’s next Discovery competition is probably at least two or three years away. The timing of the competition is more uncertain than usual due to swirling questions about NASA’s budget after the Trump administration announced it wants to cut the agency’s science funding in half.

Comet Interceptor, on the other hand, is already funded in Europe. ESA has become a pioneer in comet exploration. The Giotto probe flew by Halley’s Comet in 1986, becoming the first spacecraft to make close-up observations of a comet. ESA’s Rosetta mission became the first spacecraft to orbit a comet in 2014, and later that year, it deployed a German-built lander to return the first data from the surface of a comet. Both of those missions explored short-period comets.

“Each time that ESA has done a comet mission, it’s done something very ambitious and very new,” Snodgrass said. “The Giotto mission was the first time ESA really tried to do anything interplanetary… And then, Rosetta, putting this thing in orbit and landing on a comet was a crazy difficult thing to attempt to do.”

“They really do push the envelope a bit, which is good because ESA can be quite risk averse, I think it’s fair to say, with what they do with missions,” he said. “But the comet missions, they are things where they’ve really gone for that next step, and Comet Interceptor is the same. The whole idea of trying to design a space mission before you know where you’re going is a slightly crazy way of doing things. But it’s the only way to do this mission. And it’s great that we’re trying it.”

Photo of Stephen Clark

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

We’re about to find many more interstellar interlopers—here’s how to visit one Read More »

google-pixel-10-pro-fold-review:-the-ultimate-google-phone

Google Pixel 10 Pro Fold review: The ultimate Google phone


Google delivers another phone that is slightly better than its predecessor—is that enough?

Pixel 10 Pro Fold flexed

The Pixel 10 Pro Fold is a sleek piece of hardware. Credit: Ryan Whitwam

When the first foldable phones came along, they seemed like a cool evolution of the traditional smartphone form factor and, if they got smaller and cheaper, like something people might actually want. After more than five years of foldable phones, we can probably give up on the latter. Google’s new Pixel 10 Pro Fold retains the $1,800 price tag of last year’s model, and while it’s improved in several key ways, spending almost two grand on any phone remains hard to justify.

For those whose phones are a primary computing device or who simply love gadgets, the Pixel 10 Pro Fold is still appealing. It offers the same refined Android experience as the rest of the Pixel 10 lineup, with much more screen real estate on which to enjoy it. Google also improved the hinge for better durability, shaved off some bezel, and boosted both charging speed and battery capacity. However, the form factor hasn’t taken the same quantum leap as Samsung’s latest foldable.

An iterative (but good) design

The Pixel 10 Pro Fold doesn’t reinvent the wheel—it looks and feels almost exactly like last year’s foldable, with a few minor tweaks centered around a new “gearless” hinge. Dropping the internal gears allegedly helps make the mechanism twice as durable. Google claims the Pixel 10 Pro Fold’s hinge will last for more than 10 years of folding and unfolding.

Specs at a glance: Google Pixel 10 series
Pixel 10 ($799): Google Tensor G5; 12GB memory; 128GB/256GB storage; 6.3-inch 1080×2424 OLED, 60-120 Hz, 3,000 nits; cameras: 48 MP wide with Macro Focus (f/1.7, 1/2-inch sensor), 13 MP ultrawide (f/2.2, 1/3.1-inch sensor), 10.8 MP 5x telephoto (f/3.1, 1/3.2-inch sensor), 10.5 MP selfie (f/2.2); Android 16; 4,970 mAh battery, up to 30 W wired and 15 W wireless (Pixelsnap) charging; Wi-Fi 6e, NFC, Bluetooth 6.0, sub-6 GHz and mmWave 5G, USB-C 3.2; 152.8×72.0×8.6 mm, 204 g; colors: Indigo, Frost, Lemongrass, Obsidian.

Pixel 10 Pro ($999): Google Tensor G5; 16GB memory; 128GB/256GB/512GB storage; 6.3-inch 1280×2856 LTPO OLED, 1-120 Hz, 3,300 nits; cameras: 50 MP wide with Macro Focus (f/1.68, 1/1.3-inch sensor), 48 MP ultrawide (f/1.7, 1/2.55-inch sensor), 48 MP 5x telephoto (f/2.8, 1/2.55-inch sensor), 42 MP selfie (f/2.2); Android 16; 4,870 mAh battery, up to 30 W wired and 15 W wireless (Pixelsnap) charging; Wi-Fi 7, NFC, Bluetooth 6.0, sub-6 GHz and mmWave 5G, UWB, USB-C 3.2; 152.8×72.0×8.6 mm, 207 g; colors: Moonstone, Jade, Porcelain, Obsidian.

Pixel 10 Pro XL ($1,199): Google Tensor G5; 16GB memory; 128GB/256GB/512GB/1TB storage; 6.8-inch 1344×2992 LTPO OLED, 1-120 Hz, 3,300 nits; cameras: same as Pixel 10 Pro; Android 16; 5,200 mAh battery, up to 45 W wired and 25 W wireless (Pixelsnap) charging; Wi-Fi 7, NFC, Bluetooth 6.0, sub-6 GHz and mmWave 5G, UWB, USB-C 3.2; 162.8×76.6×8.5 mm, 232 g; colors: Moonstone, Jade, Porcelain, Obsidian.

Pixel 10 Pro Fold ($1,799): Google Tensor G5; 16GB memory; 256GB/512GB/1TB storage; external 6.4-inch 1080×2364 OLED, 60-120 Hz, 3,000 nits; internal 8-inch 2076×2152 LTPO OLED, 1-120 Hz, 3,000 nits; cameras: 48 MP wide (f/1.7, 1/2-inch sensor), 10.5 MP ultrawide with Macro Focus (f/2.2, 1/3.4-inch sensor), 10.8 MP 5x telephoto (f/3.1, 1/3.2-inch sensor), 10.5 MP selfie (f/2.2, outer and inner); Android 16; 5,015 mAh battery, up to 30 W wired and 15 W wireless (Pixelsnap) charging; Wi-Fi 7, NFC, Bluetooth 6.0, sub-6 GHz and mmWave 5G, UWB, USB-C 3.2; folded 154.9×76.2×10.1 mm, unfolded 154.9×149.8×5.1 mm, 258 g; colors: Moonstone, Jade.
While the new phone is technically a fraction of a millimeter thicker, it’s narrower by a similar amount. You likely won’t notice either change, nor will the 1 g of additional mass register. You may, however, spot the slimmer bezels and hinge. That also means cases for the 2024 foldable are just a fraction of a millimeter away from fitting the Pixel 10 Pro Fold, so close but not quite. The new phone does fit better in your hand, though.

Pixel 10 Pro Fold side

The Pixel is on the thick side for 2025, but this was record-setting thinness last year.

Thanks to the gearless hinge, the Pixel 10 Pro Fold is the first foldable with full IP68 certification for water and dust resistance. The hinge feels extremely smooth and sturdy, but it’s a bit stiffer than we’ve seen on most foldables. This might change over time, but it’s a little harder to open and close out of the box. Samsung’s Z Fold 7 is thinner and easier to fold, but its hinge doesn’t open to a full 180 degrees like the Pixel’s does.

The new foldable also retains the camera module design of last year's phone—it's off-center on the back panel, a break from Google's camera bar on other Pixels. As a result, the Pixel 10 Pro Fold doesn't lie flat on tables and will rock back and forth like most other phones. It does, however, have the same Qi2 magnets as the cheaper Pixels, and various Maglock kickstands and mounting rings will attach to the back of the phone if you want to prop it up on a surface.

The Pixel 10 Pro Fold (left) and the Galaxy Z Fold 7 (right) both have 8-inch displays, but the Pixel is curvier. Credit: Ryan Whitwam

The power and volume buttons are on the right edge in the same location as last year. The buttons are stable and tactile when pressed, and there’s a fingerprint sensor in the power button. It’s as fast and accurate as any capacitive sensor on a phone today. The aluminum frame and the buttons have the same matte finish, which differs from the glossy look of the other Pro Pixels. The more grippy matte texture is preferable for a phone you need to fold and unfold throughout the day.

Thanks to the modestly slimmer bezels, Google equipped the phone with a 6.4-inch external screen, slightly larger than the 6.3-inch panel on last year’s Fold. The 120 Hz OLED has a respectable 1080p resolution, and the brightness peaks around 3,000 nits, making it readable in bright outdoor light.

The Pixel 10 Pro Fold (left) has a more compact hinge and slimmer bezels compared to the Pixel 9 Pro Fold (right).

The Pixel 10 Pro Fold has a big 8-inch flexible OLED inside, clocking in at 2076×2152 pixels and 120 Hz. It gets similarly bright, but the plastic layer is more reflective than the Gorilla Glass Victus 2 on the cover screen. While the foldable screen is legible, it's not as pleasant to use outside as high-brightness glass screens.
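For a rough sense of how sharp that inner panel is, here's a quick pixel-density calculation using only the resolution and 8-inch diagonal quoted above (a back-of-envelope sketch; marketing diagonals are rounded, so the real figure may differ slightly):

```python
import math

# Quoted inner-display figures (treated as exact for this sketch)
width_px, height_px, diagonal_in = 2076, 2152, 8.0

diagonal_px = math.hypot(width_px, height_px)  # pixel count along the diagonal
ppi = diagonal_px / diagonal_in                # pixels per inch

print(f"~{ppi:.0f} ppi")  # ~374 ppi
```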

Like all foldable screens, it’s possible to damage the internal OLED if you’re not careful. On the other hand, the flexible OLED is well-protected when the phone is closed—there’s no gap between the halves, and the magnets hold them together securely. There’s a crease visible in the middle of the screen, but it’s slightly improved from last year’s phone. You can see it well from some angles, but you get used to it.

The Jade colorway looks great. Credit: Ryan Whitwam

While the flat Pixel 10 phones have dropped the physical SIM card slot, the Pixel 10 Pro Fold still has one. It has moved to the top this year, but it seems like only a matter of time before Google removes the slot in foldables, too. For the time being, you can move a physical SIM card to the Fold, transfer to eSIM, or use a combination of physical and electronic SIMs.

Google’s take on big Androids

Google’s version of Android is pretty refined these days. The Pixel 10 Pro Fold uses the same AI-heavy build of Android 16 as the flat Pixels. That means you can expect old favorites like Pixel Screenshots, Call Screen, and Magic Compose, along with new arrivals like Magic Cue and Pixel Journal. One thing you won’t see right now is the largely useless Daily Brief, which was pulled after its launch on the Pixel 10 so it could be improved.

Google’s expanded use of Material 3 Expressive theming is also a delight. The Pixel OS has a consistent, clean look you don’t often see on Android phones. Google bundles almost every app it makes on this phone, but you won’t see any sponsored apps, junk games, or other third-party software cluttering up the experience. In short, if you like the vibe of the Pixel OS on other Pixel 10 phones, you’ll like it on the Pixel 10 Pro Fold. We’ve noted a few minor UI glitches in the launch software, but there are no show-stopping bugs.

Multitasking on foldables is a snap. Credit: Ryan Whitwam

The software on this phone goes beyond the standard Pixel features to take advantage of the folding screen. There’s a floating taskbar that can make swapping apps and multitasking easier, and you can pin it on the screen for even more efficiency. The Pixel 10 Pro Fold also supports saving app pairs to launch both at once in split-screen.

Google’s multi-window system on the Fold isn’t as robust as what you get with Samsung, though. For example, split-screen apps open in portrait mode on the Pixel, and if you want them in landscape, you have to physically rotate the phone. On Samsung foldables, you can move app windows around and change the orientation however you like—there’s even support for floating app windows and up to three windowed apps. Google reserves floating windows for tablets, none of which it has released since the Pixel Tablet in 2023. It would be nice to see a bit more multitasking power to make the most of the Fold’s big internal display.

As with all of Google’s Pixels, the new foldable gets seven years of update support, all the way through 2032. You’ll probably need at least one battery swap to make it that long, but you might be more inclined to hold onto an $1,800 phone for seven years. Samsung also offers seven years of support, but its updates are slower and don’t usually include new features after the first year. Google rolls out new updates promptly every month, and updated features are delivered in regular Pixel Drops.

Almost the best cameras

Google may have fixed some of the drawbacks of foldables, but you’ll get better photos with flat Pixels. That said, the Pixel 10 Pro Fold is no slouch—it has a camera setup very similar to the base model Pixel 10 (and last year’s foldable), which is still quite good in the grand scheme of mobile photography.

The cameras are unchanged from last year. Credit: Ryan Whitwam

The Pixel 10 Pro Fold sports a 48 MP primary sensor, a 10.5 MP ultrawide, and a 10.8 MP 5x telephoto. There are 10.5 MP selfie cameras peeking through the front and internal displays as well.

Like the other Pixels, this phone is great for quick snapshots. Google’s image processing does an admirable job of sharpening details and has extraordinary dynamic range. The phone also manages to keep exposure times short to help capture movement. You don’t have to agonize over exactly how to frame a shot or wait for the right moment to hit the shutter. The Pixel 10 Pro and Pro XL do all of this slightly better, but provided you don’t zoom too much, the Pixel 10 Pro Fold photos are similarly excellent.

Medium indoor light. Ryan Whitwam

The primary sensor does better than most in dim conditions, but this is where you’ll notice limitations compared to the flat Pro phones. The Fold’s smaller image sensor can’t collect as much light, resulting in longer exposures. You’ll notice this most in Night Sight shots.

The telephoto sensor is only 10.8 MP compared to 48 MP on the other Pro Pixels. So images won’t be as sharp if you zoom in, but the normal framing looks fine and gets you much closer to your subject. The phone does support up to 20x zoom, but going much beyond 5x begins to reveal the camera’s weakness, and even Google’s image processing can’t hide that. The ultrawide camera is good enough for landscapes and wide group shots, but don’t bother zooming in. It also has autofocus for macro shots.

The selfie cameras are acceptable, but you don’t have to use them. As a foldable, this phone allows you to use the main cameras to snap selfies with the external display as a viewfinder. The results are much better, but the phone is a bit awkward to hold in that orientation. Google also added a few more camera features that complement the form factor, including a split-screen camera roll similar to Samsung’s app and a new version of the Made You Look cover screen widgets.

The Pixel 10 Pro Fold can leverage generative AI in several imaging features, so it has the same C2PA labeling as the other Pixels. We’ve seen this “AI edited” tag appear most often on images from the flat Pixels that are zoomed beyond 20x, so you likely won’t end up with any of those on the Fold. However, features like Add Me and Best Take will get the AI labeling.

The Tensor tension

This probably won’t come as a surprise, but the Tensor G5 in the Pixel 10 Pro Fold performs identically to the Tensor in other Pixel 10 phones. It is marginally faster across the board than the Tensor G4, but this isn’t the huge leap people hoped for with Google’s first TSMC chip. While it’s fast enough to keep the phone chugging along, benchmarks are not its forte.

Pixel 10 Pro Fold hinge has been redesigned. Credit: Ryan Whitwam

Across all our usual benchmarks, the Tensor G5 shows small gains over last year’s Google chip, but it’s running far behind the latest from Qualcomm. We expect that gap to widen even further when Qualcomm updates its flagship Snapdragon line in a few months.

The Tensor G5 does remain a bit cooler under load than the Snapdragon 8 Elite, losing only about 20 percent to thermal throttling. So real-world gaming performance on the Pixel 10 Pro Fold is closer to Qualcomm-based devices than the benchmark numbers would lead you to believe. Some game engines behave strangely on the Tensor’s PowerVR GPU, though. If mobile gaming is a big part of your usage, a Samsung or OnePlus flagship might be more your speed.

Day-to-day performance with the Pixel 10 Pro Fold is solid. Google’s new foldable is quick to register taps and open apps, even though the Tensor G5 chip doesn’t offer the most raw speed. Even on Snapdragon-based phones like the Galaxy Z Fold 7, the UI occasionally hiccups or an animation gets jerky. That’s a rarer occurrence on the Pixel 10 Pro Fold.

One of the biggest spec bumps is the battery—it’s 365 mAh larger, at 5,015 mAh. This finally puts Google’s foldables in the same range as flat phones. Granted, you will use more power when the main display is unfurled, and you should not expect a substantial increase in battery life generally. The power-hungry Tensor and increased background AI processing appear to soak up most of the added capacity. The Pixel 10 Pro Fold should last all day, but there won’t be much leeway.

The Pixel 10 Pro Fold does bring a nice charging upgrade, boosting wired speeds from 21 W to 30 W with a USB-PD charger that supports PPS (as most now do). That’s enough for a 50 percent charge in about half an hour. Wireless charging is now twice as fast, thanks to the addition of Qi2 support. Any Qi2-certified charger can hit those speeds, including the Google Pixelsnap charger. But the Fold is limited to 15 W, whereas the Pixel 10 Pro XL gets 25 W over Qi2. It’s nice to see an upgrade here, but all of Google’s phones should charge faster than they do.
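As a rough sanity check on that "about half an hour" figure, here's a back-of-envelope sketch. The nominal cell voltage and the average-power figure are my assumptions for illustration, not Google's numbers:

```python
# Back-of-envelope charge-time estimate for the Pixel 10 Pro Fold's 5,015 mAh battery.
# Assumptions (not from Google): ~3.87 V nominal cell voltage, and charging taper/losses
# that cut the 30 W peak to roughly 24 W averaged over the 0-50% stretch.

capacity_mah = 5015
nominal_v = 3.87                                # assumed typical Li-ion nominal voltage
battery_wh = capacity_mah / 1000 * nominal_v    # ~19.4 Wh total

target_wh = 0.5 * battery_wh                    # energy needed for a 50% charge
avg_power_w = 30 * 0.80                         # assumed average power after taper and losses

minutes = target_wh / avg_power_w * 60
print(f"~{minutes:.0f} minutes to 50%")         # ~24 minutes, in line with "about half an hour"
```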

Big phone, big questions

The Pixel 10 Pro Fold is better than last year’s Google foldable, and that means there’s a lot to like. The new hinge and slimmer bezels make the third-gen foldable a bit easier to hold, and the displays are fantastic. The camera setup, while a step down from the other Pro Pixels, is still one of the best you can get on a phone. The addition of Qi2 charging is much appreciated, too. And while Google has overloaded the Pixels with AI features, more of them are useful compared to those on the likes of Samsung, Motorola, or OnePlus.

Left: Pixel 10 Pro Fold, Right: Pixel 10 Pro. Credit: Ryan Whitwam

That’s all great, but these are relatively minor improvements for an $1,800 phone, and the competition is making great strides. The Pixel 10 Pro Fold isn’t as fast or slim as the Galaxy Z Fold 7, and Samsung’s multitasking system is much more powerful. The Z Fold 7 retails for $200 more, but that distinction hardly matters as you close in on two grand for a smartphone. If you’re willing to pay $1,800, going to $2,000 isn’t much of a leap.

It’s the size of a normal phone when closed. Credit: Ryan Whitwam

The Pixel 10 Pro Fold is the ultimate Google phone with some useful AI features, but the Galaxy Z Fold 7 is a better piece of hardware. Ultimately, the choice depends on what’s more important to you, but Google will have to move beyond iterative upgrades if it wants foldables to look like a worthwhile upgrade.

The good

  • Redesigned hinge and slimmer bezels
  • Huge, gorgeous foldable OLED screen
  • Colorful, attractive Material 3 UI
  • IP68 certification
  • Includes Qi2 with magnetic attachment
  • Seven years of update support
  • Most AI features run on-device for better privacy

The bad

  • Cameras are a step down from other Pro Pixels
  • Tons of AI features you probably won’t use
  • Could use more robust multitasking
  • Tensor G5 still not benchmark king
  • High $1,800 price

Photo of Ryan Whitwam

Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.

Google Pixel 10 Pro Fold review: The ultimate Google phone Read More »

elon-musk-tries-to-make-apple-and-mobile-carriers-regret-choosing-starlink-rivals

Elon Musk tries to make Apple and mobile carriers regret choosing Starlink rivals

SpaceX holds spectrum licenses for the Starlink fixed Internet service for homes and businesses. Adding the EchoStar spectrum will make its holdings suitable for mobile service.

“SpaceX currently holds no terrestrial spectrum authorizations and no license to use spectrum allocated on a primary basis to MSS,” the company’s FCC filing said. “Its only authorization to provide any form of mobile service is an authorization for secondary SCS [Supplemental Coverage from Space] operations in spectrum licensed to T-Mobile.”

Starlink unlikely to dethrone major carriers

SpaceX’s spectrum purchase doesn’t make it likely that Starlink will become a fourth major carrier. Grand claims of that sort are “complete nonsense,” wrote industry analyst Dean Bubley. “Apart from anything else, there’s one very obvious physical obstacle: walls and roofs,” he wrote. “Space-based wireless, even if it’s at frequencies supported in normal smartphones, won’t work properly indoors. And uplink from devices to satellites will be even worse.”

When you’re indoors, “there’s more attenuation of the signal,” resulting in lower data rates, Farrar said. “You might not even get megabits per second indoors, unless you are going to go onto a home Starlink broadband network,” he said. “You might only be able to get hundreds of kilobits per second in an obstructed area.”

The Mach33 analyst firm is more bullish than others regarding Starlink’s potential cellular capabilities. “With AWS-4/H-block and V3 [satellites], Starlink DTC is no longer niche, it’s a path to genuine MNO competition. Watch for retail mobile bundles, handset support, and urban hardware as the signals of that pivot,” the firm said.

Mach33’s optimism is based in part on the expectation that SpaceX will make more deals. “DTC isn’t just a coverage filler, it’s a springboard. It enables alternative growth routes; M&A, spectrum deals, subleasing capacity in denser markets, or technical solutions like mini-towers that extend Starlink into neighborhoods,” the group’s analysis said.

The amount of spectrum SpaceX is buying from EchoStar is just a fraction of what the national carriers control. There is “about 1.1 GHz of licensed spectrum currently allocated to mobile operators,” wireless lobby group CTIA said in a January 2025 report. The group also says the cellular industry has over 432,000 active cell sites around the US.
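For context, here's how that purchase stacks up against the 1.1 GHz industry figure. The ~50 MHz size for EchoStar's AWS-4 and H-block holdings is an assumption based on commonly cited band sizes, not a number from the filings quoted here:

```python
# Rough comparison of the EchoStar spectrum purchase to the industry total cited by CTIA.
# Assumption (not from the article): AWS-4 plus the H block is commonly described as
# roughly 50 MHz of spectrum; treat that as an illustrative figure only.

echostar_mhz = 50          # assumed size of the purchased holdings
industry_total_mhz = 1100  # "about 1.1 GHz of licensed spectrum" per CTIA

share = echostar_mhz / industry_total_mhz
print(f"~{share:.1%} of licensed mobile spectrum")  # ~4.5%—a fraction, as the article says
```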

What Starlink can offer cellular users “is nothing compared to the capacity of today’s 5G networks,” but it would be useful “in less populated areas or where you cannot get coverage,” Rysavy said.

Starlink has about 8,500 satellites in orbit. Rysavy estimated in a July 2025 report that about 280 of them are over the United States at any given time. These satellites are mostly providing fixed Internet service in which an antenna is placed outside a building so that people can use Wi-Fi indoors.

SpaceX’s FCC filing said the EchoStar spectrum’s mix of terrestrial and satellite frequencies will be ideal for Starlink.

“By acquiring EchoStar’s market-access authorization for 2 GHz MSS as well as its terrestrial AWS-4 licenses, SpaceX will be able to deploy a hybrid satellite and terrestrial network, just as the Commission envisioned EchoStar would do,” SpaceX said. “Consistent with the Commission’s finding that potential interference between MSS and terrestrial mobile service can best be managed by enabling a single licensee to control both networks, assignment of the AWS-4 spectrum is critical to enable SpaceX to deploy robust MSS service in this band.”

Elon Musk tries to make Apple and mobile carriers regret choosing Starlink rivals Read More »

apple-iphone-17-pro-review:-come-for-the-camera,-stay-for-the-battery

Apple iPhone 17 Pro review: Come for the camera, stay for the battery


A weird-looking phone for taking pretty pictures

If your iPhone is your main or only camera, the iPhone 17 Pro is for you.

The iPhone 17 Pro’s excellent camera is the best reason to buy it instead of the regular iPhone 17. Credit: Andrew Cunningham

Apple’s “Pro” iPhones usually look and feel a lot like the regular ones, just with some added features stacked on top. They’ve historically had better screens and more flexible cameras, and there has always been a Max option for people who really wanted to blur the lines between a big phone and a small tablet (Apple’s commitment to the cheaper “iPhone Plus” idea has been less steadfast). But the qualitative experience of holding and using one wasn’t all that different compared to the basic aluminum iPhone.

This year’s iPhone 17 Pro looks and feels like more of a departure from the basic iPhone, thanks to a new design that prioritizes function over form. It’s as though Apple anticipated the main complaints about the iPhone Air—why would I want a phone with worse battery and fewer cameras, why don’t they just make the phone thicker so they can fit in more things—and made a version of the iPhone that they could point to and say, “We already make that phone—it’s that one over there.”

Because the regular iPhone 17 is so good, and because it uses the same 6.3-inch OLED ProMotion screen, I think the iPhone 17 Pro is playing to a narrower audience than usual this year. But Apple’s changes and additions are also tailor-made to serve that audience. In other words, fewer people even need to consider the iPhone Pro this time around, but there’s a lot to like here for actual “pros” and people who demand a lot from their phones.

Design

The iPhone 17 Pro drops the titanium frame of the iPhone 15 Pro and 16 Pro in favor of a return to aluminum. But this isn’t the aluminum-framed glass-sandwich design that the regular iPhone 17 still uses; it’s a reformulated “aluminum unibody” design that also protects a substantial portion of the phone’s back. It’s the most metal we’ve seen on the back of an iPhone since 2016’s iPhone 7.

But remember that part of the reason the 2017 iPhone 8 and iPhone X switched to the glass sandwich design was wireless charging. The aluminum iPhones always featured some kind of cutouts or gaps in the aluminum to allow Wi-Fi, Bluetooth, and cellular signals through. But the addition of wireless charging to the iPhone meant that a substantial portion of the phone’s back now needed to be permeable by wireless signals, and the solution to that problem was simply to embrace it with a full sheet of glass.

The iPhone 17 Pro returns to the cutout approach, and while it might be functional, it leaves me pretty cold aesthetically. Small stripes on the sides of the phone and running all the way around the “camera plateau” provide gaps between the metal parts so that you can’t mess with your cellular reception by holding the phone wrong; on US versions of the phone with support for mmWave 5G, there’s another long oval cutout on the top of the phone to allow those signals to pass through.

But the largest and most obvious cutout is the sheet of glass on the back that Apple needed to add to make wireless charging work. The aluminum, the cell signal cutouts, and this sheet of glass are all different shades of the phone’s base color (it’s least noticeable on the Deep Blue phone and most noticeable on the orange one).

The result is something that looks sort of unfinished and prototype-y. There are definitely people who will like or even prefer this aesthetic, which makes it clearer that this piece of technology is a piece of technology rather than trying to hide it—the enduring popularity of clear plastic electronics is a testament to this. But it does feel like a collection of design decisions that Apple was forced into by physics rather than choices it wanted to make.

That also extends to the camera plateau area, a reimagining of the old iPhone camera bump that extends all the way across the top of the phone. It’s a bit less slick-looking than the one on the iPhone Air because of the multiple lenses. And because the camera bumps are still additional protrusions on top of the plateau, the phone wobbles when it’s resting flat on a table instead of resting on the plateau in a way that stabilizes the phone.

Finally, there’s the weight of the phone, which isn’t breaking records but is a step back from a substantial weight reduction that Apple was using as a first-sentence-of-the-press-release selling point just two years ago. The iPhone 17 Pro weighs the same amount as the iPhone 14 Pro, and it has a noticeable heft to it that the iPhone Air (say) does not have. You’ll definitely notice if (like me) your current phone is an iPhone 15 Pro.

Apple sent me one of its $59 “TechWoven” cases with the iPhone 17 Pro, and it solved a lot of what I didn’t like about the design—the inconsistent materials and colors everywhere, and the bump-on-a-bump camera. There’s still a bump on the top, but at least the aperture of a case evens it out so that your phone isn’t tilted by the plateau and wobbling because of the bump.

I liked Apple’s TechWoven case for the iPhone Pro, partly because it papered over some of the things I don’t love about the design. Credit: Andrew Cunningham

The original FineWoven cases were (rightly) panned for how quickly and easily they scratched, but the TechWoven case might be my favorite Apple-designed phone case of the ones I’ve used. It doesn’t have the weird soft lint-magnet feel of some of the silicone cases, FineWoven’s worst problems seem solved, and the texture on the sides of the case provides a reassuring grippiness. My main issue is that the opening for the USB-C port on the bottom is relatively narrow. Apple’s cables will fit fine, but I had a few older or thicker USB-C connectors that didn’t.

This isn’t a case review, but I bring it up mainly to say that I stand by my initial assessment of the Pro’s function-over-form design: I am happy I put it in a case, and I think you will be, too, whichever case you choose (when buying for myself or family members, I have defaulted to Smartish cases for years, but your mileage may vary).

On “Scratchgate”

Early reports from Apple’s retail stores indicated that the iPhone 17 Pro’s design was more susceptible to scratches than past iPhones and that some seemed to be showing marks from as simple and routine an activity as connecting and disconnecting a MagSafe charging pad.

Apple says the marks left by its in-store MagSafe chargers weren’t permanent scratches and could be cleaned off. But independent testing from the likes of iFixit has found that the anodization process Apple uses to add color to the iPhone’s aluminum frame is more susceptible to scratching and flaking on non-flat surfaces like the edges of the camera bump.

Like “antennagate” and “bendgate” before it, many factors will determine whether “scratchgate” is actually something you’ll notice. Independent testing shows there is something to the complaints, but it doesn’t show how often this kind of damage will appear in actual day-to-day use over the course of months or years. Do keep it in mind when deciding which iPhone and accessories you want—it’s just one more reason to keep the iPhone 17 Pro in a case, if you ask me—but I wouldn’t say it should keep you from buying this phone if you like everything else about it.

Camera

I have front-loaded my complaints about the iPhone 17 Pro to get them out of the way, but the fun thing about an iPhone in which function follows form is that you get a lot of function.

When I made the jump from the regular iPhone to the Pro (I went from an 11 to a 13 Pro and then to a 15 Pro), I did it mainly for the telephoto lens in the camera. For both kid photos and casual product photography, it was game-changing to be able to access the functional equivalent of optical zoom on my phone.

The iPhone 17 Pro’s telephoto lens in 4x mode. Andrew Cunningham

The iPhone 16 Pro changed the telephoto lens’ zoom level from 3x to 5x, which was useful if you want maximum zoom but which did leave a gap between it and the Fusion Camera-enabled 2x mode. The 17 Pro switches to a 4x zoom by default, closing that gap, and it further maximizes the zooming capabilities by switching to a 48 MP sensor.

Like the main and ultrawide cameras, which had already switched to 48 MP sensors in previous models, the telephoto camera saves 24 MP images when shooting in 4x mode. But it can also crop a 12 MP image out of the center of that sensor to provide a native-resolution 12 MP image at an 8x zoom level, albeit without the image quality improvements from the “pixel binning” process that 4x images get.
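The arithmetic behind that native-resolution 8x mode is simple sensor cropping—here's a minimal sketch of the general idea (it ignores pixel binning and any of Apple's image processing):

```python
# Cropping the center of a sensor raises effective zoom without interpolation:
# keeping the middle quarter of the pixels (half the width, half the height)
# doubles the effective focal length.

full_sensor_mp = 48
base_zoom = 4            # the telephoto lens's optical zoom factor

crop_linear = 0.5        # keep half the width and half the height
cropped_mp = full_sensor_mp * crop_linear**2      # 48 MP -> 12 MP
effective_zoom = base_zoom / crop_linear          # 4x -> 8x

print(f"{cropped_mp:.0f} MP at {effective_zoom:.0f}x")  # 12 MP at 8x, matching the camera app's mode
```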

You can debate how accurate it is to market this as “optical-quality zoom” as Apple does, but it’s hard to argue with the results. The level of detail you can capture from a distance in 8x mode is consistently impressive, and Apple’s hardware and software image stabilization help keep these details reasonably free of the shake and blur you might see if you were shooting at this zoom level with an actual hardware lens.

It’s my favorite feature of the iPhone 17 Pro, and it’s the thing about the phone that comes closest to being worth the $300 premium over the regular iPhone 17.

The iPhone 17 Pro, main lens, 1x mode. Andrew Cunningham

Apple continues to gate several other camera-related features to the Pro iPhones. All phones can shoot RAW photos in third-party camera apps that support it, but only the Pro iPhones can shoot Apple’s ProRAW format in the first-party camera app (ProRAW performs Apple’s typical image processing for RAW images but retains all the extra information needed for more flexible post-processing).

I don’t spend as much time shooting video on my phone as I do photos, but for the content creator and influencer set (and the “we used phones and also professional lighting and sound equipment to shoot this movie” set) Apple still reserves several video features for the Pro iPhones. That list includes 120 fps 4K Dolby Vision video recording and a four-mic array (both also supported by the iPhone 16 Pro), plus ProRes RAW recording and Genlock support for synchronizing video from multiple sources (both new to the 17 Pro).

The iPhone Pro also remains the only iPhone to support 10 Gbps USB transfer speeds over the USB-C port, making it faster to transfer large video files from the phone to an external drive or a PC or Mac for additional processing and editing. It’s likely that Apple built this capability into the A19 Pro’s USB controller, but both the iPhone Air and the regular iPhone 17 are restricted to the same old 25-year-old 480 Mbps USB 2.0 data transfer speeds.
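To put those interface speeds in practical terms, here's a rough transfer-time comparison. The 50 GB file size and the ~70 percent real-world efficiency are illustrative assumptions, not measurements:

```python
# Rough time to offload a big video file over USB 3 (10 Gbps) vs. USB 2.0 (480 Mbps).
# Assumptions: a 50 GB file, and real-world throughput at ~70% of the link rate.

file_gb = 50
efficiency = 0.7  # assumed protocol and storage overhead

def transfer_minutes(link_gbps: float) -> float:
    throughput_gbps = link_gbps * efficiency
    seconds = (file_gb * 8) / throughput_gbps
    return seconds / 60

print(f"USB 3 (10 Gbps):     ~{transfer_minutes(10):.0f} min")    # about a minute
print(f"USB 2.0 (0.48 Gbps): ~{transfer_minutes(0.48):.0f} min")  # roughly 20 minutes
```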

The iPhone 17 Pro gets the same front camera treatment as the iPhone 17 and the Air: a new square “Center Stage” sensor that crops a 24 MP square image into an 18 MP image, allowing users to capture approximately the same aspect ratios and fields-of-view with the front camera regardless of whether they’re holding the phone in portrait or landscape mode. It’s definitely an image-quality improvement, but it’s the same as what you get with the other new iPhones.

Specs, speeds, and battery

You still need to buy a Pro phone to get a USB-C port with 10 Gbps USB 3 transfer speeds instead of 480 Mbps USB 2.0 speeds. Credit: Andrew Cunningham

The iPhone 17 Pro uses, by a slim margin, the fastest and most capable version of the A19 Pro chip, partly because it has all of the A19 Pro’s features fully enabled and partly because its thermal management is better than the iPhone Air’s.

The A19 Pro in the iPhone 17 Pro uses two high-performance CPU cores and four smaller high-efficiency CPU cores, plus a fully enabled six-core GPU. Like the iPhone Air, the iPhone 17 Pro also includes 12GB of RAM, up from 8GB in the iPhone 16 Pro and the regular iPhone 17. Apple has added a vapor chamber to the iPhone 17 Pro to help keep it cool rather than relying on metal alone to conduct heat away from the chips—a tiny amount of water inside a sealed, copper-lined pocket continually evaporates and condenses, spreading the heat evenly over a large area so it can be dissipated more quickly than conduction alone would allow.

All phones were tested with Adaptive Power turned off.

We saw in our iPhone 17 review how that phone’s superior thermals helped it outrun the iPhone Air’s version of the A19 Pro in many of our graphics tests; the iPhone Pro’s A19 Pro beats both by a decent margin, thanks to both thermals and the extra hardware.

The performance line graph that 3DMark generates when you run its benchmarks actually gives us a pretty clear look at the difference between how the iPhones act. The graphs for the iPhone 15 Pro, the iPhone 17, and the iPhone 17 Pro all look pretty similar, suggesting that they’re cooled well enough to let the benchmark run for a couple of minutes without significant throttling. The iPhone Air follows a similar performance curve for the first half of the test or so but then drops noticeably lower for the second half—the ups and downs of the line actually look pretty similar to the other phones, but the performance is just a bit lower because the A19 Pro in the iPhone Air is already slowing down to keep itself cool.

The CPU performance of the iPhone 17 Pro is also marginally better than this year’s other phones, but not by enough that it will be user-noticeable.

As for battery, Apple’s own product pages say it lasts for about 10 percent longer than the regular iPhone 17 and between 22 and 36 percent longer than the iPhone Air, depending on what you’re doing.

I found the iPhone Air’s battery life to be tolerable with a little bit of babying and well-timed use of the Low Power Mode feature, and the iPhone 17’s battery was good enough that I didn’t worry about making it through an 18-hour day. But the iPhone 17 Pro’s battery really is a noticeable step up.

One day, I forgot to plug it in overnight and awoke to a phone that still had a 30 percent charge, enough that I could make it through the morning school drop-off routine and plug it in when I got back home. Not only did I not have to think about the iPhone 17 Pro’s battery, but it’s good enough that even a battery with 85-ish percent capacity (where most of my iPhone batteries end up after two years of regular use) should still feel pretty comfortable. After the telephoto camera lens, it’s definitely the second-best thing about the iPhone 17 Pro, and the Pro Max should last for even longer.

Pros only

Apple’s iPhone 17 Pro. Credit: Andrew Cunningham

I’m taken with a lot of things about the iPhone 17 Pro, but the conclusion of our iPhone 17 review still holds: If you’re not tempted by the lightness of the iPhone Air, then the iPhone 17 is the one most people should get.

Even more than most Pro iPhones, the iPhone 17 Pro and Pro Max will make the most sense for people who actually use their phones professionally, whether that’s for product or event photography, content creation, or some other camera-centric field where extra flexibility and added shooting modes can make a real difference. The same goes for people who want a bigger screen, since there’s no iPhone 17 Plus.

Sure, the 17 Pro also performs a little better than the regular 17, and the battery lasts longer. But the screen was always the most immediately noticeable upgrade for regular people, and the exact same display panel is now available in a phone that costs $300 less.

The benefit of the iPhone Pro becoming a bit more niche is that it’s easier to describe who each of these iPhones is for. The Air is the most pleasant to hold and use, and it’s the one you’ll probably buy if you want people to ask you, “Oh, is that one of the new iPhones?” The Pro is for people whose phones are their most important camera (or for people who want the biggest phone they can get). And the iPhone 17 is for people who just want a good phone but don’t want to think about it all that much.

The good

  • Excellent performance and great battery life
  • It has the most flexible camera in any iPhone, and the telephoto lens in particular is a noticeable step up from a 2-year-old iPhone 15 Pro
  • 12GB of RAM provides extra future-proofing compared to the standard iPhone
  • Not counting the old iPhone 16, it’s Apple’s only iPhone to be available in two screen sizes
  • Extra photography and video features for people who use those features in their everyday lives or even professionally

The bad

  • Clunky, unfinished-looking design
  • More limited color options compared to the regular iPhone
  • Expensive
  • Landscape layouts for apps only work on the Max model

The ugly

  • Increased weight compared to previous models, which actually used their lighter weight as a selling point

Photo of Andrew Cunningham

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

Apple iPhone 17 Pro review: Come for the camera, stay for the battery Read More »

how-america-fell-behind-china-in-the-lunar-space-race—and-how-it-can-catch-back-up

How America fell behind China in the lunar space race—and how it can catch back up


Thanks to some recent reporting, we’ve found a potential solution to the Artemis blues.

NASA Administrator Jim Bridenstine says that competition is good for the Artemis Moon program. Credit: NASA

For the last month, NASA’s interim administrator, Sean Duffy, has been giving interviews and speeches around the world, offering a singular message: “We are going to beat the Chinese to the Moon.”

This is certainly what the president who appointed Duffy to the NASA post wants to hear. Unfortunately, there is a very good chance that Duffy’s sentiment is false. Privately, many people within the space industry, and even at NASA, acknowledge that the US space agency appears to be holding a losing hand. Recently, some influential voices, such as former NASA Administrator Jim Bridenstine, have spoken out.

“Unless something changes, it is highly unlikely the United States will beat China’s projected timeline to the Moon’s surface,” Bridenstine said in early September.

As the debate about NASA potentially losing the “second” space race to China heats up in Washington, DC, everyone is pointing fingers. But no one is really offering answers for how to beat China’s ambitions to land taikonauts on the Moon as early as the year 2029. So I will. The purpose of this article is to articulate how NASA ended up falling behind China, and more importantly, how the Western world could realistically retake the lead.

But first, space policymakers must learn from their mistakes.

Begin at the beginning

Thousands of words could be written about the space policy created in the United States over the last two decades and all of the missteps. However, this article will only hit the highlights (lowlights). And the story begins in 2003, when two watershed events occurred.

The first of these was the loss of space shuttle Columbia in February—the second fatal shuttle accident—which signaled that the shuttle era was nearing its end and kicked off a period of soul-searching at NASA and in Washington, DC, about what the space agency should do next.

“There’s a crucial year after the Columbia accident,” said eminent NASA historian John Logsdon. “President George W. Bush said we should go back to the Moon. And the result of the assessment after Columbia is NASA should get back to doing great things.” For NASA, this meant creating a new deep space exploration program for astronauts, be it the Moon, Mars, or both.

The other key milestone in 2003 came in October, when Yang Liwei flew into space and China became the third country capable of human spaceflight. After his 21-hour spaceflight, Chinese leaders began to more deeply appreciate the soft power that came with spaceflight and started to commit more resources to related programs. Long-term, the Asian nation sought to catch up to the United States in terms of spaceflight capabilities and eventually surpass the superpower.

It was not much of a competition then. China would not take its first tentative steps into deep space for another four years, with the Chang’e 1 lunar orbiter. NASA had already walked on the Moon and sent spacecraft across the Solar System and even beyond.

So how did the United States squander such a massive lead?

Mistakes were made

SpaceX and its complex Starship lander are getting the lion’s share of the blame today for delays to NASA’s Artemis Program. But the company and its lunar lander version of Starship are just the final steps on a long, winding path that got the United States where it is today.

After Columbia, the Bush White House, with its NASA Administrator Mike Griffin, looked at a variety of options (see, for example, the Exploration Systems Architecture Study in 2005). But Griffin had a clear plan in his mind that he dubbed “Apollo on Steroids,” and he sought to develop a large rocket (Ares V), spacecraft (later to be named Orion), and a lunar lander to accomplish a lunar landing by 2020. Collectively, this became known as the Constellation Program.

It was a mess. Congress did not provide NASA the funding it needed, and the rocket and spacecraft programs quickly ran behind schedule. At one point, to pay for surging Constellation costs, NASA absurdly mulled canceling the just-completed International Space Station. By the end of the first decade of the 2000s, two things were clear: NASA was going nowhere fast, and the program’s only achievement was to enrich the legacy space contractors.

By early 2010, after spending a year assessing the state of play, the Obama administration sought to cancel Constellation. It ran into serious congressional pushback, powered by lobbying from Boeing, Lockheed Martin, Northrop Grumman, and other key legacy contractors.

The Space Launch System was created as part of a political compromise between Sen. Bill Nelson (D-Fla.) and senators from Alabama and Texas. Credit: Chip Somodevilla/Getty Images

The Obama White House wanted to cancel both the rocket and the spacecraft and hold a competition for the private sector to develop a heavy lift vehicle. Their thinking: Only with lower-cost access to space could the nation afford to have a sustainable deep space exploration plan. In retrospect, it was the smart idea, but Congress was not having it. In 2011, Congress saved Orion and ordered a slightly modified rocket—it would still be based on space shuttle architecture to protect key contractors—that became the Space Launch System.

Then the Obama administration, with its NASA leader Charles Bolden, cast about for something to do with this hardware. They started talking about a “Journey to Mars.” But it was all nonsense. There was never any there there. Essentially, NASA lost a decade, spending billions of dollars a year developing “exploration” systems for humans and talking about fanciful missions to the red planet.

There were critics of this approach, myself included. In 2014, I authored a seven-part series at the Houston Chronicle called Adrift, the title referring to the direction of NASA’s deep space ambitions. The fundamental problem is that NASA, at the direction of Congress, was spending all of its exploration funds developing Orion, the SLS rocket, and ground systems for some future mission. This made the big contractors happy, but their cost-plus contracts gobbled up so much funding that NASA had no money to spend on payloads or things to actually fly on this hardware.

This is why doubters called the SLS the “rocket to nowhere.” They were, sadly, correct.

The Moon, finally

Fairly early on in the first Trump administration, the new leader of NASA, Jim Bridenstine, managed to ditch the Journey to Mars and establish a lunar program. However, any efforts to consider alternatives to the SLS rocket were quickly rebuffed by the US Senate.

During his tenure, Bridenstine established the Artemis Program to return humans to the Moon. But Congress was slow to open its purse for elements of the program that would not clearly benefit a traditional contractor or NASA field center. Consequently, the space agency did not select a lunar lander until April 2021, after Bridenstine had left office. And NASA did not begin funding work on this until late 2021 due to a protest by Blue Origin. The space agency did not support a lunar spacesuit program for another year.

Much has been made about the selection of SpaceX as the sole provider of a lunar lander. Was it shady? Was the decision rushed before Bill Nelson was confirmed as NASA administrator? In truth, SpaceX was the only company that bid a value that NASA could afford with its paltry budget for a lunar lander (again, Congress prioritized SLS funding), and which had the capability the agency required.

To be clear, for a decade, NASA spent in excess of $3 billion a year on the development of the SLS rocket and its ground systems. That’s every year for a rocket that used main engines from the space shuttle, a similar version of its solid rocket boosters, and had a core stage the same diameter as the shuttle’s external tank. Thirty billion bucks for a rocket highly derivative of a vehicle NASA flew for three decades. SpaceX was awarded less than a single year of this funding, $2.9 billion, for the entire development of a Human Landing System version of Starship, plus two missions.

So yes, after 20 years, Orion appears to be ready to carry NASA astronauts out to the Moon. After 15 years, the shuttle-derived rocket appears to work. And after four years (and less than a tenth of the funding), Starship is not ready to land humans on the Moon.

When will Starship be ready?

Probably not any time soon.

For SpaceX and its founder, Elon Musk, the Artemis Program is a sidequest to the company’s real mission of sending humans to Mars. It simply is not a priority (and frankly, the limited funding from NASA does not compel prioritization). Due to its incredible ambition, the Starship program has also understandably hit some technical snags.

Unfortunately for NASA and the country, Starship still has a long way to go to land humans on the Moon. It must begin flying frequently (this could happen next year, finally). It must demonstrate the capability to transfer and store large amounts of cryogenic propellant in space. It must land on the Moon, a real challenge for such a tall vehicle, necessitating a flat surface that is difficult to find near the poles. And then it must demonstrate the ability to launch from the Moon, which would be unprecedented for cryogenic propellants.

Perhaps the biggest hurdle is the complexity of the mission. To fully fuel a Starship in low-Earth orbit to land on the Moon and take off would require multiple Starship “tanker” launches from Earth. No one can quite say how many because SpaceX is still working to increase the payload capacity of Starship, and no one has real-world data on transfer efficiency and propellant boiloff. But the number is probably at least a dozen missions. One senior source recently suggested to Ars that it may be as many as 20 to 40 launches.
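A crude way to see why those estimates vary so much is to plug different assumptions into a simple propellant ledger. Every figure below is an illustrative assumption—as noted above, nobody has real-world numbers for payload capacity, transfer efficiency, or boiloff yet:

```python
# Toy model of the tanker-launch count for a lunar Starship mission.
# All inputs are assumptions for illustration, not SpaceX data.

def tanker_launches(propellant_needed_t, payload_per_tanker_t, transfer_efficiency, boiloff_fraction):
    delivered_per_launch = payload_per_tanker_t * transfer_efficiency * (1 - boiloff_fraction)
    return propellant_needed_t / delivered_per_launch

# Optimistic case: large tanker payloads, little loss in transfer and storage
print(round(tanker_launches(1200, 150, 0.95, 0.05)))   # ~9 launches

# Pessimistic case: smaller payloads, more loss to transfer inefficiency and boiloff
print(round(tanker_launches(1200, 100, 0.80, 0.20)))   # ~19 launches
```

Small changes in the assumed payload and losses swing the answer from under a dozen launches to nearly 20—which is exactly why the estimates quoted above span such a wide range.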

The bottom line: It’s a lot. SpaceX is far and away the highest-performing space company in the Solar System. But putting all of the pieces together for a lunar landing will require time. Privately, SpaceX officials are telling NASA it can meet a 2028 timeline for Starship readiness for Artemis astronauts.

But that seems very optimistic. Very. It’s not something I would feel comfortable betting on, especially if China plans to land on the Moon “before” 2030, and the country continues to make credible progress toward this date.

What are the alternatives?

Duffy’s continued public insistence that he will not let China beat the United States back to the Moon rings hollow. The shrewd people in the industry I’ve spoken with say Duffy is an intelligent person and is starting to realize that betting the entire farm on SpaceX at this point would be a mistake. It would be nice to have a plan B.

But please, stop gaslighting us. Stop blustering about how we’re going to beat China while losing a quarter of NASA’s workforce and watching your key contractors struggle with growing pains. Let’s have an honest discussion about the challenges and how we’ll solve them.

What few people have done is offer solutions to Duffy’s conundrum. Fortunately, we’re here to help. As I have conducted interviews in recent weeks, I have always closed by asking this question: “You’re named NASA administrator tomorrow. You have one job: get NASA astronauts safely back to the Moon before China. What do you do?”

I’ve received a number of responses, which I’ll boil down into the following buckets. None of these strike me as particularly practical solutions, which underscores the desperation of NASA’s predicament. However, recent reporting has uncovered one solution that probably would work. I’ll address that last. First, the other ideas:

  • Stubby Starship: Multiple people have suggested this option. Tim Dodd has even spoken about it publicly. Two of the biggest issues with Starship are the need for many refuelings and its height, making it difficult to land on uneven terrain. NASA does not need Starship’s incredible capability to land 100–200 metric tons on the lunar surface. It needs fewer than 10 tons for initial human missions. So shorten Starship, reduce its capability, and get it down to a handful of refuelings. It’s not clear how feasible this would be beyond armchair engineering. But the larger problem is that Musk wants Starship to get taller, not shorter, so SpaceX would probably not be willing to do this.
  • Surge CLPS funding: Since 2019, NASA has been awarding relatively small amounts of funding to private companies to land a few hundred kilograms of cargo on the Moon. NASA could dramatically increase funding to this program, say up to $10 billion, and offer prizes for the first and second companies to land two humans on the Moon. This would open the competition to other companies beyond SpaceX and Blue Origin, such as Firefly, Intuitive Machines, and Astrobotic. The problem is that time is running short, and scaling up from 100 kilograms to 10 metric tons is an extraordinary challenge.
  • Build the Lunar Module: NASA already landed humans on the Moon in the 1960s with a Lunar Module built by Grumman. Why not just build something similar again? In fact, some traditional contractors have been telling NASA and Trump officials this is the best option, that such a solution, with enough funding and cost-plus guarantees, could be built in two or three years. The problem with this is that, sorry, the traditional space industry just isn’t up to the task. It took more than a decade to build a relatively simple rocket based on the space shuttle. The idea that a traditional contractor will complete a Lunar Module in five years or less is not supported by any evidence in the last 20 years. The flimsy Lunar Module would also likely not pass NASA’s present-day safety standards.
  • Distract China: I include this only for completeness. As for how to distract China, use your imagination. But I would submit that ULA snipers or starting a war in the South China Sea is not the best way to go about winning the space race.

OK, I read this far. What’s the answer?

The answer is Blue Origin’s Mark 1 lander.

The company has finished assembly of the first Mark 1 lander and will soon ship it from Florida to Johnson Space Center in Houston for vacuum chamber testing. A pathfinder mission is scheduled to launch in early 2026. It will be the largest vehicle to ever land on the Moon. It is not rated for humans, however. It was designed as a cargo lander.

There have been some key recent developments, though. About two weeks ago, NASA announced that a second mission of Mark 1 will carry the VIPER rover to the Moon’s surface in 2027. This means that Blue Origin intends to start a production line of Mark 1 landers.

At the same time, Blue Origin already has a contract with NASA to develop the much larger Mark 2 lander, which is intended to carry humans to the lunar surface. Realistically, though, this will not be ready until sometime in the 2030s. Like SpaceX’s Starship, it will require multiple refueling launches. As part of this contract, Blue has worked extensively with NASA on a crew cabin for the Mark 2 lander.

A full-size mock-up of the Blue Origin Mk. 1 lunar lander. Credit: Eric Berger

Here comes the important part. Ars can now report, based on government sources, that Blue Origin has begun preliminary work on a modified version of the Mark 1 lander—leveraging learnings from Mark 2 crew development—that could be part of an architecture to land humans on the Moon this decade. NASA has not formally requested Blue Origin to work on this technology, but according to a space agency official, the company recognizes the urgency of the need.

How would it work? Blue Origin is still architecting the mission, but it would involve “multiple” Mark 1 landers to carry crew down to the lunar surface and then ascend back up to lunar orbit to rendezvous with the Orion spacecraft. Enough work has been done, according to the official, that Blue Origin engineers are confident the approach could work. Critically, it would not require any refueling.

It is unclear whether this solution has reached Duffy, but he would be smart to listen. According to sources, Blue Origin founder Jeff Bezos is intrigued by the idea. And why wouldn’t he be? For a quarter of a century, he has been hearing about how Musk has been kicking his ass in spaceflight. Bezos also loves the Apollo program and could now play an essential role in serving his country in an hour of need. He could beat SpaceX to the Moon and stamp his name in the history of spaceflight.

Jeff and Sean? Y’all need to talk.

Photo of Eric Berger

Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.

How America fell behind China in the lunar space race—and how it can catch back up Read More »

zr1,-gtd,-and-america’s-new-nurburgring-war

ZR1, GTD, and America’s new Nürburgring war


Drive quickly and make a lot of horsepower.

Ford and Chevy set near-identical lap times with very different cars; we drove both.

Credit: Tim Stevens | Aurich Lawson

There’s a racetrack with a funny name in Germany that, in the eyes of many international enthusiasts, is the de facto benchmark for automotive performance. But the Nürburgring, a 13-mile (20 km) track often called the Green Hell, rarely hits the radar of mainstream US performance aficionados. That’s because American car companies rarely take the time to run cars there, and if they do, it’s in secrecy, to test pre-production machines cloaked in camouflage without publishing official times.

The track’s domestic profile has lately been on the rise, though. Late last year, Ford became the first American manufacturer to run a sub-7-minute lap: 6:57.685 from its ultra-high-performance Mustang GTD. It then did better, announcing a 6:52.072 lap time in May. Two months later, Chevrolet set a 6:49.275 lap time with the hybrid Corvette ZR1X, becoming the new fastest American car around that track.
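For a sense of just how fast those laps are, here's the average speed implied by the ZR1X's time over the Nordschleife's roughly 20.8 km lap-record configuration (the article rounds the track to 20 km; this is a back-of-envelope figure, not an official stat):

```python
# Average speed for the Corvette ZR1X's 6:49.275 Nürburgring lap.
track_km = 20.832            # Nordschleife lap-record configuration (the article rounds to 20 km)
lap_seconds = 6 * 60 + 49.275

avg_kmh = track_km / (lap_seconds / 3600)
print(f"~{avg_kmh:.0f} km/h (~{avg_kmh * 0.621371:.0f} mph) average")  # ~183 km/h, ~114 mph
```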

It’s a vehicular war of escalation, but it’s about much more than bragging rights.

The Green Hell as a must-visit for manufacturers

The Nürburgring is a delightfully twisted stretch of purpose-built asphalt and concrete strewn across the hills of western Germany. It dates back to the 1920s and hosted the German Grand Prix for a half-century before the track was finally deemed too unsafe in the late 1970s.

It’s still a motorsports mecca, with sports car racing events like the 24 Hours of the Nürburgring drawing hundreds of thousands of spectators, but today, it’s better known as the ultimate automotive performance proving ground.

It offers an unmatched variety of high-speed corners, elevation changes, and differing surfaces that challenge the best engineers in the world. “If you can develop a car that goes fast on the Nürburgring, it’s going to be fast everywhere in the whole world,” said Brian Wallace, the Corvette ZR1’s vehicle dynamics engineer and the driver who set that car’s fast lap of 6:50.763.

“When you’re going after Nürburgring lap time, everything in the car has to be ten tenths,” said Greg Goodall, Ford’s chief program engineer for the Mustang GTD. “You can’t just use something that is OK or decent.”

Thankfully, neither of these cars is merely decent.

Mustang, deconstructed

You know the scene in Robocop where a schematic displays how little of Alex Murphy’s body remains inside that armor? Just enough of Peter Weller’s iconic jawline remains to identify the man, but the focus is clearly on the machine.

That’s a bit like how Multimatic creates the GTD, which retains just enough Mustang shape to look familiar, but little else.

Multimatic, which builds the wild Ford GT and also helms many of Ford’s motorsports efforts, starts with partially assembled Mustangs pulled from the assembly line, minus fenders, hood, and roof. Then the company guts what’s left in the middle.

Ford’s partner Multimatic cut as much of the existing road car chassis as it could for the GTD. Tim Stevens

“They cut out the second row seat area where our suspension is,” Ford’s Goodall said. “They cut out the rear floor in the trunk area because we put a flat plate on there to mount the transaxle to it. And then they cut the rear body side off and replace that with a wide-body carbon-fiber bit.”

A transaxle is simply a transmission and differential combined into one unit—in this case, an eight-speed dual-clutch gearbox mounted on the rear axle to help balance the car’s weight.

The GTD needs as much help as it can get to offset the heft of the 5.2-liter supercharged V8 up front. It gets a full set of carbon-fiber bodywork, too, but the resulting package still weighs over 4,300 lbs (1,950 kg).

With 815 hp (608 kW) and 664 lb-ft (900 Nm) of torque, it’s the most powerful road-going Mustang of all time, and it received other upgrades to match, including carbon-ceramic brake discs at the corners and the wing to end all wings slung off the back. It’s not only big; it’s smart, featuring a Formula One-style drag-reduction system.

At higher speeds, the wing’s element flips up, enabling a 202 mph (325 km/h) top speed. No surprise, that makes this the fastest factory Mustang ever. At a $325,000 starting price, it had better be, but when it comes to the maximum-velocity stakes, the Chevrolet is in another league.

More Corvette

You lose the frunk but gain cooling and downforce. Credit: Tim Stevens

On paper, when it comes to outright speed and value, the Chevrolet Corvette ZR1 seems to offer far more bang for what is still a significant number of bucks. To be specific, the ZR1 starts at about $175,000, which gets you a 1,064 hp (793 kW) car that will do 233 mph (375 km/h) if you point it down a road long enough.

Where the GTD is a thorough reimagining of what a Mustang can be, the ZR1 sticks closer to the Corvette script, offering more power, more aerodynamics, and more braking without any dramatic internal reconfiguration. That’s because it was all part of the car’s original mission plan, GM’s Brian Wallace told me.

“We knew we were going to build this car,” he said, “knowing it had the backbone to double the horsepower, put 20 percent more grip in the car, and oodles of aero.”

At the center of it all is a 5.5-liter twin-turbocharged V8. You can get a big wing here, too, but it isn’t active like the GTD’s.

Chevrolet engineers bolstered the internal structure at the back of the car to handle the extra downforce at the rear. Up front, the frunk is replaced by a duct through the hood, providing yet more grip to balance things. Big wheels, sticky tires, and carbon-ceramic brakes round out a package that looks a little less radical on the outside than the Mustang and substantially less retooled on the inside, but clearly no less capable.

A pair of turbochargers lurk behind that rear window. Credit: Tim Stevens

And if that’s not enough, Chevrolet has the 1,250 hp (932 kW), $208,000 ZR1X on offer, which adds the Corvette E-Ray’s hybrid system into the mix. That package does add more weight, but the result is still a roughly 4,000-lb (1,814 kg) car, hundreds of pounds less than the Ford.

’Ring battles

Ford and Chevy’s battle at the ’ring blew up this summer, but both brands have tested there for years. Chevrolet has even set official lap times in the past, including the previous-generation Corvette Z06’s 7:22.68 in 2012. Despite that, a fast lap time was not in the initial plan for the new ZR1 and ZR1X. Drew Cattell, ZR1X vehicle dynamics engineer and the driver of that 6:49.275 lap, told me it “wasn’t an overriding priority” for the new Corvette.

But after developing the cars there so extensively, they decided to give it a go. “Seeing what the cars could do, it felt like the right time. That we had something we were proud of and we could really deliver with,” he said.

Ford, meanwhile, had never set an official lap time at the ’ring, but it was part of the GTD’s raison d’être: “That was always a goal: to go under seven minutes. And some of it was to be the first American car ever to do it,” Ford’s Goodall said.

That required extracting every bit of performance, necessitating a last-minute change during final testing. In May of 2024, after the car’s design had been finalized by everyone up the chain of command at Ford, the test team in Germany determined the GTD needed a little more front grip.

To fix it, Steve Thompson, a dynamic technical specialist at Ford, designed a prototype aerodynamic extension to the vents in the hood. “It was 3D-printed, duct taped,” Goodall said. That design was refined and wound up on the production car, boosting frontal downforce on the GTD without adding drag.

Chevrolet’s development process relied not only on engineers in Germany but also on work in the US. “The team back home will keep on poring over the data while we go to sleep, because of the time difference,” Cattell said, “and then they’ll have something in our inbox the next morning to try out.”

When it was time for the Corvette’s record-setting runs, there wasn’t much left to change, just a few minor setup tweaks. “Maybe a millimeter or two,” Wallace said, “all within factory alignment settings.”

A few months later, it was my turn.

Behind the wheel

No, I wasn’t able to run either of these cars at the Nürburgring, but I was lucky enough to spend one day with both the GTD and the ZR1. First was the Corvette at one of America’s greatest racing venues: the Circuit of the Americas, a 3.5-mile track and host of the Formula One United States Grand Prix since 2012.

How does 180 mph on the back straight at the Circuit of the Americas sound? Credit: Tim Stevens

I’ve been lucky to spend a lot of time in various Corvettes over the years, but none with performance like this. I was expecting a borderline terrifying experience, but I couldn’t have been more wrong. Despite its outrageous speed and acceleration, the ZR1 really is still a Corvette.

On just my second lap behind the wheel of the ZR1, I was doing 180 mph down the back straight and running a lap time close to the record set by a $1 million McLaren Senna a few years before. The Corvette is outrageously fast—and frankly exhausting to drive thanks to the monumental G forces—but it’s more encouraging than intimidating.

The GTD was more of a commitment. I sampled one at The Thermal Club near Palm Springs, California, a less prestigious but more technical track with tighter turns and closer walls between them. That always amps up the pressure a bit, and the challenging layout really forced me to focus on extracting the most from the Mustang at both low and high speeds.

The GTD has a few tricks up its sleeve to help with that, including an advanced multi-height suspension that drops it by about 1.5 inches (4 cm) at the touch of a button, optimizing the aerodynamic performance and lowering the roll height of the car.

Heavier and less powerful than the Corvette, the Mustang GTD has astonishing levels of cornering grip. Credit: Tim Stevens

While road-going Mustangs typically focus on big power in a straight line, the GTD’s real skill is astonishing grip and handling. Remember, the GTD is only a few seconds slower on the ’ring than the ZR1, despite weighing somewhere around 400 pounds (181 kg) more and having roughly 250 fewer hp (186 kW).

The biggest difference in feel between the two, though, is how they accelerate. The ZR1’s twin-turbocharged V8 delivers big power when you dip in the throttle and then just keeps piling on more and more as the revs increase. The supercharged V8 in the Mustang, on the other hand, is more like an instantaneous kick in the posterior. It’s ferocious.

Healthy competition

The ZR1 is brutally fast, yes, but it’s still remarkably composed, and it feels every bit as usable and refined as any of the other flavors of modern Corvette. The GTD, on the other hand, is a completely different breed than the base Mustang, every bit the purpose-built racer you’d expect from a race shop like Multimatic.

Chevrolet did the ZR1 and ZR1X development in-house, which Cattell said is a huge point of pride for the team. So, too, is the fact that those lap times were set by General Motors’ own development engineers; Ford, by contrast, turned to a pro race driver for its laps.

Ford factory racing driver Dirk Müller was responsible for setting the GTD’s time at the ’ring. Credit: Giles Jenkyn Photography LTD/Ford

GM vehicle dynamics engineer Drew Cattell set the ZR1X’s Nordschleife time. Credit: Chevrolet

That, though, was as close to a barb as I could get out of any engineer on either side of this new Nürburgring rivalry. Both teams were extremely complimentary of each other.

“We’re pretty proud of that record. And I don’t say this in a snarky way, but we were first, and you can’t ever take away first,” Ford’s Goodall said. “Congratulations to them. We know better than anybody how hard of an accomplishment or how big of an accomplishment it is and how much effort goes into it.”

But he quickly added that Ford isn’t done. “You’re not a racer if you’re just going to take that lying down. So it took us approximately 30 seconds to align that we were ready to go back and do something about it,” he said.

In other words, this Nürburgring war is just beginning.
