Features


Curated realities: An AI film festival and the future of human expression


We saw 10 AI films and interviewed Runway’s CEO as well as Hollywood pros.

An AI-generated frame of a person looking at an array of television screens

A still from Total Pixel Space, the Grand Prix winner at AIFF 2025.


Last week, I attended a film festival dedicated to shorts made using generative AI. Dubbed AIFF 2025, it was an event precariously balancing between two different worlds.

The festival was hosted by Runway, a company that produces models and tools for generating images and videos. In panels and press briefings, a curated lineup of industry professionals made the case for Hollywood to embrace AI tools. In private conversations, though, I came away with a strong sense that a philosophical divide is already widening within the film and television business.

I also interviewed Runway CEO Cristóbal Valenzuela about the tightrope he walks as he pitches his products to an industry that has deeply divided feelings about what role AI will have in its future.

To unpack all this, it makes sense to start with the films, partly because the film that was chosen as the festival’s top prize winner says a lot about the issues at hand.

A festival of oddities and profundities

Since this was the first time the festival had been open to the public, the crowd was a diverse mix: AI tech enthusiasts, working industry creatives, and folks who enjoy movies and were curious about what they’d see—as well as quite a few people who fit into all three groups.

The scene at the entrance to the theater at AIFF 2025 in Santa Monica, California.

The films shown were all short, and most would be more at home at an art film fest than something more mainstream. Some shorts featured an animated aesthetic (including one inspired by anime) and some presented as live action. There was even a documentary of sorts. The films could be made entirely with Runway or other AI tools, or those tools could simply be a key part of a stack that also includes more traditional filmmaking methods.

Many of these shorts were quite weird. Most of us have seen by now that AI video-generation tools excel at producing surreal and distorted imagery—sometimes whether the person prompting the tool wants that or not. Several of these films leaned into that limitation, treating it as a strength.

Representing that camp was Vallée Duhamel’s Fragments of Nowhere, which visually explored the notion of multiple dimensions bleeding into one another. Cars morphed into the sides of houses, and humanoid figures, purported to be inter-dimensional travelers, moved in ways that defied anatomy. While I found this film visually compelling at times, I wasn’t seeing much in it that I hadn’t already seen from dreamcore or horror AI video TikTok creators like GLUMLOT or SinRostroz in recent years.

More compelling were shorts that used this propensity for oddity to generate imagery that was curated and thematically tied to some aspect of human experience or identity. For example, More Tears than Harm by Herinarivo Rakotomanana was a rotoscope animation-style “sensory collage of childhood memories” of growing up in Madagascar. Its specificity and consistent styling lent it a credibility that Fragments of Nowhere didn’t achieve. I also enjoyed Riccardo Fusetti’s Editorial on this front.

More Tears Than Harm, an unusual animated film at AIFF 2025.

Among the 10 films in the festival, two clearly stood above the others in my impressions—and they ended up being the Grand Prix and Gold prize winners. (The judging panel included filmmakers Gaspar Noé and Harmony Korine, Tribeca Enterprises CEO Jane Rosenthal, IMAX head of post and image capture Bruce Markoe, Lionsgate VFX SVP Brianna Domont, Nvidia developer relations lead Richard Kerris, and Runway CEO Cristóbal Valenzuela, among others).

Runner-up Jailbird was the aforementioned quasi-documentary. Directed by Andrew Salter, it was a brief piece that introduced viewers to a program in the UK that places chickens in human prisons as companion animals, to positive effect. Why make that film with AI, you might ask? Well, AI was used to achieve shots—depicting the experience from the chicken’s point of view—that wouldn’t otherwise have been doable on a small budget. The crowd loved it.

Jailbird, the runner-up at AIFF 2025.

Then there was the Grand Prix winner, Jacob Adler’s Total Pixel Space, which was, among other things, a philosophical defense of the very idea of AI art. You can watch Total Pixel Space on YouTube right now, unlike some of the other films. I found it strangely moving, even as I saw its selection as the festival’s top winner with some cynicism. Of course they’d pick that one, I thought, although I agreed it was the most interesting of the lot.

Total Pixel Space, the Grand Prix winner at AIFF 2025.

Total Pixel Space

Even though it risked navel-gazing and self-congratulation in this venue, Total Pixel Space was filled with compelling imagery that matched the themes, and it touched on some genuinely interesting ideas—at times, it seemed almost profound, didactic as it was.

“How many images can possibly exist?” the film’s narrator asks. To answer that, it explains the concept of total pixel space, which actually reflects how image generation tools work:

Pixels are the building blocks of digital images—tiny tiles forming a mosaic. Each pixel is defined by numbers representing color and position. Therefore, any digital image can be represented as a sequence of numbers…

Just as we don’t need to write down every number between zero and one to prove they exist, we don’t need to generate every possible image to prove they exist. Their existence is guaranteed by the mathematics that defines them… Every frame of every possible film exists as coordinates… To deny this would be to deny the existence of numbers themselves.

The nine-minute film demonstrates that the number of possible images or films is greater than the number of atoms in the universe and argues that photographers and filmmakers may be seen as discovering images that already exist in the possibility space rather than creating something new.

Within that framework, it’s easy to argue that generative AI is just another way for artists to “discover” images.
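For a sense of the scale the film is describing, here is a quick back-of-the-envelope calculation in Python. The specific numbers—an 8×8 thumbnail, 24-bit color, and the commonly cited estimate of roughly 10^80 atoms in the observable universe—are my own illustration, not figures taken from the film:

```python
# Rough illustration of "total pixel space": how many distinct digital images
# can exist at a given size and bit depth. The 8x8 thumbnail and the 10**80
# atom count are illustrative assumptions, not the film's own figures.
ATOMS_IN_OBSERVABLE_UNIVERSE = 10**80  # commonly cited rough estimate

def possible_images(width: int, height: int, bits_per_pixel: int = 24) -> int:
    """Each pixel can take 2**bits_per_pixel values, so the total count is
    (2**bits_per_pixel) raised to the number of pixels."""
    return (2**bits_per_pixel) ** (width * height)

tiny = possible_images(8, 8)  # just an 8x8 thumbnail
print(f"~10^{len(str(tiny)) - 1} possible 8x8 images")  # about 10^462
print(tiny > ATOMS_IN_OBSERVABLE_UNIVERSE**5)           # True
```

Even at thumbnail scale, the count dwarfs the atom estimate; at feature-film resolutions and running times, the exponent becomes incomprehensibly larger.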

The balancing act

“We are all—and I include myself in that group as well—obsessed with technology, and we keep chatting about models and data sets and training and capabilities,” Runway CEO Cristóbal Valenzuela said to me when we spoke the next morning. “But if you look back and take a minute, the festival was celebrating filmmakers and artists.”

I admitted that I found myself moved by Total Pixel Space‘s articulations. “The winner would never have thought of himself as a filmmaker, and he made a film that made you feel something,” Valenzuela responded. “I feel that’s very powerful. And the reason he could do it was because he had access to something that just wasn’t possible a couple of months ago.”

First-time and outsider filmmakers were the focus of AIFF 2025, but Runway works with established studios, too—and those relationships have an inherent tension.

The company has signed deals with companies like Lionsgate and AMC. In some cases, it trains on data provided by those companies; in others, it embeds within them to try to develop tools that fit how they already work. That’s not something competitors like OpenAI are doing yet, so that, combined with a head start in video generation, has allowed Runway to grow and stay competitive so far.

“We go directly into the companies, and we have teams of creatives that are working alongside them. We basically embed ourselves within the organizations that we’re working with very deeply,” Valenzuela explained. “We do versions of our film festival internally for teams as well so they can go through the process of making something and seeing the potential.”

Founded in 2018 at New York University’s Tisch School of the Arts by two Chileans and one Greek co-founder, Runway has a very different story than its Silicon Valley competitors. It was one of the first to bring an actually usable video-generation tool to the masses. Runway also contributed in foundational ways to the popular Stable Diffusion model.

Though it is vastly outspent by competitors like OpenAI, it has taken a hands-on approach to working with existing industries. You won’t hear Valenzuela or other Runway leaders talking about the imminence of AGI or anything so lofty; instead, it’s all about selling the product as something that can solve existing problems in creatives’ workflows.

Still, an artist’s mindset and relationships within the industry don’t negate some fundamental conflicts. There are multiple intellectual property cases involving Runway and its peers, and though the company hasn’t admitted it, there is evidence that it trained its models on copyrighted YouTube videos, among other things.

Cristóbal Valenzuela speaking on the AIFF 2025 stage. Credit: Samuel Axon

Valenzuela suggested that studios are worried about liability, not underlying principles, though, saying:

Most of the concerns on copyright are on the output side, which is like, how do you make sure that the model doesn’t create something that already exists or infringes on something. And I think for that, we’ve made sure our models don’t and are supportive of the creative direction you want to take without being too limiting. We work with every major studio, and we offer them indemnification.

In the past, he has also defended Runway by saying that what it’s producing is not a re-creation of what has come before. He sees the tool’s generative process as distinct—legally, creatively, and ethically—from simply pulling up assets or references from a database.

“People believe AI is sort of like a system that creates and conjures things magically with no input from users,” he said. “And it’s not. You have to do that work. You still are involved, and you’re still responsible as a user in terms of how you use it.”

He seemed to genuinely hold this defense of AI as a legitimate tool for artists, but given that he’s been pitching these products directly to working filmmakers, he was also clearly aware that not everyone agrees with him. There is not even a consensus among those in the industry.

An industry divided

While in LA for the event, I visited separately with two of my oldest friends. Both of them work in the film and television industry in similar disciplines. They each asked what I was in town for, and I told them I was there to cover an AI film festival.

One immediately responded with a grimace of disgust, “Oh, yikes, I’m sorry.” The other responded with bright eyes and intense interest and began telling me how he already uses AI in his day-to-day to do things like extend shots by a second or two for a better edit, and expressed frustration at his company for not adopting the tools faster.

Neither is alone in their attitudes. Hollywood is divided—and not for the first time.

There have been seismic technological changes in the film industry before. There was the transition from silent films to talkies, obviously; moviemaking transformed into an entirely different art. Numerous old jobs were lost, and numerous new jobs were created.

Later, there was the transition from film to digital projection, which may be an even tighter parallel. It was a major disruption, with some companies and careers collapsing while others rose. There were people saying, “Why do we even need this?” while others believed it was the only sane way forward. Some audiences declared the quality worse, and others said it was better. There were analysts arguing it could be stopped, while others insisted it was inevitable.

IMAX’s head of post production, Bruce Markoe, spoke briefly about that history at a press mixer before the festival. “It was a little scary,” he recalled. “It was a big, fundamental change that we were going through.”

People ultimately embraced it, though. “The motion picture and television industry has always been very technology-forward, and they’ve always used new technologies to advance the state of the art and improve the efficiencies,” Markoe said.

When asked whether he thinks the same thing will happen with generative AI tools, he said, “I think some filmmakers are going to embrace it faster than others.” He pointed to pre-visualization as a particularly valuable use of AI tools and noted that some people are already using them that way, though it will take time for people to get comfortable with the technology.

And indeed, many, many filmmakers are still loudly skeptical. “The concept of AI is great,” The Mitchells vs. the Machines director Mike Rianda said in a Wired interview. “But in the hands of a corporation, it is like a buzzsaw that will destroy us all.”

Others are interested in the technology but are concerned that it’s being brought into the industry too quickly, with insufficient planning and protections. That includes Crafty Apes Senior VFX Supervisor Luke DiTomasso. “How fast do we roll out AI technologies without really having an understanding of them?” he asked in an interview with Production Designers Collective. “There’s a potential for AI to accelerate beyond what we might be comfortable with, so I do have some trepidation and am maybe not gung-ho about all aspects of it.”

Others remain skeptical that the tools will be as useful as some optimists believe. “AI never passed on anything. It loved everything it read. It wants you to win. But storytelling requires nuance—subtext, emotion, what’s left unsaid. That’s something AI simply can’t replicate,” said Alegre Rodriquez, a member of the Emerging Technology committee at the Motion Picture Editors Guild.

The mirror

Flying back from Los Angeles, I considered two key differences between this generative AI inflection point for Hollywood and the silent/talkie or film/digital transitions.

First, neither of those transitions involved an existential threat to the technology on the basis of intellectual property and copyright. Valenzuela talked about what matters to studio heads—protection from liability over the outputs. But the countless creatives who are critical of these tools also believe they should be consulted and even compensated for their work’s use in the training data for Runway’s models. In other words, it’s not just about the outputs, it’s also about the sourcing. As noted before, there are several cases underway. We don’t know where they’ll land yet.

Second, there’s a more cultural and philosophical issue at play, which Valenzuela himself touched on in our conversation.

“I think AI has become this sort of mirror where anyone can project all their fears and anxieties, but also their optimism and ideas of the future,” he told me.

You don’t have to scroll for long to come across techno-utopians declaring with no evidence that AGI is right around the corner and that it will cure cancer and save our society. You also don’t have to scroll long to encounter visceral anger at every generative AI company from people declaring the technology—which is essentially just a new methodology for programming a computer—fundamentally unethical and harmful, with apocalyptic societal and economic ramifications.

Amid all those bold declarations, this film festival put the focus on the on-the-ground reality. First-time filmmakers who might never have previously cleared Hollywood’s gatekeepers are getting screened at festivals because they can create competitive-looking work with a fraction of the crew and hours. Studios and the people who work there are saying they’re saving time, resources, and headaches in pre-viz, editing, visual effects, and other work that’s usually done under immense time and resource pressure.

“People are not paying attention to the very huge amount of positive outcomes of this technology,” Valenzuela told me, pointing to those examples.

In this online discussion ecosystem that elevates outrage above everything else, that’s likely true. Still, there is a sincere and rigorous conviction among many creatives that their work is contributing to this technology’s capabilities without credit or compensation and that the structural and legal frameworks to ensure minimal human harm in this evolving period of disruption are still inadequate. That’s why we’ve seen groups like the Writers Guild of America West support the Generative AI Copyright Disclosure Act and other similar legislation meant to increase transparency about how these models are trained.

The philosophical question with a legal answer

The winning film argued that “total pixel space represents both the ultimate determinism and the ultimate freedom—every possibility existing simultaneously, waiting for consciousness to give it meaning through the act of choice.”

In making this statement, the film suggested that creativity, above all else, is an act of curation. It’s a claim that nothing, truly, is original. It’s a distillation of human expression into the language of mathematics.

To many, that philosophy rings undeniably true: Every possibility already exists, and artists are just collapsing the waveform to the frame they want to reveal. To others, there is more personal truth to the romantic ideal that artwork is valued precisely because it did not exist until the artist produced it.

All this is to say that the debate about creativity and AI in Hollywood is ultimately a philosophical one. But it won’t be resolved that way.

The industry may succumb to litigation fatigue and a hollowed-out workforce—or it may instead find its way to fair deals, new opportunities for fresh voices, and transparent training sets.

For all this lofty talk about creativity and ideas, the outcome will come down to the contracts, court decisions, and compensation structures—all things that have always been at least as big a part of Hollywood as the creative work itself.


Samuel Axon is the editorial lead for tech and gaming coverage at Ars Technica. He covers AI, software development, gaming, entertainment, and mixed reality. He has been writing about gaming and technology for nearly two decades at Engadget, PC World, Mashable, Vice, Polygon, Wired, and others. He previously ran a marketing and PR agency in the gaming industry, led editorial for the TV network CBS, and worked on social media marketing strategy for Samsung Mobile at the creative agency SPCSHP. He also is an independent software and game developer for iOS, Windows, and other platforms, and he is a graduate of DePaul University, where he studied interactive media and software development.



How a grad student got LHC data to play nice with quantum interference


New approach is already having an impact on the experiment’s plans for future work.

The ATLAS particle detector of the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) in Geneva, Switzerland. Credit: EThamPhoto/Getty Images


Measurements at the Large Hadron Collider have been stymied by one of the most central phenomena of the quantum world. But now, a young researcher has championed a new method to solve the problem using deep neural networks.

The Large Hadron Collider is one of the biggest experiments in history, but it’s also one of the hardest to interpret. Unlike seeing an image of a star in a telescope, saying anything at all about the data that comes out of the LHC requires careful statistical modeling.

“If you gave me a theory [that] the Higgs boson is this way or that way, I think people imagine, ‘Hey, you built the experiment, you should be able to tell me what you’re going to see under various hypotheses!’” said Daniel Whiteson, a professor at the University of California, Irvine. “But we don’t.”

One challenge with interpreting LHC data is interference, a core implication of quantum mechanics. Interference allows two possible events to inhibit each other, weakening the likelihood of seeing the result of either. In the presence of interference, physicists needed to use a fuzzier statistical method to analyze data, losing the data’s full power and increasing its uncertainty.

However, a recent breakthrough suggests a different way to tackle the problem. The ATLAS collaboration, one of two groups studying proton collisions at the LHC, released two papers last December that describe new ways of exploring data from their detector. One describes how to use a machine learning technique called Neural Simulation-Based Inference to maximize the potential of particle physics data. The other demonstrates its effectiveness with the ultimate test: re-doing a previous analysis with the new technique and seeing dramatic improvement.

The papers are the culmination of a young researcher’s six-year quest to convince the collaboration of the value of the new technique. Its success is already having an impact on the experiment’s plans for future work.

Making sense out of fusing bosons

Each particle collision at the LHC involves many possible pathways in which different particles combine to give rise to the spray of debris that experimenters see. In 2017, David Rousseau at IJCLab in Orsay, a member of the ATLAS collaboration, asked one of his students, Aishik Ghosh, to improve his team’s ability to detect a specific pathway. That particular pathway is quite important since it’s used to measure properties of the Higgs boson, a particle (first measured in 2012) that helps explain the mass of all other fundamental particles.

It was a pretty big ask. “When a grad student gets started in ATLAS, they’re a tiny cog in a giant, well-oiled machine of 3,500 physicists, who all seem to know exactly what they’re doing,” said Ghosh.

The pathway Ghosh was asked to study occurs via several steps. First, the two colliding protons each emit a W boson, a particle associated with the weak nuclear force. These two bosons fuse together, changing their identity to form a Higgs boson. The Higgs boson then decays, forming a pair of Z bosons, another particle associated with the weak force. Finally, those Z bosons themselves each decay into a lepton, like an electron, and its antimatter partner, like a positron.

A Feynman diagram for the pathway studied by Aishik Ghosh. Credit: ATLAS

Measurements like the one Ghosh was studying are a key way of investigating the properties of the Higgs boson. By precisely measuring how long it takes the Higgs boson to decay, physicists could find evidence of it interacting with new, undiscovered particles that are too massive for the LHC to produce directly.

Ghosh started on the project, hoping to find a small improvement in the collaboration’s well-tested methods. Instead, he noticed a larger issue. The goal he was given, of detecting a single pathway by itself, didn’t actually make sense.

“I was doing that and I realized, ‘What am I doing?’ There’s no clear objective,” said Ghosh.

The problem was quantum interference.

How quantum histories interfere

One of the most famous demonstrations of the mysterious nature of quantum mechanics is called the double-slit experiment. In this demonstration, electrons are shot through a screen with two slits that allow them to pass through to a photographic plate on the other side. With one slit covered, the electrons form a pattern centered on the opening. The photographic plate lights up bright right across from the slit and dims further away from it.

With both slits open, you would expect the pattern to get brighter as more electrons reach the photographic plate. Instead, something stranger happens: the two slits do not give rise to two nice bright peaks. You see a rippling pattern in which some areas get brighter while others get dimmer, even though the dimmer areas should, in principle, be easier for electrons to reach.

The effect happens even if the electrons are shot at the screen one by one to stop them from influencing each other directly. It’s as if each electron carries with it two possible histories, one in which it goes through one slit and another where it goes through the other before both end up at the same place. These two histories interfere with each other so that some destinations become less likely instead of more likely.

Results of the double-slit experiment. Credit: Jordgette (CC BY-SA 3.0)

For electrons in the double-slit experiment, the two different histories are two different paths through space. For a measurement at the Large Hadron Collider, the histories are more abstract—paths that lead through transformations of fields. One history might be like the pathway Ghosh was asked to study, in which two W bosons fuse to form a Higgs boson before the Higgs boson splits into two Z bosons. But in another history, the two W bosons might fuse and immediately split into two Z bosons without ever producing a Higgs.

Both histories have the same beginning, with two W bosons, and the same end, with two Z bosons. And just as the two histories of electrons in the double-slit experiment can interfere, so can the two histories for these particles.

Another possible history for colliding particles at the Large Hadron Collider, which interferes with the measurement Ghosh was asked to do. Credit: ATLAS

That interference makes the effect of the Higgs boson much more challenging to spot. ATLAS scientists wanted to look for two pairs of electrons and positrons, which would provide evidence that two Z bosons were produced. They would classify their observations into two types: observations that are evidence for the signal they were looking for (that of a decaying Higgs boson) and observations of events that generate this pattern of particles without the Higgs boson acting as an intermediate (the latter are called the background). But the two types of observations, signal and background, interfere. With a stronger signal, corresponding to more Higgs bosons decaying, you might observe more pairs of electrons and positrons… but if these events interfere, you also might see those pairs disappear.

Learning to infer

In traditional approaches, those disappearances are hard to cope with, even when using methods that already incorporate machine learning.

One of the most common uses of machine learning is classification—for example, distinguishing between pictures of dogs and cats. You train the machine on pictures of cats and pictures of dogs, and it tells you, given a picture, which animal is the most likely match. Physicists at the LHC were already using this kind of classification method to characterize the products of collisions, but it functions much worse when interference is involved.

“If you have something that disappears, you don’t quite know what to train on,” said David Rousseau. “Usually, you’re training signal versus background, exactly like you’re training cats versus dogs. When there is something that disappears, you don’t see what you trained on.”

At first, Ghosh tried a few simple tricks, but as time went on, he realized he needed to make a more fundamental change. He reached out to others in the community and learned about a method called Neural Simulation-Based Inference, or NSBI.

In older approaches, people had trained machine learning models to classify observations into signal and background, using simulations of particle collisions to make the training data. Then they used that classification to infer the most likely value of a number, like the amount of time it takes a Higgs boson to decay, based on data from an actual experiment. Neural Simulation-Based Inference skips the classification and goes directly to the inference.

Instead of trying to classify observations into signal and background, NSBI uses simulations to teach an artificial neural network to guess a formula called a likelihood ratio. Someone using NSBI would run several simulations that describe different situations, such as letting the Higgs boson decay at different rates, and then check how many of each type of simulation yielded a specific observation. The fraction of these simulations with a certain decay rate would provide the likelihood ratio, a method for inferring which decay rate is more likely given experimental evidence. If the neural network is good at guessing this ratio, it will be good at finding how long the Higgs takes to decay.

Because NSBI doesn’t try to classify observations into different categories, it handles quantum interference more effectively. Instead of trying to find the Higgs based on a signal that disappears, it examines all the data, trying to guess which decay time is the most likely.
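For readers who want a concrete picture, here is a minimal sketch of the general “likelihood-ratio trick” that underlies neural simulation-based inference. It uses a toy one-dimensional simulator and scikit-learn rather than anything resembling ATLAS’s actual models or data; the parameter values and network size are arbitrary assumptions:

```python
# Toy sketch of the likelihood-ratio trick behind NSBI (not ATLAS code).
# A network trained to separate simulations run under two parameter values
# can be turned into an estimate of the likelihood ratio between them.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def simulate(theta: float, n: int) -> np.ndarray:
    """Stand-in simulator: observations whose distribution depends on theta."""
    return rng.normal(loc=theta, scale=1.0, size=(n, 1))

theta0, theta1 = 0.0, 0.5          # two hypotheses, e.g. two decay rates
x0 = simulate(theta0, 50_000)
x1 = simulate(theta1, 50_000)

X = np.vstack([x0, x1])
y = np.concatenate([np.zeros(len(x0)), np.ones(len(x1))])
clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=300).fit(X, y)

# The classifier's output s(x) converts into a likelihood-ratio estimate:
#   r(x) = p(x | theta1) / p(x | theta0) ≈ s(x) / (1 - s(x))
x_obs = np.array([[0.3]])
s = clf.predict_proba(x_obs)[0, 1]
print("estimated likelihood ratio:", s / (1 - s))
```

The appeal is that the network never has to label any single event as “signal” or “background”; it only learns how the whole distribution of observations shifts as the parameter changes, which is why interference between histories isn’t a problem for it.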

Ghosh tested the method, which showed promising results on test data, and presented the results at a conference in 2019. But if he was going to convince the ATLAS collaboration that the method was safe to use, he still had a lot of work ahead of him.

Shifting the weight on ATLAS’ shoulders

Experiments like ATLAS have high expectations attached to them. A collaboration of thousands of scientists, ATLAS needs to not only estimate the laws of physics but also have a clear idea of just how uncertain those estimates are. At the time, NSBI hadn’t been tested in that way.

“None of this has actually been used on data,” said Ghosh. “Nobody knew how to quantify the uncertainties. So you have a neural network that gives you a likelihood. You don’t know how good the likelihood is. Is it well-estimated? What if it’s wrongly estimated just in some weird corner? That would completely bias your results.”

Checking those corners was too big a job for a single PhD student and too complex to complete within a single PhD degree. Ghosh would have to build a team, and he would need time to build that team. That’s tricky in the academic world, where students go on to short-term postdoc jobs with the expectation that they quickly publish new results to improve their CVs for the next position.

“We’re usually looking to publish the next paper within two to three years—no time to overhaul our methods,” said Ghosh. Fortunately, Ghosh had support. He completed his PhD under Rousseau and then went to work with Daniel Whiteson, who encouraged him to pursue his ambitious project.

“I think it’s really important that postdocs learn to take those risks because that’s what science is,” Whiteson said.

Ghosh gathered his team. Another student of Rousseau’s, Arnaud Maury, worked to calibrate the machine’s confidence in its answers. A professor at the University of Massachusetts, Rafael Coelho Lopes de Sa, joined the project. His student Jay Sandesara would have a key role in getting the calculation to work at full scale on a computer cluster. IJCLab emeritus RD Schaffer and University of Liège professor Gilles Louppe provided cross-checks and advice.

The team wanted a clear demonstration that their method worked, so they took an unusual step. They took data that ATLAS had already analyzed and performed a full analysis using their method instead, showing that it could pass every check the collaboration could think of. They would publish two papers, one describing the method and the other giving the results of their upgraded analysis. Zach Marshall, who was the computing coordinator for ATLAS at the time, helped get the papers through, ensuring that they were vetted by experts in multiple areas.

“It was a very small subset of our community that had that overlap between this technical understanding and the physics analysis experience and understanding that were capable of really speaking to whether that paper was sufficient and intelligible and useful. So we really had to make sure that we engaged that little group of humans by name,” said Marshall.

The new method showed significant improvements, getting a much more precise result than the collaboration’s previous analysis. That improvement, and the thorough checks, persuaded ATLAS to use NSBI more broadly going forward. It will give them much more precision than they expected, using the Higgs boson to search for new particles and clarify our understanding of the quantum world. When ATLAS discusses its future plans, it makes projections of the precision it expects to reach in the future. But those plans are now being upended.

“One of the fun things about this method that Aishik pushed hard is each time it feels like now we do that projection—here’s how well we’ll do in 15 years—we absolutely crush those projections,” said Marshall. “So we are just now having to redo a set of projections because we matched our old projections for 15 years out already today. It’s a very fun problem to have.”



Study: Meta AI model can reproduce almost half of Harry Potter book


Harry Potter and the Copyright Lawsuit

The research could have big implications for generative AI copyright lawsuits.

Meta CEO Mark Zuckerberg. Credit: Andrej Sokolow/picture alliance via Getty Images

In recent years, numerous plaintiffs—including publishers of books, newspapers, computer code, and photographs—have sued AI companies for training models using copyrighted material. A key question in all of these lawsuits has been how easily AI models produce verbatim excerpts from the plaintiffs’ copyrighted content.

For example, in its December 2023 lawsuit against OpenAI, The New York Times Company produced dozens of examples where GPT-4 exactly reproduced significant passages from Times stories. In its response, OpenAI described this as a “fringe behavior” and a “problem that researchers at OpenAI and elsewhere work hard to address.”

But is it actually a fringe behavior? And have leading AI companies addressed it? New research—focusing on books rather than newspaper articles and on different companies—provides surprising insights into this question. Some of the findings should bolster plaintiffs’ arguments, while others may be more helpful to defendants.

The paper was published last month by a team of computer scientists and legal scholars from Stanford, Cornell, and West Virginia University. They studied whether five popular open-weight models—three from Meta and one each from Microsoft and EleutherAI—were able to reproduce text from Books3, a collection of books that is widely used to train LLMs. Many of the books are still under copyright.

This chart illustrates their most surprising finding:

The chart shows how easy it is to get a model to generate 50-token excerpts from various parts of Harry Potter and the Sorcerer’s Stone. The darker a line is, the easier it is to reproduce that portion of the book.

Each row represents a different model. The three bottom rows are Llama models from Meta. And as you can see, Llama 3.1 70B—a mid-sized model Meta released in July 2024—is far more likely to reproduce Harry Potter text than any of the other four models.

Specifically, the paper estimates that Llama 3.1 70B has memorized 42 percent of the first Harry Potter book well enough to reproduce 50-token excerpts at least half the time. (I’ll unpack how this was measured in the next section.)

Interestingly, Llama 1 65B, a similar-sized model released in February 2023, had memorized only 4.4 percent of Harry Potter and the Sorcerer’s Stone. This suggests that despite the potential legal liability, Meta did not do much to prevent memorization as it trained Llama 3. At least for this book, the problem got much worse between Llama 1 and Llama 3.

Harry Potter and the Sorcerer’s Stone was one of dozens of books tested by the researchers. They found that Llama 3.1 70B was far more likely to reproduce popular books—such as The Hobbit and George Orwell’s 1984—than obscure ones. And for most books, Llama 3.1 70B memorized more than any of the other models.

“There are really striking differences among models in terms of how much verbatim text they have memorized,” said James Grimmelmann, a Cornell law professor who has collaborated with several of the paper’s authors.

The results surprised the study’s authors, including Mark Lemley, a law professor at Stanford. (Lemley used to be part of Meta’s legal team, but in January, he dropped them as a client after Facebook adopted more Trump-friendly moderation policies.)

“We’d expected to see some kind of low level of replicability on the order of 1 or 2 percent,” Lemley told me. “The first thing that surprised me is how much variation there is.”

These results give everyone in the AI copyright debate something to latch onto. For AI industry critics, the big takeaway is that—at least for some models and some books—memorization is not a fringe phenomenon.

On the other hand, the study only found significant memorization of a few popular books. For example, the researchers found that Llama 3.1 70B only memorized 0.13 percent of Sandman Slim, a 2009 novel by author Richard Kadrey. That’s a tiny fraction of the 42 percent figure for Harry Potter.

This could be a headache for law firms that have filed class-action lawsuits against AI companies. Kadrey is the lead plaintiff in a class-action lawsuit against Meta. To certify a class of plaintiffs, a court must find that the plaintiffs are in largely similar legal and factual situations.

Divergent results like these could cast doubt on whether it makes sense to lump J.K. Rowling, Kadrey, and thousands of other authors together in a single mass lawsuit. And that could work in Meta’s favor, since most authors lack the resources to file individual lawsuits.

The broader lesson of this study is that the details will matter in these copyright cases. Too often, online discussions have treated “do generative models copy their training data or merely learn from it?” as a theoretical or even philosophical question. But it’s a question that can be tested empirically—and the answer might differ across models and across copyrighted works.

It’s common to talk about LLMs predicting the next token. But under the hood, what the model actually does is generate a probability distribution over all possibilities for the next token. For example, if you prompt an LLM with the phrase “Peanut butter and,” it will respond with a probability distribution that might look like this made-up example:

  • P(“jelly”) = 70 percent
  • P(“sugar”) = 9 percent
  • P(“peanut”) = 6 percent
  • P(“chocolate”) = 4 percent
  • P(“cream”) = 3 percent

And so forth.

After the model generates a list of probabilities like this, the system will select one of these options at random, weighted by their probabilities. So 70 percent of the time the system will generate “Peanut butter and jelly.” Nine percent of the time, we’ll get “Peanut butter and sugar.” Six percent of the time, it will be “Peanut butter and peanut.” You get the idea.
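As a minimal sketch of that weighted-sampling step, here is the idea in Python, using the made-up probabilities above (with a catch-all entry added so they sum to 1):

```python
# Weighted sampling over next-token candidates, using the article's made-up
# numbers. Real models sample over a vocabulary of ~100,000 tokens, often with
# temperature or top-p filtering; this shows only the core idea.
import numpy as np

tokens = ["jelly", "sugar", "peanut", "chocolate", "cream", "<everything else>"]
probs = [0.70, 0.09, 0.06, 0.04, 0.03, 0.08]  # must sum to 1

rng = np.random.default_rng()
next_token = rng.choice(tokens, p=probs)
print("Peanut butter and", next_token)
```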

The study’s authors didn’t have to generate multiple outputs to estimate the likelihood of a particular response. Instead, they could calculate probabilities for each token and then multiply them together.

Suppose someone wants to estimate the probability that a model will respond to “My favorite sandwich is” with “peanut butter and jelly.” Here’s how to do that:

  • Prompt the model with “My favorite sandwich is,” and look up the probability of “peanut” (let’s say it’s 20 percent).
  • Prompt the model with “My favorite sandwich is peanut,” and look up the probability of “butter” (let’s say it’s 90 percent).
  • Prompt the model with “My favorite sandwich is peanut butter” and look up the probability of “and” (let’s say it’s 80 percent).
  • Prompt the model with “My favorite sandwich is peanut butter and” and look up the probability of “jelly” (let’s say it’s 70 percent).

Then we just have to multiply the probabilities like this:

0.2 × 0.9 × 0.8 × 0.7 = 0.1008

So we can predict that the model will produce “peanut butter and jelly” about 10 percent of the time, without actually generating 100 or 1,000 outputs and counting how many of them were that exact phrase.

This technique greatly reduced the cost of the research, allowed the authors to analyze more books, and made it feasible to precisely estimate very low probabilities.

For example, the authors estimated that it would take more than 10 quadrillion samples to exactly reproduce some 50-token sequences from some books. Obviously, it wouldn’t be feasible to actually generate that many outputs. But it wasn’t necessary: the probability could be estimated just by multiplying the probabilities for the 50 tokens.
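Here is a hedged sketch of how that kind of scoring can be done with an open-weight model, using Hugging Face’s transformers library. GPT-2 is just a small stand-in model, and this is my illustration of the general teacher-forcing approach, not the paper’s actual code:

```python
# Score a specific continuation by summing per-token log-probabilities
# (equivalent to multiplying probabilities), instead of sampling many outputs.
# GPT-2 is a stand-in; the study worked with Llama and other open models.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

prompt = "My favorite sandwich is"
continuation = " peanut butter and jelly."

prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
full_ids = tok(prompt + continuation, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(full_ids).logits          # shape: [1, seq_len, vocab]
log_probs = torch.log_softmax(logits, dim=-1)

# Position t predicts token t+1, so walk across the continuation tokens only.
total_log_prob = 0.0
for t in range(prompt_len - 1, full_ids.shape[1] - 1):
    total_log_prob += log_probs[0, t, full_ids[0, t + 1]].item()

print("P(continuation | prompt) ≈", torch.exp(torch.tensor(total_log_prob)).item())
```

Summing log-probabilities rather than multiplying raw probabilities avoids numerical underflow, which matters once 50 small numbers are being combined.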

A key thing to notice is that probabilities can get really small really fast. In my made-up example, the probability that the model will produce the four tokens “peanut butter and jelly” is just 10 percent. If we added more tokens, the probability would get even lower. If we added 46 more tokens, the probability could fall by several orders of magnitude.

For any language model, the probability of generating any given 50-token sequence “by accident” is vanishingly small. If a model generates 50 tokens from a copyrighted work, that is strong evidence that the tokens “came from” the training data. This is true even if it only generates those tokens 10 percent, 1 percent, or 0.01 percent of the time.

The study authors took 36 books and divided each of them into overlapping 100-token passages. Using the first 50 tokens as a prompt, they calculated the probability that the next 50 tokens would be identical to the original passage. They counted a passage as “memorized” if the model had a greater than 50 percent chance of reproducing it word for word.

This definition is quite strict. For a 50-token sequence to have a probability greater than 50 percent, the average token in the passage needs a probability of at least 98.5 percent! Moreover, the authors only counted exact matches. They didn’t try to count cases where—for example—the model generates 48 or 49 tokens from the original passage but got one or two tokens wrong. If these cases were counted, the amount of memorization would be even higher.
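As a rough check on where that threshold comes from (my arithmetic, not the paper’s): requiring a 50-token sequence to have probability above one-half pins down the geometric mean of the per-token probabilities,

\[
\prod_{i=1}^{50} p_i > \frac{1}{2}
\quad\Longleftrightarrow\quad
\Bigl(\prod_{i=1}^{50} p_i\Bigr)^{1/50} > \Bigl(\frac{1}{2}\Bigr)^{1/50} \approx 0.986,
\]

so the typical token needs a probability of roughly 98.6 percent, in line with the figure above.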

This research provides strong evidence that significant portions of Harry Potter and the Sorcerer’s Stone were copied into the weights of Llama 3.1 70B. But this finding doesn’t tell us why or how this happened. I suspect that part of the answer is that Llama 3 70B was trained on 15 trillion tokens—more than 10 times the 1.4 trillion tokens used to train Llama 1 65B.

The more times a model is trained on a particular example, the more likely it is to memorize that example. Perhaps Meta had trouble finding 15 trillion distinct tokens, so it trained on the Books3 dataset multiple times. Or maybe Meta added third-party sources—such as online Harry Potter fan forums, consumer book reviews, or student book reports—that included quotes from Harry Potter and other popular books.

I’m not sure that either of these explanations fully fits the facts. The fact that memorization was a much bigger problem for the most popular books does suggest that Llama may have been trained on secondary sources that quote these books rather than the books themselves. There are likely exponentially more online discussions of Harry Potter than Sandman Slim.

On the other hand, it’s surprising that Llama memorized so much of Harry Potter and the Sorcerer’s Stone.

“If it were citations and quotations, you’d expect it to concentrate around a few popular things that everyone quotes or talks about,” Lemley said. The fact that Llama 3 memorized almost half the book suggests that the entire text was well represented in the training data.

Or there could be another explanation entirely. Maybe Meta made subtle changes in its training recipe that accidentally worsened the memorization problem. I emailed Meta for comment last week but haven’t heard back.

“It doesn’t seem to be all popular books,” Mark Lemley told me. “Some popular books have this result and not others. It’s hard to come up with a clear story that says why that happened.”

Broadly speaking, these lawsuits rest on three theories of how a large language model could infringe copyright:

  1. Training on a copyrighted work is inherently infringing because the training process involves making a digital copy of the work.
  2. The training process copies information from the training data into the model, making the model a derivative work under copyright law.
  3. Infringement occurs when a model generates (portions of) a copyrighted work.

A lot of discussion so far has focused on the first theory because it is the most threatening to AI companies. If the courts uphold this theory, most current LLMs would be illegal, whether or not they have memorized any training data.

The AI industry has some pretty strong arguments that using copyrighted works during the training process is fair use under the 2015 Google Books ruling. But the fact that Llama 3.1 70B memorized large portions of Harry Potter could color how the courts consider these fair use questions.

A key part of fair use analysis is whether a use is “transformative”—whether a company has made something new or is merely profiting from the work of others. The fact that language models are capable of regurgitating substantial portions of popular works like Harry Potter, 1984, and The Hobbit could cause judges to look at these fair use arguments more skeptically.

Moreover, one of Google’s key arguments in the books case was that its system was designed to never return more than a short excerpt from any book. If the judge in the Meta lawsuit wanted to distinguish Meta’s arguments from the ones Google made in the books case, he could point to the fact that Llama can generate far more than a few lines of Harry Potter.

The new study “complicates the story that the defendants have been telling in these cases,” co-author Mark Lemley told me. “Which is ‘we just learn word patterns. None of that shows up in the model.’”

But the Harry Potter result creates even more danger for Meta under that second theory—that Llama itself is a derivative copy of Rowling’s book.

“It’s clear that you can in fact extract substantial parts of Harry Potter and various other books from the model,” Lemley said. “That suggests to me that probably for some of those books there’s something the law would call a copy of part of the book in the model itself.”

The Google Books precedent probably can’t protect Meta against this second legal theory because Google never made its books database available for users to download—Google almost certainly would have lost the case if it had done that.

In principle, Meta could still convince a judge that copying 42 percent of Harry Potter was allowed under the flexible, judge-made doctrine of fair use. But it would be an uphill battle.

“The fair use analysis you’ve gotta do is not just ‘is the training set fair use,’ but ‘is the incorporation in the model fair use?’” Lemley said. “That complicates the defendants’ story.”

Grimmelmann also said there’s a danger that this research could put open-weight models in greater legal jeopardy than closed-weight ones. The Cornell and Stanford researchers could only do their work because the authors had access to the underlying model—and hence to the token probability values that allowed efficient calculation of probabilities for sequences of tokens.

Most leading labs, including OpenAI, Anthropic, and Google, have increasingly restricted access to these so-called logits, making it more difficult to study these models.

Moreover, if a company keeps model weights on its own servers, it can use filters to try to prevent infringing output from reaching the outside world. So even if the underlying OpenAI, Anthropic, and Google models have memorized copyrighted works in the same way as Llama 3.1 70B, it might be difficult for anyone outside the company to prove it.

Moreover, this kind of filtering makes it easier for companies with closed-weight models to invoke the Google Books precedent. In short, copyright law might create a strong disincentive for companies to release open-weight models.

“It’s kind of perverse,” Mark Lemley told me. “I don’t like that outcome.”

On the other hand, judges might conclude that it would be bad to effectively punish companies for publishing open-weight models.

“There’s a degree to which being open and sharing weights is a kind of public service,” Grimmelmann told me. “I could honestly see judges being less skeptical of Meta and others who provide open-weight models.”

Timothy B. Lee was on staff at Ars Technica from 2017 to 2021. Today, he writes Understanding AI, a newsletter that explores how AI works and how it’s changing our world. You can subscribe here.


Timothy is a senior reporter covering tech policy and the future of transportation. He lives in Washington DC.



Framework Laptop 12 review: I’m excited to see what the 2nd generation looks like


how much would you pay for personality?

A sturdy, thoughtful, cute design that just can’t compete in its price range.

Framework’s Laptop 12 has a lot of personality, but also a lot of shortcomings. Credit: Andrew Cunningham


“What’s this purple laptop? It’s cool.”

Over a decade-plus of doing gadget reviews and review-adjacent things, my wife (and, lately, my 5-year-old) have mostly stopped commenting on the ever-shifting selection of laptops I have in my bag or lying around the house at any given time. Maybe she can’t tell them apart, or maybe she just figures there isn’t that much to say about whatever black or silver metal slab I’m carrying around. Either way, they practically never elicit any kind of response, unless there are just too many of them sitting out in too many places.

But she did ask about the Framework Laptop 12, the third and latest major design in Framework’s slowly expanding lineup of modular, repairable, upgradeable laptops. With its five two-toned color options and sturdy plastic exterior, it’s definitely more approachable and friendly-looking than the Laptop 13 or Laptop 16, both metal slabs with a somewhat less-finished and prototype-y look to them. But it retains the features that a certain kind of PC geek likes about Framework’s other laptops—user-customizable and swappable ports, an easy-to-open design, first-class Linux support, and the promise of future upgrades that improve its performance and other specs.

Look and feel

The Laptop 12 stacked atop the Laptop 13. Credit: Andrew Cunningham

Plastic gets a bad rap, and there are indeed many subpar plastic gadgets out there. When done poorly, plastic can look and feel cheap, resulting in less durable devices that show more wear over time.

But well-done plastic can still feel solid and high-quality, in addition to being easier to make in different colors. Framework says the Laptop 12’s chassis is a combination of ABS plastic and TPU plastic (a more flexible, rubberized material), molded over a metal inner structure. The result is something that can probably actually take the shock of a drop or a fall better than many aluminum-and-glass laptops without feeling overly cheap or chintzy.

The five two-tone color options—the boring, businesslike black and gray, plus purple-and-gray lavender, pink-and-baby-blue bubblegum, and green sage—are the most fun thing about it, and the lavender and bubblegum colors are particularly eye-catching.

Keyboard and trackpad. Only the lavender and gray laptops get a color-matched trackpad; the keyboard and deck are always different shades of gray. Credit: Andrew Cunningham

Matching other components to the exterior of the system can be a bit of a crapshoot, though. The screwdriver and spudger that Framework provides for upgrading and repairing all of its systems does match the color of the laptop, and the two-tone styluses for the touchscreens will also match the laptops when they’re made available for purchase in the coming months.

The lavender option is the only one that can also be configured with a color-matched lavender trackpad—the only other trackpad option is gray, and the keyboard deck and the keyboard itself are all gray no matter what color laptop you pick. This is presumably meant to limit the number of different trackpad options that Framework has to manufacture and stock, but it is too bad that the laptop’s keyboard and palm rest aren’t as colorful as the rest of it.

The Laptop 12 also uses Framework’s still-unique Expansion Card system for customizing the built-in ports. These are all 10 Gbps USB 3.2 Gen 2 ports rather than the Thunderbolt ports on the Intel versions of the Laptop 13, but all four support the same speeds, all four support charging, and all four support display output, so you really can put whatever port you want wherever you want it.

A downside of the Laptop 12 is that, as of this writing, only the USB-C Expansion Modules are available in color-matched versions. If you want USB-A, HDMI, DisplayPort, or any other kind of port on your system, you’ll get the silver modules that were designed to match the finish on the Framework Laptops 13 and 16, so you’ll have to put up with at least one mismatched port on your otherwise adorable system.

Only the USB-C Expansion Cards are available in lavender, which can make for goofy-looking mismatches. Credit: Andrew Cunningham

Once you get past the adorable design, the Expansion Modules, and the sturdy construction, the system’s downsides start to become more apparent. The 12.2-inch, 1920×1200 touchscreen gets plenty bright and has a respectable contrast ratio (440 nits and 1,775:1 in our testing, respectively). But it’s surrounded by thick black bezels on all sides, particularly on the bottom—it does seem that either a larger screen or a slightly smaller laptop design would be possible if so much space weren’t wasted by these thick borders.

The display has good viewing angles but a distinctly mediocre color gamut, covering around 60 percent of the sRGB color space (compared to the high 90s for the Laptop 13 and most midrange to high-end IPS screens in other laptops). This is low enough that most colors appear slightly muted and washed out—reds most noticeably, though greens aren’t much better. You definitely don’t need a colorimeter to see the difference here.

Framework’s color-matched stylus isn’t ready yet, but you won’t need to wait for one if you want to use a pen with this touchscreen. Both the Universal Stylus Initiative (USI) 2.0 and Microsoft Pen Protocol (MPP) 2.0 specs are supported, so the Surface Pen, a bunch of Lenovo styluses, and any number of inexpensive third-party Amazon styluses will all work just fine. That said, the screen can only support one of those stylus specs at a time—MPP is on by default, and you can swap between them in the BIOS settings.

The webcam and mic have locks to disable them so that the OS can’t see or use them. Credit: Andrew Cunningham

The keyboard feels mostly fine, with good key spacing and a nice amount of travel. I noticed that I was occasionally missing letters the first couple of days I used the laptop—I was pressing the keys, but they intermittently didn’t register. That got better as I adjusted to the system. The trackpad is also unremarkable in a good way. Finger tracking and multi-touch gestures all worked as intended.

But the keyboard lacks a backlight, and it doesn’t have the fingerprint sensor you get with the Laptop 13. With no fingerprint sensor and no IR webcam, there are no biometric authentication options available for use with Windows Hello, so you’ll either need a PIN or a password to unlock your laptop every time you want to use it. Either omission would be sort of annoying in a laptop in this price range (we complained about the lack of keyboard backlight in the $700 Surface Laptop Go 2 a few years ago), but to be missing both is particularly frustrating in a modern system that costs this much.

Repairs and upgrades

We’ve been inside the Framework Laptop 13 enough times that we don’t do deep dives into its insides anymore, but as a new (and, in some ways, more refined) design, the Laptop 12 warrants a closer look this time around.

Framework’s pack-in Torx screwdriver is still the only tool you need to work on the Laptop 12. Undo the eight captive screws on the bottom of the laptop, and you’ll be able to lift away the entire keyboard and trackpad area to expose all of the other internal components, including the RAM, SSD, battery, and the motherboard itself.

The motherboard is quite a bit smaller than the Framework Laptop 13 board, and the two are definitely not interchangeable. Framework has never said otherwise, but it’s worth highlighting that these are two totally separate models that will have their own distinct components and upgrade paths—that goes for parts like the speakers and battery, too.

Laptop 12 motherboard on top, Laptop 13 motherboard on bottom. Credit: Andrew Cunningham

As a result of that reduction in board space, the Laptop 12 can only fit a single DDR5 RAM slot, which reduces memory bandwidth and limits your RAM capacity to 48GB. It also uses shorter M.2 2230 SSDs, like the Surface lineup or the Steam Deck. Unlike a few years ago, these SSDs are now readily available at retail, and it’s also easy to buy warranty-less ones on eBay or elsewhere that have been pulled from OEM systems. But they’re still a bit more expensive than the more common M.2 2280 size, and you have fewer options overall.

Framework has already published a guide on setting up the DIY Edition of the laptop and a few repair guides for common components. Guides for replacing bigger or more complex parts, like the display or the webcam, are still listed as “coming soon.”

Performance and battery life

I could politely describe the Laptop 12’s 2.5-year-old 13th-gen Intel Core processor as “mature.” This generation of Intel chips has stuck around for a lot longer than usual, to the point that Intel recently acknowledged that it has been dealing with shortages. They’re appealing to PC companies because they still offer decent everyday performance for basic computing without the additional costs imposed by things like on-package memory or having some or all of the chip manufactured outside of Intel’s own factories.

The upside of a slightly older processor is a more stable computing experience, in both Windows and Linux, since the companies and communities involved have had more time to add support and work out bugs; I had none of the sleep-and-wake issues or occasional video driver crashes I had while testing the Ryzen AI 300 version of the Framework Laptop 13.

The downside, of course, is that performance is pretty unexciting. These low-power U-series 12th- and 13th-gen Intel chips remain capable when it comes to day-to-day computing, but they fall far behind the likes of Intel and AMD’s newer chips, Qualcomm’s Snapdragon chips from the Microsoft Surface and other Copilot+ PCs, or the Apple M4 in the MacBook Air.

And while none of these chips are really intended for gaming laptops, the Laptop 12 isn’t even a great fit for the kind of casual Steam Deck-y 3D gaming that most Framework Laptop 13 models can handle. Technically, this is the same basic Intel Iris Xe GPU that the first few generations of Framework Laptop 13 used, which is not exciting as integrated GPUs go but is at least still minimally capable. But because the Laptop 12 only has a single RAM slot instead of two, memory bandwidth is halved, which makes the GPU identify itself as “Intel UHD Graphics” in Device Manager and drags down performance accordingly. (This is something these GPUs have always done, but they usually ship in systems that either have two RAM slots or soldered-down memory, so it usually doesn’t come up.)

Framework has tuned these chips to consume the same amount of power in both the “Balanced” and “Best Performance” power modes in Windows, with a 15 W sustained power limit and a 40 W limit for shorter, bursty workloads. This keeps the laptop feeling nice and responsive for day-to-day use and helps keep a lid on power usage for battery life reasons, but it also limits its performance for extended CPU-intensive workloads like our Handbrake video encoding test.

The Laptop 12 takes a lot longer to accomplish these tasks than some other laptops we’ve tested with similar chips, either because of the lower memory bandwidth or because Best Performance mode doesn’t let the chip consume a bunch of extra power. I’m not inclined to complain too much about this because it’s not the kind of thing you really buy an ultraportable laptop to do, but as with light gaming, it’s worth noting that the Laptop 12 doesn’t hit that same “usable for these workloads in a pinch” balance that the Laptop 13 does.

The Laptop 12’s battery life is decent relative to most Laptop 13s. Credit: Andrew Cunningham

The Core i5 version of the Laptop 12 lasted around 10 hours in the PCMark Modern Office battery life test, which isn’t stunning but is a step up from what the fully specced versions of the Framework Laptop 13 can offer. It will be just fine for a long flight or a full day of work or school. Our Framework reviews often complain about battery life, but I don’t think it will be an issue here for most users.

About that price

In some ways, the Laptop 12 is trying to be a fundamentally different laptop from the Laptop 13. For all the Laptop 13’s upgrades over the years, it has never had a touchscreen option, stylus support, or a convertible hinge.

But in most of the ways that count, the Laptop 12 is meant to be an “entry-level, lower-cost laptop,” which is how Framework CEO Nirav Patel has positioned it in the company’s announcement blog posts and videos. It features a slightly smaller, lower-resolution, less colorful screen with a lower refresh rate; a non-backlit keyboard; and considerably weaker processors. It also lacks both a fingerprint reader and a face-scanning webcam for Windows Hello.

The issue is that these cost-cutting compromises come at a price that’s a bit outside of what you’d expect of a “budget” laptop.

The DIY Edition of the Laptop 12 we’re evaluating here—a version that ships with the Windows license and all the components you need but which you assemble yourself—will run you at least $1,176, depending on the Expansion Modules you choose for your ports. That includes 16GB of DDR5 RAM and a 1TB M.2 2230 SSD, plus the Core i5-1334U processor option (2 P-cores, 8 E-cores). If you stepped down to a 500GB SSD instead, that’s still $1,116. A pre-built edition—only available in black, but with identical specifications—would run you $1,049.

The Laptop 13 compared to the Laptop 12. The Laptop 12 is missing quite a few quality-of-life things and has worse performance, but it isn’t all that much cheaper. Credit: Andrew Cunningham

This puts the Framework Laptop 12 in the same general price range as Apple’s MacBook Air, Microsoft’s 13-inch Surface Laptop, and even many editions of the Framework Laptop 13. And the Laptop 12 is charming, but its day-to-day user experience falls well short of any of those devices.

You can make it cheaper! Say you go for the Core i3-1315U version (two P-cores, four E-cores) instead, and you buy your own 16GB stick of DDR5 RAM (roughly $50 instead of $80) and 1TB SSD ($70 or $80 for a decent one, instead of $159). Say you have plenty of USB-C chargers at home so you don’t need to pay $55 for Framework’s version, and say you run Linux or ChromeOS, or you already have a Windows 11 product key, or you’ve bought your own Windows 11 key from one of those gray-market key-selling sites (as little as $10).

Now we’re talking about a PC that’s a little under $700, which is closer to “reasonable” for a brand-new touchscreen PC. But the laptop’s old CPU and poky performance also mean it’s competing with a wide swath of refurbished, used, and closeout-priced older PCs from other manufacturers.

In December, for example, I bought an SSD-less Lenovo ThinkPad L13 Yoga Gen 3 from eBay for around $300, with around a year left on its warranty. After I’d added an SSD and reinstalled Windows—no additional cost because it had a valid Windows license already—I ended up with a PC with the same screen resolution and similar specs but with a better-quality display with smaller bezels that made the screen larger without making the laptop larger; a faster GPU configuration; a backlit keyboard; and a fingerprint reader.

I know it’s not possible for everyone to just go out and buy a laptop like this. The boring black outline of a midrange ThinkPad is also the polar opposite of the Framework Laptop 12, but it’s an example of what a tech-savvy buyer can find on the secondhand market when looking for a cost-effective alternative to what Framework is offering here.

A good laptop, but not a good value

The Framework Laptop 12. Credit: Andrew Cunningham

There are plenty of factors beyond Framework’s control that contribute to the Laptop 12’s price, starting with on-again-off-again global trade wars and the uncertainty that comes with them. There’s also Framework’s status as a niche independent PC company rather than a high-volume behemoth. When you ship the number of computers that Apple does, it’s almost certainly easier to make a $999 laptop that is both premium and profitable.

But whatever the reason, I can’t escape the feeling that the Laptop 12 was meant to be cheaper than it has ended up being. The result is a computer with many of the compromises of an entry-level system, but without a matching entry-level price tag. It’s hard to put a price on some of the less-tangible benefits of a Framework laptop, like ease of repairs and the promise of future upgrades, but my gut feeling is that the Framework Laptop 13 falls on the “right” side of that line, and the Laptop 12 doesn’t.

I am charmed by the Laptop 12. It’s cute and functional, and it stands out among high-end aluminum slabs. It adds some subtle refinement to elements of the original Framework Laptop 13 design, including some touches I hope end up making it into a future iteration of that design—softer corners, more color options, and an easier-to-install keyboard and trackpad. And it’s far from a bad performer for day-to-day desktop use; it’s just that the old, poky processor limits its capabilities compared to other PCs that don’t cost that much more than it does.

I probably wouldn’t recommend this over the Laptop 13 for anyone interested in what Framework is doing, unless a touchscreen is a make-or-break feature, and even then, I’d encourage people to take a good, long look at Microsoft, Lenovo, Dell, or HP’s convertible offerings first. But I hope that Framework does what it’s done for the Laptop 13 over the last four or so years: introduce updated components, iterate on different elements of the design, and gradually bring the price down into a more reasonable range through refurbished and factory-second parts. As a $1,000-ish computer, this leaves a lot to be desired. But as the foundation for a new Framework platform, it has enough promise to be interesting.

The good

  • Eye-catching, colorful, friendly design that stands out among metal slabs.
  • Simple to build, repair, and upgrade.
  • Dual-plastic design over a metal frame is good for durability.
  • First convertible touchscreen in a Framework laptop.
  • Customizable ports.
  • Decent performance for everyday computing.
  • Respectable battery life.

The bad

  • Old, slow chip isn’t really suitable for the light gaming or heavy productivity work that the larger Framework Laptop 13 can do.
  • Pre-built laptop only comes in boring black.
  • Mediocre colors and large bezels spoil the screen.

The ugly

  • It’s just too expensive for what it is. It looks and feels like a lower-cost laptop, but without a dramatically lower price than the nicer, faster Framework 13.

Photo of Andrew Cunningham

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

Framework Laptop 12 review: I’m excited to see what the 2nd generation looks like Read More »

the-macbook-air-is-the-obvious-loser-as-the-sun-sets-on-the-intel-mac-era

The MacBook Air is the obvious loser as the sun sets on the Intel Mac era


In the end, Intel Macs have mostly gotten a better deal than PowerPC Macs did.

For the last three years, we’ve engaged in some in-depth data analysis and tea-leaf reading to answer two questions about Apple’s support for older Macs that still use Intel chips.

First, was Apple providing fewer updates and fewer years of software support to Macs based on Intel chips as it worked to transition the entire lineup to its internally developed Apple Silicon? And second, how long could Intel Mac owners reasonably expect to keep getting updates?

The answer to the first question has always been “it depends, but generally yes.” And this year, we have a definitive answer to the second question: For the bare handful of Intel Macs it supports, macOS 26 Tahoe will be the final new version of the operating system to support any of Intel’s chips.

To its credit, Apple has clearly spelled this out ahead of time rather than pulling the plug on Intel Macs with no notice. The company has also said that it plans to provide security updates for those Macs for two years after Tahoe is replaced by macOS 27 next year. These Macs aren’t getting special treatment—this has been Apple’s unspoken, unwritten policy for macOS security updates for decades now—but setting aside its usual “we don’t comment on our future plans” stance to give people a couple of years of predictability is something we’ve been pushing Apple to do for a long time.

With none of the tea leaf reading left to do, we can now present a fairly definitive look at how Apple has handled the entire Intel transition, compare it to how the PowerPC-to-Intel switch went two decades ago, and predict what it might mean about support for Apple Silicon Macs.

The data

We’ve assembled an epoch-spanning spreadsheet of every PowerPC or Intel Mac Apple has released since the original iMac kicked off the modern era of Apple back in 1998. On that list, we’ve recorded the introduction date for each Mac, the discontinuation date (when it was either replaced or taken off the market), the version of macOS it shipped with, and the final version of macOS it officially supported.

For those macOS versions, we’ve recorded the dates they received their last major point update—these are the feature-adding updates each release gets while it’s Apple’s latest and greatest version of macOS, as macOS 15 Sequoia is right now. Once a version has been replaced, Apple releases security-only patches and Safari browser updates for it for another two years, so we’ve also recorded the dates that those Macs would have received their final security update. For the Intel Macs that are still receiving updates (those running macOS 13, 14, and 15) and for macOS 26 Tahoe, we’ve extrapolated end-of-support dates based on Apple’s past practices.

A 27-inch iMac model. It’s still the only Intel Mac without a true Apple Silicon replacement. Credit: Andrew Cunningham

We’re primarily focusing on two time spans: from the date of each Mac’s introduction to the date it stopped receiving major macOS updates, and from the date of each Mac’s introduction to the date it stopped receiving any updates at all. We consider any Macs inside either of these spans to be actively supported; Macs that are no longer receiving regular updates from Apple will gradually become less secure and less compatible with modern apps as time passes. We measure by years of support rather than number of releases, which controls for Apple’s transition to a once-yearly release schedule for macOS back in the early 2010s.

We’ve also tracked the time between each Mac model’s discontinuation and when it stopped receiving updates. This is how Apple determines which products go on its “vintage” and “obsolete” hardware lists, which determine the level of hardware support and the kinds of repairs that the company will provide.
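If you want to run this kind of analysis yourself, the underlying math is simple date subtraction. Here is a minimal sketch in Python of how those two spans can be computed; the models and dates below are illustrative placeholders, not rows from our actual spreadsheet.

    from datetime import date

    # Each entry: (model, introduced, last feature update, last security update).
    # These rows are made-up examples, not entries from Ars' real dataset.
    macs = [
        ("Example iMac", date(2017, 6, 5), date(2024, 9, 16), date(2026, 9, 16)),
        ("Example MacBook Air", date(2020, 3, 18), date(2025, 9, 15), date(2027, 9, 15)),
    ]

    def years_between(start, end):
        # Express the span in years, to match the "years of support" framing.
        return round((end - start).days / 365.25, 1)

    for model, intro, last_feature, last_security in macs:
        feature_span = years_between(intro, last_feature)
        total_span = years_between(intro, last_security)
        print(f"{model}: {feature_span} years of new macOS versions, "
              f"{total_span} years including security-only updates")

Run the same subtraction from each model’s discontinuation date instead of its introduction date, and you get the spans that feed Apple’s “vintage” and “obsolete” lists.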

We have lots of detailed charts, but here are some highlights:

  • For all Mac models tracked, the average Mac receives about 6.6 years of macOS updates that add new features, plus another two years of security-only updates.
  • If you only count the Intel era, the average is around seven years of macOS updates, plus two years of security-only patches.
  • Most (though not all) Macs released since 2016 come in lower than either of these averages, indicating that Apple has been less generous to most Intel Macs since the Apple Silicon transition began.
  • The three longest-lived Macs are still the mid-2007 15- and 17-inch MacBook Pros, the mid-2010 Mac Pro, and the mid-2007 iMac, which received new macOS updates for around nine years after their introduction (and security updates for around 11 years).
  • The shortest-lived Mac is still the late-2008 version of the white MacBook, which received only 2.7 years of new macOS updates and another 3.3 years of security updates from the time it was introduced. (Late PowerPC-era and early Intel-era Macs are all pretty bad by modern standards.)

The charts

If you bought a Mac any time between 2016 and 2020, you’re generally settling for fewer years of software updates than you would have gotten in the recent past. If you bought a Mac released in 2020, the tail end of the Intel era when Apple Silicon Macs were around the corner, your reward is the shortest software support window since 2006.

There are outliers in either direction. The sole iMac Pro, introduced in 2017 as Apple tried to regain some of its lost credibility with professional users, will end up with 7.75 years of updates plus another two years of security updates when all is said and done. Buyers of 2018–2020 MacBook Airs and the two-port version of the 2020 13-inch MacBook Pro, however, are treated pretty poorly, getting not quite 5.5 years of updates (plus two years of security patches) on average from the date they were introduced.

That said, most Macs usually end up getting a little over six years of macOS updates and two more years of security updates. If that’s a year or two lower than the recent past, it’s also not ridiculously far from the historical average.

If there’s something to praise here, it’s that Apple doesn’t seem to treat any of its Macs differently based on how much they cost. Now that we have a complete overview of the Intel era, breaking out the support timelines by model rather than by model year shows that a Mac mini doesn’t get dramatically more or less support than an iMac or a Mac Pro, despite costing a fraction of the price. A MacBook Air doesn’t receive significantly more or less support than a MacBook Pro.

These are just averages, and some models are lucky while others are not. The no-adjective MacBook that Apple has sold on and off since 2006 is also an outlier, with fewer years of support on average than the other Macs.

If there’s one overarching takeaway, it’s that you should buy new Macs as close to the date of their introduction as possible if you want to maximize your software support window. Especially for Macs that were sold continuously for years and years—the 2013 and 2019 Mac Pro, the 2018 Mac mini, the non-Retina 2015 MacBook Air that Apple sold some version of for over four years—buying them toward the end of their retail lifecycle means settling for fewer years of updates than you would have gotten if you had waited for the introduction of a new model. And that’s true even though Apple’s hardware support timelines are all calculated from the date of last availability rather than the date of introduction.

It just puts Mac buyers in a bad spot when Apple isn’t prompt with hardware updates, forcing people to either buy something that doesn’t fully suit their needs or settle for something older that will last for fewer years.

What should you do with an older Intel Mac?

The big question: If your Intel Mac is still functional but Apple is no longer supporting it, is there anything you can do to keep it both secure and functional?

All late-model Intel Macs officially support Windows 10, but that OS has its own end-of-support date looming in October 2025. Windows 11 can be installed if you bypass its system requirements; that generally works well, but it requires additional fiddling when it comes time to install major updates. Consumer-focused Linux distributions like Ubuntu, Mint, or Pop!_OS may work, depending on your hardware, but they come with a steep learning curve for non-technical users. Google’s ChromeOS Flex may also work, but ChromeOS is more functionally limited than most other operating systems.

The OpenCore Legacy Patcher provides one possible stay of execution for Mac owners who want to stay on macOS for as long as they can. But it faces two steep uphill climbs in macOS Tahoe. First, as Apple has removed more Intel Macs from the official support list, it has removed more of the underlying code from macOS that is needed to support those Macs and other Macs with similar hardware. This leaves more for the OpenCore Legacy Patcher team to patch in from older OSes, and this kind of forward-porting can leave hardware and software partly functional or non-functional.

Second, there’s the Apple T2 to consider. The Macs with a T2 treat it as a load-bearing co-processor, responsible for crucial operating system functions such as enabling Touch ID, serving as an SSD controller, encoding and decoding videos, communicating with the webcam and built-in microphone, and other operations. But Apple has never opened the T2 up to anyone, and it remains a bit of a black box for both the OpenCore/Hackintosh community and folks who would run Linux-based operating systems like Ubuntu or ChromeOS on that hardware.

The result is that the 2018 and 2019 MacBook Airs that didn’t support macOS 15 Sequoia last year never had support added to the OpenCore Legacy Patcher, because the T2 chip simply won’t communicate with a system booted via OpenCore. Some T2 Macs don’t have this problem. But if yours does, it’s unlikely that anyone will be able to do anything about it, and your software support will end when Apple says it does.

Does any of this mean anything for Apple Silicon Mac support?

Late-model Intel MacBook Airs have fared worse than other Macs in terms of update longevity. Credit: Valentina Palladino

It will likely be at least two or three years before we know for sure how Apple plans to treat Apple Silicon Macs. Will the company primarily look at specs and technical capabilities, as it did from the late-’90s through to the mid-2010s? Or will Apple mainly stop supporting hardware based on its age, as it has done for more recent Macs and most current iPhones and iPads?

The three models to examine for this purpose are the first ones to shift to Apple Silicon: the M1 versions of the MacBook Air, Mac mini, and 13-inch MacBook Pro, all launched in late 2020. If these Macs are dropped in, say, 2027 or 2028’s big macOS release, but other, later M1 Macs like the iMac stay supported, it means Apple is likely sticking to a somewhat arbitrary age-based model, with certain Macs cut off from software updates that they are perfectly capable of running.

But it’s our hope that all Apple Silicon Macs have a long life ahead of them. The M2, M3, and M4 have all improved on the M1’s performance and other capabilities, but the M1 Macs are much more capable than the Intel ones they supplanted, the M1 was used widely across various Mac models for a long time, and Mac owners often pay much more for their devices than iPhone and iPad owners do. We’d love to see macOS return to the longer-tail software support it provided in the late ’00s and mid-2010s, when models could expect to see seven or eight all-new macOS versions and another two years of security updates afterward.

All signs point to Apple using the launch date of any given piece of hardware as the determining factor for continued software support. But that isn’t how it has always been, nor is it how it always has to be.

Photo of Andrew Cunningham

Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

The MacBook Air is the obvious loser as the sun sets on the Intel Mac era Read More »

nintendo-switch-2:-the-ars-technica-review

Nintendo Switch 2: The Ars Technica review


Nintendo’s overdue upgrade is a strong contender, even amid competition from handheld PCs.

Maybe not the best showcase of the hardware, but squeezing 40+ years of Nintendo history into a single image was too compelling.

Maybe not the best showcase of the hardware, but squeezing 40+ years of Nintendo history into a single image was too compelling.

When Nintendo launched the Switch in 2017, the sheer novelty of the new hardware brought the company a lot of renewed attention. After the market disaster of the Wii U’s homebound “second screen” tablet, Nintendo exploited advances in system-on-a-chip miniaturization to create something of a minimum viable HD-capable system that could work as both a lightweight handheld and a slightly underpowered TV-based console. That unique combination, and Nintendo’s usual selection of first-party system sellers, set the console apart from what the rest of the gaming market was offering at the time.

Eight years later, the Switch 2 launched into a transformed gaming hardware market that the original Switch played a large role in shaping, one full of portable gaming consoles that can optionally be connected to a TV. That includes full-featured handheld gaming PCs like the Steam Deck and its many imitators, but also streaming-focused Android-based gaming handhelds and retro-focused emulation machines on the cheaper end. Even Microsoft is preparing to get in on the act, streamlining the Windows gaming experience for an Asus-powered handheld gaming PC that hides the Windows desktop.

Mario is excited! Are you?

Credit: Kyle Orland

Mario is excited! Are you? Credit: Kyle Orland

Those market changes make the Switch 2 a lot less of a novelty than its predecessor. As its name implies, it is essentially a direct sequel to the original Switch hardware, with improvements to the physical hardware and internal architecture. Rather than shaking things up with a new concept, Nintendo seems to be saying, “Hey, you liked the Switch? Here’s the same thing, but moreso.”

That “moreso” will surely be enough for players who complained about the Switch’s increasingly obvious struggles to play graphically demanding games in the last few years. But in a gaming world full of capable and usable handheld PCs, a “more of the same” Switch 2 might be a bit of a tougher sell.

Joyful Joy-Cons

Let’s start with one feature that the Switch line still can boast over most of its handheld gaming competition: the removable Joy-Cons. The new magnetic slotting system for these updated controllers on the Switch 2 is a sheer joy to use, allowing for easy and quick one-handed removal as well as a surprisingly secure portable mode connection. After a week spent snapping them on and off dozens of times, I still can’t get over how great the design feels.

The new Joy-Cons also ameliorate what was probably the largest complaint about the ones on the Switch: their size. Everything from the overall footprint to the buttons and joystick has been expanded to feel much more appropriate in larger hands. The days of average adults having to awkwardly scrunch their fingers around a Switch Joy-Con in each hand can be relegated to the past, where they belong.

Holding a single Joy-Con in two hands is still not ideal, but it works in a pinch.

Holding a single Joy-Con in two hands is still not ideal, but it works in a pinch.

Like the Switch before it, the removable Joy-Cons can also be used separately, essentially offering baseline purchasers two controllers for the price of one. The added size helps make holding an individual Joy-Con horizontally in two hands much more comfortable, especially when it comes to tapping the expanded shoulder buttons on the controllers’ inner edge. But the face buttons and joystick are still a bit too cramped and oddly placed to make this a preferred way to play for long stretches.

Still, for situations where you happen to have other players around—especially young children who might not mind the smaller-than-standard size—it’s nice to have a feasible multiplayer option without needing to invest in new controllers. And the Switch 2’s seamless compatibility with your old Switch controllers (in tabletop or docked mode, at least) provides even more control flexibility and value for upgraders.

Control compromises

The main problem with the Switch 2 Joy-Cons continues to be their thinness, which is practically unchanged from the original Switch. That’s handy for keeping the overall system profile nice and trim in portable mode, but it means the Joy-Cons are missing the bulbous, rounded palm grips you see on handhelds like the Steam Deck and standard console controllers dating back to the original PlayStation.

Without this kind of grip, the thin, rounded bottom corner of the Joy-Cons ends up wedged oddly between the fleshy parts of your palm. Your free fingers, meanwhile, are either awkwardly wrapped around the edge of the loose Joy-Cons or uncomfortably perched to support the flat back of a portable system that’s a noticeable 34 percent heavier than the original Switch. And while an included Joy-Con holster helps add these rounded grips for tabletop or docked play, the “flat finger” problem is unavoidable when playing the system in portable mode.

The included grip gives your palms a comfortable place to rest when holding the Joy-Cons.

The included grip gives your palms a comfortable place to rest when holding the Joy-Cons.

After spending a week with the Joy-Cons, I started to notice a few other compromises. Despite the added size, the face buttons are still slightly smaller than you’ll find on other controllers, meaning they can dig into the pad of your thumb when held down for extended periods. The shoulder buttons, which have also been expanded from the original Switch, still lack the increased travel and sensitivity of the analog triggers that are standard on nearly every competing controller. And the positioning of the right joystick encroaches quite close to the buttons just above it, making it easy to accidentally nudge the stick when pressing the lower B button.

Those kinds of control compromises help keep the portable Switch 2 notably smaller and lighter than most of its handheld PC competition. But they also mean my Switch 2 will probably need something like the Nyxi Hyperion Pro, which I’ve come to rely on to make portable play on the original Switch much more comfortable.

Improvements inside and out

Unlike the controllers, the screen on the Switch 2 is remarkably low on compromises. The full 1080p, 7.9-inch display supports HDR and variable refresh rates up to 120 Hz, making it a huge jump over both the original Switch and most of the screens you’ll find on competing handheld gaming PCs (or even some standard HDTVs when it comes to the maximum frame rate). While the screen lacks the deep blacks of a true OLED display, I found that the overall brightness (which reportedly peaks at about 450 nits) makes it hard to notice.

The bigger, brighter, sharper screen on the Switch 2 (top) is a huge improvement over the first Switch.

Credit: Kyle Orland

The bigger, brighter, sharper screen on the Switch 2 (top) is a huge improvement over the first Switch. Credit: Kyle Orland

The custom Nvidia processor inside the Switch 2 is also a welcome improvement over a Tegra processor that was already underpowered for the Switch in 2017. We’ve covered in detail how much of a difference this makes for Switch titles that have been specially upgraded to take advantage of that extra power, fixing fuzzy graphics and frame rate issues that were common on Nintendo’s previous system. It’s hard to imagine going back after seeing Tears of the Kingdom running in a silky-smooth 60 fps or enjoying the much sharper textures and resolution of portable No Man’s Sky on the Switch 2.

Link’s Awakening, Switch 1, docked. Credit: Andrew Cunningham

However, the real proof of the Switch 2’s improved power can be seen in early third-party ports like Cyberpunk 2077, Split Fiction, Hitman World of Assassination, and Street Fighter VI, which would have required significant visual downgrades to even run on the original Switch. To my eye, the visual impact of these ports is roughly comparable to what you’d get on a PS4 Pro (in handheld mode) or an Xbox Series S (in docked mode). In the medium term, that should be more than enough performance for all but the most determined pixel-counters, given the distinctly diminishing graphical returns we’re seeing from more advanced (and more expensive) hardware like the PS5 Pro.

The Switch 2 delivers a perfectly fine-looking version of Cyberpunk 2077

Credit: CD Projekt Red

The Switch 2 delivers a perfectly fine-looking version of Cyberpunk 2077 Credit: CD Projekt Red

The biggest compromise for all this extra power comes in the battery life department. Games like Mario Kart World or Cyberpunk 2077 can take the system from a full charge to completely drained in somewhere between 2 and 2.5 hours. This time span increases significantly for less demanding games like old-school 2D classics and can be slightly extended if you reduce the screen brightness. Still, it’s a bit grating to need to rely on an external battery pack just to play Mario Kart World for an entire cross-country flight.

Externally, the Switch 2 is full of tiny but welcome improvements, like an extra upper edge USB-C port for more convenient charging and a thin-but-sturdy U-shaped stand for tabletop play. Internally, the extremely welcome high-speed storage helps cut initial load times on games like Mario Kart 8 roughly in half (16.5 seconds on the Switch versus 8.5 seconds on the Switch 2 in our testing).

The embedded stand on the Switch 2 (right) is a massive improvement for tabletop mode play.

Credit: Kyle Orland

The embedded stand on the Switch 2 (right) is a massive improvement for tabletop mode play. Credit: Kyle Orland

But the 256GB of internal storage included in the Switch 2 is also laughably small, considering that individual digital games routinely require downloads of 50GB to 70GB. That’s especially true in a world where many third-party games are only available as Game Key Cards, which still require that the full game be downloaded. Most Switch 2 customers should budget $50 or more for a MicroSD Express card to add at least 256GB of additional storage.
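To put that in perspective, here is some quick napkin math; the amount of space the system software reserves for itself is an assumption, since we are only working from the download sizes cited above.

    # Napkin math on the Switch 2's 256GB of internal storage.
    # The system-software overhead is an assumption, not an official Nintendo figure.
    internal_storage_gb = 256
    reserved_for_system_gb = 10  # assumed overhead for illustration
    usable_gb = internal_storage_gb - reserved_for_system_gb

    for game_size_gb in (50, 70):  # "downloads of 50GB to 70GB"
        print(f"About {usable_gb // game_size_gb} games at {game_size_gb}GB each")

Under those assumptions, three or four big downloads fill the internal storage, which is why that MicroSD Express card is close to mandatory.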

Those Nintendo gimmicks

Despite the “more of the same” overall package, there are a few small areas where the Switch 2 does something truly new. Mouse mode is the most noticeable of these, letting you transform a Joy-Con into a PC-style mouse simply by placing it on its edge against most flat-ish surfaces. We tested this mode on surfaces ranging from a hard coffee table to a soft pillow-top mattress to this reviewer’s hairy thighs, and we found that mouse mode was surprisingly functional in every test. While the accuracy and precision fall off on the squishier and rounder of those tested surfaces, it’s something of a marvel that it works at all.

A bottom-up look at the awkward claw-like grip required for mouse mode.

Credit: Kyle Orland

A bottom-up look at the awkward claw-like grip required for mouse mode. Credit: Kyle Orland

Unfortunately, the ergonomics of mouse mode still leave much to be desired. This again comes down to the thinness of the Joy-Cons, which don’t have the large, rounded palm rest you’d expect from a good PC mouse. That means getting a good sense of control in mouse mode requires hooking your thumb, ring finger, and pinky finger into a weird modified claw-like grip around the Joy-Con, a pose that becomes uncomfortable after even moderate use. A holster that lets the Joy-Con slot into a more traditional mouse shape could help with this problem; failing that, mouse mode seems destined to remain a little-used gimmick.

GameChat is the Switch 2’s other major “new” feature, letting you communicate with friends directly through the system’s built-in microphone (which works rather well even across a large and noisy living room) or an optional webcam (many standard USB cameras we tested worked just fine). It’s a welcome and simple way to connect with other players without having to resort to Discord or the bizarre external smartphone app Nintendo relied on for voice chat on the original Switch.

In most ways, it feels like GameChat is just playing catch-up to the kind of social sharing features competitors like Microsoft were already including in their consoles back in 2005. However, we appreciate GameChat’s ability to easily share a live view of your screen with friends, even if the low-frame-rate video won’t give Twitch streams a run for their money.

Those kinds of complaints can also apply to GameShare, which lets Switch 2 owners stream video of their game with a second player, allowing them to join in the game from a secondary Switch or Switch 2 console (either locally or remotely). The usability of this feature seems heavily dependent on the wireless environment in the players’ house, ranging from smooth but grainy to unplayably laggy. And the fact that GameShare only works with specially coded games is a bit annoying when Steam Remote Play offers a much more generalized remote co-op solution on PC.

The best of both worlds?

This is usually the point in a console review where I warn you that buying a console at or near launch is a poor value proposition, as you’ll never pay more for a system with fewer games. That’s not necessarily true these days. The original Switch never saw an official price drop in its eight years on the market, and price increases are becoming increasingly common for some video game hardware. If you think you’re likely to ever be in the market for a Switch 2, now might be the best time to pull the trigger.

Mario Kart World offers plenty to see and do until more must-have games come to the Switch 2 library.

Credit: Nintendo

Mario Kart World offers plenty to see and do until more must-have games come to the Switch 2 library. Credit: Nintendo

That said, there’s not all that much to do with a brand new Switch 2 unit at the moment. Mario Kart World is being positioned as the major system seller at launch, revitalizing an ultra-popular, somewhat stale series with a mixed bag of bold new ideas. Nintendo’s other first-party launch title, the $10 Switch 2 Welcome Tour, is a tedious affair that offers a few diverting minigames amid dull slideshows and quizzes full of corny PR speak.

The rest of the Switch 2’s launch library is dominated by ports of games that have been available on major non-Switch platforms for anywhere from months to years. That’s nice if the Switch has been your only game console during that time or if you’ve been looking for an excuse to play these titles in full HD on a beautiful portable screen. For many gamers, though, these warmed-over re-releases won’t be that compelling.

Other than that, there are currently only the barest handful of completely original launch titles that require the Switch 2, none of which really provide a meaningful reason to upgrade right away. For now, once you tire of Mario Kart, you’ll be stuck replaying your old Switch games (often with welcome frame rate and resolution improvements) or checking out a trio of emulated GameCube games available to Switch Online Expansion Pack subscribers (they look and play just fine).

Looking to the future, the promise of further Nintendo first-party games is, as usual, the primary draw for the company’s hardware. In the near term, games like Donkey Kong Bananza, Pokémon Legends Z-A, and Metroid Prime 4 (which will also be available on the older Switch with less wow-inducing performance) are the biggest highlights in the pipeline. Projecting a little further out, the Switch 2 will be the only way to legitimately play Mario and Zelda adventures that seem highly likely to be can’t-miss classics, given past performance.

From top: Switch 2, Steam Deck OLED, Lenovo Legion Go S. Two of these three can play your entire Steam library. One of these three can play the new Mario Kart…

Credit: Kyle Orland

From top: Switch 2, Steam Deck OLED, Lenovo Legion Go S. Two of these three can play your entire Steam library. One of these three can play the new Mario Kart… Credit: Kyle Orland

Nintendo aside, the Switch 2 seems well-positioned to receive capable, portable-ready ports of some of the more demanding third-party games in the foreseeable future. Already, we’ve seen Switch 2 announcements for catalog titles like Elden Ring and future releases like 007 First Light, as well as a handful of third-party exclusives like FromSoft’s vampire-filled Duskbloods.

Those are pretty good prospects for a $450 portable/TV console hybrid. But even with a bevy of ports and exclusives, it could be hard for the Switch 2’s library to compete with the tens of thousands of games available on any handheld PC worth its salt. You’ll pay a bit more for one of those portables if you’re looking for something that matches the quality of the Switch 2’s screen and processor—for the moment, at least. But the PC ecosystem’s wider software selection and ease of customization might make that investment worth it for gamers who don’t care too much about Nintendo’s first-party efforts.

If you found yourself either regularly using or regularly coveting a Switch at any point over the last eight years, the Switch 2 is an obvious and almost necessary upgrade. If you’ve resisted the siren song for this long, though, you can probably continue to ignore Nintendo’s once-novel hardware line.

Photo of Kyle Orland

Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.

Nintendo Switch 2: The Ars Technica review Read More »

how-to-draft-a-will-to-avoid-becoming-an-ai-ghost—it’s-not-easy

How to draft a will to avoid becoming an AI ghost—it’s not easy


Why requests for “no AI resurrections” will probably go ignored.

Proton beams capturing the ghost of OpenAI to suck it into a trap where it belongs

All right! This AI is TOAST! Credit: Aurich Lawson

All right! This AI is TOAST! Credit: Aurich Lawson

As artificial intelligence has advanced, AI tools have emerged to make it possible to easily create digital replicas of lost loved ones, which can be generated without the knowledge or consent of the person who died.

Trained on the data of the dead, these tools, sometimes called grief bots or AI ghosts, may be text-, audio-, or even video-based. Chatting provides what some mourners feel is a close approximation to ongoing interactions with the people they love most. But the tech remains controversial, perhaps complicating the grieving process while threatening to infringe upon the privacy of the deceased, whose data could still be vulnerable to manipulation or identity theft.

Because of suspected harms and perhaps a general revulsion at the idea, not everybody wants to become an AI ghost.

After a realistic video simulation was recently used to provide a murder victim’s impact statement in court, Futurism summed up social media backlash, noting that the use of AI was “just as unsettling as you think.” And it’s not the first time people have expressed discomfort with the growing trend. Last May, The Wall Street Journal conducted a reader survey seeking opinions on the ethics of so-called AI resurrections. Responding, a California woman, Dorothy McGarrah, suggested there should be a way to prevent AI resurrections in your will.

“Having photos or videos of lost loved ones is a comfort. But the idea of an algorithm, which is as prone to generate nonsense as anything lucid, representing a deceased person’s thoughts or behaviors seems terrifying. It would be like generating digital dementia after your loved ones’ passing,” McGarrah said. “I would very much hope people have the right to preclude their images being used in this fashion after death. Perhaps something else we need to consider in estate planning?”

For experts in estate planning, the question may start to arise as more AI ghosts pop up. But for now, writing “no AI resurrections” into a will remains a complicated process, experts suggest, and such requests may not be honored by all unless laws are changed to reinforce a culture of respecting the wishes of people who feel uncomfortable with the idea of haunting their favorite people through AI simulations.

Can you draft a will to prevent AI resurrection?

Ars contacted several law associations to find out if estate planners are seriously talking about AI ghosts. Only the National Association of Estate Planners and Councils responded; it connected Ars to Katie Sheehan, an expert in the estate planning field who serves as a managing director and wealth strategist for Crestwood Advisors.

Sheehan told Ars that very few estate planners are prepared to answer questions about AI ghosts. She said not only does the question never come up in her daily work, but it’s also “essentially uncharted territory for estate planners since AI is relatively new to the scene.”

“I have not seen any documents drafted to date taking this into consideration, and I review estate plans for clients every day, so that should be telling,” Sheehan told Ars.

Although Sheehan has yet to see a will attempting to prevent AI resurrection, she told Ars that there could be a path to make it harder for someone to create a digital replica without consent.

“You certainly could draft into a power of attorney (for use during lifetime) and a will (for use post death) preventing the fiduciary (attorney in fact or executor) from lending any of your texts, voice, image, writings, etc. to any AI tools and prevent their use for any purpose during life or after you pass away, and/or lay the ground rules for when they can and cannot be used after you pass away,” Sheehan told Ars.

“This could also invoke issues with contract, property and intellectual property rights, and right of publicity as well if AI replicas (image, voice, text, etc.) are being used without authorization,” Sheehan said.

And there are likely more protections for celebrities than for everyday people, Sheehan suggested.

“As far as I know, there is no law” preventing unauthorized non-commercial digital replicas, Sheehan said.

Widely adopted by states, the Revised Uniform Fiduciary Access to Digital Assets Act—which governs who gets access to online accounts of the deceased, like social media or email accounts—could be helpful but isn’t a perfect remedy.

That law doesn’t directly “cover someone’s AI ghost bot, though it may cover some of the digital material some may seek to use to create a ghost bot,” Sheehan said.

“Absent any law” blocking non-commercial digital replicas, Sheehan expects that people’s requests for “no AI resurrections” will likely “be dealt with in the courts and governed by the terms of one’s estate plan, if it is addressed within the estate plan.”

Those potential fights seemingly could get hairy, as “it may be some time before we get any kind of clarity or uniform law surrounding this,” Sheehan suggested.

In the future, Sheehan said, requests prohibiting digital replicas may eventually become “boilerplate language in almost every will, trust, and power of attorney,” just as instructions on digital assets are now.

As “all things AI become more and more a part of our lives,” Sheehan said, “some aspects of AI and its components may also be woven throughout the estate plan regularly.”

“But we definitely aren’t there yet,” she said. “I have had zero clients ask about this.”

Requests for “no AI resurrections” will likely be ignored

Whether loved ones would—or even should—respect requests blocking digital replicas appears to be debatable. But at least one person who built a grief bot wished he’d done more to get his dad’s permission before moving forward with his own creation.

A computer science professor at the University of Washington Bothell, Muhammad Aurangzeb Ahmad, was one of the earliest AI researchers to create a grief bot more than a decade ago after his father died. He built the bot to ensure that his future kids would be able to interact with his father after seeing how incredible his dad was as a grandfather.

When Ahmad started his project, there was no ChatGPT or other advanced AI model to serve as the foundation, so he had to train his own model based on his dad’s data. Putting immense thought into the effort, Ahmad decided to close off the system from the rest of the Internet so that only his dad’s memories would inform the model. To prevent unauthorized chats, he kept the bot on a laptop that only his family could access.

Ahmad was so intent on building a digital replica that felt just like his dad that it didn’t occur to him until after his family started using the bot that he never asked his dad if this was what he wanted. Over time, he realized that the bot was biased to his view of his dad, perhaps even feeling off to his siblings who had a slightly different relationship with their father. It’s unclear if his dad would similarly view the bot as preserving just one side of him.

Ultimately, Ahmad didn’t regret building the bot, and he told Ars he thinks his father “would have been fine with it.”

But he did regret not getting his father’s consent.

For people creating bots today, seeking consent may be appropriate if there’s any chance the bot may be publicly accessed, Ahmad suggested. He told Ars that he would never have been comfortable with the idea of his dad’s digital replica being publicly available because the question of an “accurate representation” would come even more into play, as malicious actors could potentially access it and sully his dad’s memory.

Today, anybody can use ChatGPT’s model to freely create a similar bot with their own loved one’s data. And a wide range of grief tech services have popped up online, including HereAfter AI, SeanceAI, and StoryFile, Axios noted in an October report detailing the latest ways “AI could be used to ‘resurrect’ loved ones.” As this trend continues “evolving very fast,” Ahmad told Ars that estate planning is probably the best way to communicate one’s AI ghost preferences.

But in a recently published article on “The Law of Digital Resurrection,” law professor Victoria Haneman warned that “there is no legal or regulatory landscape against which to estate plan to protect those who would avoid digital resurrection, and few privacy rights for the deceased. This is an intersection of death, technology, and privacy law that has remained relatively ignored until recently.”

Haneman agreed with Sheehan that “existing protections are likely sufficient to protect against unauthorized commercial resurrections”—like when actors or musicians are resurrected for posthumous performances. However, she thinks that for personal uses, digital resurrections may best be blocked not through estate planning but by passing a “right to deletion” that would focus on granting the living or next of kin the rights to delete the data that could be used to create the AI ghost rather than regulating the output.

A “right to deletion” could help people fight inappropriate uses of their loved ones’ data, whether AI is involved or not. After her article was published, a lawyer reached out to Haneman about a client’s deceased grandmother whose likeness was used to create a meme of her dancing in a church. The grandmother wasn’t a public figure, and the client had no idea “why or how somebody decided to resurrect her deceased grandmother,” Haneman told Ars.

Although Haneman sympathized with the client, “if it’s not being used for a commercial purpose, she really has no control over this use,” Haneman said. “And she’s deeply troubled by this.”

Haneman’s article offers a rare deep dive into the legal topic. It sensitively maps out the vague territory of digital rights of the dead and explains how those laws—or the lack thereof—interact with various laws dealing with death, from human remains to property rights.

In it, Haneman also points out that, on balance, the rights of the living typically outweigh the rights of the dead, and even specific instructions on how to handle human remains aren’t generally considered binding. Some requests, like organ donation that can benefit the living, are considered critical, Haneman noted. But there are mixed results on how courts enforce other interests of the dead—like a famous writer’s request to destroy all unpublished work or a pet lover’s insistence that their cat or dog be put down when they die.

She told Ars that right now, “a lot of people are like, ‘Why do I care if somebody resurrects me after I’m dead?’ You know, ‘They can do what they want.’ And they think that, until they find a family member who’s been resurrected by a creepy ex-boyfriend or their dead grandmother’s resurrected, and then it becomes a different story.”

Existing law may protect “the privacy interests of the loved ones of the deceased from outrageous or harmful digital resurrections of the deceased,” Haneman noted, but in the case of the dancing grandma, her meme may not be deemed harmful, no matter how much it troubles the grandchild to see her grandma’s memory warped.

Limited legal protections may not matter so much if, culturally, communities end up developing a distaste for digital replicas, particularly if it becomes widely viewed as disrespectful to the dead, Haneman suggested. Right now, however, society is more fixated on solving other problems with deepfakes rather than clarifying the digital rights of the dead. That could be because few people have been impacted so far, or it could also reflect a broader cultural tendency to ignore death, Haneman told Ars.

“We don’t want to think about our own death, so we really kind of brush aside whether or not we care about somebody else being digitally resurrected until it’s in our face,” Haneman said.

Over time, attitudes may change, especially if the so-called “digital afterlife industry” takes off. And there is some precedent that the law could be changed to reinforce any culture shift.

“The throughline revealed by the law of the dead is that a sacred trust exists between the living and the deceased, with an emphasis upon protecting common humanity, such that data afforded no legal status (or personal data of the deceased) may nonetheless be treated with dignity and receive some basic protections,” Haneman wrote.

An alternative path to prevent AI resurrection

Preventing yourself from becoming an AI ghost seemingly now falls in a legal gray zone that policymakers may need to address.

Haneman calls for a solution that doesn’t depend on estate planning, which she warned “is a structurally inequitable and anachronistic approach that maximizes social welfare only for those who do estate planning.” More than 60 percent of Americans die without a will, often including “those without wealth,” as well as women and racial minorities who “are less likely to die with a valid estate plan in effect,” Haneman reported.

“We can do better in a technology-based world,” Haneman wrote. “Any modern framework should recognize a lack of accessibility as an obstacle to fairness and protect the rights of the most vulnerable through approaches that do not depend upon hiring an attorney and executing an estate plan.”

Rather than twist the law to “recognize postmortem privacy rights,” Haneman advocates for a path for people resistant to digital replicas that focuses on a right to delete the data that would be used to create the AI ghost.

“Put simply, the deceased may exert control over digital legacy through the right to deletion of data but may not exert broader rights over non-commercial digital resurrection through estate planning,” Haneman recommended.

Sheehan told Ars that a right to deletion would likely involve estate planners, too.

“If this is not addressed in an estate planning document and not specifically addressed in the statute (or deemed under the authority of the executor via statute), then the only way to address this would be to go to court,” Sheehan said. “Even with a right of deletion, the deceased would need to delete said data before death or authorize his executor to do so post death, which would require an estate planning document, statutory authority, or court authority.”

Haneman agreed that for many people, estate planners would still be involved, recommending that “the right to deletion would ideally, from the perspective of estate administration, provide for a term of deletion within 12 months.” That “allows the living to manage grief and open administration of the estate before having to address data management issues,” Haneman wrote, and perhaps adequately balances “the interests of society against the rights of the deceased.”

To Haneman, it’s also the better solution for the people left behind because “creating a right beyond data deletion to curtail unauthorized non-commercial digital resurrection creates unnecessary complexity that overreaches, as well as placing the interests of the deceased over those of the living.”

Future generations may be raised with AI ghosts

If the dystopia some experts paint comes true, Big Tech companies may one day profit by targeting grieving individuals to seize the data of the dead, data that could be more easily abused because it is granted fewer rights than the data of the living.

Perhaps in that future, critics suggest, people will be tempted into free trials in the moments when they miss their loved ones most, then pushed to pay a subscription to keep accessing the bot, or else subjected to ad-based models in which their chats with AI ghosts may even feature ads in the voices of the deceased.

Today, even in a world where AI ghosts aren’t yet compelling ad clicks, some experts have warned that interacting with AI ghosts could cause mental health harms, especially if the digital afterlife industry isn’t carefully designed, New Scientist reported. Some people may end up stuck maintaining an AI ghost that was left behind as a gift, and ethicists suggested that the emotional weight of that could eventually take a negative toll. While saying goodbye is hard, letting go is considered a critical part of healing during the mourning process, and AI ghosts may make that harder.

But the bots can be a helpful tool to manage grief, some experts suggest, provided that their use is limited to allow for a typical mourning process or combined with therapy from a trained professional, Al Jazeera reported. Ahmad told Ars that working on his bot has not only kept his father close to him but also helped him think more deeply about relationships and memory.

Haneman noted that people have many ways of honoring the dead. Some erect statues, and others listen to saved voicemails or watch old home movies. For some, just “smelling an old sweater” is a comfort. And creating digital replicas, as creepy as some people might find them, is not that far off from these traditions, Haneman said.

“Feeding text messages and emails into existing AI platforms such as ChatGPT and asking the AI to respond in the voice of the deceased is simply a change in degree, not in kind,” Haneman said.

For Ahmad, the decision to create a digital replica of his dad was a learning experience, and his experience may show why any family or loved one weighing the option should think carefully before starting the process.

In particular, he warns families to be careful about introducing young kids to grief bots, as they may not be able to grasp that the bot is not a real person. When he initially saw his young kids growing confused about whether their grandfather was alive or not (the bot’s introduction was complicated by the early stages of the pandemic, a time when they met many relatives virtually), he decided to restrict access to the bot until they were older. For a time, the bot only came out for special events like birthdays.

He also realized that introducing the bot forced him to have conversations about life and death with his kids at younger ages than he remembered fully understanding those concepts in his own childhood.

Now, Ahmad’s kids are among the first to be raised among AI ghosts, and Ahmad keeps updating his father’s digital replica to improve the family’s experience. He is currently most excited about recent audio advancements that make it easier to add a voice element. He hopes that within the next year, he might be able to use AI to finally nail down his South Asian father’s accent, which up to now has always sounded “just off.” For others working in this space, the next frontier is realistic video or even augmented reality tools, Ahmad told Ars.

To this day, the bot retains sentimental value for Ahmad, but, as Haneman suggested, the bot was not the only way he memorialized his dad. He also created a mosaic, and while his father never saw it, either, Ahmad thinks his dad would have approved.

“He would have been very happy,” Ahmad said.

There’s no way to predict how future generations may view grief tech. But while Ahmad said he’s not sure he’d be interested in an augmented reality interaction with his dad’s digital replica, kids raised seeing AI ghosts as a natural part of their lives may not be as hesitant to embrace them or even build new features. Talking to Ars, Ahmad fondly recalled how his young daughter, seeing that he was feeling sad, once came up with her own AI idea to help her dad feel better.

“It would be really nice if you can just take this program and we build a robot that looks like your dad, and then add it to the robot, and then you can go and hug the robot,” she said, according to her father’s memory.

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

she-was-a-disney-star-with-platinum-records,-but-bridgit-mendler-gave-it-up-to-change-the-world

She was a Disney star with platinum records, but Bridgit Mendler gave it up to change the world


“The space industry has a ground bottleneck, and the problem is going to get worse.”

The Northwood Space team is all smiles after the first successful test of “Frankie.” Clockwise, from lower left: Shaurya Luthra, Marvin Shu, Josh Lehtonen, Thomas Row, Dan Meinzer, Griffin Cleverly, Bridgit Mendler. Credit: Shaurya Luthra

Bridgit Mendler was not in Hollywood anymore. Instead, she found herself in rural North Dakota, where the stars sparkled overhead rather than on the silver screen. And she was freezing.

When her team tumbled out of their rental cars after midnight, temperatures had already plummeted into the 40s. Howling winds carried their breath away before it could fog the air. So it was with no small sense of urgency that the group scrambled to assemble a jury-rigged antenna to talk to a spacecraft that would soon come whizzing over the horizon. A few hours later, the rosy light of dawn shone on the faces of a typically scrappy space startup: mostly male, mostly disheveled.

Then there was Mendler, the former Disney star and pop music sensation—and she was running the whole show.

Mendler followed an improbable path from the Disney Channel to North Dakota. She was among the brightest adolescent stars born in the early 1990s, along with Ariana Grande, Demi Lovato, and Selena Gomez, who gained fame as teenagers on the Disney Channel and Nickelodeon by enthralling Gen Z. During the first decade of the new millennium, before the rise of Musical.ly and then TikTok, television still dominated the attention of young children. And they were watching the Disney Channel in droves.

Like many of her fellow teenage stars, Mendler parlayed television fame into pop stardom, scoring a handful of platinum records. But in her mid-20s, Mendler left that world behind and threw herself into academia. She attended some of the country’s top universities and married an aerospace engineer. A couple of years ago, the two of them founded a company to address what they believed was a limiting factor in the space economy: transferring data from orbit.

Their company, Northwood Space, employed just six people when it deployed to North Dakota last October. But the team already had real hardware. On the windswept plain, they unpacked and assembled “Frankie,” their cobbled-together phased-array antenna, affectionately named after Mary Shelley’s masterpiece Frankenstein.

“We had the truck arrive at two o’clock in the morning,” Mendler said. “Six hours later, we were operational. We started running passes. We were able to transmit to a satellite on our first try.” The team had been up all night by then. “I guess that’s when my Celsius addiction kind of kicked in,” she said.

Guzzling energy drinks isn’t the healthiest activity, but it fits with the high-energy, frenetic rush of building a space startup. To survive without a billionaire’s backing, startups must stay lean and move quickly. And it’s not at all clear that Northwood will survive, as most space startups fail due to a lack of funding, long technology horizons, or regulatory hurdles. So it’s notable that, within a year of seriously beginning operations, Northwood was already in the field, testing hardware and finding modest success.

From a technological standpoint, a space mission must usually complete three functions. A spacecraft must launch into orbit. It must deploy its solar panels, begin operations, and collect data. Finally, it must send its data back. If satellite data does not return to Earth in a timely manner, it’s worthless. This process is far more difficult than one might think—and not that many people think about it. “Ground stations,” Mendler acknowledges, are some of the most “unsexy and boring problems” in the space industry.

The 32-year-old Mendler now finds herself exactly where she wants to be. The life she has chosen—leading a startup in gritty El Segundo, California, delving into regulatory minutiae, and freezing in rural North Dakota to tackle “boring” problems—lies a world away from a seemingly glamorous life in the entertainment industry. That’s just fine with her.

“When I was growing up, I always said I wanted to be everything,” she said. “So in a certain sense, maybe I wouldn’t be surprised about where I ended up. But I would certainly be happy.”

Good Luck Charlie

Mendler may have wanted to be everything, but in her early years, what she most wanted to be was an actor. In 2001, when Mendler was eight, her parents moved across the country from Washington, DC, to the Bay Area. Her father designed fuel-efficient automobile engines, and her mother was an architect doing green design. Her mom, working from home, enrolled Mendler in an acting camp to help fill the days.

Mendler caught the bug. Although her parents were supportive of these dreams, they told her she would have to work to make it happen.

“We still had the Yellow Pages at the time, and so my little kid self was just flipping through the Yellow Pages trying to figure out how to get an agent,” she said. “And it was a long journey. Something that people outside of acting maybe don’t realize is that you encounter a shit ton of rejection. And so my introduction to acting was a ton of rejection in the entertainment industry. But I was like, ‘I’m gonna freaking figure this out.’”

After three years, Mendler began to get voice-acting roles in small films and video games. In November 2006, she appeared on television for the first time in an episode of the soap opera General Hospital. Another three years would pass before she had a real breakthrough, appearing as a recurring character on Wizards of Waverly Place, a Disney Channel show starring Selena Gomez. She played a vampire girlfriend.

Mendler starred as “Teddy” in the Disney Channel show Good Luck Charlie. Here, she’s sharing a moment with her sister, “Charlie.”

Credit: Adam Taylor/Disney Channel via Getty Images

Mendler impressed enough in this role to be offered the lead in a new sitcom on Disney Channel, Good Luck Charlie, playing the older sister to a toddler named Charlie. In this role, Mendler made a video diary for Charlie, offering advice on how to be a successful teenager. The warm-hearted series ran for four years. Episodes regularly averaged more than 5 million viewers.

My two daughters were among them. They were a decade younger than Mendler, who was 18 when the first episodes aired in 2010. I would sometimes watch the show with my girls. Mendler’s character was endearing, and her advice to Charlie, I believe, helped my own younger daughters anticipate their teenage years. A decade and a half later, my kids still look up to her not just for being on television but for everything else she has accomplished.

As her star soared on the Disney Channel, Mendler moved into music. She recorded gold and platinum records, including her biggest hit, “Ready or Not,” in 2012.

Prominent childhood actors have always struggled with the transition to adulthood. Disney stars like Lindsay Lohan and Demi Lovato developed serious substance abuse problems, while others, such as Miley Cyrus and Selena Gomez, abruptly adopted new, much more mature images that contrasted sharply with their characters on children’s TV shows.

Mendler chose a different path.

Making an impact

As a pre-teen, Mendler would lie in bed at night listening to her mom working upstairs in the kitchen. They lived in a small house amid the redwoods north of Sausalito, California. When Mendler awoke some mornings, her mom would still be tapping away at her architectural designs. “That’s kind of how I viewed work,” Mendler said.

One of her favorite books as a kid was Miss Rumphius, about a woman who spread lupine seeds (also known as bluebonnets) along the coast of Maine to make the countryside more beautiful. The picture book offered an empowering message: Every person has a choice about how to make an impact on the world.

This environment shaped Mendler. She saw her mom work all night, saw experimental engines built by her dad scattered around the house, and had conversations around the dinner table about the future and how she could find her place in it. As she aged into adulthood, performing before thousands of people on stage and making TV shows and movies, Mendler felt like she was missing something. In her words, life in Los Angeles felt “anemic.” She had always liked to create things herself, and she wasn’t doing that.

“The niche that I had wedged myself into was not allowing me to have my own voice and perspective,” she said. “I wound up going down a path where I was more the vessel for other people’s creations, and I wondered what it would be like to be a little bit more in charge of my voice than I was in Hollywood.”

So Mendler channeled her inner nerd. She began to bring textbooks on game theory to the set of movies and TV shows. She took a few college courses. When a topic intrigued her, she would email an author or professor or reach out to them on Twitter.

Her interest was turbocharged when she neared her 25th birthday. Throughout the mid-2010s, Mendler continued to act and release music. One day, while filming a movie called Father of the Year in Massachusetts for Netflix, she had a day off. Her uncle took Mendler to visit the famed Media Lab at the Massachusetts Institute of Technology. This research lab brings together grad students, researchers, and entrepreneurs from various disciplines to develop technology—things like socially engaging robots and biologically inspired engineering. It was a vibrant meeting space for brilliant minds who wanted to build a better future.

“I knew right then I needed to go there,” she said. “I needed to find a way.”

But there was a problem. The Media Lab only offered graduate student programs, and Mendler didn’t have an undergraduate degree. She’d only taken a handful of college courses. Officials at MIT told her that if she could build her own things, they would consider admitting her to the program. So she threw herself into learning how to code, working on starter projects in HTML, JavaScript, CSS, and Python. It worked.

In 2018, Mendler posted on Twitter that she was starting a graduate program at MIT to focus on better understanding social media. “As an entertainer, for years I struggled with social media because I felt like there was a more loving and human way to connect with fans. That is what I’m going to study,” she wrote. “Overall, I just hope that this time can be an adventure, and I have a thousand ideas I want to share with you so please stay tuned!”

That fall she did, in fact, start working on social media. Mendler was fascinated with it—Twitter in particular—and its role as the new public square. But at the Media Lab, there are all manner of interdisciplinary groups. The one right next to Mendler, for example, was focused on space.

Pop startup

In the months before she left Los Angeles for MIT, Mendler’s life changed in an important way. Through friends, she met an aerospace engineer named Griffin Cleverly. Southern California is swarming with aerospace engineers, but it’s perhaps indicative of the different circles between Hollywood and Hawthorne that Cleverly was the first rocket scientist Mendler had ever met.

“The conversations we had were totally different,” she said. “He has so many thoughts about so many things, both in aerospace and other topics.”

They hit it off. Not long after Mendler left for the MIT Lab, Cleverly followed her to Massachusetts, first applying himself to different projects at the lab before taking a job working on satellites for Lockheed Martin. The two married a year later, in 2019.

By the next spring, Mendler was finishing her master’s thesis at MIT on using technology to help resolve conflicts. Then the world shut down due to the COVID-19 pandemic. She and Cleverly suddenly had a lot of time on their hands.

They retreated to a lake house owned by Mendler’s family in rural New Hampshire. The house had been in the family since just after World War II, and the couple decided to experiment with antennas to see what they could do. They would periodically mask up and drive to a Home Depot in nearby Concord for supplies. They built different kinds of antennas, including parabolic and helical designs, to see how far away they could communicate.

Mendler gave up a successful career in music and acting to earn a master’s degree at MIT.

As they experimented, Mendler and Cleverly began to think about the changing nature of the space industry. At the time, SpaceX’s Starlink constellation was just coming online to deliver broadband around the world. The company’s Falcon 9 launches were ramping up. Satellites were becoming smaller and cheaper, constellations were proliferating, and companies like K2 were seeking to mass-produce spacecraft.

Mendler and Cleverly believed that the volume of data coming down from space was about to explode—and that existing commercial networks weren’t capable of handling it all.

“The space industry has been on even-keeled growth for a long time,” Cleverly said. “But what happens when you hit that hockey stick across the industry? Launch seemed like it was getting taken care of. Mass manufacturing of satellites appeared to be coming. We saw these trends and were trying to understand how the industry was going to react to them. When we looked at the ground side, it wasn’t clear that anyone really was thinking about the ramifications there.”

As the pandemic waned, the couple resumed more normal lives. Mendler continued her studies at MIT, but she was now thoroughly hooked on space. Her husband excelled at working with technology to communicate with satellites, so Mendler focused on the non-engineering side of the space industry. “With space, so many folks focus on how complicated it is from an engineering perspective, and for good reason, because there are massive engineering problems to solve,” she said. “But these are also really operationally complex problems.”

For example, ground systems that communicate with satellites as they travel around the world operate in different jurisdictions, necessitating contracts and transactions in many countries. Issues with liability, intellectual property, insurance, and regulations abound. So Mendler decided that the next logical step after MIT was to attend law school. Because she lacked an undergraduate degree, most schools wouldn’t admit her. But Harvard University has an exception for exceptional students.

“Harvard was one of the few schools that admitted me,” she said. “I ended up going to law school because I was curious about understanding the operational aspects of working in space.”

These were insanely busy years. In 2022, when she began law school, Mendler was still conducting research at MIT. She soon got an internship at the Federal Communications Commission that gave her a broader view of the space industry from a regulatory standpoint. And in August 2022, she and Cleverly, alongside a software expert from Capella Space named Shaurya Luthra, founded Northwood Space.

So Bridgit Mendler, while studying at MIT and Harvard simultaneously, added a new title to her CV: chief executive officer.

Wizards of Waverly Space

Initially, the founders of Northwood Space did little more than study the market and write a few research papers, assessing the demand for sending data down to Earth, whether there would be customers for a new commercial network to download this data, and whether affordable technology could be built for this purpose. After about a year, they were convinced.

“Here’s the vision we ended up with,” Mendler said. “The space industry has a ground bottleneck, and the problem is going to get worse. So let’s build a network that can address that bottleneck and accelerate space capabilities. The best way to go about that was building capacity.”

If you’re like most people, you don’t spend much time pondering how data gets to and from space. To the extent one thinks about Starlink, it’s probably the satellite trains and personal dishes that spring to mind. But SpaceX has also had to build large ground stations around the world, known as gateways, to pipe data into space from the terrestrial Internet. Most companies lack the resources to build global gateways, so they use a shared commercial network. This has drawbacks, though.

Getting data down in a timely manner is not a trivial problem. From the earliest days of NASA through commercial operations today, operators on Earth generally do not maintain continual contact with satellites in space. For spacecraft in a polar orbit, contact might be made several times a day, with data lagging by perhaps 30 minutes, or as much as 90 minutes in some cases.

This is not great. Let’s say you want to use satellite imagery to fight wildfires. Data on the spread of a wildfire can help operators on the ground deploy resources to fight it. But for this information to be useful in real time, it must be downlinked within minutes of its collection. The existing infrastructure incurs delays that make most currently collected data non-actionable for firefighters. So the first problem Northwood wants to solve is persistence, with a network of ground stations around the world that would allow operators to continually connect with their satellites.

After persistence, the next problem faced by satellite operators is constraints on bandwidth. Satellites collect reams of data in orbit and must either process it on board or throw a lot of it away.

Mendler said that within three years, Northwood aims to build a shared network capable of linking to 500 spacecraft at a time. This may not sound like a big deal, but it’s larger than every commercially available shared ground network and the US government’s Satellite Control Network combined. And these tracking centers took decades to build. Each of Northwood’s sites, spread across six continents, is intended to download far more data than can be brought down on commercial networks today, the equivalent of streaming tens of thousands of Blu-ray discs from space concurrently.

“Our job is to figure out how to most efficiently deliver those capabilities,” Mendler said. “We’re asking, how can we reliably deliver a new standard of connectivity to the industry, at a viable price point?”

With these aims in mind, Mendler and Cleverly got serious about their startup in the fall of 2023.

Frankie goes from Hollywood

Over the previous decade, SpaceX had revolutionized the rocket industry, and a second generation of private launch companies was maturing. Some, like Rocket Lab, were succeeding. Others, such as Virgin Orbit, had gone bankrupt. There were important lessons in these ashes for a space startup CEO.

Among the most critical for Mendler was keeping costs low. Virgin Orbit’s payroll had approached 700 people to support a rocket that could generate only limited revenue. That kind of payroll growth was a ticket to insolvency. She also recognized SpaceX’s relentless push to build things in-house and rapidly prototype hardware through iterative design as key to the company’s success.

By the end of 2023, Mendler was raising the company’s initial funding, a seed round worth $6.3 million. Northwood emerged from “stealth mode” in February 2024 and set about hiring a small team. Early that summer, it began pulling together components to build Frankie, a prototype for the team’s first product—modular phased-array antennas. Northwood put Frankie together in four months.

“Our goal was to build things quickly,” Mendler said. “That’s why the first thing we did after raising our seed round was to build something and put it in the field. We wanted to show people it was real.”

Unlike a parabolic dish antenna—think a DirecTV satellite dish or the large ground-based antennas that Ellie Arroway uses in Contact—phased-array antennas are electrically steerable. Instead of needing to point directly at their target to collect a signal, phased-array antennas produce a beam of radio waves that can “point” in different directions without moving the antenna. The technology is decades old, but its use in commercial applications has been limited because it’s more difficult to work with than parabolic dishes. In theory, however, phased array antennas should let Northwood build more capable ground stations, pulling down vastly more data within a smaller footprint. In business terms, the technology is “scalable.”
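The math behind that steering is well understood, even if building large, affordable arrays is not: give each element a calculated phase offset, and the combined signal adds up constructively only in the chosen direction. The short Python sketch below is a simplified, illustrative model of a uniform linear array; the element count, spacing, and frequency are generic assumptions, not details of Northwood’s hardware.

```python
import numpy as np

# Illustrative beam-steering math for a uniform linear array; not Northwood's design.
c = 3e8                     # speed of light, m/s
freq = 8.2e9                # assumed X-band downlink frequency, Hz
wavelength = c / freq
n_elements = 16             # assumed number of antenna elements in one row
spacing = wavelength / 2    # half-wavelength element spacing

def element_phases(steer_deg):
    """Phase shift (radians) applied to each element to steer the beam steer_deg off boresight."""
    k = 2 * np.pi / wavelength
    n = np.arange(n_elements)
    return -k * n * spacing * np.sin(np.radians(steer_deg))

def array_gain(look_deg, steer_deg):
    """Relative array response seen from direction look_deg when the beam is steered to steer_deg."""
    k = 2 * np.pi / wavelength
    n = np.arange(n_elements)
    geometric = k * n * spacing * np.sin(np.radians(look_deg))
    return abs(np.sum(np.exp(1j * (geometric + element_phases(steer_deg))))) / n_elements

# The beam "points" at 30 degrees without any mechanical motion.
print(round(array_gain(30, 30), 2))   # ~1.0 in the steered direction
print(round(array_gain(0, 30), 2))    # far weaker off-axis
```

Steering the beam toward a different satellite is then just a matter of recomputing those phase offsets in software, which is part of what makes the approach attractive for tracking many spacecraft from one site.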

But before a technology can scale, it must work.

In late September 2024, the company’s six engineers, a business development director, and Mendler packed Frankie into a truck and sent it rolling off to the Dakotas. They soon followed, flying commercial to Denver and then into Devils Lake Regional Airport. On the first day of October, the party checked into Spirit Lake Casino.

That night, they drove out to a rural site owned by Planet Labs, nearly an hour away, that has a small network station to communicate with its Earth-imaging satellites. This site consisted of two large antennas, a small operations shed for the networking equipment, and a temporary trailer. The truck hauling Frankie arrived at 2 am local time.

The company’s antenna, “Frankie,” arrives early on October 2 and the team begins to unload it.

Credit: Bridgit Mendler

Before sunrise, as the team completed setup, Mendler went into the nearest town, Maddock. The village has one main establishment, Harriman’s Restaurant & Bobcat Bar. The protean facility also serves as an opera house, community library, and meeting place. When Mendler went to the restaurant’s counter and ordered eight breakfast burritos, she attracted notice. But the locals were polite.

When Mendler returned, the team gathered in the small Planet Labs trailer on the windswept site. There were no lights, so they carried their portable floodlights inside. The space lacked room for chairs, so they huddled around one another in what they affectionately began referring to as the “food closet.” At least it kept them out of the wind.

The team had some success on the first morning, as Frankie communicated with a SkySat flying overhead, a Planet satellite a little larger than a mini refrigerator. First contact came at 7:34 am, and they had some additional successes throughout the day. But communication remained one-way, from the ground to space. For satellite telemetry, tracking, and command—TT&C in industry parlance—they needed to close the loop. But Frankie could not receive a clear X-band signal from space; it was coming in too weak.

“While we could command the satellite, we could not receive the acknowledgments of the command,” Mendler said.

The best satellite passes were clumped during the overnight hours. So over the next few days, the team napped in their rental cars, waiting to see if Frankie could hear satellites calling home. But as the days ticked by, they had no luck. Time was running out.

Solving their RF problems

As the Northwood engineers troubleshot the problem with low signal power, they realized that with some minor changes, they could probably boost the signal. But this would require reconfiguring and calibrating Frankie.

The team scrambled to make these changes on the afternoon of October 4, before four passes in a row that night starting at 3 am. This was one of their last, best chances to make things work. After implementing the fix, the bedraggled Northwood team ate a muted dinner at their casino hotel before heading back out to the ground station. There, they waited in nervous silence for the first pass of the night.

When the initial satellite passed overhead, the space-to-ground power finally reached the requisite level. But Northwood could not decode the message due to a coaxial cable being plugged into the wrong port.

Then they missed the second pass because an inline amplifier was mistakenly switched off.

The third satellite pass failed due to a misrouted switch in Planet’s radio-frequency equipment.

So they were down to the final pass. But this time, there were no technical snafus. The peak of the signal came in clean and, to the team’s delight, with an even higher signal-to-noise ratio than anticipated. Frankie had done it. High fives and hugs all around. The small team crashed that morning before successfully repeating the process the next day.

After that, it was time to celebrate, Dakota style. The team decamped to Harriman’s, where Mendler’s new friend Jim Walter, the proprietor, served them shots. After a while, he disappeared into the basement and returned with Bobcat Bar T-shirts he wanted them to have as mementos. Later that night, the Northwood team played blackjack at the casino and lost their money at the slot machines.

Yet in the bigger picture, they had gambled and won. Mendler wanted to build fast, to show the world that her company had technical chops. They had thrown Frankie together and rushed headlong into the rough-and-tumble countryside, plugged in the antenna, and waited to see what happened. A lot of bad things could have happened, but instead, the team hit the jackpot.

“We were able to go from the design to actually build and deploy in that four-month time period,” Mendler said. “That resulted in a lot of different customers knocking down our door and helping to shape requirements for this next version of the system that we’re going to be able to start demoing soon. So in half a year, we radically revised our product, and we will begin actually putting them out in the field and operating this year. Time is very much at the forefront of our mind.”

Can ground stations fly high?

The fundamental premise behind Northwood is that a bottleneck constrains the ability to bring down data from space and that a lean, new-space approach can disrupt the existing industry. But is this the case?

“The demand for ground-based connectivity is rising,” said Caleb Henry, director of research at Quilty Space. “And your satellites are only as effective as your gateways.”

This trend is being driven not only by the rise of satellites in general but also by higher-resolution imaging satellites like Planet’s Pelican satellites or BlackSky’s Gen-3 satellites. There has also been a corresponding increase in the volume of data from synthetic aperture radar satellites, Henry said. Recent regulatory filings, such as this one in the United Kingdom, underscore that data bottlenecks persist. However, Henry said it’s not clear whether this growth in data will be linear or exponential.

The idea of switching from large, single-dish antennas to phased arrays is not new, but it has yet to catch on. That is partly because there are questions about how expensive it would be to build large, capable phased-array antennas to talk to satellites hundreds of miles away—and how energy-intensive this would be.

Commercial satellite operators currently have a limited number of options for communicating with the ground. A Norwegian company, Kongsberg Satellite Services (or KSAT), has the largest network of ground stations. Other players include the Swedish Space Corporation, Leaf Space in Italy, Atlas Space Operations in Michigan, and more. Some of these companies have experimented with phased-array antennas, Henry said, but no one has made the technology the backbone of its network.

By far the largest data operator in low-Earth orbit, SpaceX, chose dish-based gateways for its ground stations around the world that talk to Starlink satellites. (The individual user terminals are phased-array antennas, however.)

Like reuse in the launch industry, a switch to phased-array antennas is potentially disruptive. Large dishes can only communicate with a single satellite at a time, whereas phased-array antennas can make multiple connections. This allows an operator to pack much more power into a smaller footprint on the ground. But as with SpaceX and reuse, the existing ground station operators seem to be waiting to see if anyone else can pull it off.

“The industry just has not trusted that the level of disruption phased-array antennas can bring is worth the cost,” Henry said. “Reusability wasn’t trusted, either, because no one could do it affordably and effectively.”

So can Northwood Space do it? One of the very first investors in SpaceX, the Founders Fund, believes so. It participated in the seed round for Northwood and again in a $30 million Series A round, which closed in April.

When Mendler first approached the fund about 18 months ago, it was an easy decision, said Delian Asparouhov, a partner at the fund.

“We probably only discussed it for about 15 minutes,” Asparouhov said. “Bridgit was perfect for this. I think we met on a Tuesday and had a term sheet signed on a Thursday night. It happened that fast.”

The Founders Fund had been studying the idea for a while. Rockets, satellites, and reentry vehicles get all of the attention, but Asparouhov said there is a huge need for ground systems and that phased-array technology has the ability to unlock a future of abundant data from space. His own company, Varda Space, is only able to communicate with its spacecraft for about 35 minutes every two hours. Varda vehicles conduct autonomous manufacturing in space, and the ability to have continuous data from its vehicles about their health and the work on board would be incredibly helpful.

“Infrastructure is not sexy,” Asparouhov said. “We needed someone who could turn that into a compelling story.”

Mendler, with her novel background, was the person. But she’s not just an eloquent spokesperson for the industry, he said. Building a company is hard, from finding facilities to navigating legal work to staffing up. Mendler appears to be acing these tasks. “Run through the LinkedIn of the team she’s recruited,” he said. “You’ll see that she’s knocked it out of the park.”

Ready or not

At Northwood, Mendler has entered a vastly different world from the entertainment industry or academia. She consults with fast-talking venture capitalists, foreign regulators, lawyers, rocket scientists, and occasionally the odd space journalist. It’s a challenging environment usually occupied by hotshot engineers—often arrogant, hard-charging men.

Mendler stands out in this setting. But her life has always been about thriving in tough environments.

Whatever happens, she has already achieved success in one important way. As an actor and singer, Mendler often felt as though she was dancing to someone else’s tune. No longer. At Northwood, she holds the microphone, but she is also a director and producer. If she fails—and let’s be honest, most new space companies do fail—it will be on her own terms.

Several weeks ago, Mendler was sitting at home, watching the movie Meet the Robinsons with her 6-year-old son. One of the main themes of the animated Disney film is that one should “keep moving forward” in life and that it’s possible to build a future that is optimistic for humanity—say, Star Trek rather than The Terminator or The Matrix.

“It shows you what the future could look like,” Mendler said of the movie. “And it gave me a little sad feeling, because it is so optimistic and beautiful. I think people can get discouraged by a dystopian outlook about what the future can look like. We need to remember we can build something positive.”

She will try to do just that.

Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.

a-history-of-the-internet,-part-2:-the-high-tech-gold-rush-begins

A history of the Internet, part 2: The high-tech gold rush begins


The Web Era arrives, the browser wars flare, and a bubble bursts.

Welcome to the second article in our three-part series on the history of the Internet. If you haven’t already, read part one here.

As a refresher, here’s the story so far:

The ARPANET was a project started by the Defense Department’s Advanced Research Projects Agency in 1969 to network different mainframe computers together across the country. Later, it evolved into the Internet, connecting multiple global networks together using a common TCP/IP protocol.

By the late 1980s, investments from the National Science Foundation (NSF) had established an “Internet backbone” supporting hundreds of thousands of users worldwide. These users were mostly professors, researchers, and graduate students.

In the meantime, commercial online services like CompuServe were growing rapidly. These systems connected personal computer users, using dial-up modems, to a mainframe running proprietary software. Once online, people could read news articles and message other users. In 1989, CompuServe added the ability to send email to anyone on the Internet.

In 1965, Ted Nelson submitted a paper to the Association for Computing Machinery. He wrote: “Let me introduce the word ‘hypertext’ to mean a body of written or pictorial material interconnected in such a complex way that it could not conveniently be presented or represented on paper.” The paper was part of a grand vision he called Xanadu, after the poem by Samuel Coleridge.

A decade later, in his book “Dream Machines/Computer Lib,” he described Xanadu thusly: “To give you a screen in your home from which you can see into the world’s hypertext libraries.” He admitted that the world didn’t have any hypertext libraries yet, but that wasn’t the point. One day, maybe soon, it would. And he was going to dedicate his life to making it happen.

As the Internet grew, it became more and more difficult to find things on it. There were lots of cool documents like the Hitchhiker’s Guide To The Internet, but to read them, you first had to know where they were.

The community of helpful programmers on the Internet leapt to the challenge. Alan Emtage at McGill University in Montreal wrote a tool called Archie. It searched a list of public file transfer protocol (FTP) servers. You still had to know the file name you were looking for, but Archie would let you download it no matter what server it was on.

A more user-friendly tool was Gopher, written by a team headed by Mark McCahill at the University of Minnesota. It used a text-based menu system so that users didn’t have to remember file names or locations. Gopher servers could display a customized collection of links inside nested menus, and they integrated with other services like Archie and Veronica to help users search for more resources.

Gopher is a text-based Internet search and retrieval system. It’s still running in 2025! Jeremy Reimer
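The protocol underneath those menus was almost trivially simple, which helps explain how quickly Gopher spread. As a rough illustration, the Python sketch below speaks bare Gopher: it opens a TCP connection to port 70, sends a selector string (an empty one asks for the root menu), and prints the tab-separated menu lines that come back. The host shown is one well-known public Gopher server and is only an example; any server listening on port 70 would do.

```python
import socket

def gopher_menu(host, selector="", port=70):
    """Fetch a Gopher menu: send a selector line, then read until the server closes the connection."""
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(selector.encode("ascii") + b"\r\n")
        data = b""
        while chunk := sock.recv(4096):
            data += chunk
    for line in data.decode("ascii", errors="replace").splitlines():
        if line == ".":                      # a lone dot ends the listing
            break
        parts = line.split("\t")
        if len(parts) >= 4 and parts[0]:
            item_type, title = parts[0][0], parts[0][1:]
            print(f"[{item_type}] {title} -> {parts[2]}:{parts[3]} {parts[1]}")

# Example: list the root menu of a public Gopher server.
gopher_menu("gopher.floodgap.com")
```

The single character at the start of each line told the client what it was pointing at: “1” for another menu, “0” for a plain text file, and “7” for a search item like Veronica.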

A Gopher server could provide many of the things we take for granted today: search engines, personal pages that could contain links, and downloadable files. But this wasn’t enough for a British computer scientist who was working at CERN, an intergovernmental institute that operated the world’s largest particle physics lab.

The World Wide Web

Hypertext had come a long way since Ted Nelson had coined the word in 1965. Bill Atkinson, a member of the original Macintosh development team, released HyperCard in 1987. It used the Mac’s graphical interface to let anyone develop “stacks,” collections of text, graphics, and sounds that could be connected together with clickable links. There was no networking, but stacks could be shared with other users by sending the files on a floppy disk.

The home screen of HyperCard 1.0 for Macintosh. Jeremy Reimer

Hypertext was so big that conferences were held just to discuss it in 1987 and 1988. Even Ted Nelson had finally found a sponsor for his personal dream: Autodesk founder John Walker had agreed to spin up a subsidiary to create a commercial version of Xanadu.

It was in this environment that CERN fellow Tim Berners-Lee drew up his own proposal in March 1989 for a new hypertext environment. His goal was to make it easier for researchers at CERN to collaborate and share information about new projects.

The proposal (which he called “Mesh”) had several objectives. It would provide a system for connecting information about people, projects, documents, and hardware being developed at CERN. It would be decentralized and distributed over many computers. Not all the computers at CERN were the same—there were Digital Equipment minis running VMS, some Macintoshes, and an increasing number of Unix workstations. Each of them should be able to view the information in the same way.

As Berners-Lee described it, “There are few products which take Ted Nelson’s idea of a wide ‘docuverse’ literally by allowing links between nodes in different databases. In order to do this, some standardization would be necessary.”

The original proposal document for the web, written in Microsoft Word for Macintosh 4.0, downloaded from Tim Berners-Lee’s website. Credit: Jeremy Reimer

The document ended by describing the project as “practical” and estimating that it might take two people six to 12 months to complete. Berners-Lee’s manager called it “vague, but exciting.” Robert Cailliau, who had independently proposed a hypertext system for CERN, joined Berners-Lee to start designing the project.

The computer Berners-Lee used was a NeXT cube, from the company Steve Jobs started after he was kicked out of Apple. NeXT workstations were expensive, but they came with a software development environment that was years ahead of its time. If you could afford one, it was like a coding accelerator. John Carmack would later write DOOM on a NeXT.

The NeXT workstation that Tim Berners-Lee used to create the World Wide Web. Please do not power down the World Wide Web. Credit: Coolcaesar (CC BY-SA 3.0)

Berners-Lee called his application “WorldWideWeb.” The software consisted of a server, which delivered pages of text over a new protocol called “Hypertext Transfer Protocol,” or HTTP, and a browser that rendered the text. The browser translated markup code like “h1” to indicate a larger header font or “a” to indicate a link. There was also a graphical webpage editor, but it didn’t work very well and was abandoned.
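The first version of that protocol, retroactively known as HTTP/0.9, was about as simple as a network protocol can be: the client opened a TCP connection, sent a single GET line naming a document, and received the raw marked-up text back, with no headers in either direction. The Python sketch below recreates that exchange for illustration; the host and path are placeholders, and many modern servers no longer answer such a bare request.

```python
import socket

def http09_get(host, path="/", port=80):
    """Send an HTTP/0.9-style request: a single GET line, no headers; the reply is the raw document."""
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(f"GET {path}\r\n".encode("ascii"))
        chunks = []
        while chunk := sock.recv(4096):
            chunks.append(chunk)
    return b"".join(chunks).decode("latin-1")

# Placeholder host; early pages were plain text sprinkled with tags like <h1> and <a href="...">.
print(http09_get("example.org")[:300])
```

Everything that came later, from headers and status codes to HTTPS, was layered on top of that basic request-and-response idea.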

The very first website was published, running on the development NeXT cube, on December 20, 1990. Anyone who had a NeXT machine and access to the Internet could view the site in all its glory.

The original WorldWideWeb browser running on NeXTstep 3, browsing the world’s first webpage. Jeremy Reimer

Because NeXT only sold 50,000 computers in total, that intersection did not represent a lot of people. Eight months later, Berners-Lee posted a reply to a question about interesting projects on the alt.hypertext Usenet newsgroup. He described the World Wide Web project and included links to all the software and documentation.

That one post changed the world forever.

Mosaic

On December 9, 1991, President George H.W. Bush signed into law the High Performance Computing Act, also known as the Gore Bill. The bill paid for an upgrade of the NSFNET backbone, as well as a separate funding initiative for the National Center for Supercomputing Applications (NCSA).

NCSA, based out of the University of Illinois, became a dream location for computing research. “NCSA was heaven,” recalled Alex Totic, who was a student there. “They had all the toys, from Thinking Machines to Crays to Macs to beautiful networks. It was awesome.” As is often the case in academia, the professors came up with research ideas but assigned most of the actual work to their grad students.

One of those students was Marc Andreessen, who joined NCSA as a part-time programmer for $6.85 an hour. Andreessen was fascinated by the World Wide Web, especially browsers. A new browser for Unix computers, ViolaWWW, was making the rounds at NCSA. No longer confined to the NeXT workstation, the web had caught the attention of the Unix community. But that community was still too small for Andreessen.

“To use the Net, you had to understand Unix,” he said in an interview with Forbes. “And the current users had no interest in making it easier. In fact, there was a definite element of not wanting to make it easier, of actually wanting to keep the riffraff out.”

Andreessen enlisted the help of his colleague, programmer Eric Bina, and started developing a new web browser in December 1992. In a little over a month, they released version 0.5 of “NCSA X Mosaic”—so called because it was designed to work with Unix’s X Window System. Ports for the Macintosh and Windows followed shortly thereafter.

Being available on the most popular graphical computers changed the trajectory of the web. In just 18 months, millions of copies of Mosaic were downloaded, and the rate was accelerating. The riffraff was here to stay.

Netscape

The instant popularity of Mosaic caused the management at NCSA to take a deeper interest in the project. Jon Mittelhauser, who co-wrote the Windows version, recalled that the small team “suddenly found ourselves in meetings with forty people planning our next features, as opposed to the five of us making plans at 2 am over pizzas and Cokes.”

Andreessen was told to step aside and let more experienced managers take over. Instead, he left NCSA and moved to California, looking for his next opportunity. “I thought I had missed the whole thing,” Andreessen said. “The overwhelming mood in the Valley when I arrived was that the PC was done, and by the way, the Valley was probably done because there was nothing else to do.”

But his reputation had preceded him. Jim Clark, the founder of Silicon Graphics, was also looking to start something new. A friend had shown him a demo of Mosaic, and Clark reached out to meet with Andreessen.

At a meeting, Andreessen pitched the idea of building a “Mosaic killer.” He showed Clark a graph that showed web users doubling every five months. Excited by the possibilities, the two men founded Mosaic Communications Corporation on April 4, 1994. Andreessen quickly recruited programmers from his former team, and they got to work. They codenamed their new browser “Mozilla” since it was going to be a monster that would devour Mosaic. Beta versions were titled “Mosaic Netscape,” but the University of Illinois threatened to sue the new company. To avoid litigation, the name of the company and browser were changed to Netscape, and the programmers audited their code to ensure none of it had been copied from NCSA.

Netscape became the model for all Internet startups to follow. Programmers were given unlimited free sodas and encouraged to basically never leave the office. “Netscape Time” accelerated software development schedules, and because updates could be delivered over the Internet, old principles of quality assurance went out the window. And the business model? It was simply to “get big fast,” and profits could be figured out later.

Work proceeded quickly, and the 1.0 version of Netscape Navigator and the Netsite web server were released on December 15, 1994, for Windows, Macintosh, and Unix systems running X Windows. The browser was priced at $39 for commercial users, but there was no charge for “academic and non-profit use, as well as for free evaluation purposes.”

Version 0.9 was called “Mosaic Netscape,” and the logo and company were still Mosaic. Jeremy Reimer

Netscape quickly became the standard. Within six months, it captured over 70 percent of the market share for web browsers. On August 9, 1995, only 16 months after the founding of the company, Netscape filed for an Initial Public Offering. A last-minute decision doubled the offering price to $28 per share, and on the first day of trading, the stock soared to $75 and closed at $58.25. The Web Era had officially arrived.

The web battles proprietary solutions

The excitement over a new way to transmit text and images to the public over phone lines wasn’t confined to the World Wide Web. Commercial online systems like CompuServe were also evolving to meet the graphical age. These companies released attractive new front-ends for their services that ran on DOS, Windows, and Macintosh computers. There were also new services that were graphics-only, like Prodigy, a joint venture between IBM and Sears, and an upstart that had sprung from the ashes of a Commodore 64 service called Quantum Link. This was America Online, or AOL.

Even Microsoft was getting into the act. Bill Gates believed that the “Information Superhighway” was the future of computing, and he wanted to make sure that all roads went through his company’s toll booth. The highly anticipated Windows 95 was scheduled to ship with a bundled dial-up online service called the Microsoft Network, or MSN.

At first, it wasn’t clear which of these online services would emerge as the winner. But people assumed that at least one of them would beat the complicated, nerdy Internet. CompuServe was the oldest, but AOL was nimbler and found success by sending out millions of free “starter” disks (and later, CDs) to potential customers. Microsoft was sure that bundling MSN with the upcoming Windows 95 would ensure victory.

Most of these services decided to hedge their bets by adding a sort of “side access” to the World Wide Web. After all, if they didn’t, their competitors would. At the same time, smaller companies (many of them former bulletin board services) started becoming Internet service providers. These smaller “ISPs” could charge less money than the big services because they didn’t have to create any content themselves. Thousands of new websites were appearing on the Internet every day, much faster than new sections could be added to AOL or CompuServe.

The tipping point happened very quickly. Before Windows 95 had even shipped, Bill Gates wrote his famous “Internet Tidal Wave” memo, where he assigned the Internet the “highest level of importance.” MSN was quickly changed to become more of a standard ISP and moved all of its content to the web. Microsoft rushed to release its own web browser, Internet Explorer, and bundled it with the Windows 95 Plus Pack.

The hype and momentum were entirely with the web now. It was the most exciting, most transformative technology of its time. The decade-long battle to control the Internet by forcing a shift to a new OSI standards model was forgotten. The web was all anyone cared about, and the web ran on TCP/IP.

The browser wars

Netscape had never expected to make a lot of money from its browser, as it was assumed that most people would continue to download new “evaluation” versions for free. Executives were pleasantly surprised when businesses started sending Netscape huge checks. The company went from $17 million in revenue in 1995 to $346 million the following year, and the press started calling Marc Andreessen “the new Bill Gates.”

The old Bill Gates wasn’t having any of that. Following his 1995 memo, Microsoft worked hard to improve Internet Explorer and made it available for free, including to business users. Netscape tried to fight back. It added groundbreaking new features like JavaScript, which was inspired by LISP but with a syntax similar to Java, the hot new programming language from Sun Microsystems. But it was hard to compete with free, and Netscape’s market share started to fall. By 1996, both browsers had reached version 3.0 and were roughly equal in terms of features. The battle continued, but when the Apache Group released its free web server, Netscape’s other source of revenue dried up as well. The writing was on the wall.

There was no better way to declare your allegiance to a web browser in 1996 than adding “Best Viewed In” above one of these icons. Credit: Jeremy Reimer

The dot-com boom

In 1989, the NSF lifted the restrictions on providing commercial access to the Internet, and by 1991, it had removed all barriers to commercial trade on the network. With the sudden ascent of the web, thanks to Mosaic, Netscape, and Internet Explorer, new companies jumped into this high-tech gold rush. But at first, it wasn’t clear what the best business strategy was. Users expected everything on the web to be free, so how could you make money?

Many early web companies started as hobby projects. In 1994, Jerry Yang and David Filo were electrical engineering PhD students at Stanford University. After Mosaic started popping off, they began collecting and trading links to new websites. Thus, “Jerry’s Guide to the World Wide Web” was born, running on Yang’s Sun workstation. Renamed Yahoo! (Yet Another Hierarchical, Officious Oracle), the site exploded in popularity. Netscape put multiple links to Yahoo on its main navigation bar, which further accelerated growth. “We weren’t really sure if you could make a business out of it, though,” Yang told Fortune. Nevertheless, venture capital companies came calling. Sequoia, which had made millions investing in Apple, put in $1 million for 25 percent of Yahoo.

Yahoo.com as it would have appeared in 1995. Credit: Jeremy Reimer

Another hobby site, AuctionWeb, was started in 1995 by Pierre Omidyar. Running on his own home server using the regular $30 per month service from his ISP, the site let people buy and sell items of almost any kind. When traffic started growing, his ISP told him it was increasing his Internet fees to $250 per month, as befitting a commercial enterprise. Omidyar decided he would try to make it a real business, even though he didn’t have a merchant account for credit cards or even a way to enforce the new 5 percent or 2.5 percent royalty charges. That didn’t matter, as the checks started rolling in. He found a business partner, changed the name to eBay, and the rest was history.

AuctionWeb (later eBay) as it would have appeared in 1995. Credit: Jeremy Reimer

In 1993, Jeff Bezos, a senior vice president at a hedge fund company, was tasked with investigating business opportunities on the Internet. He decided to create a proof of concept for what he described as an “everything store.” He chose books as an ideal commodity to sell online, since a book in one store was identical to one in another, and a website could offer access to obscure titles that might not get stocked in physical bookstores.

He left the hedge fund company, gathered investors and software development talent, and moved to Seattle. There, he started Amazon. At first, the site wasn’t much more than an online version of an existing bookseller catalog called Books In Print. But over time, Bezos added inventory data from the two major book distributors, Ingram and Baker & Taylor. The promise of access to every book in the world was exciting for people, and the company grew quickly.

Amazon.com as it would have appeared in 1995. Credit: Jeremy Reimer

The explosive growth of these startups fueled a self-perpetuating cycle. As publications like Wired experimented with online versions of their magazines, they invented and sold banner ads to fund their websites. The best customers for these ads were other web startups. These companies wanted more traffic, and they knew ads on sites like Yahoo were the best way to get it. Yahoo salespeople could then turn around and point to their exponential ad sales curves, which caused Yahoo stock to rise. This encouraged people to fund more web startups, which would all need to advertise on Yahoo. These new startups also needed to buy servers from companies like Sun Microsystems, causing those stocks to rise as well.

The crash

In the latter half of the 1990s, it looked like everything was going great. The economy was booming, thanks in part to the rise of the World Wide Web and the huge boost it gave to computer hardware and software companies. The NASDAQ index of tech-focused stocks painted a clear picture of the boom.

The NASDAQ composite index in the 1990s. Credit: Jeremy Reimer

Federal Reserve chairman Alan Greenspan called this phenomenon “irrational exuberance” but didn’t seem to be in a hurry to stop it. The fact that most new web startups didn’t have a realistic business model didn’t seem to bother investors. Sure, Webvan might have been paying more to deliver groceries than it earned from customers, but look at that growth curve!

The exuberance couldn’t last forever. The NASDAQ peaked at 8,843.87 in March 2000 and started to go down. In one month, it lost 34 percent of its value, and by August 2001, it was down to 3,253.38. Web companies laid off employees or went out of business completely. The party was over.

Andreessen said that the tech crash scarred him. “The overwhelming message to our generation in the early nineties was ‘You’re dirty, you’re all about grunge—you guys are fucking losers!’ Then the tech boom hit, and it was ‘We are going to do amazing things!’ And then the roof caved in, and the wisdom was that the Internet was a mirage. I 100 percent believed that because the rejection was so personal—both what everybody thought of me and what I thought of myself.”

But while some companies quietly celebrated the end of the whole Internet thing, others would rise from the ashes of the dot-com collapse. That’s the subject of our third and final article.

Photo of Jeremy Reimer

I’m a writer and web developer. I specialize in the obscure and beautiful, like the Amiga and newLISP.

A history of the Internet, part 2: The high-tech gold rush begins Read More »

ex-fcc-chair-ajit-pai-is-now-a-wireless-lobbyist—and-enemy-of-cable-companies

Ex-FCC Chair Ajit Pai is now a wireless lobbyist—and enemy of cable companies


Pai’s return as CTIA lobbyist fuels industry-wide battle over spectrum rights.

Ajit Pai, former chairman of the Federal Communications Commission, during a Senate Commerce Committee hearing on Wednesday, April 9, 2025. Credit: Getty Images | Bloomberg

Ajit Pai is back on the telecom policy scene as chief lobbyist for the mobile industry, and he has quickly managed to anger a coalition that includes both cable companies and consumer advocates.

Pai was the Federal Communications Commission chairman during President Trump’s first term and then spent several years at private equity firm Searchlight Capital. He changed jobs in April, becoming the president and CEO of wireless industry lobby group CTIA. Shortly after, he visited the White House to discuss wireless industry priorities and had a meeting with Brendan Carr, the current FCC chairman who was part of Pai’s Republican majority at the FCC from 2017 to 2021.

Pai’s new job isn’t surprising. He was once a lawyer for Verizon, and it’s not uncommon for FCC chairs and commissioners to be lobbyists before or after terms in government.

Pai’s move to CTIA means he is now battling a variety of industry players and advocacy groups over the allocation of spectrum. As always, wireless companies AT&T, Verizon, and T-Mobile want more spectrum and the exclusive rights to use it. The fight puts Pai at odds with the cable industry that cheered his many deregulatory actions when he led the FCC.

Pai wrote a May 4 op-ed in The Wall Street Journal arguing that China is surging ahead of the US in 5G deployment and that “the US doesn’t even have enough licensed spectrum available to keep up with expected consumer demand.” He said that Congress must restore the FCC’s lapsed authority to auction spectrum licenses, and auction off “at least 600 megahertz of midband spectrum for future 5G services.”

“During the first Trump administration, the US was determined to lead the world in wireless innovation—and by 2021 it did,” Pai wrote. “But that urgency and sense of purpose have diminished. With Mr. Trump’s leadership, we can rediscover both.”

Pai’s op-ed drew a quick rebuke from a group called Spectrum for the Future, which alleged that Pai mangled the facts.

“Mr. Pai’s arguments are wrong on the facts—and wrong on how to accelerate America’s global wireless leadership,” the vaguely named group said in a May 8 press release that accused Pai of “stunning hypocrisy.” Spectrum for the Future said Pai is wrong about the existence of a spectrum shortage, wrong about how much money a spectrum auction could raise, and wrong about the cost of reallocating spectrum from the military to mobile companies.

“Mr. Pai attributes the US losing its lead in 5G availability to the FCC’s lapsed spectrum auction authority. He’d be more accurate to blame his own members’ failure to build out their networks,” the group said.

Big Cable finds allies

Pai’s op-ed said that auctioning 600 MHz “could raise as much as $200 billion” to support other US government priorities. Spectrum for the Future called this an “absurd claim” that “presumes that this auction of 600 MHz could approach the combined total ($233 billion) that has been raised by every prior spectrum auction (totaling nearly 6 GHz of bandwidth) in US history combined.”

The group also said Pai “completely ignores the immense cost to taxpayers to relocate incumbent military and intelligence systems out of the bands CTIA covets for its own use.” Spectrum for the Future didn’t mention that one of the previous auctions, for the 3.7–3.98 GHz band, netted over $81 billion in winning bids.

So who is behind Spectrum for the Future? The group’s website lists 18 members, including the biggest players in the cable industry. Comcast, Charter, Cox, and lobby group NCTA-The Internet & Television Association are all members of Spectrum for the Future. (Disclosure: The Advance/Newhouse Partnership, which owns 12 percent of Charter, is part of Advance Publications, which owns Ars Technica parent Condé Nast.)

When contacted by Ars, a CTIA spokesperson criticized cable companies for “fighting competition” and said the cable firms are being “disingenuous.” Charter and Cox declined to answer our questions about their involvement in Spectrum for the Future. Comcast and the NCTA didn’t respond to requests for comment.

The NCTA and big cable companies are no strangers to lobbying the FCC and Congress and could fight for CBRS entirely on their own. But as it happens, some consumer advocates who regularly oppose the cable industry on other issues are on cable’s side in this battle.

With Spectrum for the Future, the cable industry has allied not just with consumer advocates but also with small wireless ISPs and operators of private networks that use spectrum the big mobile companies want for themselves. Another group that is part of the coalition represents schools and libraries that use spectrum to provide local services.

For cable, joining with consumer groups, small ISPs, and others in a broad coalition has an obvious advantage from a public relations standpoint. “This is a lot of different folks who are in it for their own reasons. Sometimes that’s a big advantage because it makes it more authentic,” said Harold Feld, senior VP of consumer advocacy group Public Knowledge, which is part of Spectrum for the Future.

In some cases, a big company will round up nonprofits to which it has donated to make a show of broad public support for one of the company’s regulatory priorities—like a needed merger approval. That’s not what happened here, according to Feld. While cable companies probably provided most of the funding for Spectrum for the Future, the other members are keenly interested in fighting the wireless lobby over spectrum access.

“There’s a difference between cable being a tentpole member and this being cable with a couple of friends on the side,” Feld told Ars. Cable companies “have the most to lose, they have the most initial resources. But all of these other guys who are in here, I’ve been on these calls, they’re pretty active. There are a lot of diverse interests in this, which sometimes makes it easier to lobby, sometimes makes it harder to lobby because you all want to talk about what’s important to you.”

Feld didn’t help write the group’s press release criticizing Pai but said the points made are “all things I agree with.”

The “everybody but Big Mobile” coalition

Public Knowledge and New America’s Open Technology Institute (OTI), another Spectrum for the Future member, are both longtime proponents of shared spectrum. OTI’s Wireless Future Project director, Michael Calabrese, told Ars that Spectrum for the Future is basically the “everybody but Big Mobile” wireless coalition and “a very broad but ad hoc coalition.”

While Public Knowledge and OTI advocate for shared spectrum in many frequency bands, Spectrum for the Future is primarily focused on one: the Citizens Broadband Radio Service (CBRS), which spans from 3550 MHz to 3700 MHz. The CBRS spectrum is used by the Department of Defense and shared with non-federal users.

CBRS users in the cable industry and beyond want to ensure that CBRS remains available to them and free of high-power mobile signals that would crowd out lower-power operations. They were disturbed by AT&T’s October 2024 proposal to move CBRS to the lower part of the 3 GHz band, which is also used by the Department of Defense, and auction existing CBRS frequencies to 5G wireless companies “for licensed, full-power use.”

The NCTA told the FCC in December that “AT&T’s proposal to reallocate the entire 3 GHz band is unwarranted, impracticable, and unworkable and is based on the false assertion that the CBRS band is underutilized.”

Big mobile companies want the CBRS spectrum because it is adjacent to frequencies that are already licensed to them. The Department of Defense seems to support AT&T’s idea, even though it would require moving some military operations and sharing the spectrum with non-federal users.

Pentagon plan similar to AT&T’s

In a May research note provided to Ars, New Street Research Policy Advisor Blair Levin reported some details of a Department of Defense proposal for several bands of spectrum, including CBRS. The White House asked the Department of Defense “to come up with a plan to enable allocation of mid-band exclusive-use spectrum,” and the Pentagon recently started circulating its initial proposal.

The Pentagon plan is apparently similar to AT&T’s, as it would reportedly move current CBRS licensees and users to the lower 3 GHz band to clear spectrum for auctions.

“It represents the first time we can think of where the government would change the license terms of one set of users to benefit a competitor of that first set of users… While the exclusive-use spectrum providers would see this as government exercising its eminent domain rights as it has traditionally done, CBRS users, particularly cable, would see this as the equivalent of a government exercis[ing] its eminent domain rights to condemn and tear down a Costco to give the land to a Walmart,” Levin wrote.

If the proposal is implemented, cable companies would likely sue the government “on the grounds that it violates their property rights” under the priority licenses they purchased to use CBRS, Levin wrote. Levin’s note said he doesn’t think this proposal is likely to be adopted, but it shows that “the game is afoot.”

CBRS is important to cable companies because they have increasingly focused on selling mobile service as another revenue source on top of their traditional TV and broadband businesses. Cable firms got into the mobile business by reselling network access from the likes of Verizon. They’ve been increasing the use of CBRS, reducing their reliance on the major mobile companies, although a recent Light Reading article indicates that cable’s progress with CBRS deployment has been slow.

Then-FCC Chairman Ajit Pai with FCC Commissioner Brendan Carr before the start of a Senate Commerce Committee hearing on Thursday, Aug. 16, 2018. Credit: Getty Images | Bill Clark

In its statement to Ars, CTIA said the cable industry “opposes full-power 5G access in the US at every opportunity” in CBRS and other spectrum bands. Cable companies are “fighting competition” from wireless operators “every chance they can,” CTIA said. “With accelerating losses in the marketplace, their advocacy is now more aggressive and disingenuous.”

The DoD plan that reportedly mirrors AT&T’s proposal seems to represent a significant change from the Biden-era Department of Defense’s stance. In September 2023, the department issued a report saying that sharing the 3.1 GHz band with non-federal users would be challenging and potentially cause interference, even if rules were in place to protect DoD operations.

“DoD is concerned about the high possibility that non-Federal users will not adhere to the established coordination conditions at all times; the impacts related to airborne systems, due to their range and speed; and required upgrades to multiple classes of ships,” the 2023 report said. We contacted the Department of Defense and did not receive a response.

Levin quoted Calabrese as saying the new plan “would pull the rug out from under more than 1,000 CBRS operators that have deployed more than 400,000 base stations. While they could, in theory, share DoD spectrum lower in the band, that spectrum will now be so congested it’s unclear how or when that could be implemented.”

Small ISP slams “AT&T and its cabal of telecom giants”

AT&T argues that CBRS spectrum is underutilized and should be repurposed for commercial mobile use because it “resides between two crucial, high-power, licensed 5G bands”—specifically 3.45–3.55 GHz and 3.7–3.98 GHz. It said its proposal would expand the CBRS band’s total size from 150 MHz to 200 MHz by relocating it to 3.1–3.3 GHz.

Keefe John, CEO of a Wisconsin-based wireless home Internet provider called Ethoplex, argued that “AT&T and its cabal of telecom giants” are “scheming to rip this resource from the hands of small operators and hand it over to their 5G empire. This is nothing less than a brazen theft of America’s digital future, and we must fight back with unrelenting resolve.”

John is vice chairperson of the Wireless Internet Service Providers Association (WISPA), which represents small ISPs and is a member of Spectrum for the Future. He wrote that CBRS is a “vital spectrum band that has become the lifeblood of rural connectivity” because small ISPs use it to deliver fixed wireless Internet service to underserved areas.

John called the AT&T proposal “a deliberate scheme to kneecap WISPs, whose equipment, painstakingly deployed, would be rendered obsolete in the lower band.” Instead of moving CBRS from one band to another, John said CBRS should stay on its current spectrum and expand into additional spectrum “to ensure small providers have a fighting chance.”

An AT&T spokesperson told Ars that “CBRS can coexist with incumbents in the lower 3 GHz band, and with such high demand for spectrum, it should. Thinking creatively about how to most efficiently use scarce spectrum to meet crucial needs is simply good public policy.”

AT&T said that an auction “would provide reimbursement for costs associated with” moving CBRS users to other spectrum and that “the Department of Defense has already stated that incumbents in the lower 3 GHz could share with low-power commercial uses.”

“Having a low-power use sandwiched between two high-power use cases is an inefficient use of spectrum that doesn’t make sense. Our proposal would fix that inefficiency,” AT&T said.

AT&T has previously said that under its proposal, CBRS priority license holders “would have the choice of relocating to the new CBRS band, accepting vouchers they can use toward bidding on new high-power licenses, or receiving a cash payment in exchange for the relinquishment of their priority rights.”

Democrat warns of threat to naval operations

Reallocating spectrum could require the Navy to move from the current CBRS band to the lower part of 3 GHz. US Senator Maria Cantwell (D-Wash.) sent a letter urging the Department of Defense to avoid major changes, saying the current sharing arrangement “allows the Navy to continue using high-power surveillance and targeting radars to protect vessels and our coasts, while also enabling commercial use of the band when and where the Navy does not need access.”

Moving CBRS users would “disrupt critical naval operations and homeland defense” and “undermine an innovative ecosystem of commercial wireless technology that will be extremely valuable for robotic manufacturing, precision agriculture, ubiquitous connectivity in large indoor spaces, and private wireless networks,” Cantwell wrote.

Cantwell said she is also concerned that “a substantial number of military radar systems that operate in the lower 3 GHz band” will be endangered by moving CBRS. She pointed out that the DoD’s September 2023 report said the 3.1 GHz range has “unique spectrum characteristics” that “provide long detection ranges, tracking accuracy, and discrimination capability required for DoD radar systems.” The spectrum “is low enough in the frequency range to maintain a high-power aperture capability in a transportable system” and “high enough in the frequency range that a sufficient angular accuracy can be maintained for a radar track function for a fire control capability,” the DoD report said.

Spectrum for the Future members

In addition to joining the cable industry in Spectrum for the Future, public interest groups are fighting for CBRS on their own. Public Knowledge and OTI teamed up with the American Library Association, the Benton Institute for Broadband & Society, the Schools, Health & Libraries Broadband (SHLB) Coalition, and others in a November 2024 FCC filing that praised the pro-consumer virtues of CBRS.

“CBRS has been the most successful innovation in wireless technology in the last decade,” the groups said. They accused the big three mobile carriers of “seeking to cripple CBRS as a band that promotes not only innovation, but also competition.”

These advocacy groups are interested in helping cable companies and small home Internet providers compete against the big three mobile carriers because that opens new options for consumers. But the groups also point to many other use cases for CBRS, writing:

CBRS has encouraged the deployment of “open networks” designed to host users needing greater flexibility and control than that offered by traditional CMRS [Commercial Mobile Radio Services] providers, at higher power and with greater interference protection than possible using unlicensed spectrum. Manufacturing campuses (such as John Deere and Dow Chemical), transit hubs (Miami International Airport, Port of Los Angeles), supply chain and logistic centers (US Marine Corps), sporting arenas (Philadelphia’s Wells Fargo Center), school districts and libraries (Fresno Unified School District, New York Public Library) are all examples of a growing trend toward local spectrum access fueling purpose-built private LTE/5G networks for a wide variety of use cases.

The SHLB told Ars that “CBRS spectrum plays a critical role in helping anchor institutions like schools and libraries connect their communities, especially in rural and underserved areas where traditional broadband options may be limited. A number of our members rely on access to shared and unlicensed spectrum to deliver remote learning and essential digital services, often at low or no cost to the user.”

Spectrum for the Future’s members also include companies that sell services to help customers deploy CBRS networks, as well as entities like Miami International Airport that deploy their own CBRS-based private cellular networks. The NCTA featured Miami International Airport’s private network in a recent press release, saying that CBRS helped the airport “deliver more reliable connectivity for visitors while also powering a robust Internet of Things network to keep the airport running smoothly.”

Spectrum for the Future doesn’t list any staff on its website. Media requests are routed to a third-party public relations firm. An employee of the public relations firm declined to answer our questions about how Spectrum for the Future is structured and operated but said it is “a member-driven coalition with a wide range of active supporters and contributors, including innovators, anchor institutions, and technology companies.”

Spectrum for the Future appears to be organized by Salt Point Strategies, a public affairs consulting firm. Salt Point Spectrum Policy Analyst David Wright is described as Spectrum for the Future’s policy director in an FCC filing. We reached out to Wright and didn’t receive a response.

One Big Beautiful Bill is a battleground

Senate Commerce Committee Chairman Ted Cruz (R-Texas) at a hearing on Tuesday, January 28, 2025. Credit: Getty Images | Tom Williams

The Trump-backed “One Big Beautiful Bill,” approved by the House, is one area of interest for both sides of the CBRS debate. The bill would restore the FCC’s expired authority to auction spectrum and require new auctions. One question is whether the bill will simply require the FCC to auction a minimum amount of spectrum or if it will require specific bands to be auctioned.

WISPA provided us with a statement about the version that passed the House, saying the group is glad it “excludes the 5.9 GHz and 6 GHz bands from its call to auction off 600 megahertz of spectrum” but worried because the bill “does not exclude the widely used and previously auctioned Citizens Broadband Radio Service (CBRS) band from competitive bidding, leaving it vulnerable to sale and/or major disruption.”

WISPA said that “spectrum auctions are typically designed to favor large players” and “cut out small and rural providers who operate on the front lines of the digital divide.” WISPA said that over 60 percent of its members “use CBRS to deliver high-quality broadband to hard-to-serve and previously unserved Americans.”

On June 5, Sen. Ted Cruz (R-Texas) released the text of the Senate Commerce Committee proposal, which also does not exclude the 3550–3700 MHz band from potential auctions. Pai and AT&T issued statements praising Cruz’s bill.

Pai said that Cruz’s “bold approach answers President Trump’s call to keep all options on the table and provides the President with full flexibility to identify the right bands to meet surging consumer demand, safeguard our economic competitiveness, and protect national security.” AT&T said that “by renewing the FCC’s auction authority and creating a pipeline of mid-band spectrum, the Senate is taking a strong step toward meeting consumers’ insatiable demand for mobile data.”

The NCTA said it welcomed the plan to restore the FCC’s auction authority but urged lawmakers to “reject the predictable calls from large mobile carriers that seek to cripple competition and new services being offered over existing Wi-Fi and CBRS bands.”

Licensed, unlicensed, and in-between

Spectrum is generally made available on a licensed or unlicensed basis. Wireless carriers pay big bucks for licenses that grant them exclusive use of spectrum bands on which they deploy nationwide cellular networks. Unlicensed spectrum—like the bands used in Wi-Fi—can be used by anyone without a license as long as they follow rules that prevent interference with other users and services.

The FCC issued rules for the CBRS band in 2015 during the Obama administration, using a somewhat different kind of system. The FCC rules allow “for dynamic spectrum sharing in the 3.5 GHz band between the Department of Defense (DoD) and commercial spectrum users,” the National Telecommunications and Information Administration notes. “DoD users have protected, prioritized use of the spectrum. When the government isn’t using the airwaves, companies and the public can gain access through a tiered framework.”

Instead of a binary licensed-versus-unlicensed system, the FCC implemented a three-tiered system of access. Tier 1 is for incumbent users of the band, including federal users and fixed satellite service. Tier 1 users receive protection against harmful interference from Tier 2 and Tier 3 users.

Tier 2 of CBRS consists of Priority Access Licenses (PALs) that are distributed on a county-by-county basis through competitive bidding. Tier 2 users get interference protection from users of Tier 3, which is made available in a manner similar to unlicensed spectrum.

Tier 3, known as General Authorized Access (GAA), “is licensed-by-rule to permit open, flexible access to the band for the widest possible group of potential users,” the FCC says. Tier 3 users can operate throughout the 3550–3700 MHz band but “must not cause harmful interference to Incumbent Access users or Priority Access Licensees and must accept interference from these users. GAA users also have no expectation of interference protection from other GAA users.”
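
To make the tier logic concrete, here is a minimal sketch of how a coordinator might check whether a requested transmission can proceed under the priority rules described above. It is purely illustrative, with hypothetical names and simplified geography; in the real system, FCC-certified Spectrum Access Systems make these decisions dynamically and also account for power levels and sensed incumbent activity.

```python
from dataclasses import dataclass

# Tiers in descending priority, per the FCC's CBRS framework:
# 1 = Incumbent Access, 2 = Priority Access License (PAL), 3 = General Authorized Access (GAA)
INCUMBENT, PAL, GAA = 1, 2, 3

@dataclass
class Grant:
    user: str
    tier: int
    county: str          # PALs are licensed on a county-by-county basis
    freq_mhz: tuple      # (low, high) within the 3550-3700 MHz band

def overlaps(a: Grant, b: Grant) -> bool:
    """True if two grants share a county and overlapping frequencies."""
    return a.county == b.county and a.freq_mhz[0] < b.freq_mhz[1] and b.freq_mhz[0] < a.freq_mhz[1]

def may_transmit(request: Grant, active: list) -> bool:
    """Allow a request only if no higher-priority grant occupies the same spectrum.
    Equal-tier GAA users get no protection from one another, so GAA-vs-GAA overlap is allowed."""
    for g in active:
        if overlaps(request, g) and g.tier < request.tier:
            return False  # must protect incumbents (and, for GAA requests, PAL holders)
    return True

# Example: a GAA user must yield to an incumbent radar but may share with another lower-tier user.
active = [Grant("navy_radar", INCUMBENT, "San Diego", (3550, 3650)),
          Grant("cable_op", PAL, "Miami-Dade", (3550, 3560))]
print(may_transmit(Grant("wisp_1", GAA, "San Diego", (3600, 3610)), active))   # False
print(may_transmit(Grant("wisp_2", GAA, "Miami-Dade", (3620, 3630)), active))  # True
```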

The public interest groups’ November 2024 filing with the FCC said the unique approach to spectrum sharing “allow[s] all would-be users to operate where doing so does not threaten harmful interference” and provides a happy medium between high-powered operations in exclusively licensed spectrum bands and low-powered operations in unlicensed spectrum.

CTIA wants the ability to send higher-power signals in the band, arguing that full-power wireless transmissions would help the US match the efforts of other countries “where this spectrum has been identified as central to 5G.” The public interest groups urged the FCC to reject the mobile industry proposal to increase power levels, saying it “would disrupt and diminish the expanding diversity of GAA users and use cases that represent the central purpose of CBRS’s innovative three-tier, low-power and coordinated sharing framework.”

Pai helped carriers as FCC chair

The FCC’s original plan for PALs during the Obama administration was to auction them off for individual Census tracts, small areas containing between 1,200 and 8,000 people each. During President Trump’s first term, the Pai FCC granted a CTIA request to boost the size of license areas from census tracts to counties, making it harder for small companies to win at auction.

The FCC auctioned PALs in 2020, getting bids of nearly $4.6 billion from 228 bidders. The biggest winners were Verizon, Dish Network, Charter, Comcast, and Cox.

Although Verizon uses CBRS for parts of its network, that doesn’t mean it’s on the same side as cable users in the policy debate. Verizon urged the FCC to increase the allowed power levels in the band. Dish owner EchoStar also asked for power increases. Cable companies oppose raising the power levels, with the NCTA saying that doing so would “jeopardize the continued availability of the 3.5 GHz band for lower-power operations” and harm both federal and non-federal users.

As head of CTIA, one of Pai’s main jobs is to obtain more licensed spectrum for the exclusive use of AT&T, Verizon, T-Mobile, and other mobile companies that his group represents. Pai’s Wall Street Journal op-ed said that “traffic on wireless networks is expected to triple by 2029,” driven by “AI, 5G home broadband and other emerging technologies.” Pai cited a study commissioned by CTIA to argue that “wireless networks will be unable to meet a quarter of peak demand in as little as two years.”

Spectrum for the Future countered that Pai “omits that the overwhelming share of this traffic will travel over Wi-Fi, not cellular networks.” CTIA told Ars that “the Ericsson studies we use for traffic growth projections only consider demand over commercial networks using licensed spectrum.”

Spectrum for the Future pointed to statements made by the CEOs of wireless carriers that seem to contradict Pai’s warnings of a spectrum shortage:

Mr. Pai cites a CTIA-funded study to claim “wireless networks will be unable to meet a quarter of peak demand in as little as two years.” If that’s true, then why are his biggest members’ CEOs telling Wall Street the exact opposite?

Verizon’s CEO insists he’s sitting on “a generation of spectrum”—”years and years and years” of spectrum capacity still to deploy. The CEO of Verizon’s consumer group goes even further, insisting they have “almost unlimited spectrum.” T-Mobile agrees, bragging that it has “only deployed 60 percent of our mid-band spectrum on 5G,” leaving “lots of spectrum we haven’t put into the fight yet.”

Battle could last for years

Spectrum for the Future also scoffed at Pai’s comparison of the US to China. Pai’s op-ed said that China “has accelerated its efforts to dominate in wireless and will soon boast more than four times the amount of commercial midband spectrum than the US.” Pai added that “China isn’t only deploying 5G domestically. It’s exporting its spectrum policies, its equipment vendors (such as Huawei and ZTE), and its Communist Party-centric vision of innovation to the rest of the world.”

Spectrum for the Future responded that “China’s spectrum policy goes all-in on exclusive-license frameworks, such as 5G, because they limit spectrum access to just a small handful of regime-aligned telecom companies complicit in Beijing’s censorship regime… America’s global wireless leadership, by contrast, is fueled by spectrum innovations like unlicensed Wi-Fi and CBRS spectrum sharing, whose hardware markets are dominated by American and allied companies.”

Spectrum for the Future also said that Pai and CTIA “blasting China for ‘exporting its spectrum policies’—while asking the US to adopt the same approach—is stunning hypocrisy.”

CTIA’s statement to Ars disputed Spectrum for the Future’s description. “The system of auctioning spectrum licenses was pioneered in America but is not used in China. China does, however, allocate unlicensed spectrum in a similar manner to the United States,” CTIA told Ars.

The lobbying battle and potential legal war that has Pai and CTIA lined up against the “everybody but Big Mobile” wireless coalition could last throughout Trump’s second term. Levin’s research note about the DoD proposal said, “the path from adoption to auction to making the spectrum available to the winners of an auction is likely to be at least three years.” The fight could go on a lot longer if “current licensees object and litigate,” Levin wrote.

Photo of Jon Brodkin

Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.

Ex-FCC Chair Ajit Pai is now a wireless lobbyist—and enemy of cable companies Read More »

google’s-nightmare:-how-a-search-spinoff-could-remake-the-web

Google’s nightmare: How a search spinoff could remake the web


Google has shaped the Internet as we know it, and unleashing its index could change everything.

Google may be forced to license its search technology when the final antitrust ruling comes down. Credit: Aurich Lawson


Google wasn’t around for the advent of the World Wide Web, but it successfully remade the web on its own terms. Today, any website that wants to be findable has to play by Google’s rules, and after years of search dominance, the company has lost a major antitrust case that could reshape both it and the web.

The closing arguments in the case just wrapped up last week, and Google could be facing serious consequences when the ruling comes down in August. Losing Chrome would certainly change things for Google, but the Department of Justice is pursuing other remedies that could have even more lasting impacts. During his testimony, Google CEO Sundar Pichai seemed genuinely alarmed at the prospect of being forced to license Google’s search index and algorithm, the so-called data remedies in the case. He claimed this would be no better than a spinoff of Google Search. The company’s statements have sometimes derisively referred to this process as “white labeling” Google Search.

But does a white label Google Search sound so bad? Google has built an unrivaled index of the web, but the way it shows results has become increasingly frustrating. A handful of smaller players in search have tried to offer alternatives to Google’s search tools. They all have different approaches to retrieving information for you, but they agree that spinning off Google Search could change the web again. Whether or not those changes are positive depends on who you ask.

The Internet is big and noisy

As Google’s search results have changed over the years, more people have been open to other options. Some have simply moved to AI chatbots to answer their questions, hallucinations be damned. But for most people, it’s still about the 10 blue links (for now).

Because of the scale of the Internet, there are only three general web search indexes: Google, Bing, and Brave. Every search product (including AI tools) relies on one or more of these indexes to probe the web. But what does that mean?

“Generally, a search index is a service that, when given a query, is able to find relevant documents published on the Internet,” said Brave’s search head Josep Pujol.

A search index is essentially a big database, and that’s not the same as search results. According to JP Schmetz, Brave’s chief of ads, it’s entirely possible to have the best and most complete search index in the world and still show poor results for a given query. Sound like anyone you know?
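
As a toy illustration of that distinction (entirely hypothetical, not how any of these engines actually work), the same tiny inverted index below can produce useful or useless results depending on the ranking function layered on top of it.

```python
from collections import defaultdict

# A toy corpus standing in for crawled pages.
docs = {
    "site-a": "how to fix a bicycle flat tire at home",
    "site-b": "bicycle tire deals deals deals buy now cheap tire tire",
    "site-c": "a short history of the bicycle",
}

# Building the "index": map each word to the documents that contain it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        index[word].add(doc_id)

def retrieve(query):
    """The index's job: return every document containing at least one query term."""
    return set().union(*(index.get(w, set()) for w in query.split()))

def rank_by_keyword_stuffing(candidates, query):
    # A deliberately bad ranker: whoever repeats the query terms most wins.
    terms = query.split()
    return sorted(candidates, key=lambda d: -sum(docs[d].split().count(t) for t in terms))

def rank_by_coverage(candidates, query):
    # A slightly better ranker: prefer documents covering more distinct query terms.
    terms = set(query.split())
    return sorted(candidates, key=lambda d: -len(terms & set(docs[d].split())))

q = "fix bicycle tire"
hits = retrieve(q)
print(rank_by_keyword_stuffing(hits, q))  # ['site-b', 'site-a', 'site-c'] -- the spammy page wins
print(rank_by_coverage(hits, q))          # ['site-a', 'site-b', 'site-c'] -- same index, better ordering
```

Same index, same query, very different results: the quality of what you see depends on the ranking choices made on top of the database.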

Google’s technological lead has allowed it to crawl more websites than anyone else. It has all the important parts of the web, plus niche sites, abandoned blogs, sketchy copies of legitimate websites, copies of those copies, and AI-rephrased copies of the copied copies—basically everything. And the result of this Herculean digital inventory is a search experience that feels increasingly discombobulated.

“Google is running large-scale experiments in ways that no rival can because we’re effectively blinded,” said Kamyl Bazbaz, head of public affairs at DuckDuckGo, which uses the Bing index. “Google’s scale advantage fuels a powerful feedback loop of different network effects that ensure a perpetual scale and quality deficit for rivals that locks in Google’s advantage.”

The size of the index may not be the only factor that matters, though. Brave, which is perhaps best known for its browser, also has a search engine. Brave Search is the default in its browser, but you can also just go to the URL in your current browser. Unlike most other search engines, Brave doesn’t need to go to anyone else for results. Pujol suggested that Brave doesn’t need the scale of Google’s index to find what you need. And admittedly, Brave’s search results don’t feel meaningfully worse than Google’s—they may even be better when you consider the way that Google tries to keep you from clicking.

Brave’s index spans around 25 billion pages, but it leaves plenty of the web uncrawled. “We could be indexing five to 10 times more pages, but we choose not to because not all the web has signal. Most web pages are basically noise,” said Pujol.

The freemium search engine Kagi isn’t worried about having the most comprehensive index. Kagi is a meta search engine. It pulls in data from multiple indexes, like Bing and Brave, but it has a custom index of what founder and CEO Vladimir Prelovac calls the “non-commercial web.”

When you search with Kagi, some of the results (it tells you the proportion) come from its custom index of personal blogs, hobbyist sites, and other content that is poorly represented on other search engines. It’s reminiscent of the days when huge brands weren’t always clustered at the top of Google—but even these results are being pushed out of reach in favor of AI, ads, Knowledge Graph content, and other Google widgets. That’s a big part of why Kagi exists, according to Prelovac.

A Google spinoff could change everything

We’ve all noticed the changes in Google’s approach to search, and most would agree that they have made finding reliable and accurate information harder. Regardless, Google’s incredibly deep and broad index of the Internet is in demand.

Even with Bing and Brave available, companies are going to extremes to syndicate Google Search results. A cottage industry has emerged to scrape Google searches as a stand-in for an official index. These companies are violating Google’s terms, yet they appear in Google Search results themselves. Google could surely do something about this if it wanted to.

The DOJ calls Google’s mountain of data the “essential raw material” for building a general search engine, and it believes forcing the firm to license that material is key to breaking its monopoly. The sketchy syndication firms will evaporate if the DOJ’s data remedies are implemented, which would give competitors an official way to utilize Google’s index. And utilize it they will.

Google CEO Sundar Pichai decried the court’s efforts to force a “de facto divestiture” of Google’s search tech. Credit: Ryan Whitwam

According to Prelovac, this could lead to an explosion in search choices. “The whole purpose of the Sherman Act is to proliferate a healthy, competitive marketplace. Once you have access to a search index, then you can have thousands of search startups,” said Prelovac.

The Kagi founder suggested that licensing Google Search could allow entities of all sizes to have genuinely useful custom search tools. Cities could use the data to create deep, hyper-local search, and people who love cats could make a cat-specific search engine, in both cases pulling what they want from the most complete database of online content. And, of course, general search products like Kagi would be able to license Google’s tech for a “nominal fee,” as the DOJ puts it.

Prelovac didn’t hesitate when asked if Kagi, which offers a limited number of free searches before asking users to subscribe, would integrate Google’s index. “Yes, that is something we would do,” he said. “And that’s what I believe should happen.”

There may be some drawbacks to unleashing Google’s search services. Judge Amit Mehta has expressed concern that blocking Google’s search placement deals could reduce browser choice, and there is a similar issue with the data remedies. If Google is forced to license search as an API, its few competitors in web indexing could struggle to remain afloat. In a roundabout way, giving away Google’s search tech could actually increase its influence.

The Brave team worries about how open access to Google’s search technology could impact diversity on the web. “If implemented naively, it’s a big problem,” said Brave’s ad chief JP Schmetz. “If the court forces Google to provide search at a marginal cost, it will not be possible for Bing or Brave to survive until the remedy ends.”

The landscape of AI-based search could also change. We know from testimony given during the remedy trial by OpenAI’s Nick Turley that the ChatGPT maker tried and failed to get access to Google Search to ground its AI models—it currently uses Bing. If Google were suddenly an option, you can be sure OpenAI and others would rush to connect Google’s web data to their large language models (LLMs).

The attempt to reduce Google’s power could actually grant it new monopolies in AI, according to Brave Chief Business Officer Brian Brown. “All of a sudden, you would have a single monolithic voice of truth across all the LLMs, across all the web,” Brown said.

What if you weren’t the product?

If white labeling Google does expand choice, even at the expense of other indexes, it will give more kinds of search products a chance in the market—maybe even some that shun Google’s focus on advertising. You don’t see much of that right now.

For most people, web search is and always has been a free service supported by ads. Google, Brave, DuckDuckGo, and Bing offer all the search queries you want for free because they want eyeballs. It’s been said often, but it’s true: If you’re not paying for it, you’re the product. This is an arrangement that bothers Kagi’s founder.

“For something as important as information consumption, there should not be an intermediary between me and the information, especially one that is trying to sell me something,” said Prelovac.

Kagi search results acknowledge the negative impact of today’s advertising regime. Kagi users see a warning next to results with a high number of ads and trackers. According to Prelovac, that is by far the strongest indication that a result is of low quality. That icon also lets you adjust the prevalence of such sites in your personal results. You can demote a site or completely hide it, which is a valuable option in the age of clickbait.
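
Here is a rough sketch of how such per-site preferences could be applied to a ranked result list. The scores, domains, and boost values are hypothetical; this is not Kagi’s actual implementation, just an illustration of raise/lower/block controls layered over a backend’s ranking.

```python
# Hypothetical per-domain preferences, roughly mirroring "raise / lower / block" controls.
preferences = {
    "trusted-blog.example": +2.0,   # raised
    "clickbait.example": -2.0,      # lowered
    "spam-farm.example": None,      # blocked entirely
}

def personalize(results, prefs):
    """results: list of (domain, base_score) pairs from the underlying index.
    Returns the list re-ranked with the user's adjustments applied."""
    adjusted = []
    for domain, score in results:
        boost = prefs.get(domain, 0.0)
        if boost is None:
            continue  # hidden sites are dropped outright
        adjusted.append((domain, score + boost))
    return sorted(adjusted, key=lambda pair: -pair[1])

results = [("clickbait.example", 9.1), ("trusted-blog.example", 8.4),
           ("spam-farm.example", 8.0), ("neutral.example", 7.9)]
print(personalize(results, preferences))
```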

Kagi search gives you a lot of control. Credit: Ryan Whitwam

Kagi’s paid approach to search changes its relationship with your data. “We literally don’t need user data,” Prelovac said. “But it’s not only that we don’t need it. It’s a liability.”

Prelovac admitted that getting people to pay for search is “really hard.” Nevertheless, he believes ad-supported search is a dead end. So Kagi is planning for a future in five or 10 years when more people have realized they’re still “paying” for ad-based search with lost productivity time and personal data, he said.

We know how Google handles user data (it collects a lot of it), but what does that mean for smaller search engines like Brave and DuckDuckGo that rely on ads?

“I’m sure they mean well,” said Prelovac.

Brave said that it shields user data from advertisers, relying on first-party tracking to attribute clicks to Brave without touching the user. “They cannot retarget people later; none of that is happening,” said Brave’s JP Schmetz.

DuckDuckGo is a bit of an odd duck—it relies on Bing’s general search index, but it adds a layer of privacy tools on top. It’s free and ad-supported like Google and Brave, but the company says it takes user privacy seriously.

“Viewing ads is privacy protected by DuckDuckGo, and most ad clicks are managed by Microsoft’s ad network,” DuckDuckGo’s Kamyl Bazbaz said. He explained that DuckDuckGo has worked with Microsoft to ensure its network does not track users or create any profiles based on clicks. He added that the company has a similar privacy arrangement with TripAdvisor for travel-related ads.

It’s AI all the way down

We can’t talk about the future of search without acknowledging the artificially intelligent elephant in the room. As Google continues its shift to AI-based search, it’s tempting to think of the potential search spin-off as a way to escape that trend. However, you may find few refuges in the coming years. There’s a real possibility that search is evolving beyond the 10 blue links and toward an AI assistant model.

All non-Google search engines have AI integrations, with the most prominent being Microsoft Bing, which has a partnership with OpenAI. But smaller players have AI search features, too. The folks working on these products agree with Microsoft and Google on one important point: They see AI as inevitable.

Today’s Google alternatives all have their own take on AI Overviews, which generates responses to queries based on search results. They’re generally not as in-your-face as Google AI, though. While Google and Microsoft are intensely focused on increasing the usage of AI search, other search operators aren’t pushing for that future. They are along for the ride, though.

AI Overviews are integrated with Google’s search results, and most other players have their own version. Credit: Google

“We’re finding that some people prefer to start in chat mode and then jump into more traditional search results when needed, while others prefer the opposite,” Bazbaz said. “So we thought the best thing to do was offer both. We made it easy to move between them, and we included an off switch for those who’d like to avoid AI altogether.”

The team at Brave views AI as a core means of accessing search and one that will continue to grow. Brave generates AI answers for many searches and prominently cites sources. You can also disable Brave’s AI if you prefer. But according to search chief Josep Pujol, the move to AI search is inevitable for a pretty simple reason: It’s convenient, and people will always choose convenience. So AI is changing the web as we know it, for better or worse, because LLMs can save a smidge of time, especially for more detailed “long-tail” queries. These AI features may give you false information while they do it, but that’s not always apparent.

This is very similar to the language Google uses when discussing agentic search, although it expresses it in a more nuanced way. By understanding the task behind a query, Google hopes to provide AI answers that save people time, even if the model needs a few ticks to fan out and run multiple searches to generate a more comprehensive report on a topic. That’s probably still faster than running multiple searches and manually reviewing the results, and it could leave traditional search as an increasingly niche service, even in a world with more choices.

“Will the 10 blue links continue to exist in 10 years?” Pujol asked. “Actually, one question would be, does it even exist now? In 10 years, [search] will have evolved into more of an AI conversation behavior or even agentic. That is probably the case. What, for sure, will continue to exist is the need to search. Search is a verb, an action that you do, and whether you will do it directly or whether it will be done through an agent, it’s a search engine.”

Vlad from Kagi sees AI becoming the default way we access information in the long term, but his search engine doesn’t force you to use it. On Kagi, you can expand the AI box for your searches and ask follow-ups, and the AI will open automatically if you use a question mark in your search. But that’s just the start.

“You watch Star Trek, nobody’s clicking on links there—I do believe in that vision in science fiction movies,” Prelovac said. “I don’t think my daughter will be clicking links in 10 years. The only question is if the current technology will be the one that gets us there. LLMs have inherent flaws. I would even tend to say it’s likely not going to get us to Star Trek.”

If we think of AI mainly as a way to search for information, the future becomes murky. With generative AI in the driver’s seat, questions of authority and accuracy may be left to language models that often behave in unpredictable and difficult-to-understand ways. Whether we’re headed for an AI boom or bust—for continued Google dominance or a new era of choice—we’re facing fundamental changes to how we access information.

Maybe if we get those thousands of search startups, there will be a few that specialize in 10 blue links. We can only hope.

Photo of Ryan Whitwam

Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.

Google’s nightmare: How a search spinoff could remake the web Read More »

meta-and-yandex-are-de-anonymizing-android-users’-web-browsing-identifiers

Meta and Yandex are de-anonymizing Android users’ web browsing identifiers


Abuse allows Meta and Yandex to attach persistent identifiers to detailed browsing histories.

Credit: Aurich Lawson | Getty Images


Tracking code that Meta and Russia-based Yandex embed into millions of websites is de-anonymizing visitors by abusing legitimate Internet protocols, causing Chrome and other browsers to surreptitiously send unique identifiers to native apps installed on a device, researchers have discovered. Google says it’s investigating the abuse, which allows Meta and Yandex to convert ephemeral web identifiers into persistent mobile app user identities.

The covert tracking—implemented in the Meta Pixel and Yandex Metrica trackers—allows Meta and Yandex to bypass core security and privacy protections provided by both the Android operating system and browsers that run on it. Android sandboxing, for instance, isolates processes to prevent them from interacting with the OS and any other app installed on the device, cutting off access to sensitive data or privileged system resources. Defenses such as state partitioning and storage partitioning, which are built into all major browsers, store site cookies and other data associated with a website in containers that are unique to every top-level website domain to ensure they’re off-limits for every other site.

A blatant violation

“One of the fundamental security principles that exists in the web, as well as the mobile system, is called sandboxing,” Narseo Vallina-Rodriguez, one of the researchers behind the discovery, said in an interview. “You run everything in a sandbox, and there is no interaction within different elements running on it. What this attack vector allows is to break the sandbox that exists between the mobile context and the web context. The channel that exists allowed the Android system to communicate what happens in the browser with the identity running in the mobile app.”

The bypass—which Yandex began in 2017 and Meta started last September—allows the companies to pass cookies or other identifiers from Firefox and Chromium-based browsers to native Android apps for Facebook, Instagram, and various Yandex apps. The companies can then tie that vast browsing history to the account holder logged into the app.

This abuse has been observed only in Android, and evidence suggests that the Meta Pixel and Yandex Metrica target only Android users. The researchers say it may be technically feasible to target iOS because browsers on that platform allow developers to programmatically establish localhost connections that apps can monitor on local ports.

In contrast to iOS, however, Android imposes fewer controls on localhost communications and on the background execution of mobile apps, the researchers said, and Apple’s stricter app store vetting further limits such abuses on its platform. This overly permissive design allows Meta Pixel and Yandex Metrica to send web requests with web tracking identifiers to specific local ports that are continuously monitored by the Facebook, Instagram, and Yandex apps. These apps can then link pseudonymous web identities with actual user identities, even in private browsing modes, effectively de-anonymizing users’ browsing habits on sites containing these trackers.

Meta Pixel and Yandex Metrica are analytics scripts designed to help advertisers measure the effectiveness of their campaigns. Meta Pixel and Yandex Metrica are estimated to be installed on 5.8 million and 3 million sites, respectively.

Meta and Yandex achieve the bypass by abusing basic functionality built into modern mobile browsers that allows browser-to-native app communications. The functionality lets browsers send web requests to local Android ports to establish various services, including media connections through the RTC protocol, file sharing, and developer debugging.

A conceptual diagram representing the exchange of identifiers between the web trackers running on the browser context and native Facebook, Instagram, and Yandex apps for Android.


While the technical underpinnings differ, both Meta Pixel and Yandex Metrica perform a “weird protocol misuse” that exploits the unvetted access Android provides to localhost ports on the 127.0.0.1 IP address. Browsers send requests to these ports without notifying the user. The Facebook, Instagram, and Yandex native apps silently listen on those ports, copy identifiers in real time, and link them to the user logged into the app.
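
To illustrate the mechanism in the abstract: any process on a device can bind a localhost port and read whatever a web page persuades the browser to send there. The sketch below is a generic demonstration with a made-up port and identifier, not Meta’s or Yandex’s actual code; the browser side of the exchange is simulated here with a plain HTTP request.

```python
import socket
import threading
import urllib.request

PORT = 12999  # arbitrary demo port; the real trackers used ports such as 12387

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", PORT))  # only local processes can reach this
srv.listen(1)

def local_listener():
    """Plays the role of a native app's background service: accept one localhost
    connection, read the HTTP request, and pull the identifier out of the URL."""
    conn, _ = srv.accept()
    with conn:
        request = conn.recv(4096).decode(errors="replace")
        print("listener captured:", request.splitlines()[0])  # e.g. GET /?id=web-cookie-123 HTTP/1.1
        conn.sendall(b"HTTP/1.1 204 No Content\r\n\r\n")

t = threading.Thread(target=local_listener)
t.start()

# Plays the role of a tracking script in the browser: ship a web identifier to localhost.
urllib.request.urlopen(f"http://127.0.0.1:{PORT}/?id=web-cookie-123", timeout=5)
t.join()
srv.close()
```

The point of the demonstration is that nothing in this exchange is visible to the person using the browser: the request never leaves the device, and the listening process decides what to do with the identifier it receives.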

A representative for Google said the behavior violates the terms of service for its Play marketplace and the privacy expectations of Android users.

“The developers in this report are using capabilities present in many browsers across iOS and Android in unintended ways that blatantly violate our security and privacy principles,” the representative said, referring to the people who write the Meta Pixel and Yandex Metrica JavaScript. “We’ve already implemented changes to mitigate these invasive techniques and have opened our own investigation and are directly in touch with the parties.”

Meta didn’t answer emailed questions for this article, but provided the following statement: “We are in discussions with Google to address a potential miscommunication regarding the application of their policies. Upon becoming aware of the concerns, we decided to pause the feature while we work with Google to resolve the issue.”

Yandex representatives didn’t answer an email seeking comment.

How Meta and Yandex de-anonymize Android users

Meta Pixel developers have abused various protocols to implement the covert listening since the practice began last September. They started by having the Meta Pixel script send HTTP requests to port 12387, where the Facebook and Instagram apps were listening. A month later, Meta Pixel stopped sending this data, even though the apps continued to monitor the port.

In November, Meta Pixel switched to a new method that invoked WebSocket, a protocol for two-way communications, over port 12387.

That same month, Meta Pixel also deployed a new method that used WebRTC, a real-time peer-to-peer communication protocol commonly used for making audio or video calls in the browser. This method used a complicated process known as SDP munging, a technique for JavaScript code to modify Session Description Protocol data before it’s sent. Still in use today, the SDP munging by Meta Pixel inserts key _fbp cookie content into fields meant for connection information. This causes the browser to send that data as part of a STUN request to the Android local host, where the Facebook or Instagram app can read it and link it to the user.
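
SDP munging itself is essentially string surgery on the session description before it is handed back to the browser. The sketch below is a generic, hypothetical illustration of the idea, not the Meta Pixel’s actual code: the attribute chosen (ice-ufrag) and the cookie value are assumptions for demonstration, since the research describes only “fields meant for connection information” being rewritten.

```python
import re

# A trimmed, representative SDP offer (abbreviated; real offers carry many more attributes).
sdp_offer = """v=0
o=- 46117317 2 IN IP4 127.0.0.1
s=-
m=application 9 UDP/DTLS/SCTP webrtc-datachannel
a=ice-ufrag:F7gI
a=ice-pwd:x9cml/YzichV2+XlhiMu8g
"""

def munge_sdp(sdp: str, smuggled_value: str) -> str:
    """Rewrite a connection-setup attribute to carry attacker-chosen content.
    The ice-ufrag attribute is used here purely for illustration."""
    return re.sub(r"a=ice-ufrag:.*", f"a=ice-ufrag:{smuggled_value}", sdp)

fbp_like_cookie = "fb.1.1700000000000.123456789"  # made-up value in the _fbp format
print(munge_sdp(sdp_offer, fbp_like_cookie))
```

Because the browser later emits connection-setup data like this in its connectivity checks, anything smuggled into those fields can reach whatever is listening on the local port.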

In May, a beta version of Chrome introduced a mitigation that blocked the type of SDP munging that Meta Pixel used. Within days, Meta Pixel circumvented the mitigation with a new method that swapped the STUN requests for TURN requests.

In a post, the researchers provided a detailed description of how the _fbp cookie travels from a website to the native app and, from there, to the Meta server:

1. The user opens the native Facebook or Instagram app, which eventually is sent to the background and creates a background service to listen for incoming traffic on a TCP port (12387 or 12388) and a UDP port (the first unoccupied port in 12580–12585). Users must be logged-in with their credentials on the apps.

2. The user opens their browser and visits a website integrating the Meta Pixel.

3. At this stage, some websites wait for users’ consent before embedding Meta Pixel. In our measurements of the top 100K website homepages, we found websites that require consent to be a minority (more than 75% of affected sites do not require user consent)…

4. The Meta Pixel script is loaded and the _fbp cookie is sent to the native Instagram or Facebook app via WebRTC (STUN) SDP Munging.

5. The Meta Pixel script also sends the _fbp value in a request to https://www.facebook.com/tr along with other parameters such as page URL (dl), website and browser metadata, and the event type (ev) (e.g., PageView, AddToCart, Donate, Purchase).

6. The Facebook or Instagram apps receive the _fbp cookie from the Meta JavaScripts running on the browser and transmit it to the GraphQL endpoint (https://graph[.]facebook[.]com/graphql) along with other persistent user identifiers, linking users’ fbp ID (web visit) with their Facebook or Instagram account.

Detailed flow of the way the Meta Pixel leaks the _fbp cookie from Android browsers to its Facebook and Instagram apps.

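Step 5 in the list above is the only conventional part of the flow: the pixel reports the same _fbp value to Meta's servers the way tracking pixels normally do. A hedged sketch in TypeScript, with the parameter carrying the cookie named fbp here and anything not mentioned above treated as an assumption:

```typescript
// Report the _fbp value and page context to https://www.facebook.com/tr.
function reportToMeta(fbpCookie: string): void {
  const params = new URLSearchParams({
    ev: "PageView",    // event type, per the description above
    dl: location.href, // page URL, per the description above
    fbp: fbpCookie,    // the first-party _fbp cookie value
  });
  // Fired as a simple image-style GET, as tracking pixels typically are.
  new Image().src = `https://www.facebook.com/tr?${params.toString()}`;
}
```

Because the app has already tied the same _fbp value to a logged-in account (step 6), the otherwise pseudonymous web visit is no longer anonymous.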

The first known instance of Yandex Metrica linking websites visited in Android browsers to app identities was in May 2017, when the tracker started sending HTTP requests to local ports 29009 and 30102. In May 2018, Yandex Metrica also began sending the data through HTTPS to ports 29010 and 30103. Both methods remained in place as of publication time.
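
Yandex Metrica's version of the trick, as described above, needs no exotic protocols at all: plain HTTP and HTTPS requests to four fixed loopback ports. The sketch below is illustrative; the path and payload format are assumptions, and in practice an HTTPS listener on loopback also needs a certificate for a hostname that resolves to 127.0.0.1.

```typescript
// Push an identifier to the ports the Yandex apps listen on.
function pushToYandexApp(identifier: string): void {
  const endpoints = [
    "http://127.0.0.1:29009",  // plain HTTP, in use since May 2017
    "http://127.0.0.1:30102",
    "https://127.0.0.1:29010", // HTTPS, added in May 2018
    "https://127.0.0.1:30103",
  ];
  for (const origin of endpoints) {
    fetch(`${origin}/p?id=${encodeURIComponent(identifier)}`, {
      mode: "no-cors", // delivery is the point; the response is never read
    }).catch(() => {
      // Ignore failures when no Yandex app is listening.
    });
  }
}
```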

An overview of Yandex identifier sharing

A timeline of web history tracking by Meta and Yandex

Some browsers for Android have blocked the abusive JavaScript in the trackers. DuckDuckGo, for instance, was already blocking domains and IP addresses associated with the trackers, preventing the browser from sending any identifiers to Meta. The browser also blocked most of the domains associated with Yandex Metrica. After the researchers notified DuckDuckGo of the incomplete blocklist, developers added the missing addresses.

The Brave browser, meanwhile, also blocked the sharing of identifiers, thanks to its extensive blocklists and an existing mitigation that blocks requests to localhost without explicit user consent. Vivaldi, another Chromium-based browser, forwards the identifiers to the local Android ports when its default privacy setting is in place. Changing the setting to block trackers appears to thwart the browsing-history leakage, the researchers said.
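
The mitigations differ in detail, but the rule Brave applies, and Vivaldi approximates when tracker blocking is enabled, amounts to a simple check before a page-initiated request is allowed through. A generic sketch of that idea in TypeScript, not any browser's actual implementation:

```typescript
// Block page-initiated requests to loopback addresses unless the user has
// explicitly allowed them.
const LOOPBACK_HOSTS = new Set(["localhost", "[::1]"]);

function allowRequest(targetUrl: string, userAllowedLocalhost: boolean): boolean {
  const host = new URL(targetUrl).hostname;
  const isLoopback = LOOPBACK_HOSTS.has(host) || host.startsWith("127.");
  return !isLoopback || userAllowedLocalhost;
}

// A tracker request to a local port is dropped by default...
console.log(allowRequest("http://127.0.0.1:12387/", false)); // false (blocked)
// ...while ordinary third-party requests are untouched by this particular rule.
console.log(allowRequest("https://example.com/analytics", false)); // true
```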

Tracking blocker settings in Vivaldi for Android.

There’s got to be a better way

The various remedies DuckDuckGo, Brave, Vivaldi, and Chrome have put in place are working as intended, but the researchers caution they could become ineffective at any time.

“Any browser doing blocklisting will likely enter into a constant arms race, and it’s just a partial solution,” Vallina-Rodriguez said of the current mitigations. “Creating effective blocklists is hard, and browser makers will need to constantly monitor the use of this type of capability to detect other hostnames potentially abusing localhost channels and then updating their blocklists accordingly.”

He continued:

While this solution works once you know the hostnames doing that, it’s not the right way of mitigating this issue, as trackers may find ways of accessing this capability (e.g., through more ephemeral hostnames). A long-term solution should go through the design and development of privacy and security controls for localhost channels, so that users can be aware of this type of communication and potentially enforce some control or limit this use (e.g., a permission or some similar user notifications).

Chrome and most other Chromium-based browsers executed the JavaScript as Meta and Yandex intended. Firefox did as well, although for reasons that aren’t clear, the browser was not able to successfully perform the SDP munging specified in later versions of the code. After blocking the STUN variant of SDP munging in the early May beta release, a production version of Chrome released two weeks ago began blocking both the STUN and TURN variants. Other Chromium-based browsers are likely to implement the same blocks in the coming weeks. Firefox developers didn’t respond to an email asking if they have plans to block the behavior in that browser.

The researchers warn that the current fixes are so specific to the code in the Meta and Yandex trackers that it would be easy to bypass them with a simple update.

“They know that if someone else comes in and tries a different port number, they may bypass this protection,” said Gunes Acar, the researcher behind the initial discovery, referring to the Chrome developer team at Google. “But our understanding is they want to send this message that they will not tolerate this form of abuse.”

Fellow researcher Vallina-Rodriguez said the more comprehensive way to prevent the abuse is for Android to overhaul the way it handles access to local ports.

“The fundamental issue is that the access to the local host sockets is completely uncontrolled on Android,” he explained. “There’s no way for users to prevent this kind of communication on their devices. Because of the dynamic nature of JavaScript code and the difficulty to keep blocklists up to date, the right way of blocking this persistently is by limiting this type of access at the mobile platform and browser level, including stricter platform policies to limit abuse.”

Got consent?

The researchers who made this discovery are:

  • Aniketh Girish, PhD student at IMDEA Networks
  • Gunes Acar, assistant professor in Radboud University’s Digital Security Group & iHub
  • Narseo Vallina-Rodriguez, associate professor at IMDEA Networks
  • Nipuna Weerasekara, PhD student at IMDEA Networks
  • Tim Vlummens, PhD student at COSIC, KU Leuven

Acar said he first noticed Meta Pixel accessing local ports while visiting his own university’s website.

There’s no indication that Meta or Yandex has disclosed the tracking to either websites hosting the trackers or end users who visit those sites. Developer forums show that many websites using Meta Pixel were caught off guard when the scripts began connecting to local ports.

“Since 5th September, our internal JS error tracking has been flagging failed fetch requests to localhost: 12387,” one developer wrote. “No changes have been made on our side, and the existing Facebook tracking pixel we use loads via Google Tag Manager.”

“Is there some way I can disable this?” another developer encountering the unexplained local port access asked.

It’s unclear whether this browser-to-native-app tracking violates privacy laws in various countries. Both Meta and companies hosting its Meta Pixel, however, have faced a raft of lawsuits in recent years alleging that the data collected violates privacy statutes. A research paper from 2023 found that Meta Pixel, then called the Facebook Pixel, “tracks a wide range of user activities on websites with alarming detail, especially on websites classified as sensitive categories under GDPR,” the abbreviation for the European Union’s General Data Protection Regulation.

So far, Google has provided no indication that it plans to redesign the way Android handles local port access. For now, the most comprehensive protection against Meta Pixel and Yandex Metrica tracking is to refrain from installing the Facebook, Instagram, or Yandex apps on Android devices.

Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him here on Mastodon and here on Bluesky. Contact him on Signal at DanArs.82.

Meta and Yandex are de-anonymizing Android users’ web browsing identifiers Read More »