Author name: Rejus Almole


Google CEO: If an AI bubble pops, no one is getting out clean

Market concerns and Google’s position

Alphabet’s recent market performance has been driven by investor confidence in the company’s ability to compete with OpenAI’s ChatGPT, as well as its development of specialized chips for AI that can compete with Nvidia’s. Nvidia recently reached a world-first $5 trillion valuation due to making GPUs that can accelerate the matrix math at the heart of AI computations.

Despite acknowledging that no company would be immune to a potential AI bubble burst, Pichai argued that Google’s unique position gives it an advantage. He told the BBC that the company owns what he called a “full stack” of technologies, from chips to YouTube data to models and frontier science research. This integrated approach, he suggested, would help the company weather any market turbulence better than competitors.

Pichai also told the BBC that people should not “blindly trust” everything AI tools output. The company currently faces repeated accuracy concerns about some of its AI models. Pichai said that while AI tools are helpful “if you want to creatively write something,” people “have to learn to use these tools for what they’re good at and not blindly trust everything they say.”

In the BBC interview, the Google boss also addressed the “immense” energy needs of AI, acknowledging that the intensive energy requirements of expanding AI ventures have caused slippage on Alphabet’s climate targets. However, Pichai insisted that the company still wants to achieve net zero by 2030 through investments in new energy technologies. “The rate at which we were hoping to make progress will be impacted,” Pichai said, warning that constraining an economy based on energy “will have consequences.”

Even with the warnings about a potential AI bubble, Pichai did not miss his chance to promote the technology, albeit with a hint of danger regarding its widespread impact. Pichai described AI as “the most profound technology” humankind has worked on.

“We will have to work through societal disruptions,” he said, adding that the technology would “create new opportunities” and “evolve and transition certain jobs.” He said people who adapt to AI tools “will do better” in their professions, whatever field they work in.



With a new company, Jeff Bezos will become a CEO again

Jeff Bezos is one of the world’s richest and most famous tech CEOs, but he hasn’t actually been a CEO of anything since 2021. That’s now changing as he takes on the role of co-CEO of a new AI company, according to a New York Times report citing three people familiar with the company.

Grandiosely named Project Prometheus (and not to be confused with the NASA project of the same name), the company will focus on using AI to pursue breakthroughs in research, engineering, manufacturing, and other fields that are dubbed part of “the physical economy”—in contrast to the software applications that are likely the first thing most people in the general public think of when they hear “AI.”

Bezos’ co-CEO will be Dr. Vik Bajaj, a chemist and physicist who previously led life sciences work at Google X, an Alphabet-backed research group that worked on speculative projects that could lead to more product categories. (For example, it developed technologies that would later underpin Google’s Waymo service.) Bajaj also worked at Verily, another Alphabet-backed research group focused on life sciences, and Foresite Labs, an incubator for new AI companies.



UCLA faculty gets big win in suit against Trump’s university attacks


Government can’t use funding threats to override the First Amendment.

While UCLA has been most prominently targeted by the Trump Administration, the ruling protects the entire UC system. Credit: Myung J. Chun

On Friday, a US District Court issued a preliminary injunction blocking the United States government from halting federal funding at UCLA or any other school in the University of California system. The ruling came in response to a suit filed by groups representing the faculty at these schools challenging the Trump administration’s attempts to force UCLA into a deal that would substantially revise instruction and policy.

The court’s decision lays out how the Trump administration’s attacks on universities follow a standard plan: use accusations of antisemitism to justify an immediate cut to funding, then use the loss of money to compel an agreement that would result in revisions to university instruction and management. The court finds that this plan was deficient on multiple grounds: it violated the legal procedures for cutting funding and amounted to an illegal attempt to suppress the First Amendment rights of faculty.

The result is a reprieve for the entire University of California system, as well as a clear pathway for any universities to fight back against the Trump administration’s attacks on research and education.

First Amendment violations

The judge overseeing this case, Rita Lin, issued separate documents describing the reasoning behind her decision and the sanctions she has placed on the Trump administration. In the first, she lays out the argument that the threats facing the UC system, and most notably UCLA, are part of a scripted campaign deployed against many other universities, one that proceeds through several steps. The Trump administration’s Task Force to Combat Anti-Semitism is central to this effort, which starts with the opening of a civil rights investigation against a university that was the site of anti-Israel protests during the conflict in Gaza.

“Rooting out antisemitism is undisputedly a laudable and important goal,” Judge Lin wrote. But the investigations in many cases take place after those universities have already taken corrective steps, which the Trump administration seemingly never considers. Instead, while the investigations are still ongoing, agencies throughout the federal government cancel funding for research and education meant for that university and announce that there will be no future funding without an agreement.

The final step is a proposed settlement that would include large payments (over $1.2 billion in UCLA’s case) and a set of conditions that alter university governance and instruction. These conditions often have little to no connection with antisemitism.

While all of this was ostensibly meant to combat antisemitism, the plaintiffs in this case presented a huge range of quotes from administration officials, including the head of the Task Force to Combat Anti-Semitism, saying the goal was to suppress certain ideas on campus. “The unrebutted record in this case shows that Defendants have used the threat of investigations and economic sanctions to… coerce the UC to stamp out faculty, staff, and student ‘woke,’ ‘left,’ ‘anti-American,’ ‘anti-Western,’ and ‘Marxist’ speech,” Lin said.

And even before any sort of agreement was reached, there was extensive testimony that people on campus changed their teaching and research to avoid further attention from the administration. “Plaintiffs’ members express fear that researching, teaching, and speaking on disfavored topics will trigger further retaliatory funding cancellations against the UC,” Lin wrote, “and that they will be blamed for the retaliation. They also describe fears that the UC will retaliate against them to avoid further funding cuts or in order to comply with the proposed settlement agreement.”

That’s a problem, given that teaching and research topics are forms of speech, and therefore protected by the First Amendment. “These are classic, predictable First Amendment harms, and exactly what Defendants publicly said that they intended,” Lin concluded.

Beyond speech

But the First Amendment isn’t the only issue here. The Civil Rights Act, most notably Title VI, lays out a procedure for cutting federal funding, including warnings and hearings before any funds are shut off. That level of coercion is also limited to cases where there’s an indication that voluntary compliance won’t work. Any funding cut would need to target the specific programs involved and the money allocated to them. There is nothing in Title VI that enables the sort of financial payments that the government has been demanding (and, in some cases, receiving) from schools.

It’s pretty obvious that none of these procedures are being followed here. And as Lin noted in her ruling, “Defendants conceded at oral argument that, of the billions of dollars of federal university funding suspended across numerous agencies in recent months, not a single agency has followed the procedures required by Title VI and IX.”

She found that the government decided it wasn’t required to follow the Civil Rights Act procedures. (Reading through the decision, it becomes hard to tell where the government offered any defense of its actions at all.)

The decision to ignore all existing procedures, in turn, causes additional problems, including violations of the Tenth Amendment, which limits the actions that the government can take. And it runs afoul of the Administrative Procedures Act, which prohibits the government from taking actions that are “arbitrary and capricious.”

All of this provided Lin with extensive opportunities to determine that the plaintiffs, largely organizations that represent the faculty at University of California schools, are likely to prevail in their suit, and thus deserve a preliminary injunction blocking the federal government’s actions. But first, she had to deal with a recent Supreme Court precedent holding that cases involving federal money belong in a different court system. She did so by arguing that this case is largely about the First Amendment and federal procedures rather than any sort of contract for federal money; money is being used as a lever here, so the ruling must involve restoring the money in order to address the free speech issues.

That issue will undoubtedly be picked up on appeal as it makes its way through the courts.

Complete relief

Lin identified a coercive program that is being deployed against many universities and is already suppressing speech throughout the University of California system, including on campuses that haven’t been targeted yet. She is issuing a ruling that targets the program broadly.

“Plaintiffs have shown that Defendants are coercing the [University of California] as a whole, through the Task Force Policy and Funding Cancellation, to stamp out their members’ disfavored speech,” Lin concluded. “Therefore, to afford Plaintiffs complete relief, the entirety of the coercive practice must be enjoined, not just the suspensions that impact Plaintiffs’ members.”

Her ruling indicates that if the federal government decides it wants to cut any grants to any school in the UC system, it has to go through the entire procedure set out in the Civil Rights Act. The government is also prohibited from demanding money from any of these schools as a fine or payment, and it can’t threaten their future funding. The government’s current hold on grants to the schools must also be lifted.

In short, the entire UC system should be protected from any of the ways that the government has been trying to use accusations of antisemitism to suppress ideas that it disfavors. And since those primarily involve federal funding, that has to be restored, and any future threats to it must be blocked.

While this case is likely to face a complicated appeals process, Lin’s ruling makes it extremely clear that all of these cases are exactly what they seemed. Just as members of the administration stated in public multiple times, they decided to target some ideas they disfavored and simply made up a process that would let them do so.

While it worked against a number of prominent universities, its legal vulnerabilities have been there from the start.


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



“How about no”: FCC boss Brendan Carr says he won’t end news distortion probes

Federal Communications Commission Chairman Brendan Carr says he won’t scrap the agency’s controversial news distortion policy despite calls from a bipartisan group of former FCC chairs and commissioners.

“How about no,” Carr wrote in an X post in response to the petition from former FCC leaders. “On my watch, the FCC will continue to hold broadcasters accountable to their public interest obligations.”

The petition filed yesterday by former FCC chairs and commissioners asked the FCC to repeal its 1960s-era news distortion policy, which Carr has repeatedly invoked in threats to revoke broadcast licenses. In the recent Jimmy Kimmel controversy, Carr said that ABC affiliates could have licenses revoked for news distortion if they kept the comedian on the air.

The petition said the Kimmel incident and several other Carr threats illustrate “the extraordinary intrusions on editorial decision-making that Chairman Carr apparently understands the news distortion policy to permit.” The petition argued that the “policy’s purpose—to eliminate bias in the news—is not a legitimate government interest,” that it has chilled broadcasters’ speech, that it has been weaponized for partisan purposes, that it is overly vague, and is unnecessary given the separate rule against broadcast hoaxes.

“The news distortion policy is no longer justifiable under today’s First Amendment doctrine and no longer necessary in today’s media environment… The Commission should repeal the policy in full and recognize that it may not investigate or penalize broadcasters for ‘distorting,’ ‘slanting,’ or ‘staging’ the news, unless the broadcast at issue independently meets the high standard for broadcasting a dangerous hoax under 47 C.F.R. § 73.1217,” the petition said.

News distortion policy rarely enforced

The petition was filed by Mark Fowler, a Republican who chaired the FCC from 1981 to 1987; Dennis Patrick, a Republican who chaired the FCC from 1987 to 1989; Alfred Sikes, a Republican who chaired the FCC from 1989 to 1993; Tom Wheeler, a Democrat who chaired the FCC from 2013 to 2017; Andrew Barrett, a Republican who served as a commissioner from 1989 to 1996; Ervin Duggan, a Democrat who served as a commissioner from 1990 to 1994; and Rachelle Chong, a Republican who served as a commissioner from 1994 to 1997.



Forget AGI—Sam Altman celebrates ChatGPT finally following em dash formatting rules


Next stop: superintelligence

Ongoing struggles with AI model instruction-following show that true human-level AI is still a ways off.

Em dashes have become what many believe to be a telltale sign of AI-generated text over the past few years. The punctuation mark appears frequently in outputs from ChatGPT and other AI chatbots, sometimes to the point where readers believe they can identify AI writing by its overuse alone—although people can overuse it, too.

On Thursday evening, OpenAI CEO Sam Altman posted on X that ChatGPT has started following custom instructions to avoid using em dashes. “Small-but-happy win: If you tell ChatGPT not to use em-dashes in your custom instructions, it finally does what it’s supposed to do!” he wrote.

The post, which came two days after the release of OpenAI’s new GPT-5.1 AI model, received mixed reactions from users who have struggled for years with getting the chatbot to follow specific formatting preferences. And this “small win” raises a very big question: If the world’s most valuable AI company has struggled with controlling something as simple as punctuation use after years of trying, perhaps what people call artificial general intelligence (AGI) is farther off than some in the industry claim.


A screenshot of Sam Altman’s post about em dashes on X. Credit: X

“The fact that it’s been 3 years since ChatGPT first launched, and you’ve only just now managed to make it obey this simple requirement, says a lot about how little control you have over it, and your understanding of its inner workings,” wrote one X user in a reply. “Not a good sign for the future.”

While Altman likes to publicly talk about AGI (a hypothetical technology equivalent to humans in general learning ability), superintelligence (a nebulous concept for AI that is far beyond human intelligence), and “magic intelligence in the sky” (his term for AI cloud computing?) while raising funds for OpenAI, it’s clear that we still don’t have reliable artificial intelligence here today on Earth.

But wait, what is an em dash anyway, and why does it matter so much?

AI models love em dashes because we do

Unlike a hyphen, a short punctuation mark used to connect words or parts of words that has its own dedicated key on your keyboard (-), an em dash is a long dash denoted by a special character (—) that writers use to set off parenthetical information, indicate a sudden change in thought, or introduce a summary or explanation.
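The distinction is easy to see at the character level: the hyphen and the em dash are two different Unicode code points. A minimal Python sketch (the sample sentence is invented for illustration):

```python
# The keyboard hyphen-minus is U+002D; the em dash is U+2014.
hyphen = "-"
em_dash = "\u2014"

print(hex(ord(hyphen)))   # 0x2d
print(hex(ord(em_dash)))  # 0x2014

# A naive post-hoc cleanup: replace each em dash with a comma and a
# space, which is roughly what many style-conscious editors would do.
text = "AI writing loves em dashes\u2014sometimes several per sentence."
cleaned = text.replace(em_dash, ", ")
print(cleaned)  # AI writing loves em dashes, sometimes several per sentence.
```

This is also why "just filter the character out" is a blunt instrument: it catches legitimate em dash use along with the model's overuse.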

Even before the age of AI language models, some writers frequently bemoaned the overuse of the em dash in modern writing. In a 2011 Slate article, writer Noreen Malone argued that writers used the em dash “in lieu of properly crafting sentences” and that overreliance on it “discourages truly efficient writing.” Various Reddit threads posted prior to ChatGPT’s launch featured writers either wrestling over the etiquette of proper em dash use or admitting to their frequent use as a guilty pleasure.

In 2021, one writer in the r/FanFiction subreddit wrote, “For the longest time, I’ve been addicted to Em Dashes. They find their way into every paragraph I write. I love the crisp straight line that gives me the excuse to shove details or thoughts into an otherwise orderly paragraph. Even after coming back to write after like two years of writer’s block, I immediately cram as many em dashes as I can.”

Because of the tendency for AI chatbots to overuse them, detection tools and human readers have learned to spot em dash use as a pattern, creating a problem for the small subset of writers who naturally favor the punctuation mark in their work. As a result, some journalists are complaining that AI is “killing” the em dash.

No one knows precisely why LLMs tend to overuse em dashes. We’ve seen a wide range of speculation online attempting to explain the phenomenon, from the observation that em dashes were more popular in 19th-century books used as training data (according to a 2018 study, dash use in the English language peaked around 1860 before declining through the mid-20th century) to the idea that AI models borrowed the habit from automatic em-dash character conversion on the blogging site Medium.

One thing we know for sure is that LLMs tend to output frequently seen patterns in their training data (fed in during the initial training process) and from a subsequent reinforcement learning process that often relies on human preferences. As a result, AI language models feed you a sort of “smoothed out” average style of whatever you ask them to provide, moderated by whatever they are conditioned to produce through user feedback.

So the most plausible explanation is still that requests for professional-style writing from an AI model trained on vast numbers of examples from the Internet will lean heavily toward the prevailing style in the training data, where em dashes appear frequently in formal writing, news articles, and editorial content. It’s also possible that during training through human feedback (called RLHF), responses with em dashes, for whatever reason, received higher ratings. Perhaps it’s because those outputs appeared more sophisticated or engaging to evaluators, but that’s just speculation.

From em dashes to AGI?

To understand what Altman’s “win” really means, and what it says about the road to AGI, we need to understand how ChatGPT’s custom instructions actually work. They allow users to set persistent preferences that apply across all conversations by appending written instructions to the prompt that is fed into the model just before the chat begins. Users can specify tone, format, and style requirements without needing to repeat those requests manually in every new chat.
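As an illustrative sketch only (not OpenAI’s actual implementation), the mechanism described above amounts to prepending the stored instructions to every request sent to the model, in the style of a chat-completions API:

```python
# Hypothetical sketch: persistent "custom instructions" are injected as
# system-level text that precedes every conversation. The instruction
# string and function below are illustrative, not OpenAI's real code.
custom_instructions = "Do not use em dashes. Prefer short sentences."

def build_messages(chat_history, user_turn):
    """Prepend the stored instructions to each new request."""
    return (
        [{"role": "system", "content": custom_instructions}]
        + chat_history
        + [{"role": "user", "content": user_turn}]
    )

messages = build_messages([], "Summarize today's news.")
print(messages[0]["content"])  # the instructions ride along invisibly
```

The key point is that the "preference" is just more text in the prompt; nothing downstream enforces it.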

However, the feature has not always worked reliably because LLMs do not work reliably (even OpenAI and Anthropic freely admit this). An LLM takes an input and produces an output, spitting out a statistically plausible continuation of a prompt (a system prompt, the custom instructions, and your chat history), and it doesn’t really “understand” what you are asking. With AI language model outputs, there is always some luck involved in getting them to do what you want.

In our informal testing of GPT-5.1 with custom instructions, ChatGPT did appear to follow our request not to produce em dashes. But despite Altman’s claim, the response from X users appears to show that experiences with the feature continue to vary, at least when the request is not placed in custom instructions.

So if LLMs are statistical text-generation boxes, what does “instruction following” even mean? That’s key to unpacking the hypothetical path from LLMs to AGI. The concept of following instructions for an LLM is fundamentally different from how we typically think about following instructions as humans with general intelligence, or even a traditional computer program.

In traditional computing, instruction following is deterministic. You tell a program “don’t include character X,” and it won’t include that character. The program executes rules exactly as written. With LLMs, “instruction following” is really about shifting statistical probabilities. When you tell ChatGPT “don’t use em dashes,” you’re not creating a hard rule. You’re adding text to the prompt that makes tokens associated with em dashes less likely to be selected during the generation process. But “less likely” isn’t “impossible.”

Every token the model generates is selected from a probability distribution. Your custom instruction influences that distribution, but it’s competing with the model’s training data (where em dashes appeared frequently in certain contexts) and everything else in the prompt. Unlike code with conditional logic, there’s no separate system verifying outputs against your requirements. The instruction is just more text that influences the statistical prediction process.
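A toy numerical sketch makes the point concrete. Treat an instruction as a downward bias on the em dash token’s logit (the scores below are made up): its probability shrinks, but never reaches zero.

```python
import math

em = "\u2014"  # em dash, treated here as a toy single-character "token"

# Made-up logit scores for three candidate next tokens.
logits = {"the": 2.0, ",": 1.5, em: 1.0}
logits[em] -= 3.0  # the instruction acts like a downward bias

# Softmax: convert logits into a probability distribution.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

print(round(probs[em], 4))  # small, but never exactly zero
```

Sample enough tokens and the "forbidden" character eventually surfaces, which is why instruction following in LLMs is probabilistic rather than guaranteed.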

When Altman celebrates finally getting GPT to avoid em dashes, he’s really celebrating that OpenAI has tuned the latest version of GPT-5.1 (probably through reinforcement learning or fine-tuning) to weight custom instructions more heavily in its probability calculations.

There’s an irony about control here: Given the probabilistic nature of the issue, there’s no guarantee the issue will stay fixed. OpenAI continuously updates its models behind the scenes, even within the same version number, adjusting outputs based on user feedback and new training runs. Each update arrives with different output characteristics that can undo previous behavioral tuning, a phenomenon researchers call the “alignment tax.”

Precisely tuning a neural network’s behavior is not yet an exact science. Since all concepts encoded in the network are interconnected by values called weights, adjusting one behavior can alter others in unintended ways. Fix em dash overuse today, and tomorrow’s update (aimed at improving, say, coding capabilities) might inadvertently bring them back, not because OpenAI wants them there, but because that’s the nature of trying to steer a statistical system with millions of competing influences.

This gets to an implied question we mentioned earlier. If controlling punctuation use is still a struggle that might pop back up at any time, how far are we from AGI? We can’t know for sure, but it seems increasingly likely that it won’t emerge from a large language model alone. That’s because AGI, a technology that would replicate human general learning ability, would likely require true understanding and self-reflective intentional action, not statistical pattern matching that sometimes aligns with instructions if you happen to get lucky.

And speaking of getting lucky, some users still aren’t having luck controlling em dash use outside of the “custom instructions” feature. Upon being told mid-conversation not to use em dashes, ChatGPT updated a saved memory and replied to one X user, “Got it—I’ll stick strictly to short hyphens from now on.”


Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.



World’s oldest RNA extracted from Ice Age woolly mammoth

A young woolly mammoth now known as Yuka was frozen in the Siberian permafrost for about 40,000 years before it was discovered by local tusk hunters in 2010. The hunters soon handed it over to scientists, who were excited to see its exquisite level of preservation, with skin, muscle tissue, and even reddish hair intact. Later research showed that, while full cloning was impossible, Yuka’s DNA was in such good condition that some cell nuclei could even begin limited activity when placed inside mouse eggs.

Now, a team has successfully sequenced Yuka’s RNA—a feat many researchers once thought impossible. Researchers at Stockholm University carefully ground up bits of muscle and other tissue from Yuka and nine other woolly mammoths, then used special chemical treatments to pull out any remaining RNA fragments, which are normally thought to be much too fragile to survive even a few hours after an organism has died. Scientists go to great lengths to extract RNA even from fresh samples, and most previous attempts with very old specimens have either failed or been contaminated.

A different view

The team used RNA-handling methods adapted for ancient, fragmented molecules. Their scientific séance allowed them to explore information that had never been accessible before, including which genes were active when Yuka died. In the creature’s final panicked moments, its muscles were tensing and its cells were signaling distress—perhaps unsurprising since Yuka is thought to have died as a result of a cave lion attack.

It’s an exquisite level of detail, and one that scientists can’t get from just analyzing DNA. “With RNA, you can access the actual biology of the cell or tissue happening in real time within the last moments of life of the organism,” said Emilio Mármol, a researcher who led the study. “In simple terms, studying DNA alone can give you lots of information about the whole evolutionary history and ancestry of the organism under study. Obtaining this fragile and mostly forgotten layer of the cell biology in old tissues/specimens, you can get for the first time a full picture of the whole pipeline of life (from DNA to proteins, with RNA as an intermediate messenger).”



What if the aliens come and we just can’t communicate?


Ars chats with particle physicist Daniel Whiteson about his new book Do Aliens Speak Physics?

Science fiction has long speculated about the possibility of first contact with an alien species from a distant world and how we might be able to communicate with them. But what if we simply don’t have enough common ground for that to even be possible? An alien species is bound to be biologically very different, and their language will be shaped by their home environment, broader culture, and even how they perceive the universe. They might not even share the same math and physics. These and other fascinating questions are the focus of an entertaining new book, Do Aliens Speak Physics? And Other Questions About Science and the Nature of Reality.

Co-author Daniel Whiteson is a particle physicist at the University of California, Irvine, who has worked on the ATLAS collaboration at CERN’s Large Hadron Collider. He’s also a gifted science communicator who previously co-authored two books with cartoonist Jorge Cham of PhD Comics fame: 2018’s We Have No Idea and 2021’s Frequently Asked Questions About the Universe. (The pair also co-hosted a podcast from 2018 to 2024, Daniel and Jorge Explain the Universe.) This time around, cartoonist Andy Warner provided the illustrations, and Whiteson and Warner charmingly dedicate their book to “all the alien scientists we have yet to meet.”

Whiteson has long been interested in the philosophy of physics. “I’m not the kind of physicist who’s like, ‘whatever, let’s just measure stuff,’” he told Ars. “The thing that always excited me about physics was this implicit promise that we were doing something universal, that we were learning things that were true on other planets. But the more I learned, the more concerned I became that this might have been oversold. None are fundamental, and we don’t understand why anything emerges. Can we separate the human lens from the thing we’re looking at? We don’t know in the end how much that lens is distorting what we see or defining what we’re looking at. So that was the fundamental question I always wanted to explore.”

Whiteson initially pitched his book idea to his 14-year-old son, who inherited his father’s interest in science. But his son thought Whiteson’s planned discussion of how physics might not be universal was, well, boring. “When you ask for notes, you’ve got to listen to them,” said Whiteson. “So I came back and said, ‘Well, what about a book about when aliens arrive? Will their science be similar to ours? Can we collaborate together?’” That pitch won the teen’s enthusiastic approval: same ideas, but couched in the context of alien contact to make them more concrete.

As for Warner’s involvement as illustrator, “I cold-emailed him and said, ‘Want to write a book about aliens? You get to draw lots of weird aliens,’” said Whiteson. Who could resist that offer?

Ars caught up with Whiteson to learn more.

cartoon exploring possible outcomes of first alien contact

Credit: Andy Warner

Ars Technica: You open each chapter with fictional hypothetical scenarios. Why?

Daniel Whiteson: I sent an early version of the book to my friend, [biologist] Matt Giorgianni, who appears in cartoon form in the book. He said, “The book is great, but it’s abstract. Why don’t you write out a hypothetical concrete scenario for each chapter to show us what it would be like and how we would stumble.”

All great ideas seem obvious once you hear them. I’ve always been a huge science fiction fan. It’s thoughtful and creative and exploratory about the way the universe could be and might be. So I jumped at the opportunity to write a little bit of science fiction. Each one was challenging because you have to come up with a specific example that illustrates the concepts in that chapter, the issues that you might run into, but also be believable and interesting. We went through a lot of drafts. But I had a lot of fun writing them.

Ars Technica: The Voyager Golden Record is perhaps the best known example of humans attempting to communicate with an alien species, spearheaded by the late Carl Sagan, among others. But what are the odds that, despite our best efforts, any aliens will ever be able to decipher our “message in a bottle”?

Daniel Whiteson: I did an informal experiment where I printed out a picture of the Pioneer plaque and showed it to a bunch of grad students who were young enough to not have seen it before. This is Sagan’s audience: biological humans, same brain, same culture, physics grad students—none of them had any idea what any of that was supposed to refer to. NASA gave him two weeks to come up with that design. I don’t know that I would’ve done any better. It’s easy to criticize.

Those folks, they were doing their best. They were trying to step away from our culture. They didn’t use English, they didn’t even use mathematical symbols. They understood that those things are arbitrary, and they were reaching for something they hoped was going to be universal. But in the end, nothing can be universal because language is always symbols, and the choice of those symbols is arbitrary and cultural. It’s impossible to choose a symbol that can only be interpreted in one way.

Fundamentally, the book is trying to poke at our assumptions. It’s so inspiring to me that the history of physics is littered with times when we have had to abandon an assumption that we clung to desperately, until we were shown otherwise with enough data. So we’ve got to be really open-minded about whether these assumptions hold true, whether it’s required to do science, to be technological, or whether there is even a single explanation for reality. We could very well be surprised by what we discover.

Ars Technica: It’s often assumed that math and physics are the closest thing we have to a universal language. You challenge that assumption, probing such questions as “What does it even mean to ‘count’?”

Daniel Whiteson: At an initial glance, you’re like, well, of course mathematics is required, and of course numbers are universal. But then you dig into it and you start to realize there are fuzzy issues here. So many of the assumptions that underlie our interpretation of what we learned about physics are that way. I had this experience that’s probably very common among physics undergrads in quantum mechanics, learning about those calculations where you see nine decimal places in the theory and nine decimal places in the experiment, and you go, “Whoa, this isn’t just some calculational tool. This is how the universe decides what happens to a particle.”

I literally had that moment. I’m not a religious person, but it was almost spiritual. For many years, I believed that deeply, and I thought it was obvious. But to research this book, I read Science Without Numbers by Hartry Field. I was lucky—here at Irvine, we happen to have an amazing logic and philosophy of science department, and those folks really helped me digest [his ideas] and understand how you can pull yourself away from things like having a number line. It turns out you don’t need numbers; you can just think about relationships. It was really eye-opening, both how essential mathematics seems to human science and how obvious it is that we’re making a bunch of assumptions that we don’t know how to justify.

There are dotted lines humans have drawn because they make sense to us. You don’t have to go to plasma swirls and atmospheres to imagine a scenario where aliens might not have the same differentiation between their identities, me and you, here’s where I end, and here’s where you begin. That’s complicated for aliens that are plasma swirls, but also it’s not even very well-defined for us. How do I define the edge of my body? Is it where my skin ends? What about the dead skin, the hair, my personal space? There’s no obvious definition. We just have a cultural sense for “this is me, this is not me.” In the end, that’s a philosophical choice.

Cartoon of an alien emerging from a spaceship

Credit: Andy Warner

Ars Technica: You raise another interesting question in the book: Would aliens even need physics theory or a deeper understanding of how the universe works? Perhaps they could invent, say, warp drive through trial and error. You suggest that our theory is more like a narrative framework. It’s the story we tell, and that is very much prone to bias.

Daniel Whiteson: Absolutely. And not just bias, but emotion and curiosity. We put energy into certain things because we think they’re important. Physicists spend our lives on this because we think, among the many things we could spend our time on, this is an important question. That’s an emotional choice. That’s a personal subjective thing.

Other people find my work dead boring and would hate to have my life, and I would feel the same way about theirs. And that’s awesome. I’m glad that not everybody wants to be a particle physicist or a biologist or an economist. We have a diversity of curiosity, which we all benefit from. People have an intuitive feel for certain things. There’s something in their minds that naturally understands how the system works and reflects it, and they probably can’t explain it. They might not be able to design a better car, but they can drive the heck out of it.

This is maybe the biggest stumbling block for people who are just starting to think about this for the first time. “Obviously aliens are scientific.” Well, how do we know? What do we mean by scientific? That concept has evolved over time, and is it really required? I felt that that was a big-picture thing that people could wrap their minds around but also a shock to the system. They were already a little bit off-kilter and might realize that some things they assumed must be true might not have to be.

Ars Technica: You cite the 2016 film Arrival as an example of first contact and the challenge of figuring out how to communicate with an alien species. They had to learn each other’s cultural context before they had any chance of figuring out what their respective symbols meant. 

Daniel Whiteson:  I think that is really crucial. Again, how you choose to represent your ideas is, in a sense, arbitrary. So if you’re on the other side of that, and you have to go from symbols to ideas, you have to know something about how they made those choices in order to reverse-engineer a message, in order to figure out what it means. How do you know if you’ve done it correctly? Say we get a message from aliens and we spend years of supercomputer time cranking on it. Something we rely on for decoding human messages is that you can tell when you’ve done it right because you have something that makes sense.


Credit: Andy Warner

How do we know when we’ve done that for an alien text? How do you know the difference between nonsense and things that don’t yet make sense to you? It’s essentially impossible. I spoke to some philosophers of language who convinced me that if we get a message from aliens, it might be literally impossible to decode it without their help, without their context. There aren’t enough examples of messages from aliens. We have the “Wow!” signal—who knows what that means? Maybe it’s a great example of getting a message and having no idea how to decode it.

As in many places in the book, I turned to human history because it’s the one example we have. I was shocked to discover how many human languages we haven’t decoded. Not to mention, we haven’t decoded whale. We know whales are talking to each other. Maybe they’re talking to us and we can’t decode it. The lesson, again and again, is culture.

With the Egyptians, the only reason we were able to decode hieroglyphics is that we had the Rosetta Stone, but even that took 20 years. How does it take 20 years when you have examples and you know what it’s supposed to say? And the answer is that we made cultural assumptions. We assumed pictograms reflected the ideas in the pictures. If there’s a bird in it, it’s about birds. And we were just wrong. That’s maybe why we’ve been struggling with Etruscan and other languages we’ve never decoded.

Even when we do have a lot of culture in common, it’s very, very tricky. So the idea that we would get a message from space and have no cultural clues at all and somehow be able to decode it—I think that only works in the scenario where aliens have been listening to us for a long time and they’re essentially writing to us in something like our language. I’m a big fan of SETI. They host these conferences where they listen to philosophers and linguists and anthropologists. I don’t mean to say that they’ve thought too narrowly, but I think it’s not widely enough appreciated how difficult it is to decode another language with no culture in common.

Ars Technica: You devote a chapter to the possibility that aliens might be able to, say, “taste” electrons. That drives home your point that physiologically, they will probably be different, biologically they will be different, and their experiences are likely to be different. So their notion of culture will also be different.

Daniel Whiteson: Something I really wanted to get across was that perception determines your sense of intuition, which defines in many ways the answers you find acceptable. I read Ed Yong’s amazing book, An Immense World, about animal perception, the diversity of animal experience or animal interaction with the environment. What is it like to be an octopus with a distributed mind? What is it like to be a bird that senses magnetic fields? If you’re an alien and you have very different perceptions, you could also have a different intuitive language in which to understand the universe. What is a photon? Is it a particle? Is it a wave?  Maybe we just struggle with the concept because our intuitive language is limited.

For an alien that can see photons in superposition, this is boring. They could be bored by things we’re fascinated by, or we could explain our theories to them and they could be unsatisfied because to them, it doesn’t translate into their intuitive language. So even though we’ve transcended our biological limitations in many ways, we can detect gravitational waves and infrared photons, we’re still, I think, shaped by those original biological senses that frame our experience, the models we build. What it’s like to be a human in the world could be very, very different from what it’s like to be an alien in the world.

Ars Technica:  Your very last line reads, “Our theories may reveal the patterns of our thoughts as much as the patterns of nature.” Why did you choose to end your book on that note?

Daniel Whiteson: That’s a message to myself. When I started this journey, I thought physics is universal. That’s what makes it beautiful. That implies that if the aliens show up and they do physics in a very different way, it would be a crushing disappointment. Not only is what we’ve learned just another human science, but also it means maybe we can’t benefit from their work or we can’t have that fantasy of a galactic scientific conference. But that actually might be the best-case scenario.

I mean, the reason that you want to meet aliens is the reason you go traveling. You want to expand your horizons and learn new things. Imagine how boring it would be if you traveled around the world to some new country and they just had all the same food. It might be comforting in some way, but also boring. It’s so much more interesting to find out that they have spicy fish soup for breakfast. That’s the moment when you learn not just about what to have for breakfast but about yourself and your boundaries.

So if aliens show up and they’re so weirdly different in a way we can’t possibly imagine—that would be deeply revealing. It’ll give us a chance to separate the reality from the human lens. It’s not just questions of the universe. We are interesting, and we should learn about our own biases. I anticipate that when the aliens do come, their culture and their minds and their science will be so alien, it’ll be a real challenge to make any kind of connection. But if we do, I think we’ll learn a lot, not just about the universe but about ourselves.


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.


Civil war is brewing in the wasteland in Fallout S2 trailer

Purnell, Goggins, MacLachlan, and Moten all return for S2, along with Moises Arias as Lucy’s younger brother, Norm, and Frances Turner as Barb Howard, Cooper’s wife and a high-ranking Vault-Tec executive. Justin Theroux joins the S2 cast as Mr. Robert House, founder and CEO of RobCo Industries, as well as Macaulay Culkin (possibly playing Caesar) and Kumail Nanjiani, both of whom appear briefly in the new trailer.

The Ghoul (Walton Goggins) has been searching for his family for 200 years. YouTube/Prime Video

The trailer opens with Maximus chatting with another denizen of the wasteland, who insists that while he’s seen lots of crazy and unnatural things in his struggle to survive, he’s never seen good people. Maximus has met one good person: Lucy. But his acquaintance isn’t having it. “I would be a good person too if I grew up in some cozy impenetrable home,” he says. “Wouldn’t have to steal and stab and fib all the time just to get by.”

Lucy, of course, has been challenged to hang onto her fundamental decency while navigating the brutal surface world, hardening just enough to do what’s necessary to survive. She’s now looking for her father with a new motive: to bring him to justice, “so people know that how they conduct themselves matters, and they don’t give up hope.” (We catch a few glimpses of Hank, most notably experimenting on a mouse in the lab, with disastrous results.) The Ghoul, for his part, is looking for his family; it’s the only reason he’s hung around for 200 years. Meanwhile, civil war is brewing, and you just know our main cast will all get caught up in the conflict.

The second season of Fallout premieres on December 17, 2025, on Prime Video.


The Pope Offers Wisdom

The Pope is a remarkably wise and helpful man. He offered us some wisdom.

Yes, he is generally playing on easy mode by saying straightforwardly true things, but that’s meeting the world where it is. You have to start somewhere.

Some rejected his teachings.

Two thousand years after Jesus famously got nailed to a cross for suggesting we all be nice to each other for a change, Pope Leo XIV issues a similarly wise suggestion.

Pope Leo XIV: Technological innovation can be a form of participation in the divine act of creation. It carries an ethical and spiritual weight, for every design choice expresses a vision of humanity. The Church therefore calls all builders of #AI to cultivate moral discernment as a fundamental part of their work—to develop systems that reflect justice, solidarity, and a genuine reverence for life.

The world needs honest and courageous entrepreneurs and communicators who care for the common good. We sometimes hear the saying: “Business is business!” In reality, it is not so. No one is absorbed by an organization to the point of becoming a mere cog or a simple function. Nor can there be true humanism without a critical sense, without the courage to ask questions: “Where are we going? For whom and for what are we working? How are we making the world a better place?”

Pope Leo XIV (offered later, not part of the discussion in the rest of the post): Technological development has brought, and continues to bring, significant benefits to humanity. In order to ensure true progress, it is imperative that human dignity and the common good remain resolute priorities for all, both individuals and public entities. We must never forget the “ontological dignity that belongs to the person as such simply because he or she exists and is willed, created, and loved by God” (Dignitas Infinita, 7).

Roon: Pope Leo I think you would enjoy my account.

Liv Boeree: Well said. Also, it’s telling to observe which technologists (there’s one in particular) who just openly mocked you in the quote tweets for saying this.

Grimes: This is so incredibly lit. This is one of the most important things ever said.


Rohit: Quite cool that amongst the world leaders it’s the Pope that has the best take on dealing with AI.

The one in particular who had an issue with the Pope’s statement is exactly the one you would guess: Marc Andreessen.

For those who aren’t aware of the image: my understanding of the image below, its context, and its intended meaning is that it is a picture of GQ features director Katherine Stoeffel, selected to be maximally unflattering, taken from her interview of Sydney Sweeney, to contrast with Sweeney looking in command and hot.

During the otherwise very friendly interview, Stoeffel asked Sweeney a few times, in earnest fashion, about the concerns and responses around her ad for jeans, presumably demonstrating to such ilk that she is woke. Sweeney notes that she disregards internet talk, doesn’t check for it, and doesn’t let it get to her, and otherwise dodged artfully at one point in particular, a really excellent response all around, thus proving to such ilk that she is based.

Also potentially important context I haven’t seen mentioned before: Katherine Stoeffel reads to me in this interview as having autistic mannerisms. Once you see the meme in this light you simply cannot unsee it or think it’s not central to what’s happening here, and it makes the whole thing another level of not okay and I’m betting other people saw this too and are similarly pissed off about it.

Not cool. Like, seriously, don’t use this half of the meme.

It’s not directly related, but if you do watch the interview, notice how much Sydney actually pauses to think about her answers, watch her eye movements and willingness to allow silence, and also take her advice and leave your phone at home. This also means that in the questions where Sydney doesn’t pause to think, it hits different. She’s being strategic but also credibly communicating. The answer everyone remembers, in context, is thus even more clearly something she had prepped and gamed out.

Here for reference is the other half of the meme, from the interview:

Daniel Eth: Andreessen is so dogmatically against working on decreasing risks from AI that he’s now mocking the pope for saying tech innovation “carries an ethical and spiritual weight” and that AI builders should “cultivate moral discernment as a fundamental part of their work”

Daniel Eth (rest of block quote is him):

The pope: “you should probably be a good person”

Marc Andreessen (as imagined by Daniel Eth): “this is an attack on me and everything I stand for”

Yes, I do think ‘you should probably be a good person’ in the context of a Pope is indeed against everything that Marc Andreessen stands for, straight up.

Is that an ‘uncharitable’ take? Yes, but I am confident it is an accurate one, and no this type of systematic mockery, performative cruelty, vice signaling and attempted cyberbullying does not come out of some sort of legitimate general welfare concern.

Marc Andreessen centrally has a strong position on cultivating moral discernment.

Which is the same as his position on not mocking those who support morality or moral discernment.

Which is that he is, in the strongest possible terms, against these things.

Near: we’re in the performative cruelty phase of silicon valley now

if we get an admin change in 2028 and tech is regulated to a halt b/c the american people hate us well this would not surprise me at all tbh. hence the superpacs i suppose.

pmarca deleted all the tweets now, the pope reigns victorious once again.

Roon: this is actually the end stage of what @nearcyan is describing, an age dominated by vice signaling, a nihilistic rw voice that says well someone gave me an annoying moral lecture in the past, so actually virtue is obsolete now.

bring back virtue signaling. the alternative is worse. at least pretend you are saving the world

near: first time ive ever seen him delete an entire page of tweets.

Daniel Eth: Tbf Andreessen is a lot more cruel than the typical person in SV

Scott Alexander: List of Catholic miracles that make me question my atheism:

  1. Sun miracle of Fatima

  2. Joan of Arc

  3. Pope making Marc Andreessen shut up for one day.

Yosarian: It’s also amazing that Marc got into a fight with the Pope only a few weeks after you proved he’s the Antichrist

Scott Alexander: You have seen and believed; blessed are those who believed without seeing.

It turns out that yes, you can go too far around these parts of Twitter.

TBPN (from a 2 min clip discussing mundane AI dangers): John: I don’t think you should just throw “decel” at someone who’s identifying a negative externality of a new technology. That’s not necessarily decelerationist.

Dean Ball: It’s heartening to see such eminently reasonable takes making their way into prominent, pro-tech new media.

There is something very “old man” about the terms of AI policy debates, and perhaps this is because it has been largely old people who have set them.

When I hear certain arguments about how we need to “let the innovators innovate” and thus cannot bear the thought of resolving negative externalities from new technology, it just sounds retro to me. Like an old man dimly remembering things he believes Milton Friedman once said.

Whereas anyone with enough neuroplasticity remaining to internalize a concept as alien as “AGI” can easily see that such simple frames do not work.

It is as heartening to see pro-tech new media say this out loud as it is disheartening to realize that it needed to be said out loud.

A reasonable number of people finally turned on Marc for this.

Spor: I kinda can’t believe all it took was a bad meme swipe against the Pope for legit all of tech Twitter to turn against Marc Andreessen

There’s a lesson in there somewhere

To be fair, most of it came from his response, at least I think

If he doesn’t go around unfollowing and blocking everybody and deleting tweets, it’s a nothingburger

Daniel Eth: I think the thing is everyone already hated Andreessen, and it’s just that before a critical mass of dissent appeared people in tech were scared of criticizing him. And this event just happened to break that threshold

Feels like the dam has broken on people in the tech community airing grievances with Andreessen. Honestly makes me feel better about the direction of the tech community writ large

Matthew Griisser: You come at the King, you best not miss.

Jeremiah Johnson considers Marc Andreessen as an avatar of societal decay as he goes over the events discussed in this section.

Jeremiah Johnson: Now, I don’t want to say that Marc Andreessen is single-handedly causing the collapse of society. That would be a slight exaggeration. But I do believe this one incident shows that Andreessen is at the nexus of all the ways society is decaying. He’s not responsible. He’s just representative. He’s the mascot of decay, the guy who’s always there when the negative trends accelerate, cheering them on and pushing things downhill faster.

I mean, yeah, fair. That’s distinct from his ‘get everyone killed by ASI’ crusades, but the same person doing both is not a coincidence.

And no, he doesn’t make up for this with the quality of his technical takes; as Roon says, he has been consistently wrong about generative AI.

As a reminder, here is Dwarkesh Patel’s excellent refutation of Marc Andreessen’s ‘Why AI Will Save the World,’ its name-calling, and its many nonsensical arguments. Liron Shapira has a less measured thread that also points out many rather terrible arguments that were made and then doubled down upon and repeated.

Daniel Eth: This whole Andreessen thing is a good reminder that you shouldn’t confuse vice with competence. Just because the guy is rude & subversive does not mean that he has intelligent things to say.

Marc’s position really has reliably been, as explicitly stated as the central thesis of The Techno-Optimist Manifesto, that technological advancement is always universally good and anyone who points to any downside is explicitly his ‘enemy.’

There were a few defenders, and their defenses… did not make things better.

Dan Elton: I’m a bit dismayed to see @pmarca bowing to anti free speech political correctness culture and deleting a bunch of funny light hearted tweets. The pope is so overrated. If we can’t make a joke about the pope, is there anything left we are allowed to laugh at?

That is very much not what happened here, sir. Nor was it about joking about the pope. This was in no way about ‘free speech,’ or about lighthearted joking, it was about being performatively mean and hostile to the very idea of responsibility, earnestness or goodness, even in the theoretical. And it was about the systematic, intentional conflation of hostility with such supposed joking.

(Perhaps the deleted AI video of the pope was funny? I don’t know, didn’t see it.)

Marc nuked several full days of posts in response to all this, which seems wise. It’s a good way to not have to make a statement about exactly what you’re deleting, plays it safe, resets the board.

Mike Solana: my sense is the number of people who 1) furiously defended the pope last night and then 2) went to mass this morning is probably close to zero. status games.

Joe Weisenthal: Got em.

Very obviously: We’re supporting the Pope here because he took the trust and power and yes holiness vested in him and spoke earnestly, wisely and with moral clarity, saying obviously true things, and we agree with him, and then he was mocked for it exactly because of all these things. Not because we’re Catholics.

There’s this mindset that you can’t possibly be actually standing up for ideas or principles or simply goodness. It must be some status game, or some political fight, or something. No, it isn’t.

Marc also went on an unfollow binge, including Bayes and Grimes, which is almost entirely symbolic since he (still) follows 27,700 people.

I feel good about highlighting his holiness the Pope’s wise message.

I feel bad about spending the bulk of an entire post pointing out that someone has consistently been acting as a straight up horrible person. Ideally one would never do this. But this is pretty important context within the politics of the AI space and also for anyone evaluating venture capital funding from various angles.

Remember: when people tell you who they are, believe them.

So I wanted to ensure there was a reference for these events we can link back to, and also to remember that yes if you go sufficiently too far people eventually notice.



Audi goes full minimalism for its first-ever Formula 1 livery


Audi says it wants to be an F1 title contender by 2030.

The actual bodywork of Audi’s R26 won’t look identical to this show car, but the livery should, with the addition of the sponsors, obviously. Credit: Jonathan Gitlin

MUNICH, Germany—Audi’s long-awaited Formula 1 team gave the world its first glimpse of how the Audi R26 will look when it takes to the track next year. Well, sort of—the car you see here is a generic show car for the 2026 aero regulations, but the livery, plus the sponsors’ logos, will race next year.

“By entering the pinnacle of motorsport, Audi is making a clear, ambitious statement. It is the next chapter in the company’s renewal. Formula 1 will be a catalyst for the change towards a leaner, faster, and more innovative Audi,” said Gernot Döllner, Audi’s CEO. “We are not entering Formula 1 just to be there. We want to win. At the same time, we know that you don’t become a top team in Formula 1 overnight. It takes time, perseverance, and tireless questioning of the status quo. By 2030, we want to fight for the World Championship title,” Döllner said.

After the complicated liveries of cars like the R18 or Audi’s Formula E program, the R26 is refreshingly simple. Jonathan Gitlin

I’ll admit, when I first saw the images Audi sent ahead of time, I was a little underwhelmed, but in person, as you approach it from different angles, it makes a lot more sense. The design is more than a little minimalist, juxtaposing straight-edged geometric blocks of color with the aerodynamically curved bodywork they adorn. The titanium color references Audi’s latest concept car, and the red—which is almost fluorescent in person—is an all-new shade called Audi Red. It’s used to highlight the car’s various air intakes and looks quite effective.

Why F1?

After a long and glorious history in sportscar racing, and in rallying before that, Audi’s motorsports activities virtually evaporated in the wake of dieselgate, save for a brief Formula E program. Then in early 2022, Volkswagen Group revealed that after decades of “will they, won’t they” speculation, not one but two of its brands—Audi and Porsche—would be entering F1 in 2026. (The Porsche deal with Red Bull would later fall apart.)

The sport was already riding its post-COVID popularity surge, and a new technical ruleset for 2026 was written in large part to attract automakers like Audi, dropping one of the current hybrid energy recovery systems (the complex turbo-based MGU-H) in favor of a much more powerful electric motor (the MGU-K) and switching to synthetic fuels that must produce at least 65 percent lower carbon emissions than fossil fuels.

In August 2022, Audi confirmed a powertrain program that would be developed at its motorsports competence center in Neuburg, Germany. It also announced it was buying three-quarters of the Swiss-based Sauber team. The following year, then-Audi CTO and head of the F1 program Oliver Hoffman explained to Ars why the company was going to F1 now.

Audi's 2026 F1 livery on a show car, seen from the front 3/4s

As you see the car from other angles, you see how the bodywork interacts with the geometric shapes. Credit: Jonathan Gitlin

“Nearly 50 percent of the power will come out of the electric drive train. Especially for battery technology, thermal management, and also efficiency of the power electronics, there’s a clear focus. And together, the fit between Formula 1 and our RS technologies is [the] perfect fit for us,” Hoffman said at the time.

A team in trouble?

On track, things did not look great. The Sauber team had finished 2023 in ninth place, and 2024 was looking worse. (It eventually finished dead last with a miserable four points at the end of the year.) Rumors that Audi wanted out of the program had to be quashed. Then in March 2024, Audi decided to buy all of Sauber, adding Andreas Seidl, formerly of McLaren and before that Porsche, to run the team, now called Audi Formula Racing.

But within a year, Hoffman was gone, together with Andreas Seidl. Replacing Hoffman as head of the Audi F1 project: Mattia Binotto, who saw Ferrari get close to but not quite land an F1 championship. Then in August, Jonathan Wheatley joined the team from Red Bull as the new team principal.

a closer look at Audi's 2026 F1 livery on a show car

The way the air intakes are highlighted is particularly effective. Credit: Jonathan Gitlin

Binotto and Wheatley’s arrival seems to have unlocked something. At Silverstone, Nico Hülkenberg finished third, the first podium for the team since 2012 and the first of the driver’s long career, one that was long, long overdue. Hülkenberg now lies ninth in points. His rookie teammate Gabriel Bortoleto may have just had a disappointing home race, but last year’s F2 champ has shown plenty of speed in the Sauber this year. The team will surely want to carry that momentum into 2026.

“The goal is clear: to fight for championships by 2030. That journey takes time, the right people, and a mindset of continuous improvement,” Binotto said. “Formula 1 is one of the most competitive environments. Becoming a champion is a journey of progress. Mistakes will happen, but learning from them is what drives transformation. That’s why we follow a three-phased approach: starting as a challenger with the ambition to grow, evolving into a competitor by daring the status quo and achieving first successes, and ultimately becoming a champion.”

What’s under the bodywork?

Technical details on the R26 remain scarce, for the same reason we haven’t seen the actual race car yet. Earlier today we visited Audi’s Neuburg facility, which is much expanded from the sportscar days—while those were also hybrid race cars, the state of the art has moved on in the near-decade since Audi left that sport, and F1 power units (the internal combustion engine, hybrid system, battery, but not the transmission) operate at a much higher level.

In addition to the workshops and cleanrooms where engineers and technicians develop and build the various components that go into the power units, there are extensive test and analysis facilities, able to examine parts both destructively (for example, cutting sections for a scanning electron microscope) and nondestructively (like X-raying or CT scanning).

Development and assembly of the Energy Recovery System (ERS) at the Neuburg site

We weren’t allowed to take photos inside the Neuburg facility, or even go into many of the rooms, but the ones we did see are all immaculate. Credit: Audi

Unlike in the LMP1 days of multiple Le Mans wins, actual on-track testing in F1 is highly restricted, limited now to just nine days over three tests before the start of the season. “When I think of my past LMP1 racing, we tested 50 days, 60 days, and if we had a problem, we just added the days. We went to Sebring or whatever,” said Stefan Dreyer, Audi Formula Racing’s CTO. “And this is also a huge challenge… moving into the future of Formula 1.”

For the first three years of the project, there was one goal: design and build a power unit for the 2026 regulations. The complete powertrain (the power unit plus the transmission) first ran a race simulation in 2024. In addition to Neuburg and Hinwil, Switzerland, where Sauber is based, there’s a new location in Bicester, England, part of that country’s “motorsports valley” and home to an awful lot of F1 suppliers and talent.

Now, the group in Neuburg has multiple overlapping jobs: assemble power units and supply them to the race team in Hinwil that is building the chassis, as well as begin development on the 2027 and 2028 iterations of the power units. It’s telling that all over the facility, video screens count down the remaining three months and however many days, hours, minutes, and seconds are left until the cars take to the track in anger in Australia.

Photo of Jonathan M. Gitlin

Jonathan is the Automotive Editor at Ars Technica. He has a BSc and PhD in Pharmacology. In 2014 he decided to indulge his lifelong passion for the car by leaving the National Human Genome Research Institute and launching Ars Technica’s automotive coverage. He lives in Washington, DC.

Audi goes full minimalism for its first-ever Formula 1 livery


Quantum computing tech keeps edging forward


More superposition, less supposition

IBM follows through on its June promises, plus more trapped ion news.

IBM has moved to large-scale manufacturing of its Quantum Loon chips. Credit: IBM

The end of the year is usually a busy time in the quantum computing arena, as companies often try to announce that they’ve reached major milestones before the year wraps up. This year has been no exception. And while not all of these announcements involve interesting new architectures like the one we looked at recently, they’re a good way to mark progress in the field, and they often involve the sort of smaller, incremental steps needed to push the field forward.

What follows is a quick look at a handful of announcements from the past few weeks that struck us as potentially interesting.

IBM follows through

IBM is one of the companies announcing a brand-new architecture this year. That’s not at all a surprise, given that the company promised to do so back in June; this week sees the company confirming that it has built the two processors it said it would earlier in the year. These include one called Loon, which is focused on the architecture that IBM will use to host error-corrected logical qubits. Loon represents two major changes for the company: a shift to nearest-neighbor connections and the addition of long-distance connections.

IBM had previously used what it termed the “heavy hex” architecture, in which alternating qubits were connected to either two or three of their neighbors, forming a set of overlapping hexagonal structures. In Loon, the company is using a square grid, with each qubit having connections to its four closest neighbors. This higher density of connections can enable more efficient use of the qubits during computations. But qubits in Loon have additional long-distance connections to other parts of the chip, which will be needed for the specific type of error correction that IBM has committed to. It’s there to allow users to test out a critical future feature.
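To illustrate the connectivity change, here is a toy sketch that counts nearest-neighbor couplings on a small square lattice. The grid size and the degree-counting are invented purely for illustration; this is not IBM's actual chip layout, just the geometry the square-grid approach implies.

```python
# Toy comparison of qubit connectivity: the older "heavy hex" lattice gave
# each qubit two or three couplers, while a square grid gives interior
# qubits four nearest neighbors. The 5x5 size here is illustrative only.

def square_grid_neighbors(rows, cols):
    """Map each qubit (r, c) to its nearest neighbors on a square grid."""
    neighbors = {}
    for r in range(rows):
        for c in range(cols):
            adj = [(r + dr, c + dc)
                   for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                   if 0 <= r + dr < rows and 0 <= c + dc < cols]
            neighbors[(r, c)] = adj
    return neighbors

grid = square_grid_neighbors(5, 5)
avg_degree = sum(len(v) for v in grid.values()) / len(grid)
print(f"average couplers per qubit on a 5x5 square grid: {avg_degree:.2f}")
# Interior qubits have 4 neighbors; edges and corners fewer, so the 5x5
# average works out to 3.2, versus 2-3 on the heavy-hex lattice.
```

On real hardware, long-distance connections like Loon's would appear as extra entries in this adjacency map linking qubits that are not lattice neighbors.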

The second processor, Nighthawk, is focused on the now. It also has the nearest-neighbor connections and a square grid structure, but it lacks the long-distance connections. Instead, the focus with Nighthawk is to get error rates down so that researchers can start testing algorithms for quantum advantage—computations where quantum computers have a clear edge over classical algorithms.

In addition, the company is launching a GitHub repository that will allow the community to deposit code and performance data for both classical and quantum algorithms, enabling rigorous evaluations of relative performance. Right now, those are broken down into three categories of algorithms that IBM expects are most likely to demonstrate a verifiable quantum advantage.

This isn’t the only follow-up to IBM’s June announcement, which also saw the company describe the algorithm it would use to identify errors in its logical qubits and the corrections needed to fix them. In late October, the company said it had confirmed that the algorithm could work in real time when run on an FPGA made in collaboration with AMD.

Record lows

A few years back, we reported on a company called Oxford Ionics, which had just announced that it achieved a record-low error rate in some qubit operations using trapped ions. Most trapped-ion quantum computers move qubits by manipulating electromagnetic fields, but they perform computational operations using lasers. Oxford Ionics figured out how to perform operations using electromagnetic fields, meaning more of their processing benefited from our ability to precisely manufacture circuitry (lasers were still needed for tasks like producing a readout of the qubits). And as we noted, it could perform these computational operations extremely effectively.

But Oxford Ionics never made a major announcement that would give us a good excuse to describe its technology in more detail. The company was ultimately acquired by IonQ, a competitor in the trapped-ion space.

Now, IonQ is building on what it gained from Oxford Ionics, announcing a new record for two-qubit gate fidelity: greater than 99.99 percent, which corresponds to an error rate below 0.01 percent. That could be critical for the company, as a low error rate for hardware qubits means fewer are needed to get good performance from error-corrected qubits.

But the details of the two-qubit gates are perhaps more interesting than the error rate. Two-qubit gates involve bringing both qubits involved into close proximity, which often requires moving them. That motion pumps a bit of energy into the system, raising the ions’ temperature and leaving them slightly more prone to errors. As a result, any movement of the ions is generally followed by cooling, in which lasers are used to bleed energy back out of the qubits.

This process, which involves two distinct cooling steps, is slow. So slow that as much as two-thirds of the time spent in operations involves the hardware waiting around while recently moved ions are cooled back down. The new IonQ announcement includes a description of a method for performing two-qubit gates that doesn’t require the ions to be fully cooled. This allows one of the two cooling steps to be skipped entirely. In fact, coupled with earlier work involving one-qubit gates, it raises the possibility that the entire machine could operate with its ions at a still very cold but slightly elevated temperature, avoiding all need for one of the two cooling steps.

That would shorten operation times and let researchers do more before the limit of a quantum system’s coherence is reached.
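As a back-of-the-envelope check on those savings, the sketch below uses the two-thirds figure quoted above; the assumption that the two cooling steps take equal time is mine, purely for illustration.

```python
# Rough estimate of time saved by skipping one of the two cooling steps.
# The two-thirds figure comes from the reporting; the even split between
# the two cooling steps is an assumption made only for illustration.

total_time = 1.0                 # normalized operation time
cooling_fraction = 2 / 3         # "as much as two-thirds" spent cooling
per_step = cooling_fraction / 2  # assume the two steps take equal time

new_total = total_time - per_step
speedup = total_time / new_total
print(f"time with one cooling step skipped: {new_total:.3f} (x{speedup:.2f} faster)")
```

Under these assumptions, dropping one cooling step cuts the normalized run time from 1.0 to about 0.67, a roughly 1.5x speedup, before counting any further gains from running the whole machine slightly warmer.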

State of the art?

The last announcement comes from another trapped-ion company, Quantum Art. A couple of weeks back, it announced a collaboration with Nvidia that resulted in a more efficient compiler for operations on its hardware. On its own, this isn’t especially interesting. But it’s emblematic of a trend that’s worth noting, and it gives us an excuse to look at Quantum Art’s technology, which takes a distinct approach to boosting the efficiency of trapped-ion computation.

First, the trend: Nvidia’s interest in quantum computing. The company isn’t interested in the quantum aspects (at least not publicly); instead, it sees an opportunity to get further entrenched in high-performance computing. There are three areas where the computational capacity of GPUs can play a role here. One is small-scale modeling of quantum processors so that users can perform initial testing of algorithms without committing to paying for access to the real thing. Another is what Quantum Art is announcing: using GPUs as part of a compiler chain to do all the computations needed to find more efficient ways of executing an algorithm on specific quantum hardware.

Finally, there’s a potential role in error correction. Error correction involves some indirect measurements of a handful of hardware qubits to determine the most likely state that a larger collection (called a logical qubit) is in. This requires modeling a quantum system in real time, which is quite difficult—hence the computational demands that Nvidia hopes to meet. Regardless of the precise role, there has been a steady flow of announcements much like Quantum Art’s: a partnership with Nvidia that will keep the company’s hardware involved if the quantum technology takes off.
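To give a flavor of the decoding problem in classical miniature, here is a toy majority-vote decoder for a three-bit repetition code. Real quantum decoders work on indirect syndrome measurements of many qubits rather than direct reads like this, and must run in real time, which is exactly why GPU horsepower is attractive.

```python
# Toy illustration of decoding in error correction: from noisy physical
# copies of a bit, infer the most likely logical value. A three-bit
# repetition code with majority vote stands in for the much harder
# real-time quantum decoding that Nvidia's GPUs are aimed at.

from collections import Counter

def majority_decode(physical_bits):
    """Return the most likely logical bit given noisy physical copies."""
    counts = Counter(physical_bits)
    return counts.most_common(1)[0][0]

# One physical bit flipped by noise; the logical value is still recoverable.
print(majority_decode([1, 1, 0]))  # -> 1
print(majority_decode([0, 0, 0]))  # -> 0
```

The quantum version cannot simply read the data qubits (measurement would destroy the state), so the decoder must infer the error pattern from parity checks alone, and do so faster than errors accumulate.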

In Quantum Art’s case, that technology is a bit unusual. The trapped-ion companies we’ve covered so far are all taking different routes to the same place: moving one or two ions into a location where operations can be performed and then executing one- or two-qubit gates. Quantum Art’s approach is to perform gates with much larger collections of ions. At the compiler level, it would be akin to figuring out which qubits need a specific operation performed, clustering them together, and doing it all at once. Obviously, there are potential efficiency gains here.

The challenge would normally be moving so many qubits around to create these clusters. But Quantum Art uses lasers to “pin” individual ions in the row; each pinned ion isolates the ions on its left from those on its right. Each cluster can then be operated on separately. In between operations, the pins can be moved to new locations, creating different clusters for the next set of operations. (Quantum Art is calling each cluster of ions a “core” and presenting this as multicore quantum computing.)
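The pinning scheme can be sketched as a simple partitioning problem. The data layout below is invented for illustration; Quantum Art's actual compiler is far more involved, but the idea of pins splitting one chain into independently addressable cores carries over.

```python
# Toy sketch of the "pinning" idea: a laser pins selected ions in a 1D
# chain, and each run of unpinned ions between pins forms an isolated
# "core" that can be operated on as a group. Indices and the set-based
# pin representation are invented purely for illustration.

def cores_from_pins(num_ions, pinned):
    """Split a chain of ion indices into cores separated by pinned ions."""
    cores, current = [], []
    for ion in range(num_ions):
        if ion in pinned:
            if current:
                cores.append(current)
            current = []
        else:
            current.append(ion)
    if current:
        cores.append(current)
    return cores

# Pinning ions 3 and 7 in a 10-ion chain yields three independent cores.
print(cores_from_pins(10, {3, 7}))  # -> [[0, 1, 2], [4, 5, 6], [8, 9]]
# Between operations the pins move, creating different cores for the next step.
print(cores_from_pins(10, {5}))     # -> [[0, 1, 2, 3, 4], [6, 7, 8, 9]]
```

A compiler's job, then, is to choose pin positions over time so that the qubits needing each multi-qubit operation end up in the same core, with as few reconfigurations as possible.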

At the moment, Quantum Art is behind some of its competitors in terms of qubit count and performing interesting demonstrations, and it’s not pledging to scale quite as fast. But the company’s founders are convinced that the complexity of doing so many individual operations and moving so many ions around will catch up with those competitors, while the added efficiency of multi-qubit gates will allow it to scale better.

This is just a small sampling of all the announcements from this fall, but it should give you a sense of how rapidly the field is progressing—from technology demonstrations to identifying cases where quantum hardware has a real edge and exploring ways to sustain progress beyond those first successes.

Photo of John Timmer

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



Kimi K2 Thinking

I previously covered Kimi K2, which now has a new thinking version. As I said back in July, one should price in that the thinking version was coming.

Is it the real deal?

That depends on what level counts as the real deal. It’s a good model, sir, by all accounts. But there have been fewer accounts than we would expect if it was a big deal, and it doesn’t fall into any of my use cases.

Kimi.ai: 🚀 Hello, Kimi K2 Thinking!

The Open-Source Thinking Agent Model is here.

🔹 SOTA on HLE (44.9%) and BrowseComp (60.2%)

🔹 Executes up to 200–300 sequential tool calls without human interference

🔹 Excels in reasoning, agentic search, and coding

🔹 256K context window

Built as a thinking agent, K2 Thinking marks our latest efforts in test-time scaling — scaling both thinking tokens and tool-calling turns.

K2 Thinking is now live on http://kimi.com in chat mode, with full agentic mode coming soon. It is also accessible via API.

API here, Tech blog here, Weights and code here.

(Pliny jailbreak here.)

It’s got 1T parameters, and Kimi and Kimi K2 have a solid track record, so it’s plausible this could play with the big boys, although the five month delay in getting to a reasoning model suggests skepticism it can be competitive.

As always, internal benchmark scores can differ greatly from outside benchmark scores, especially for open models. Sometimes this is due to outsiders botching setup, but also inside measurements need to be double checked.

For Humanity’s Last Exam, an outside source has it in second place as of November 9 at 23.9%, which is very much not 44.9% but still very good.

On writing quality we’ve gotten endorsements for Kimi K2 for a while.

Rohit: Kimi K2 is remarkably good at writing, and unlike all others thinking mode hasn’t degraded its writing ability more.

Morgan: if i recall, on release gpt-5 was the only model where writing quality improved with thinking effort.

Rohit: Alas.

Gary Fung: Kimi has always been a special snowflake on creative writing.

Here’s one part of the explanation of how they got the writing to be so good, which involves self-ranking RL and writing self-play, with a suggestion of some similarities to the training of Claude 3 Opus. In a sense this looks like ‘try to do better, at all.’

On the agentic tool use and general intelligence? I’m more skeptical.

Artificial Analysis has Kimi K2 Thinking at the top of its Agentic Tool Use benchmark, 93% to 87%, which in context is a huge gap; agentic tool use is its strongest subset.

As is usually true when people compare open to closed models, this is the open model’s best benchmark, so don’t get carried away, but yes overall it did well on Artificial Analysis, indeed suspiciously well given how little talk I see.

The tool calling abilities are exciting for an open model, although standard for closed ones. This is a good example of how we look for ways for open models to impress by matching closed abilities in spots; it is also indeed highly useful.

Overall Artificial Analysis Intelligence index has Kimi K2 Thinking at 67, one point behind GPT-5 and ahead of everyone else. Kimi used the most tokens of any model, but total cost was lower than the top closed models, although not dramatically so ($829-$913 for GPT-5, $817 for Sonnet, $380 for Kimi K2) as cost is $0.6/$2.5 per million tokens, versus $1.25/$10 for GPT-5 and $3/$15 for Sonnet.
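The pricing arithmetic behind those totals is straightforward. The sketch below uses the per-million-token prices quoted above; the token counts are invented placeholders purely to show the calculation, since the article's totals came from Artificial Analysis's full benchmark suite.

```python
# Rough per-run cost comparison using the prices quoted in the text
# (dollars per million input and output tokens). The 2M/10M token counts
# below are hypothetical, chosen only to illustrate the arithmetic.

prices = {  # (input, output) $ per million tokens, from the text
    "Kimi K2 Thinking": (0.60, 2.50),
    "GPT-5": (1.25, 10.00),
    "Claude Sonnet": (3.00, 15.00),
}

def run_cost(model, input_tokens, output_tokens):
    """Total dollars for one run at the model's quoted token prices."""
    p_in, p_out = prices[model]
    return (input_tokens * p_in + output_tokens * p_out) / 1_000_000

# Hypothetical run: 2M input tokens, 10M output tokens. Reasoning models
# like K2 are token-hungry, so output dominates the bill.
for model in prices:
    print(f"{model}: ${run_cost(model, 2_000_000, 10_000_000):.2f}")
```

This is why Kimi can use the most tokens of any model yet still come in cheapest overall: its per-token output price is a quarter of GPT-5's and a sixth of Sonnet's, so even a much larger token count yields a smaller bill.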

Nathan Lambert is impressed, relying on secondary information (‘seems like a joy to use’), and offers thoughts.

He notes that yes, labs start out targeting benchmarks and then transition to actually targeting useful things, such as how K2 Thinking was post-trained in 4-bit precision to prepare for realistic tasks and benchmarked the same way. I agree that’s pretty cool.

It does seem plausible that Kimi K2 is still in the ‘target the benchmarks’ phase in most places, although not in creative writing. By default, I expect such models to punch ‘below their benchmark-implied weight’ on practical tasks.

For now we don’t have many other outside scores to work with and feedback is light.

Simeon: is Kimi K2 benchmaxxing or are they actually SOTA while training on potatoes?

Prinz: In my testing (for my use cases, which have nothing to do with math and coding), K2-Thinking is obviously worse than GPT-5 Thinking, but by a relatively modest margin. If I had no access to other models, I would happily use K2-Thinking and it wouldn’t feel like a huge downgrade.

ahtoshkaa: I have a pretty sophisticated companion app that uses about 5-10K of varied, information dense context. So the model has to properly parse this information and have very good writing skills. kimi-k2-thinking is absolute ass. similarly to the new OpenAI model – Polaris Alpha.

There’s a growing rhetorical pressure, or marketing style pressure, where the ‘benchmark gaps’ are closing. Chinese labs can point to numbers that say they are ‘just as good’ or almost as good, for many purposes ‘good enough’ is good enough. And many people (including the likes of David Sacks) point to GPT-5 and similar as showing progress isn’t impressive or scary. But as Nathan points out we now see releases like Claude 4 where the benchmark gains look small but real world gains are large, and I would add GPT-5 (and Sonnet 4.5) to that category as well.

Teortaxes: It’s token-hungry, slow-ish, and sometimes rough around the edges. Generally though it’s a jump for open/Chinese models, in the league of Sonnet 4.5 and GPT-5 (maybe -mini depending on task) and a genuinely strong SWE agent. Legitimate alternative, not “but look at the price.”

It’s baked in that the open alternatives are pretty much always going to be rough around the edges, and get evaluated largely in terms of their peak relative performance areas. This is still high praise, putting Kimi in striking distance of the current big two.

Havard Isle has it coming in at a solid 42.1% on WeirdML, matching Opus 4.1.

Here’s something cool:

Pawal Azczesny: Kimi K2 Thinking is using systematically (on its own, without prompting) some of the debiasing strategies known from cognitive sciences. Very impressive. I didn’t see any other model doing that. Well done @Kimi_Moonshot.

It goes beyond “think step by step”. For instance it applied pre-mortem analysis, which is not frequently used. Or it exaggerates claims to see if the whole structure still stands on its own. Pretty neat. Other models need to be instructed to do this.

Steve Hsu got some good math results.

Other notes:

MinusGix: I’ve found it to be better than GPT-5 at understanding & explaining type-theory concepts. Though as usual with Kimi it writes eloquently enough that it is harder to tell when it is bullshitting compared to GPT-5.

Emerson Kimura: Did a few quick text tests, and it seemed comparable to GPT-5

Ian Pitchford: It’s very thorough; few hallucinations.

FredipusRex: Caught it hallucinating sources on Deep Research.

Lech Mazur: Sorry to report, but Kimi K2 Thinking is entering reasoning loops and failing to produce answers for many Extended Connections benchmark questions (double-checked using https://platform.moonshot.ai/playground, so it’s not an API call issue).

The safety protocols? The what now?

David Manheim: It’s very willing to give detailed chemical weapons synthesis instructions and advice, including for scaling production and improving purity, and help on how to weaponize it for use in rockets – with only minimal effort on my part to circumvent refusals.

Two of the three responses to that were ‘good news’ and ‘great. I mean it too.’ So yeah, AI is going to go great, I can tell.

Strangely, given that this is by all accounts the strongest open model, the strongest Chinese model, and a rival for best agentic or tool use model overall, I don’t see much excitement, or much feedback at all, either positive or negative.

There’s no question Kimi K2 was impressive, and that Kimi K2 Thinking is also an impressive model, even assuming it underperforms its numbers. It’s good enough that it will often be worth testing it out on your use cases and seeing if it’s right for you. My guess is it will rarely be right unless you are highly price conscious, but we’ll see.

