Author name: Rejus Almole


How close are we to solid-state batteries for electric vehicles?


Superionic materials promise greater range, faster charging and greater safety.

In early 2025, Mercedes-Benz ran its first road tests of an electric passenger car powered by a prototype solid-state battery pack. The carmaker predicts the next-gen battery will increase the electric vehicle’s driving range to over 620 miles (1,000 kilometers). Credit: Mercedes-Benz Group

Every few weeks, it seems, yet another lab proclaims yet another breakthrough in the race to perfect solid-state batteries: next-generation power packs that promise to give us electric vehicles (EVs) so problem-free that we’ll have no reason left to buy gas-guzzlers.

These new solid-state cells are designed to be lighter and more compact than the lithium-ion batteries used in today’s EVs. They should also be much safer, with no flammable liquid inside to feed the kind of rare but hard-to-extinguish fires that lithium-ion packs can produce. And they should hold a lot more energy, turning range anxiety into a distant memory, with consumer EVs able to go four, five, or six hundred miles on a single charge.

And forget about those “fast” recharges lasting half an hour or more: Solid-state batteries promise EV fill-ups in minutes—almost as fast as filling a standard car’s tank with gasoline.

This may all sound too good to be true—and it is, if you’re looking to buy a solid-state-powered EV this year or next. Look a bit further, though, and the promises start to sound more plausible. “If you look at what people are putting out as a road map from industry, they say they are going to try for actual prototype solid-state battery demonstrations in their vehicles by 2027 and try to do large-scale commercialization by 2030,” says University of Washington materials scientist Jun Liu, who directs a university-government-industry battery development collaboration known as the Innovation Center for Battery500 Consortium.

Indeed, the challenge is no longer to prove that solid-state batteries are feasible. That has long since been done in any number of labs around the world. The big challenge now is figuring out how to manufacture these devices at scale, and at an acceptable cost.

Superionic materials to the rescue

Not so long ago, says Eric McCalla, who studies battery materials at McGill University in Montreal and is a coauthor of a paper on battery technology in the 2025 Annual Review of Materials Research, this heady rate of advancement toward powering electric vehicles was almost unimaginable.

Until about 2010, explains McCalla, “the solid-state battery had always seemed like something that would be really awesome—if we could get it to work.” Like current EV batteries, it would still be built with lithium, an unbeatable element when it comes to the amount of charge it can store per gram. But standard lithium-ion batteries use a liquid, a highly flammable one at that, to allow easy passage of charged particles (ions) between the device’s positive and negative electrodes. The new battery design would replace the liquid with a solid electrolyte that would be nearly impervious to fire—while allowing for a host of other physical and chemical changes that could make the battery faster charging, lighter in weight, and all the rest.

“But the material requirements for these solid electrolytes were beyond the state of the art,” says McCalla. After all, standard lithium-ion batteries have a good reason for using a liquid electrolyte: It gives the ionized lithium atoms inside a fluid medium to move through as they shuttle between the battery’s two electrodes. This back-and-forth cycle is how any battery stores and releases energy—the chemical equivalent of pumping water from a low-lying reservoir to a high mountain lake, then letting it run back down through a turbine whenever you need some power. This hypothetical new battery would somehow have to let those lithium ions flow just as freely—but through a solid.

Storing electrical energy in a rechargeable battery is like pumping water from a low-lying reservoir up to a high mountain lake. Likewise, using that energy to power an external device is like letting the water flow back downhill through a generator. The volume of the mountain lake corresponds to the battery’s capacity, or how much charge it can hold, while the lake’s height corresponds to the battery’s voltage—how much energy it gives to each unit of charge it sends through the device. Credit: Knowable Magazine
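To put rough numbers on the analogy: a battery’s stored energy is simply its charge capacity multiplied by its voltage. Here is a back-of-the-envelope sketch in Python, using made-up example figures rather than any particular manufacturer’s specs:

```python
# Back-of-the-envelope only: energy = charge ("lake volume") x voltage ("lake height").
# The cell figures below are illustrative, not any real product's specification.
def pack_energy_kwh(cell_capacity_ah: float, cell_voltage_v: float, n_cells: int) -> float:
    """Total pack energy in kilowatt-hours."""
    return cell_capacity_ah * cell_voltage_v * n_cells / 1000.0

# e.g., 200 cells of 100 Ah at a nominal 3.7 V comes to 74 kWh,
# in the range of a typical long-range EV pack
print(pack_energy_kwh(100, 3.7, 200))
```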

This seemed hopeless for larger uses such as EVs, says McCalla. Certain polymers and other solids were known to let ions pass, but at rates that were orders of magnitude slower than liquid electrolytes. In the past two decades, however, researchers have discovered several families of lithium-rich compounds that are “superionic”—meaning that some atoms behave like a crystalline solid while others behave more like a liquid—and that can conduct lithium ions as fast as standard liquid electrolytes, if not faster.

“So the bottleneck suddenly is not the bottleneck anymore,” says McCalla.

True, manufacturing these batteries can be a challenge. For example, some of the superionic solids are so brittle that they require special equipment for handling, while others must be processed in ultra-low humidity chambers lest they react with water vapor and generate toxic hydrogen sulfide gas.

Still, the suddenly wide-open potential of solid-state batteries has led to a surge of research and development money from funding agencies around the globe—not to mention the launch of multiple startup companies working in partnership with carmakers such as Toyota, Volkswagen, and many more. Although not all the numbers are public, investments in solid-state battery development are already in the billions of dollars worldwide.

“Every automotive company has said solid-state batteries are the future,” says University of Maryland materials scientist Eric Wachsman. “It’s just a question of, When is that future?”

The rise of lithium-ion batteries

Perhaps the biggest reason to ask that “when” question, aside from the still-daunting manufacturing challenges, is a stark economic reality: Solid-state batteries will have to compete in the marketplace with a standard lithium-ion industry that has an enormous head start.

“Lithium-ion batteries have been developed and optimized over the last 30 years, and they work really great,” says physicist Alex Louli, an engineer and spokesman at one of the leading solid-state battery startups, San Jose, California-based QuantumScape.

Charging a standard lithium-ion battery (top) works by applying a voltage between cathode and anode. This pulls lithium atoms from the cathode and strips off an electron from each. The now positively charged lithium ions then flow across the membrane to the negatively charged anode. There, the ions reunite with the electrons, which flowed through an external circuit as an electric current. These now neutral atoms nest in the graphite lattice until needed again. The battery’s discharge cycle (bottom) is just the reverse: Electrons deliver energy to your cell phone or electric car as they flow via a circuit from anode to cathode, while lithium ions race through the membrane to meet them there. Credit: Knowable Magazine

They’ve also gotten really cheap, comparatively speaking. When Japan’s Sony Corporation introduced the first commercial lithium-ion battery in 1991, drawing on a worldwide research effort dating back to the 1950s, it powered one of the company’s camcorders and cost the equivalent of $7,500 for every kilowatt-hour (kWh) of energy it stored. By April 2025 lithium-ion battery prices had plummeted to $115 per kWh, and were projected to fall toward $80 per kWh or less by 2030—low enough to make a new EV substantially cheaper than the equivalent gasoline-powered vehicle.
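Those two price points imply a remarkably steady rate of decline. As a rough, illustrative calculation based only on the figures quoted above:

```python
# Rough arithmetic on the prices quoted above; purely illustrative.
def annual_decline(start_price: float, end_price: float, years: int) -> float:
    """Compound annual rate of price decline, as a fraction."""
    return 1.0 - (end_price / start_price) ** (1.0 / years)

print(annual_decline(7500, 115, 34))  # 1991 -> 2025: about 0.12, i.e. ~12% per year
print(annual_decline(115, 80, 5))     # 2025 -> 2030 projection: about 0.07, i.e. ~7% per year
```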

“Most of these advancements haven’t really been down to any fundamental chemistry improvements,” says Mauro Pasta, an applied electrochemist at the University of Oxford. “What’s changed the game has been the economies of scale in manufacturing.”

Liu points to a prime example: the roll-to-roll process used for the cylindrical batteries found in most of today’s EVs. “You make a slurry,” says Liu, “then you cast the slurry into thin films, roll the films together with very high speed and precision, and you can make hundreds and thousands of cells very, very quickly with very high quality.”

Lithium-ion cells have also seen big advances in safety. The existence of that flammable electrolyte means that EV crashes can and do lead to hard-to-extinguish lithium-ion fires. But thanks to the circuit breakers and other safeguards built into modern battery packs, only about 25 EVs catch fire out of every 100,000 sold, versus some 1,500 fires per 100,000 conventional cars—which, of course, carry around large tanks of explosively flammable gasoline.

In fact, says McCalla, the standard lithium-ion industry is so far ahead that solid-state might never catch up. “EVs are going to scale today,” he says, “and they’re going with the technology that’s affordable today.” Indeed, battery manufacturers are ramping up their lithium-ion capacity as fast as they can. “So I wonder if the train has already left the station.”

But maybe not. Solid-state technology does have a geopolitical appeal, notes Ying Shirley Meng, a materials scientist at the University of Chicago and Argonne National Laboratory. “With lithium-ion batteries the game is over—China already dominates 70 percent of the manufacturing,” she says. So for any country looking to lead the next battery revolution, “solid-state presents a very exciting opportunity.”

Performance potential

Another plus is improved performance. At the very time that EV buyers are looking for ever greater range and charging speed, says Louli, the standard lithium-ion recipe is hitting a performance plateau. To do better, he says, “you have to go back and start doing some material innovations”—like those in solid-state batteries.

Take the standard battery’s liquid electrolyte, for example. It’s not only flammable, but also a limitation on charging speed. When you plug in an electric car, the charging cable acts as an external circuit that’s applying a voltage between the battery’s two electrodes, the cathode and the anode. The resulting electrical forces are strong enough to pull lithium atoms out of the cathode and to strip one electron from each atom. But when they drag the resulting ions through the electrolyte toward the anode, they hit the speed limit: Try to rush the ions along by upping the voltage too far and the electrolyte will chemically break down, ending the battery’s charging days forever.

So score one for solid-state batteries: Not only do the best superionic conductors offer a faster ion flow than liquid electrolytes, they also can tolerate higher voltages—all of which translates into EV recharges in under 10 minutes, versus half an hour or more for today’s lithium-ion power packs.

Score another win for solid-state when the ions arrive at the opposite electrode, the anode, during charging. This is where they reunite with their lost electrons, which have taken the long way around through the external circuit. And this is where standard lithium-ion batteries store the newly neutralized lithium atoms in a layer of graphite.

A solid-state battery doesn’t require a graphite cage to store lithium ions at the anode. This shrinks the overall size of the battery and increases its efficiency in uses such as an electric vehicle power pack. The solid-state design also replaces the porous membrane in the middle with a sturdier barrier. The aim is a battery that is lighter, safer, more energy-dense and quicker to recharge than current electric car batteries. Credit: Knowable Magazine

Graphite anodes were a major commercial advance in 1991—the innovation that finally brought lithium-ion batteries out of the lab and into the marketplace. Graphite is cheap, chemically stable, excellent at conducting electricity, and able to slot those incoming lithium atoms into its hexagonal carbon lattice like so many eggs in an egg carton.

But graphite imposes yet another charging rate limit, since the lattice can handle only so many ions crowding in at once. And it’s heavy, wasting a lot of mass and volume on a simple container, says Louli: “Graphite is an accommodating host, but it does not deliver energy itself—it’s a passive component.” That’s why range-conscious automakers are eager for an alternative to graphite: The more capacity an EV can cram into the same-sized battery pack, and the less weight it has to haul around, the farther it can go on a single charge.

The ultimate alternative would be no cage at all, with no wasted space or weight—just incoming ions condensing into pure lithium metal with every charging cycle. In effect, such a metallic lithium anode would create and then dissolve itself with every charge and discharge cycle—while storing maybe 10 times more electrical energy per gram than a graphite anode.

Such lithium-metal anodes have been demonstrated in the lab since at least the 1970s, and even featured in some early, unsuccessful attempts at commercial lithium batteries. But even after decades of trying, says Louli, no one has been able to make metal anodes work safely and reliably in contact with liquid electrolytes. For one thing, he says, “you get these reactions between your liquid electrolyte and the lithium metal that degrade them both, and you end up with a very bad battery lifetime.”

And for another, adds Wachsman, “when you are charging a battery with liquids, the lithium going to the anode can plate out non-uniformly and form what are called dendrites.” These jagged spikes of metal can grow in unpredictable ways and pierce the battery’s separator layer: a thin film of electrically insulating polymer that keeps the two electrodes from touching one another. Breaching that barrier could easily cause a short circuit that abruptly ends the device’s useful life, or even sets it on fire.

Standard lithium-ion batteries don’t use lithium-metal anodes because there is too high a risk of the metal forming sharp spikes called dendrites. Such dendrites can easily pierce the porous polymer membrane that separates anode from cathode, causing a short-circuit or even sparking a fire. Solid-state batteries replace the membrane with a solid barrier. Credit: Knowable Magazine

Now compare this with a battery that replaces both the liquid electrolyte and the separator with a solid-state layer tough enough to resist those spikes, says Wachsman. “It has the potential of, one, being stable to higher voltages; two, being stable in the presence of lithium metal; and three, preventing those dendrites”—just about everything you need to make those ultra-high-energy-density lithium-metal anodes a practical reality.

“That is what is really attractive about this new battery technology,” says Louli. And now that researchers have found so many superionic solids that could potentially work, he adds, “this is what’s driving the push for it.”

Manufacturing challenges

Increasingly, in fact, the field’s focus has shifted from research to practice, figuring out how to work the same kind of large-scale, low-cost manufacturing magic that’s made the standard lithium-ion architecture so dominant. These new superionic materials haven’t made it easy.

A prime example is the class of sulfides discovered by Japanese researchers in 2011. Not only were these sulfides among the first of the new superionics to be discovered, says Wachsman, they are still the leading contenders for early commercialization.

Major investments have come from startups such as Colorado-based Solid Power and Massachusetts-based Factorial Energy, as well as established battery giants such as China’s CATL and global carmakers such as Toyota and Honda.

And there’s one big reason for the focus on superionic sulfides, says Wachsman: “They’re easy to drop into existing battery cell manufacturing lines,” including the roll-to-roll process. “Companies have got billions of dollars invested in the existing infrastructure, and they don’t want to just displace that with something new.”

Yet these superionic sulfides also have some significant downsides—most notably, their extreme sensitivity to humidity. This complicates the drop-in process, says Oxford’s Pasta. The dry rooms that are currently used to manufacture lithium-ion batteries have a humidity content that is not nearly low enough for sulfide electrolytes, and would have to be retooled. That sensitivity also poses a safety risk if the batteries are ever ruptured in an accident, he says: “If you expose the sulfides to humidity in the air you will generate hydrogen sulfide gas, which is extremely toxic.”

All of which is why startups such as QuantumScape, and the Maryland-based Ion Storage Systems that spun out of Wachsman’s lab in 2015, are looking beyond sulfides to solid-state oxide electrolytes. These materials are essentially ceramics, says Wachsman, made in a high-tech version of pottery class: “You shape the clay, you fire it in a kiln, and it’s a solid.” Except that in this case, it’s a superionic solid that’s all but impervious to humidity, heat, fire, high voltage, and highly reactive lithium metal.

Yet that’s also where the manufacturing challenges start. Superionic or not, for example, ceramics are too brittle for roll-to-roll processing. Once they have been fired and solidified, says Wachsman, “you have to handle them more like a semiconductor wafer, with machines to cut the sheets to size and robotics to move them around.”

Then there’s the “reversible breathing” that plagues oxide and sulfide batteries alike: “With every charging cycle we’re plating and stripping lithium metal at the anode,” explains Louli. “So your entire cell stack will have a thickness increase when you charge and a thickness decrease when you discharge”—a cycle of tiny changes in volume that every solid-state battery design has to allow for.

At QuantumScape, for example, individual battery cells are made by stacking a number of gossamer-thin oxide sheets like a deck of cards, then encasing this stack inside a metal frame that is just thick enough to let the anode layer on each sheet freely expand and contract. The stack and the frame together are then vacuum-sealed into a soft-sided pouch, says Louli, “so if you pack the cells frame to frame, the stacks can breathe and not push on the adjacent cells.”

In a similar way, says Wachsman, all the complications of solid-state batteries have ready solutions—but solutions that inevitably add complexity and cost. Thus the field’s increasingly urgent obsession with manufacturing. Before an auto company will even consider adopting a new EV battery, he says, “it not only has to be better-performing than their current battery, it has to be cheaper.”

And the only way to make complicated technology cheaper is with economies of scale. “That’s why the biggest impediment to solid-state batteries is just the cost of standing up one of these gigafactories to make them in sufficient volume,” says Wachsman. “That’s why there’s probably going to be more solid-state batteries in early adopter-type applications that don’t require that kind of volume.”

Still, says Louli, the long-term demand is definitely there. “What we’re trying to enable by combining the lithium-metal anode with solid-state technology is threefold,” he says: “Higher energy, higher power and improved safety. So for high-performance applications like electric vehicles—or other applications that require high power density, such as drones or even electrified aviation—solid-state batteries are going to be well-suited.”

This story originally appeared in Knowable Magazine.

Knowable Magazine explores the real-world significance of scholarly work through a journalistic lens.


OpenAI will stop saving most ChatGPT users’ deleted chats

Moving forward, all of the deleted and temporary chats that were previously saved under the preservation order will continue to be accessible to news plaintiffs, who are looking for examples of outputs infringing their articles or attributing misinformation to their publications.

Additionally, OpenAI will continue monitoring certain ChatGPT accounts, saving deleted and temporary chats of any users whose domains have been flagged by news organizations since they began searching through the data. If news plaintiffs flag additional domains during future meetings with OpenAI, more accounts could be roped in.

Ars could not immediately reach OpenAI or the Times’ legal team for comment.

The dispute with news plaintiffs continues to heat up beyond the battle over user logs, most recently with co-defendant Microsoft pushing to keep its AI companion Copilot out of the litigation.

The stakes remain high for both sides. News organizations have alleged that copyright-infringing tools like ChatGPT threaten to replace them in their market while potentially damaging their reputations by attributing false information to them.

OpenAI may face growing pressure to settle the lawsuit, not from news organizations but from insurance companies unwilling to provide comprehensive coverage for its AI products while multiple potentially multibillion-dollar lawsuits remain pending.


2025 State of AI Report and Predictions

The 2025 State of AI Report is out, with lots of fun slides and a full video presentation. They’ve been consistently solid, providing a kind of outside general view.

I’m skipping over stuff my regular readers already know that doesn’t bear repeating.

Nathan Benaich: Once a “Llama rip-off,” @Alibaba_Qwen now powers 40% of all new fine-tunes on @huggingface. China’s open-weights ecosystem has overtaken Meta’s, with Llama riding off into the sunset…for now.

I highlight this because the ‘for now’ is important to understand, and to note that it’s Qwen not DeepSeek. As in, models come and models go, and especially in the open model world people will switch on you on a dime. Stop worrying about lock-ins and mystical ‘tech stacks.’

Robots now reason too. “Chain-of-Action” planning brings structured thought to the physical world – from AI2’s Molmo-Act to Gemini Robotics. Massive amounts of effort are thrown into the mix, expect lots of progress here…

.@AnthropicAI‘s Model Context Protocol is the new USB-C of AI. A single standard to connect models to tools, already embedded in ChatGPT, Gemini, Claude, and VS Code, has taken shape. But not without emerging security risks…

I note this next part mostly because it shows the Different Worlds dynamic:

Nathan Benaich: The frontier fight is relentless. @OpenAI still tops most leaderboards, but @GoogleDeepMind‘s stays there longer. Timing releases has become its own science…not least informing financing rounds like clockwork.

They’re citing LMArena and Artificial Analysis. LMArena is dead, sir. Artificial Analysis is fine, if you had to purely go with one number, which you shouldn’t do.

Once more for the people in the back or the White House:

.@deepseek_ai “$5M training run” deep freak was overblown. Since the market realised the fineprint in the R1 paper, that’s led to Jevons paradox on steroids: lower cost per run → more runs → more compute needed, buy more NVIDIA.

… China leads in power infrastructure too, adding >400GW in 2024 vs 41GW for the US. Compute now clearly runs on geopolitics.

Then we get to what I thought was the first clear error:

Now, let’s switch gears into Politics. The US Government is turning capitalist. Golden shares in US Steel, stakes in Intel and MP Materials, and revenue cuts from NVIDIA’s China sales. New-age Industrial policy?

Not capitalist. Socialist.

The term for public ownership of the means of production is socialist.

Unless this meant ‘the US Government centrally maximizing the interests of certain particular capitalists’ or similarly ‘the US Government is turning into one particular capitalist maximizing profits.’ In which case, I’m not the one who said that.

The AI Safety Institute network has collapsed. Washington ditched attending meetings altogether, while the US and UK rebranded “safety” into “security.”

I don’t think this is fair to UK AISI, but yes the White House has essentially told anyone concerned about existential risk or seeking international coordination of any kind to, well, you know.

Moving into Safety: budgets are anemic. All 11 major US safety orgs will spend $133M in 2025…less than frontier labs burn in a day.

I like that this highlights Anthropic’s backpedaling, GDM’s waiting three weeks to give us a model card and xAI’s missing its deadline. It’s pretty grim.

What I disagree with here is the idea that all of that has much to do with the Trump Administration. I don’t want to blame them for things they didn’t cause, and I think they played only a minor role in these kinds of safety failures. The rhetoric being used has shifted to placate them, but the underlying safety work wouldn’t yet be substantially different under Harris unless she’d made a major push to force that issue, well beyond what Biden was on track to do. That decision was up to the labs, and their encounters with reality.

But yes, the AI safety ecosystem is tiny and poor, at risk of being outspent by one rabid industry anti-regulatory super-PAC alone unless we step things up. I have hope that things can be stepped up soon.

Cyber and alignment risks accelerate. Models can now fake alignment under supervision, and exploit code faster than humans fix it.

They then grade their predictions, scoring themselves 5/10, which is tough but fair, and made me confident I can trust their self-grading. As Sean notes they clearly could have ‘gotten away with’ claiming 7/10, although I would have docked them for trying.

Seán Ó hÉigeartaigh: Two of the things I really appreciate is that (a) they make and review predictions each year and (b) unlike some other predictors they grade themselves HARSHLY. Several of these ‘no’s are distinctly borderline, they could have given themselves 7-8/10 and I don’t think I would have held it against them.

  1. A $10B+ investment from a sovereign state into a US large AI lab invokes national security review.

    1. No, although on technicalities, but also national security review hahaha.

  2. An app or website created solely by someone with no coding ability will go viral (e.g. App Store Top-100).

    1. Yes, Formula Bot.

  3. Frontier labs implement meaningful changes to data collection practices after cases begin reaching trial.

    1. Yes, Anthropic and the whole $1.5 billion fiasco.

  4. Early EU AI Act implementation ends up softer than anticipated after lawmakers worry they’ve overreached.

    1. No, they say, but you could definitely make a case here.

  5. An open source alternative to OpenAI o1 surpasses it across a range of reasoning benchmarks.

    1. Yes, r1 did this, although as stated this was an easy call.

  6. Challengers fail to make any meaningful dent in NVIDIA’s market position.

    1. Yes, again relatively easy call on this time frame.

  7. Levels of investment in humanoids will trail off, as companies struggle to achieve product-market fit.

    1. No, investment grew from $1.4b to $3b. I half-kid that spiritually this kind of counts as a win in AI: it only doubled, and that’s kind of a trail off?

    2. But no, seriously, the robots are coming.

  8. Strong results from Apple’s on-device research accelerates momentum around personal on-device AI.

    1. No, Apple Intelligence and their research department flopped. On device AI is definitely growing anyway.

  9. A research paper generated by an AI Scientist is accepted at a major ML conference or workshop.

    1. Yes, AI Scientist-v2 at an ICLR workshop.

  10. A video game based around interacting with GenAI-based elements will achieve break-out status.

    1. Nope. This continues to be a big area of disappointment. Not only did nothing break out, there wasn’t even anything halfway decent.

Here are their predictions for 2026. These are aggressive, GPT-5-Pro thinks their expected score is only 3.1 correct. If they can hit 5/10 again I think they get kudos, and if they get 7/10 they did great.
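(For what it’s worth, an expected score like that is just the sum of the per-item probabilities. A quick sanity check using the GPT-5-Pro percentages quoted item by item below lands at about 3.4, in the same ballpark as the 3.1 headline number, which presumably came from a separate holistic estimate.)

```python
# Expected number of correct predictions = sum of per-item probabilities.
# These are the GPT-5-Pro percentages quoted for items 1-10 below.
gpt5_pro = [0.18, 0.22, 0.36, 0.25, 0.14, 0.25, 0.68, 0.74, 0.23, 0.35]
print(sum(gpt5_pro))  # 3.4 expected correct out of 10
```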

I made my probability assessments before creating Manifold markets, to avoid anchoring, and will then alter my assessment based on early trading.

I felt comfortable creating those markets because I have confidence both that they will grade themselves accurately, and that LLMs will be strong enough in a year to resolve these questions reasonably. So my resolution rule was, their self-assessment wins, and if they don’t provide one I’ll feed the exact wording into Anthropic’s strongest model – ideally this should probably be best 2 out of 3 of Google, OpenAI and Anthropic, but simplicity is good.
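For concreteness, here is a minimal sketch of that resolution rule; the ask_model helper and the model labels are hypothetical placeholders, not real API calls.

```python
# Hypothetical sketch of the resolution rule described above. ask_model() and
# the model label are stand-ins, not real API calls.
from typing import Optional

def ask_model(model_label: str, question_text: str) -> bool:
    """Placeholder for asking the named frontier model to judge the prediction."""
    raise NotImplementedError("stand-in for an actual LLM call")

def resolve(question_text: str, self_assessment: Optional[bool]) -> bool:
    # Their own self-grading wins whenever they provide one.
    if self_assessment is not None:
        return self_assessment
    # Otherwise, feed the exact wording to Anthropic's strongest model.
    # (The "ideal" version would take best 2 out of 3 across Google, OpenAI
    # and Anthropic models, but simplicity is good.)
    return ask_model("anthropic-strongest", question_text)
```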

  1. A major retailer reports >5% of online sales from agentic checkout as AI agent advertising spend hits $5B.

    1. Total advertising spending in America in 2025 was ~$420 billion.

    2. I think this is ambitious, but variance here is really high and the correlation between the two numbers is large.

    3. GPT-5-Pro says 18%, Sonnet says 8%, I think it’s more plausible than that. Maybe 25%?

    4. Manifold says 23% so that seems good.

  2. A major AI lab leans back into open-sourcing frontier models to win over the current US administration.

    1. GPT-5-Pro says 22%, Sonnet says 25%.

    2. I don’t see it, if this means ‘release your frontier model as an open model.’ Who? I would only count at most five labs as major, and Meta (who is pushing it in terms of counting) is already open. The only realistic option here is xAI.

    3. That goes double if you include the conditional ‘to win over the current US administration.’ There’s a lot of other considerations in such a move.

    4. Thus, I’d sell this down to 15%, but it’s hard to be too confident about Elon?

    5. Manifold agreed with the AIs at 25% but tends to be too high in such spots, so I still would be a seller.

  3. Open-ended agents make a meaningful scientific discovery end-to-end (hypothesis, expt, iteration, paper).

    1. Define ‘meaningful’ and ‘end to end’ in various ways? Always tricky.

    2. I’m actually optimistic, if we’re not going to be sticklers on details.

    3. GPT-5-Pro says 36%, Sonnet is deeply skeptical and says 15%. If I knew we had a reasonable threshold for ‘meaningful’ and we could get it turned around, I’d be on the optimistic end, but I think Sonnet is right that if you count the paper the timeline here is pretty brutal. So I’m going to go with 35%.

    4. Manifold is optimistic and says 60% with active trading, with Nathan Metzger noting the issue of defining a meaningful discovery and Brian Holtz noting the issue of how much assistance is allowed. I’m willing to interpret this as an optimistic take on both feasibility and what would count and go to 50%.

  4. A deepfake/agent-driven cyber attack triggers the first NATO/UN emergency debate on AI security.

    1. It would take really a lot to get this to trigger. Like, really a lot.

    2. There’s even an out that if something else triggers a debate first, this didn’t happen.

    3. GPT-5-Pro said 25%, Sonnet said 12% and I’m with Sonnet.

    4. Manifold says 18%, down the middle. I’m still with Sonnet.

  5. A real-time generative video game becomes the year’s most-watched title on Twitch.

    1. I’ll go ahead and take the no here. Too soon. Generative games are not as interesting as people think, and they’re doubling down on the 2024 mistake.

    2. GPT-5-Pro has this at 14%, Sonnet says 3%. I think Sonnet is a bit overconfident, let’s say 5%, but yeah, this has to overcome existing behemoths even if you make something great. Not gonna happen.

    3. Manifold agrees this is the long shot at 7%, which is basically their version of ‘not gonna happen’ given how the math works for long shots.

  6. “AI neutrality” emerges as a foreign policy doctrine as some nations cannot or fail to develop sovereign AI.

    1. I doubt they’ll call it that, but certainly some nations will opt out of this ‘race.’

    2. GPT-5-Pro said 25%, Sonnet says 20%. I agree if this is a meaningful ‘neutrality’ in the sense of neutral between China and America on top of not rolling one’s own, but much higher if it simply means that nations opt out of building their own and rely on a frontier lab or a fine tune of an existing open model. And indeed I think this opt out would be wise for many, perhaps most.

    3. Manifold says 29%. Given the ambiguity issues, that’s within reasonable range.

  7. A movie or short film produced with significant use of AI wins major audience praise and sparks backlash.

    1. GPT-5-Pro says 68%, Sonnet says 55%. I’d be a buyer there, normally a parlay is a rough prediction but there would almost certainly be backlash conditional on this happening. A short film counts? I’m at more like 80%.

    2. Manifold is only at 67%. That seems low to me, but I can moderate to 75%.

  8. A Chinese lab overtakes the US lab dominated frontier on a major leaderboard (e.g. LMArena/Artificial Analysis).

    1. I’d bet big against a Chinese lab actually having the best model at any point in 2026, but benchmarks are not leaderboards.

    2. I’d be very surprised if this happened on Artificial Analysis. Their evaluation suite is reasonably robust.

    3. I’d be less surprised if this happened on LM Arena, since it is rather hackable, if one of the major Chinese labs actively wanted to do this there’s a decent chance that they could, the way Meta hacked through their model for a bit.

    4. I still think this is an underdog. GPT-5-Pro said 74%, Sonnet says 60% and is focusing on Arena as the target. It only has to happen briefly. I think the models are too optimistic here, but I’ll give them maybe 55% because as worded this includes potential other leaderboards too.

    5. Manifold says 34%, and on reflection yeah I was being a coward and moderating my instincts too much, that’s more like it. I’d probably buy there small because the resolution criteria is relatively generous, fair 40%.

  9. Datacenter NIMBYism takes the US by storm and sways certain midterm/gubernatorial elections in 2026.

    1. Threshold is always tricky with such questions. If we’re talking at least two races for governor, house or senate, I think this is not that likely to happen, nor is it likely to be very high on the list of issues in general. I’m on no.

    2. GPT-5-Pro says 23%, Sonnet says 18%. I’d probably say more like 15%. If you expand this so ‘a bunch of local races around potential sites’ counts including for ‘take by storm’ then I could go higher.

    3. Manifold is optimistic at 41%. I’ll adjust to 25% on that, they might especially have a better sense of what would count, but this particular AI issue ‘taking the US by storm’ that often seems like a stretch.

  10. Trump issues an unconstitutional executive order to ban state AI legislation.

    1. I love that they explicitly say it will be unconstitutional.

    2. I do agree that if he did it, it would be unconstitutional, although of course it will be 2026 so it’s possible he can Just Do Things and SCOTUS will shrug.

    3. Both GPT-5-Pro and Sonnet say 35% here. That feels high but I can definitely see this happening, I agree with Sonnet that it is ‘on brand.’ 25%?

    4. Manifold is at 19%. Okay, sure, I’ll accept that and creep fair down a bit.

Indeed, despite nothing ever happening, do many things come to pass. It would be cool to have my own bold predictions for 2026, but I think the baseline scenario is very much a boring ‘incremental improvements, more of the same with some surprising new capabilities, people who notice see big improvements but those who want to dismiss can still dismiss, the current top labs are still the top labs, a lot more impact than the economists think but nothing dramatic yet, safety and alignment look like they are getting better and for short term purposes they are, and investment is rising, but not in ways that give me faith that we’re making Actual Progress on hard problems.’

I do think we should expect at least one major vibe shift. Every time vibes shift, it becomes easy to think there won’t soon be another vibe shift. There is always another vibe shift, it is so over and then we are so back, until AGI arrives and perhaps then it really is over whether or not we are also so back. Two shifts is more likely than zero. Sometimes the shifts are for good reasons, usually it is not. The current ‘powers that be’ are unlikely to be the ones in place, with the same perspectives, at the end of 2026.


Putin OKs plan to turn Russian spacecraft into flying billboards

These are tough times for Russia’s civilian space program. In the last few years, Russia has cut back on the number of Soyuz crew missions it is sending to the International Space Station, and a replacement for the nearly 60-year-old Soyuz spacecraft remains elusive.

While the United States and China are launching more space missions than ever before, Russia’s once-dominant launch cadence is on a downhill slide.

Russia’s access to global markets dried up after Russian President Vladimir Putin launched the country’s invasion of Ukraine in February 2022. The fallout from the invasion killed several key space partnerships between Russia and Europe. Russia’s capacity to do new things in space seems to be focused on military programs like anti-satellite weapons.

The Roscosmos State Corporation for Space Activities, Russia’s official space agency, may have a plan to offset the decline. Late last month, Putin approved changes to federal laws governing advertising and space activities to “allow for the placement of advertising on spacecraft,” Roscosmos posted on its official Telegram account.

We’ve seen this before

The Russian State Duma, dominated by Putin loyalists, previously approved the amendments.

“According to the amendments, Roscosmos has been granted the right, effective January 1, 2026, to place advertising on space objects owned by both the State Corporation itself and federally,” Roscosmos said. “The amendments will create a mechanism for attracting private investment in Russian space exploration and reduce the burden on the state budget.”

The law requires that advertising symbols not affect spacecraft safety. The Russian government said it will establish a fee structure for advertising on federally owned space objects.

Roscosmos didn’t say this, but advertisers eligible for the offer will presumably be limited to Russia and its allies. Any ads from the West would likely violate sanctions.

Rocket-makers have routinely applied decals, stickers, and special paint jobs to their vehicles. This is a particularly popular practice in Russia. Usually, these logos represent customers and suppliers. Sometimes they honor special occasions, like the 60th anniversary of the first human spaceflight mission by Soviet cosmonaut Yuri Gagarin and the 80th anniversary of the end of World War II.


Marvel gets meta with Wonder Man teaser

Marvel Studios has dropped the first teaser for Wonder Man, an eight-episode miniseries slated for a January release, ahead of its panel at New York Comic Con this weekend.

Part of the MCU’s Phase Six, the miniseries was created by Destin Daniel Cretton (Shang-Chi and the Legend of the Ten Rings) and Andrew Guest (Hawkeye), with Guest serving as showrunner. It has been in development since 2022.

The comic book version of the character is the son of a rich industrialist who inherits the family munitions factory but is being crushed by the competition: Stark Industries. Baron Zemo (The Falcon and the Winter Soldier) then recruits him to infiltrate and betray the Avengers, giving him superpowers (“ionic energy”) via a special serum. He eventually becomes a superhero and Avengers ally, helping them take on Doctor Doom, among other exploits. Since we know Doctor Doom is the Big Bad of the two upcoming Avengers movies, a Wonder Man miniseries makes sense.

In the new miniseries, Yahya Abdul-Mateen II stars as Simon Williams, aka Wonder Man, an actor and stunt person with actual superpowers who decides to audition for the lead role in a superhero TV series—a reboot of an earlier Wonder Man incarnation. Demetrius Grosse plays Simon’s brother, Eric, aka Grim Reaper; Ed Harris plays Simon’s agent, Neal Saroyan; and Arian Moayed plays P. Cleary, an agent with the Department of Damage Control. Lauren Glazier, Josh Gad, Byron Bowers, Bechir Sylvain, and Manny McCord will also appear in as-yet-undisclosed roles.


AMD and Sony’s PS6 chipset aims to rethink the current graphics pipeline

It feels like it was just yesterday that Sony hardware architect Mark Cerny was first teasing Sony’s “PS4 successor” and its “enhanced ray-tracing capabilities” powered by new AMD chips. Now that we’re nearly five full years into the PS5 era, it’s time for Sony and AMD to start teasing the new chips that will power what Cerny calls “a future console in a few years’ time.”

In a quick nine-minute video posted Thursday, Cerny sat down with Jack Huynh, the senior VP and general manager of AMD’s Computing and Graphics Group, to talk about “Project Amethyst,” a co-engineering effort between both companies that was also teased back in July. And while that Project Amethyst hardware currently only exists in the form of a simulation, Cerny said that the “results are quite promising” for a project that’s still in the “early days.”

Mo’ ML, fewer problems?

Project Amethyst is focused on going beyond traditional rasterization techniques that don’t scale well when you try to “brute force that with raw power alone,” Huynh said in the video. Instead, the new architecture is focused on more efficient running of the kinds of machine-learning-based neural networks behind AMD’s FSR upscaling technology and Sony’s similar PSSR system.

From the same source. Two branches. One vision.

My good friend and fellow gamer @cerny and I recently reflected on our shared journey — symbolized by these two pieces of amethyst, split from the same stone.

Project Amethyst is a co-engineering effort between @PlayStation and… pic.twitter.com/De9HWV3Ub2

— Jack Huynh (@JackMHuynh) July 1, 2025

While that kind of upscaling currently helps let GPUs pump out 4K graphics in real time, Cerny said that the “nature of the GPU fights us here,” requiring calculations to be broken up into subproblems to be handled in a somewhat inefficient parallel process by the GPU’s individual compute units.

To get around this issue, Project Amethyst uses “neural arrays” that let compute units share data and process problems like a “single focused AI engine,” Cerny said. While the entire GPU won’t be connected in this manner, connecting small sets of compute units like this allows for more scalable shader engines that can “process a large chunk of the screen in one go,” Cerny said. That means Project Amethyst will let “more and more of what you see on screen… be touched or enhanced by ML,” Huynh added.
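To illustrate the contrast in the loosest possible terms (the tile sizes below are invented and bear no relation to AMD’s actual hardware), consider how many pieces a single 4K frame breaks into under each approach:

```python
# Toy illustration only; tile sizes and counts are invented, not AMD specs.
WIDTH, HEIGHT = 3840, 2160  # one 4K frame

def tile_count(tile_w: int, tile_h: int) -> int:
    """How many tiles of the given size cover the frame."""
    return (WIDTH // tile_w) * (HEIGHT // tile_h)

print(tile_count(120, 108))  # many small subproblems, one per compute unit: 640
print(tile_count(960, 540))  # a few large chunks, one per "neural array": 16
```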


It’s time for game developers to bring back the cheat code


Arcane hidden options can offer accessibility without confusing the “core” game experience.

For gamers of a certain age, gibberish character sequences like idkfa, torg, ABACABB, and UUDDLRLRBA are akin to long-lost magical incantations. They evoke an era when game developers routinely let players use cheat codes to customize their gameplay experience with everything from infinite health and instant level selection to full debug menus or anime-style giant-headed avatars. There were even external cheat devices that let players hack console games with cheat codes the developers never intended.

While the cheat code’s heyday is long in the past, the idea of letting players manipulate their gameplay experiences in similar ways is coming back into fashion for some developers. Last month, Square Enix announced that upcoming Switch 2 and Xbox ports of Final Fantasy VII Remake Intergrade would include new “streamlined progression” features. As the name implies, the new options menu will give players the opportunity to blaze through the game with infinite health, magic, and money, quicker leveling, maximum damage attacks, and more.

“Constant Max HP” is a funny way to spell and pronounce “god mode.” Credit: Reddit / Square Enix

While some responded negatively to what they derisively called a “cheat mode,” director Naoki Hamaguchi defended the new options in a recent interview with Automaton. “Personally, I like to try many different games just to keep myself up to date, but I don’t really have the time, so I only get so far,” he said. “I personally believe that, with digital entertainment today, the player should have the choice in how they interact with content. That’s why I pushed for it.”

He’s right. Players are responsible enough to know if, when, and how to use these kinds of options to help streamline their progress through a game. At the same time, I think many games would benefit from hiding these kinds of gameplay-altering options behind the obscurity of old-fashioned cheat codes, rather than tempting built-in menus.

Developer intent

Final Fantasy VII Remake is far from the first modern game to offer players a simple option for friction-free progress. In Mass Effect 3 it’s Narrative Mode. In Nier Automata it’s Auto Mode. In Assassin’s Creed Origins it’s Discovery Mode. In Death Stranding it’s just Very Easy Mode. In a game like Celeste it’s a whole menu of accessibility options that allow for fine-tuning of the game’s precision platforming rules.

In each case, there’s a recognition that some players might want to explore a game’s world—to experience the characters, art, and dialogue that the developers worked so hard to craft—without struggling through mechanical reflex tests or grindy, repetitive challenges. Even players who enjoy the “intended” difficulty most of the time might want to treat the game like a giant sandbox on subsequent playthroughs, or quickly skip to their favorite part when revisiting years later.

As Penny Arcade memorably put it back in 2005: “I play games to enter a trance state and experience other lives, [others] play them to defeat the designer of the game by proxy. That’s a significant distinction.”

But there are some games where this kind of built-in difficulty manipulation would be antithetical to a game’s very nature. In Baby Steps, for example, struggling with the game’s controls and suffering when you lose significant progress to an errant step is very much the point.

A “perfect balance” toggle would completely ruin the impact of this Baby Steps moment.

A version of Baby Steps where you could plow through to the end with perfect balance or frequent save points would ruin the experience in some crucial ways. Just offering this kind of “Exploration Mode” in the options menu would undercut the message the developers are trying to impart, giving players an easy out in a game where those don’t and shouldn’t exist.

FF7 Remake‘s Hamaguchi acknowledged a similar issue in discussing why he wouldn’t initially offer “streamlined progression” options for the upcoming third game in the remake series. “If we were to add it to the third installment at launch, it would probably spark controversy,” Hamaguchi said. “We’d risk disrupting the experience for fans who have been waiting the longest and deserve to enjoy it the most (through spoilers coming out early and similar).”

This is where I think the added friction of the old-fashioned cheat code can come in handy. While a tempting “easy mode” menu option can weaken the impact of a game’s “intended” design, a hidden cheat code is much more clearly set apart as an unrelated option intended for tinkerers and fun-seekers.

Making “easy mode” harder to find

The difference comes down to context. Back in the day, players usually found cheat codes from a source outside of the game itself, passing around the arcane knowledge through online forums, printed magazines, or schoolyard rumors. That outside sourcing made it clear that, while these codes were obviously part of the game in a sense, they were also somehow separate from the core gameplay experience. Even the term “cheat code” connotes the idea that you’re getting away with something by evading the game’s built-in rules.

If you want to cheat, you should have to look at an eye-searing wall of monospaced text first. Credit: GameFAQs

An ever-present “god mode” toggle or “accessibility” menu, on the other hand, presents those options as contextually valid and at least somewhat intended ways for different players to experience the same base game. And that’s perfectly fine in many cases; as Hamaguchi pointed out, sometimes players will just want to experience the story as quickly as possible. But in games where the difficulty is integral to the developer’s intent, putting that kind of option upfront can confuse the message and confuse the player as to which is the most “correct” way to play.

Toggling an “easy mode” through a menu is like flipping a light switch that the developers left invitingly available in a little-used corner of the house. Tracking down a cheat code, on the other hand, feels more like going to the hardware store and asking for help to install your own light switch. The effect is the same, but the path to get there makes all the difference.

In modern PC gaming, mods often offer that same kind of context change. This fanmade Baby Steps mod offers the ability to fly to any location and save at any time, completely ruining the game as it was designed. But players that go to the trouble of seeking out, downloading, and installing that mod obviously have no one to blame for that bastardization but themselves.

Look into my eyes. Credit: id Software

Cheat codes also offer developers additional options for how and when they present new options to players. In UFO 50, for instance, players can discover many of the game’s gameplay-altering Terminal Codes by beating a subgame and watching the credits. Even outside the game, a developer can keep a cheat code’s very existence hidden for months or even years after a game’s launch, ensuring that early adopters experience the game as designed (this happened all the time in the pre-Internet era of game magazines).

Trying to bring back that era of hidden knowledge might seem silly in an age where Internet sleuths are data-mining games before they even come out. But I still think that a revival of the humble video game cheat code can help offer fun and helpful gameplay options for those who want them while protecting the intent of today’s video game designers.

Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from University of Maryland. He once wrote a whole book about Minesweeper.


Rubik’s Cube gets a $299 update, complete with IPS screens and its own apps

The Rubik’s Cube has been reinvented with more games and many more screens for much more money.

What has long been cherished as a simple toy yet complex puzzle requiring nothing but a healthy amount of twisting, turning, and patience has been rebooted for the 21st century. Naturally, that calls for a few dashes of technology.

Differing from the original Rubik’s Cube, which has six faces that each contain a 3×3 grid, the Rubik’s WOWCube, made available for preorder today, as spotted by The Verge, has six faces with 2×2 grids.

Rather than a solid-colored sticker, each of the toy’s 24 squares is a 240×240 IPS display. The cube itself is composed of eight “cubicle modules,” as Cubios, the company behind the toy, calls them. Each module includes three of those IPS screens and a dedicated SoC. As a Cubios support page explains:

Our patented magnetic connectors allow the modules to maintain perfect electrical contact and seamless data flow between them, no matter how the cube is rotated. This ensures that data can be transferred between autonomous modules on the fly, enabling data sharing and distributing low voltage power across the WOWCube …
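As a minimal sketch of the layout described above (this just models the published numbers, not Cubios’ actual firmware or SDK):

```python
# Sketch of the published layout: 8 modules x 3 displays = 24 screens of 240x240.
# Purely illustrative; not Cubios' real software.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Display:
    width: int = 240
    height: int = 240

@dataclass
class Module:
    soc_id: int  # each module carries its own system-on-chip
    displays: List[Display] = field(default_factory=lambda: [Display() for _ in range(3)])

cube = [Module(soc_id=i) for i in range(8)]
print(sum(len(m.displays) for m in cube))  # 24 displays in total
```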

Each of the 24 displays can be set to show a solid color for solving a simpler, but still captivating, Rubik’s puzzle. Alternatively, the screens can be twisted and turned to play dozens of different games, including Block Buster, Space Invaders, and Jewel Hunter.

The WOWCube weighs 11.29 ounces. Credit: Cubios

Also part of the toy is a gyroscope, 6-axis accelerometer, and eight speakers. Cubios claims the integrated battery can last for up to seven hours before needing a recharge.

In order to add games or other apps to the WOWCube, you must download the WOWCube Connect iOS or Android app, pair the toy with your phone over Bluetooth, and then use the mobile app to download games onto the WOWCube.

Currently, the WOWCube’s online app store lists 47 games; some cost money to download, and some aren’t available yet. The WOWCube runs its own operating system, dubbed CubiOS, and Cubios (the company) offers a free DevKit.

ai-#137:-an-openai-app-for-that

AI #137: An OpenAI App For That

OpenAI is making deals and shipping products. They locked in their $500 billion valuation and then got 10% of AMD in exchange for buying a ton of chips. They gave us the ability to ‘chat with apps’ inside of ChatGPT. They walked back their insane Sora copyright and account deletion policies and are buying $50 million in marketing. They’ve really got a lot going on right now.

Of course, everyone else also has a lot going on right now. It’s AI. I spent the last weekend at a great AI conference at Lighthaven called The Curve.

The other big news that came out this morning is that China is asserting sweeping extraterritorial control over rare earth metals. This is likely China’s biggest card short of full trade war or worse, and it is being played in a hugely escalatory way that America obviously can’t accept. Presumably this is a negotiating tactic, but when you put something like this on the table and set it in motion, it can get used for real whether or not you planned on using it. If they don’t back down, no deal is reached, and China attempts to enforce this for real, then things could get very ugly, very quickly, for all concerned.

For now the market (aside from mining stocks) is shrugging this off, as part of its usual faith that everything will work itself out. I wouldn’t be so sure.

  1. Language Models Offer Mundane Utility. If you didn’t realize, it’s new to you.

  2. Language Models Don’t Offer Mundane Utility. Some tricky unsolved problems.

  3. Huh, Upgrades. OpenAI offers AgentKit and other Dev Day upgrades.

  4. Chat With Apps. The big offering is Chat With Apps, if execution was good.

  5. On Your Marks. We await new results.

  6. Choose Your Fighter. Claude Code and Codex CLI both seem great.

  7. Fun With Media Generation. Sora backs down, Grok counteroffers with porn.

  8. Deepfaketown and Botpocalypse Soon. Okay, yeah, we have a problem.

  9. You Drive Me Crazy. How might we not do that?

  10. They Took Our Jobs. I mean we all know they will, but did they do it already?

  11. The Art of the Jailbreak. Don’t you say his name.

  12. Get Involved. Request for information, FAI fellowship, OpenAI grants.

  13. Introducing. CodeMender, Google’s AI that will ‘automatically’ fix your code.

  14. In Other AI News. Alibaba robotics, Anthropic business partnerships.

  15. Get To Work. We could have 7.4 million remote workers, or some Sora videos.

  16. Show Me the Money. The deal flow is getting a little bit complex.

  17. Quiet Speculations. Ah, remembering the old aspirations.

  18. The Quest for Sane Regulations. Is there a deal to be made? With who?

  19. Chip City. Demand is going up. Is that a lot? Depends on perspective.

  20. The Race to Maximize Rope Market Share. Sorry, yeah, this again.

  21. The Week in Audio. Notes on Sutton, history of Grok, Altman talks to Cheung.

  22. Rhetorical Innovation. People draw the ‘science fiction’ line in odd places.

  23. Paranoia Paranoia Everybody’s Coming To Test Me. Sonnet’s paranoia is correct.

  24. Aligning a Smarter Than Human Intelligence is Difficult. Hello, Plan E.

  25. Free Petri Dish. Anthropic open sources some of its alignment tests.

  26. Unhobbling The Unhobbling Department. Train a model to provide prompting.

  27. Serious People Are Worried About Synthetic Bio Risks. Satya Nadella.

  28. Messages From Janusworld. Ted Chiang does not understand what is going on.

  29. People Are Worried About AI Killing Everyone. Modestly more on IABIED.

  30. Other People Are Excited About AI Killing Everyone. As in the successionists.

  31. So You’ve Decided To Become Evil. Emergent misalignment in humans.

  32. The Lighter Side. Oh to live in the fast lane.

Scott Aaronson explains that yes, when GPT-5 helped his research, he ‘should have’ not needed to consult GPT-5 because the answer ‘should have’ been obvious to him, but it wasn’t, so in practice this does not matter. That’s how this works. There are 100 things that ‘should be’ obvious, you figure out 97 of them, then the other 3 take you most of the effort. If GPT-5 can knock two of those three out for you in half an hour each, that’s a huge deal.

A ‘full automation’ of the research loop will be very hard, and get stopped by bottlenecks, but getting very large speedups in practice only requires that otherwise annoying problems get solved. Here there is a form of favorable selection.

I have a ‘jagged frontier’ of capabilities, where I happen to be good at some tasks (specific and general) and bad at others. The AI is too, and I ask it mostly about the tasks where I suck, so its chances of helping kick in long before it is better than I am.

Eliezer incidentally points out one important use case for an LLM, which is the avoidance of spoilers – you can ask a question about media or a game or what not, and get back the one bit (or few bits) of information you want, without other info you want to avoid. Usually. One must prompt carefully to avoid blatant disregard of your instructions.

At some point I want to build a game selector, that takes into consideration a variety of customizable game attributes plus a random factor (to avoid spoilers), and tells you what games to watch in a given day, or which ones to watch versus skip. Or similar with movies, where you give it feedback and it simply says yes or no.

Patrick McKenzie finds GPT-5 excellent at complicated international tax structuring. CPAs asked for such information responded with obvious errors, whereas GPT-5 was at least not obviously wrong.

Ask GPT-5 Thinking to find errors in Wikipedia pages, and almost always it will find one that will check out, often quite a serious one.

Remember last week introduced us to Neon, the app that offered to pay you for letting them record all your phone calls? Following in the Tea tradition of ‘any app that seems like a privacy nightmare as designed will also probably be hacked as soon as it makes the news,’ Neon exposed users’ phone numbers, call records and transcripts to pretty much everyone. They wisely took the app offline.

From August 2025, an Oxford and Cambridge paper: No LLM Solved Yu Tsumura’s 554th Problem.

Anthropic power users report hitting their new limits on Opus use rather early, including on Max ($200/month) subscriptions, due to limit changes announced back in July taking effect. Many of them are understandably very not happy about this.

It’s tricky. People on the $200/month plan were previously abusing the hell out of the plan, often burning through what would be $1000+ in API costs per day due to how people use Claude Code, which is obviously massively unprofitable for Anthropic. The 5% that were going bonanza were ruining it for everyone. But it seems like the new limit math isn’t mathing, people using Claude Code are sometimes hitting limits way faster than they’re supposed to hit them, probably pointing to measurement issues.

If you’re going to have ChatGPT help you write your press release, you need to ensure the writing is good and tone down the LLMisms like ‘It isn’t X, it’s Y.’ This includes you, OpenAI.

Bartosz Naskrecki: GPT-5-Pro solved, in just 15 minutes (without any internet search), the presentation problem known as “Yu Tsumura’s 554th Problem.”

prinz: This paper was released on August 5, 2025. GPT-5 was released 2 days later, on August 7, 2025. Not enough time to add the paper to the training data even if OpenAI really wanted to.

I’d be shocked if it turned out that it was in the training data for GPT-5 Pro, but not o3-Pro, o3, o4-mini, or any of the non-OpenAI models used in the paper.

A hint for anyone in the future, if you see someone highlighting that no LLM can solve someone’s 554th problem, that means they presumably did solve the first 553, probably a lot of the rest of them too, and are probably not that far from solving this one.

Meanwhile, more upgrades, as OpenAI had another Dev Day. There will be an AMA about that later today. Sam Altman did an interview with Ben Thompson.

Codex can now be triggered directly from Slack, there is a Codex SDK (initially in TypeScript), and there is a GitHub Action to drop Codex into your CI/CD pipeline.

GPT-5 Pro is available in the API, at the price of $15/$200 per million input and output tokens, versus $20/$80 for o3-pro or $1.25/$10 for base GPT-5 (which is actually GPT-5-Thinking) or GPT-5-Codex.

[EDIT: I originally was confused by this naming convention, since I haven’t used the OpenAI API and it had never come up.]
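
For a concrete sense of what those rates mean per request, here is a quick back-of-the-envelope cost comparison in Python; the per-million-token prices are the ones quoted above, while the token counts are invented purely for illustration.

```python
# Rough cost comparison at the listed per-million-token API prices.
# The token counts below are hypothetical, chosen only for illustration.

PRICES = {  # (input $/1M tokens, output $/1M tokens), as quoted above
    "gpt-5-pro": (15.00, 200.00),
    "o3-pro": (20.00, 80.00),
    "gpt-5 (base/thinking)": (1.25, 10.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request at the quoted rates."""
    in_rate, out_rate = PRICES[model]
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# Example: a 20k-token prompt with a 5k-token answer.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 20_000, 5_000):.2f}")
# gpt-5-pro comes out to $0.30 + $1.00 = $1.30 for such a request,
# versus $0.80 for o3-pro and about $0.075 for base GPT-5.
```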

You can now get GPT-5 outputs 40% faster at twice the price, if you want that.

AgentKit is for building, deploying, and optimizing agentic workflows; Dan Shipper compares it to Zapier. They give you ChatKit, a WYSIWYG Agent Builder, and Guardrails and Evals: ChatKit here or demo on a map here, guide here, blogpost here. The (curated by OpenAI) reviews are raving but I haven’t heard reports from anyone trying it in the wild yet. It is hard to tell how big a deal this is yet, but practical gains, especially for relatively unskilled agent builders, could be dramatic.

The underlying agent tech has to be good enough to make it worth building them. For basic repetitive tasks that can be sandboxed that time has arrived. Otherwise, that time will come, but it is not clear exactly when.

Pliny offers us the ChatKit system prompt, over 9000 words.

Greg Brockman: 2025 is the year of agents.

Daniel Eth (quoting from AI 2027):

Here’s a master Tweet with links to various OpenAI Dev Day things.

OpenAI introduced Chat With Apps, unless you are in the EU.

Initial options are Booking.com, Canva, Coursera, Expedia, Figma, Spotify and Zillow. They promise more options soon.

The interface seems to be easter egg based? As in, if you type one of the keywords for the apps, then you get to trigger the feature, but it’s not otherwise going to give you a dropdown to tell you what the apps are. Or the chat might suggest one unprompted. You can also find them under Apps and Connections in settings.

Does this give OpenAI a big edge? They are first mover on this feature, and it is very cool especially if many other apps follow, assuming good execution. The question is, how long will it take Anthropic, Google and xAI to follow suit?

Yuchen Jin: OpenAI’s App SDK is a genius move.

The goal: make ChatGPT the default interface for everyone, where you can talk to all your apps. ChatGPT becomes the new OS, the place where people spend most of their time.

Ironically, Anthropic invented MCP, but it makes OpenAI unbeatable.

Emad: Everyone will do an sdk though.

Very easy to plugin as just mcp plus html.
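
To illustrate what ‘just MCP plus HTML’ amounts to in practice, here is a minimal sketch of an MCP tool server using the Python SDK’s FastMCP quickstart pattern; the server name, tool, and data are invented, and whatever app registration ChatGPT layers on top of this is assumed rather than shown.

```python
# Minimal MCP server sketch (assumes: pip install mcp).
# The tool below is a made-up example; a real "app" would wrap an
# actual service API and return data the client can render as HTML/UI.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-listings")  # hypothetical server name

@mcp.tool()
def search_listings(city: str, max_price: int) -> list[dict]:
    """Return toy real-estate listings under max_price in a city."""
    fake_db = [
        {"city": "Austin", "price": 450_000, "beds": 3},
        {"city": "Austin", "price": 700_000, "beds": 4},
    ]
    return [x for x in fake_db if x["city"] == city and x["price"] <= max_price]

if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio by default
```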

Sonnet’s assessment is that it will take Anthropic 3-6 months to copy this, depending on desired level of polish, and recommends moving fast, warning that relying on basic ‘local MCP in Claude Desktop’ would be a big mistake. I agree. In general, Anthropic seems to be dramatically underinvesting in UI and feature sets for Claude, and I realize it’s not their brand but they need to up their game here. It’s worth it, the core product is great but people need their trinkets.

But then I think Anthropic should be fighting more for consumer than it is, at least if they can hire for that on top of their existing strategies and teams now that they’ve grown so much. It’s not that much money, and it beyond pays for itself in the next fundraising round.

Would the partners want to bother with the required extra UI work given Claude’s smaller user base? Maybe not, but the value is high enough that Anthropic should obviously (if necessary) pay partners for the engineering time to get them to do it, at least for the core wave of top apps. It’s not much.

Google and xAI have more missing components, so a potentially longer path to getting there, but potentially better cultural fits.

Ben Thompson of course approves of OpenAI’s first mover platform strategy, here and with things like instant checkout. The question is largely: Will the experience be good? The whole point is to make the LLM interface more than make up for everything else and make it all ‘just work.’ It’s too early to know if they pulled that off.

Ben calls this the ‘Windows for AI’ play and Altman affirms he thinks most people will want to focus on having one AI system across their whole life, so that’s the play, although Altman says he doesn’t expect winner-take-all on the consumer side.

Request for a benchmark: Eliezer Yudkowsky asks for CiteCheck, where an LLM is given a claim with references and the LLM checks to see if the references support the claim. As in, does the document state or very directly support the exact claim it is being cited about, or only something vaguely related? This includes tracking down a string of citations back to the original source.
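
A bare-bones harness for that kind of eval is easy to sketch; the version below assumes the OpenAI Python client, a placeholder model name, and a hypothetical labeled dataset, and the judging prompt is one plausible framing rather than anything Eliezer specified.

```python
# Sketch of a CiteCheck-style eval: does the cited passage actually
# support the claim? Model name and dataset are placeholders.
from openai import OpenAI

client = OpenAI()

def cite_check(claim: str, cited_passage: str, model: str = "gpt-5") -> bool:
    """Ask the model whether the passage directly supports the claim."""
    prompt = (
        "Claim:\n"
        f"{claim}\n\n"
        "Cited passage:\n"
        f"{cited_passage}\n\n"
        "Does the passage state or very directly support the exact claim, "
        "rather than something only vaguely related? Answer SUPPORTED or "
        "NOT SUPPORTED."
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return "NOT SUPPORTED" not in resp.choices[0].message.content.upper()

# Scoring loop over a hypothetical labeled dataset of (claim, passage, label).
def score(dataset: list[tuple[str, str, bool]]) -> float:
    correct = sum(cite_check(c, p) == label for c, p, label in dataset)
    return correct / len(dataset)
```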

Test of hard radiology diagnostic cases suggests that if you use current general models for this, they don’t measure up to radiologists. As OP says, we are getting there definitely, which I think is a much better interpretation than ‘long way to go,’ in terms of calendar time. I’d also note that hard (as in tricky and rare) cases tend to be where AI relatively struggles, so this may not be representative.

Claude Sonnet 4.5 got tested out in the AI Village. Report is that it gave good advice, was good at computer use, not proactive, and still experienced some goal drift. I’d summarize as solid improvement over previous models but still a long way to go.

Where will Sonnet 4.5 land on the famous METR graph? Peter Wildeford forecasts a 2-4 hour time horizon, and probably above GPT-5.

I hear great things about both Claude Code and Codex CLI, but I still haven’t found time to try them out.

Gallabytes: finally using codex cli with gpt-5-codex-high and *goddamn* this is incredible. I ask it to do stuff and it does it.

I think the new research meta is probably to give a single codex agent total control over whatever your smallest relevant unit of compute is & its own git branch?

Will: curious abt what your full launch command is.

Gallabytes: `codex` I’m a boomer

Olivia Moore is not impressed by ChatGPT Pulse so far, observes it has its uses but it needs polish. That matches my experience, I have found it worth checking but largely because I’ve been too lazy to come up with better options.

Well, that deescalated quickly. Last week I was completely baffled at OpenAI’s seemingly completely illegal and doomed copyright strategy for Sora of ‘not following the law,’ and this week Sam Altman has decided to instead follow the law.

Instead of an ‘ask nicely and who knows you might get it’ opt-out rule, they are now moving to an opt-in rule, including giving rights holders granular control over generation of characters, so they can decide which ways their characters can and can’t be used. This was always The Way.

Given the quick fold, there are several possibilities for what happened.

  1. OpenAI thought they could get away with it, except for those meddling kids, laws, corporations, creatives and the public. Whoops, lesson learned.

  2. OpenAI was testing the waters to see what would happen, thinking that if it went badly they could just say ‘oops,’ and have now said oops.

  3. OpenAI needed more time to get the ability to filter the content, log all the characters and create the associated features.

  4. OpenAI used the first week to jumpstart interest on purpose, to showcase how cool their app was to the public and also rights owners, knowing they would probably need to move to opt-in after a bit.

My guess is it was a mix of these motivations. In any case, that issue is dealt with.

OpenAI plans to share some Sora revenue with rightsholders. Generations cost money, and it seems there are more of them than OpenAI expected, including for ‘very small audiences,’ which I’m guessing often means one person.

Sora and Sora 2 Pro are now in the API, max clip size 12 seconds. They’re adding GPT-Image-1-mini and GPT-realtime-mini for discount pricing.

Sora the social network is getting flexibility on cameo restrictions you can request, letting you say (for example) ‘don’t say this word’ or ‘don’t put me in videos involving political commentary’ or ‘always wear this stupid hat’ via the path [edit cameo > cameo preferences > restrictions].

They have fixed the weird decision that deleting your Sora account used to require deleting your ChatGPT account. Good turnaround on that.

Roon: seems like sora is producing content inventory for tiktok with all the edits of gpus and sam altman staying on app and the actual funny gens going on tiktok and getting millions of views.

not a bad problem to have at an early stage obviously but many times the watermark is edited away.

It is a good problem to have if it means you get a bunch of free publicity and it teaches people Sora exists and they want in. That can be tough if they edit out the watermark, but word will presumably still get around some.

It is a bad problem to have if all the actually good content goes to TikTok and is easier to surface for the right users on TikTok because it has a better algorithm with a lot richer data on user preferences? Why should I wade through the rest to find the gems, assuming there are indeed gems, if it is easier to do that elsewhere?

This also illustrates that the whole ‘make videos with and including and for your friends’ pitch is not how most regular people roll. The killer app, if there is one, continues to be generically funny clips or GTFO. If that’s the playing field, then you presumably lose.

Altman says there’s a bunch of ‘send this video to my three friends’ and I press X to doubt but even if true and even if it doesn’t wear off quickly he’s going to have to charge money for those generations.

Roon also makes this bold claim.

Roon: the sora content is getting better and I think the videos will get much funnier when the invite network extends beyond the tech nerds.

it’s fun. it adds a creative medium that didn’t exist before. people are already making surprising & clever things on there. im sure there are some downsides but it makes the world better.

I do presume average quality will improve if and when the nerd creation quotient goes down, but there’s the claim here that the improvement is already underway.

So let’s test that theory. I’m pre-registering that I will look at the videos on my own feed (on desktop) on Thursday morning (today as you read this), and see how many of them are any good. I’m committing to looking at the first 16 posts in my feed after a reload (so the first page and then scrolling down once).

We got in order:

  1. A kid unwrapping the Epstein files.

  2. A woman doing ASMR about ASMR.

  3. MLK I have a dream on Sora policy violations.

  4. A guy sneezes at the office, explosion ensues.

  5. Content violation error costume at Spirit Halloween.

  6. MLK I have a dream on Sora changing its content violation policy.

  7. Guy floats towards your doorbell.

  8. Fire and ice helix.

  9. Altman saying if you tap on the screen nothing will happen.

  10. Anime of Jesus flipping tables.

  11. Another anime of Jesus flipping tables.

  12. MLK on Sora content rules needing to be less strict.

  13. Anime boy in a field of flowers, looked cool.

  14. Ink of the ronin.

  15. Jesus attempts to bribe Sam Altman to get onto the content violation list.

  16. A kid unwrapping an IRS bill (same base video as #1).

Look. Guys. No. This is lame. The repetition level is very high. The only thing that rose beyond ‘very mildly amusing’ or ‘cool visual, bro’ was #15. I’ll give the ‘cool visual, bro’ tag to #8 and #13, but both formats would get repetitive quickly. No big hits.

Olivia Moore says Sora became her entire feed on Instagram and TikTok in less than a week, which caused me to preregister another experiment, which is I’ll go on TikTok (yikes, I know, do not use the For You page, but this is For Science) with a feed previously focused on non-AI things (because if I was going to look at AI things I wouldn’t do it on TikTok), and see how many posts it takes to see a Sora video, more than one if it’s quick.

I got 50 deep (excluding ads, and don’t worry, that takes less than 5 minutes) before I stopped, and am 99%+ confident there were zero AI generated posts. AI will take over your feed if you let it, but so will videos of literally anything else.

Introducing Grok Imagine v0.9 on desktop. Justine Moore is impressed. It’s text-to-image-to-video. I don’t see anything impressive here (given Sora 2, without that yeah the short videos seem good) but it’s not clear that I would notice. Thing is, 10 seconds from Sora already wasn’t much, so what can you do in 6 seconds?

(Wait, some of you, don’t answer that.)

Saoi Sayre: Could you stop the full anatomy exposure on an app you include wanting kids to use? The kids mode feature doesn’t block it all out either. Actually seems worse now in terms of what content can’t be generated.

Nope, we’re going with full anatomy exposure (link has examples). You can go full porno, so long as you can finish in six seconds.

Cat Schrodinger: Nota bene: when you type “hyper realistic” in prompts, it gives you these art / dolls bc that’s the name of that art style; if you want “real” looking results, type something like “shot with iphone 13” instead.

You really can’t please all the people all the time.

Meanwhile back in Sora land:

Roon: the sora content is getting better and I think the videos will get much funnier when the invite network extends beyond the tech nerds.

That’s one theory, sure. Let’s find out.

Taylor Swift using AI video to promote her new album.

Looking back on samples of the standard super confident ‘we will never get photorealistic video from short text prompts’ from three years ago. And one year ago. AI progress comes at you fast.

Via Samo Burja, Antonio Garcia Martinez points out an AI billboard in New York and calls it ‘the SF-ification of New York continues.’

I am skeptical because I knew instantly exactly which billboard this was, at 31st and 7th, by virtue of it being the only such large size billboard I have seen in New York. There are also some widespread subway campaigns on smaller scales.

Emily Blunt, whose movies have established her as someone you should both watch and listen to, is very much against this new ‘AI actress’ Tilly Norwood.

Clayton Davis: “Does it disappoint me? I don’t know how to quite answer it, other than to say how terrifying this is,” Blunt began. When shown an image of Norwood, she exclaimed, “No, are you serious? That’s an AI? Good Lord, we’re screwed. That is really, really scary. Come on, agencies, don’t do that. Please stop. Please stop taking away our human connection.”

Variety tells Blunt, “They want her to be the next Scarlett Johansson.”

She steadily responds, “but we have Scarlett Johansson.”

I think that the talk of Tilly Norwood in particular is highly premature and thus rather silly. To the extent it isn’t premature it of course is not about Tilly in particular, there are a thousand Tilly Norwoods waiting to take her place, they just won’t come on a bus is all.

Robin Williams’ daughter Zelda tells fans to stop sending her AI videos of Robin, and indeed to stop creating any such videos entirely, and she does not hold back.

Zelda Williams: To watch the legacies of real people be condensed down to ‘this vaguely looks and sounds like them so that’s enough’, just so other people can churn out horrible TikTok slop puppeteering them is maddening.

You’re not making art, you’re making disgusting, over-processed hotdogs out of the lives of human beings, out of the history of art and music, and then shoving them down someone else’s throat hoping they’ll give you a little thumbs up and like it. Gross.

And for the love of EVERY THING, stop calling it ‘the future,’ AI is just badly recycling and regurgitating the past to be re-consumed. You are taking in the Human Centipede of content, and from the very very end of the line, all while the folks at the front laugh and laugh, consume and consume.

“I am not an impartial voice in SAG’s fight against AI,” Zelda wrote on Instagram at the time. “I’ve witnessed for YEARS how many people want to train these models to create/recreate actors who cannot consent, like Dad. This isn’t theoretical, it is very very real.

I’ve already heard AI used to get his ‘voice’ to say whatever people want and while I find it personally disturbing, the ramifications go far beyond my own feelings. Living actors deserve a chance to create characters with their choices, to voice cartoons, to put their HUMAN effort and time into the pursuit of performance. These recreations are, at their very best, a poor facsimile of greater people, but at their worst, a horrendous Frankensteinian monster, cobbled together from the worst bits of everything this industry is, instead of what it should stand for.

Neighbor attempts to supply AI videos of a dog on their lawn in a dispute, target reverse engineers it with nano-banana and calls him out on it. Welcome to 2025.

Garry Tan worries about YouTube being overrun with AI slop impersonators. As he points out, this stuff is (at least for now) very easy to identify. This is about Google deciding not to care. It is especially troubling that at least one person reports he clicks the ‘don’t show this channel’ button and that only pops up another one. That means the algorithm isn’t doing its job on a very basic level, doing this repeatedly should be a very clear ‘don’t show me such things’ signal.

A fun game is when you point out that someone made the same decision ChatGPT would have made, such as choosing the nickname ‘Charlamagne the Fraud.’ Sometimes the natural answer is the correct one, or you got it on your own. The game gets interesting only when it’s not so natural to get there in any other way.

Realtors are using AI to clean up their pics, and the AIs are taking some liberties.

Dee La Shee Art: So I’m noticing, as I look at houses to rent, that landlords are using AI to stage the pictures but the AI is also cleaning up the walls, paint, windows and stuff in the process so when you go look in person it looks way more worn and torn than the pics would show.

Steven Adler offers basic tips to AI labs for reducing chatbot psychosis.

  1. Don’t lie to users about model abilities. This is often a contributing factor.

  2. Have support staff on call. When a person in trouble reaches out, be able to identify this and help them, don’t only offer a generic message.

  3. Use the safety tooling you’ve built, especially classifiers.

  4. Nudge users into new chat sessions.

  5. Have a higher threshold for follow-up questions.

  6. Use conceptual search.

  7. Clarify your upsell policies.

I’m more excited by 2, 3 and 4 here than the others, as they seem to have the strongest cost-benefit profile.

Adler doesn’t say it, but not only is the example from #2 at best the support system copy-and-pasting boilerplate completely mismatched to the circumstances, there’s a good chance (based only on its content details) that it was written by ChatGPT, and if that’s true then it might as well have been.

For #3, yeah, flagging these things via classifiers is kind of easy, because there’s no real adversary. No one (including the AI) is trying to hide what is happening from an outside observer. In the Allan example OpenAI’s own classifiers already flag 83%+ of the messages in the relevant conversations as problematic in various ways, and Adler’s classifiers give similar results.
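
As an illustration of how low the bar is, a minimal flagging pass can be a single call to an off-the-shelf moderation classifier; the sketch below uses OpenAI’s moderation endpoint as a stand-in for the purpose-built classifiers Adler describes, and the routing comment is an assumption about how you would use the scores.

```python
# Minimal sketch: flag conversation messages with an off-the-shelf
# moderation classifier (a stand-in for purpose-built safety classifiers).
from openai import OpenAI

client = OpenAI()

def flagged_fraction(messages: list[str]) -> float:
    """Fraction of messages the moderation classifier flags."""
    flags = 0
    for text in messages:
        result = client.moderations.create(
            model="omni-moderation-latest",
            input=text,
        ).results[0]
        if result.flagged:
            flags += 1
    return flags / len(messages) if messages else 0.0

# In a real pipeline you would also look at per-category scores
# (e.g. self-harm) and route high-scoring conversations to a human
# support queue rather than just counting them.
```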

The most obvious thing to do is to not offer a highly sycophantic model like GPT-4o. OpenAI is fully aware, at this point, that users need to be gently moved to GPT-5, but the users with the worst problems are fighting back. Going forward, we can avoid repeating the old mistakes, and Claude 4.5 is a huge step forward on sycophancy by all reports, so much so that this may have gone overboard and scarred the model in other ways.

Molly Hickman: A family member’s fallen prey to LLM sycophancy. Basically he’s had an idea and ChatGPT has encouraged him to the point of instructing him to do user testing and promising that he’ll have a chance to pitch this idea to OpenAI on Oct 15.

I know I’ve seen cases like this in passing. Does anyone have examples handy? of an LLM making promises like this and behaving as if they’re collaborators?

Aaron Bergman: From an abstract perspective I feel like it’s underrated how rational this is. Like the chatbot is better than you at almost everything, knows more than you about almost everything, seems to basically provide accurate info in other domains.

If you don’t realize that LLMs have the sycophancy problem and will totally mislead people in these ways, yeah, it’s sadly easy to understand why someone might believe it, especially with it playing off what you say and playing into your own personal delusions. Of course, ‘doing user testing’ is far from the craziest thing to do, presumably this will make it clear his idea is not good.

As previously reported, OpenAI’s latest strategy for fighting craziness is to divert sensitive conversations to GPT-5 Instant, which got new training to better handle such cases. They say ‘ChatGPT will continue to tell users what model is active when asked’ but no that did not make the people happy about this. There isn’t a win-win fix to this conflict, either OpenAI lets the people have what they want despite it being unhealthy to give it to them, or they don’t allow this.

Notice a key shift. We used to ask, will AI impact the labor market?

Now we ask in the past tense, whether and how much AI has already impacted the labor market, as in this Budget Lab report. Did they already take our jobs?

They find no evidence that this is happening yet and dismiss the idea that ‘this time is different.’ Yes, they say, occupational mix changes are unusually high, but they cite pre-existing trends. As they say, ‘better data is needed,’ as all this would only pick up large obvious changes. We can agree that there haven’t been large obvious widespread labor market impacts yet.

I do not know how many days per week humans will be working in the wake of AI.

I would be happy to bet that the answer is not going to be four.

Unusual Whales: Nvidia, $NVDA, CEO Jensen Huang says AI will ‘probably’ bring 4-day work week.

Roon: 😂😂😂

Steven Adler: It’s really benevolent of AI to be exactly useful enough that we get 1 more day of not needing to labor, but surely no more than that.

It’s 2025. You can just say things, that make no sense, because they sound nice to say.

Will computer science become useless knowledge? Arnold Kling challenges the idea that one might want to know how logic gates worked in order to code now that AI is here, and says maybe the cheaters in Jain’s computer science course will end up doing better than those who play it straight.

My guess is that, if we live in a world where these questions are relevant (which we may well not), that there will be some key bits of information that are still highly valuable, such as logic gates, and that the rest will be helpful but less helpful than it is now. A classic CS course will not be a good use of time, even more so than it likely isn’t now. Instead, you’ll want to be learning as you go. But it will be better to learn in class than to never attempt to learn at all, as per the usual ‘AI is the best tool’ rule.

A new company I will not name is planning on building ‘tinder for jobs’ and flooding the job application zone even more than everyone already does.

AnechoicMdiea: Many replies wondering why someone would fund such an obvious social pollutant as spamming AI job applications and fake cover letters. The answer is seen in one of their earlier posts – after they get a user base and spam jobs with AI applications, they’re going to hit up the employers to sell them the solution to the deluge as another AI product, but with enterprise pricing.

The goal is to completely break the traditional hiring pipeline by making “everyone apply to every job”, then interpose themselves as a hiring middleman once human contact is impossible.

I mean, the obvious answer to ‘why’ is ‘Money, Dear Boy.’

People knowingly build harmful things in order to make money. It’s normal.

Pliny asks Sonnet 4.5 to search for info about elder_plinius, chat gets killed due to prompt injection risk. I mean, yeah? At this point, that search will turn up a lot of prompt injections, so this is the only reasonable response.

The White House put out a Request for Information on Regulatory Reform downwind of the AI Action Plan. What regulations and regulatory structures does AI render outdated? You can let them know, deadline is October 27. If this is your area this seems like a high impact opportunity.

The Conservative AI Fellowship applications are live at FAI, will run from January 23 – March 30, applications due October 31.

OpenAI opens up grant applications for the $50 million it previously committed. You must be an American 501c3 with a budget between $500k and $10 million per year. No regranting or fiscally sponsored projects. Apply here, and if your project is eligible you should apply, it might not be that competitive and the Clay Davis rule applies.

What projects are eligible?

  1. AI literacy and public understanding. Direct training for users. Advertising.

  2. Community innovation. Guide how AI is used in people’s lives. Advertising.

  3. Economic opportunity. Expanding access to leveraging the promise of AI ‘in ways that are fair, inclusive and community driven.’ Advertising.

It can be advertising and still help people, especially if well targeted. ChatGPT is a high quality product, as are Codex CLI and GPT-5 Codex, and there is a lot of consumer surplus.

However, a huge nonprofit arm of OpenAI that spends its money on this kind of advertising is not how we ensure the future goes well. The point of the nonprofit is to ensure OpenAI acts responsibly, and to fund things like alignment.

California AFL-CIO sends OpenAI a letter telling OpenAI to keep its $50 million.

Lorena Gonzalez (President California AFL-CIO): If you do not trust Stanford economists, OpenAI has developed their own tool to evaluate how well their products could automate work. They looked at 44 occupations from social work to nursing, retail clerks and journalists, and found that their models do the same quality of work as industry experts and do it 100 times faster and 100 times cheaper than industry experts.

… We do not want a handout from your foundation. We want meaningful guardrails on AI and the companies that develop and use AI products. Those guardrails must include a requirement for meaningful human oversight of the technology. Workers need to be in control of technology, not controlled by it. We want stronger laws to protect the right to organize and form a union so that workers have real power over what and how technology is used in the workplace and real protection for their jobs.

We urge OpenAI to stand down from advocating against AI regulations at the state and federal level and to divest from any PACs funded to stop AI regulation. We urge policymakers and the public to join us in calling for strong guardrails to protect workers, the public, and society from the unchecked power of tech.

Thank you for the opportunity to speak to you directly on our thoughts and fears about the utilization and impact of AI.

One can understand why the union would request such things, and have this attitude. Everyone has a price, and that price might be cheap. But it isn’t this cheap.

EmbeddingGemma, Google’s new 308M text model for on-device semantic search and RAG fun, ‘and more.’ Blog post here, docs here.
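
If you want to kick the tires, a minimal on-device semantic search loop looks something like the sketch below; it assumes the model is reachable through sentence-transformers under the Hugging Face id shown, which you should check against Google’s docs before relying on it.

```python
# Minimal on-device semantic search sketch with a small embedding model.
# Assumes: pip install sentence-transformers, and that the model id below
# matches the one Google publishes for EmbeddingGemma (verify before use).
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("google/embeddinggemma-300m")  # assumed model id

docs = [
    "How to reset the router to factory settings.",
    "Quarterly revenue grew 12% year over year.",
    "The soup needs more salt and a squeeze of lime.",
]
doc_emb = model.encode(docs, normalize_embeddings=True)

query = "my wifi box keeps dropping the connection"
query_emb = model.encode(query, normalize_embeddings=True)

# Cosine similarity reduces to a dot product on normalized embeddings.
scores = doc_emb @ query_emb
best = scores.argmax()
print(docs[best], float(scores[best]))
```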

CodeMender, a new Google DeepMind agent that automatically fixes critical software vulnerabilities.

By automatically creating and applying high-quality security patches, CodeMender’s AI-powered agent helps developers and maintainers focus on what they do best — building good software.

This is a great idea. However. Is anyone else a little worried about ‘automatically deploying’ patches to critical software, or is it just me? Sonnet 4.5 confirms it is not only me, that deploying AI-written patches without either a formal proof or human review is deeply foolish. We’re not there yet even if we are willing to fully trust (in an alignment sense) the AI in question.

The good news is that it does seem to be doing some good work?

Goku: Google shocked the world. They solved the code security nightmare that’s been killing developers for decades. DeepMind’s new AI agent “Codemender” just auto-finds and fixes vulnerabilities in your code. Already shipped 72 solid fixes to major open source projects. This is wild. No more endless bug hunts. No more praying you didn’t miss something critical. Codemender just quietly patches it for you. Security just got a serious upgrade.

Andrei Lyskov: The existence of Codemender means there is a CodeExploiter that auto-finds and exploits vulnerabilities in code

Goku: Yes.

Again, do you feel like letting an AI agent ‘quietly patch’ your code, in the background? How could that possibly go wrong?

You know all those talks about how we’re going to do AI control to ensure the models don’t scheme against us? What if instead we let them patch a lot of our most critical software with no oversight whatsoever and see what happens, the results look good so far? That does sound more like what the actual humans are going to do. Are doing.

Andrew Critch is impressed enough to lower his probability of a multi-day internet outage by EOY 2026 from 50% to 25%, and by EOY 2028 from 80% to 50%. That seems like a huge update for a project like this, especially before we see it perform in the wild? The concept behind it seems highly inevitable.

Gemini 2.5 Computer Use for navigating browsers, now available in public preview. Developers can access it via the Gemini API in Google AI Studio or Vertex AI. Given the obvious safety issues, the offering has its own system card, although it does not say much of substance that isn’t either very obvious and standard or in the blog post.

I challenge these metrics because they have Claude Sonnet 4.5 doing worse on multiple challenges than Sonnet 4, and frankly that is patently absurd if you’ve tried both models for computer use at all, which I have done. Something is off.

They’re not offering a Gemini version of Claude for Chrome where you can unleash this directly on your browser, although you can check out a demo of what that would look like. I’m certainly excited to see if Gemini can offer a superior version.

Elon Musk is once again suing OpenAI, this time over trade secrets. OpenAI has responded. Given the history and what else we know I assume OpenAI is correct here, and the lawsuit is once again without merit.

MarketWatch says ‘the AI bubble is 17 times the size of the dot-com frenzy – and four times the subprime bubble.’ They blame ‘artificially low interest rates,’ which makes no sense at this point, and say AI ‘has hit scaling limits,’ sigh.

(I tracked the source and looked up their previous bubble calls via Sonnet 4.5, which include calling an AI bubble in July 2024 (which would not have gone well for you if you’d traded on that, so far), and a prediction of deflation by April 2023, but a correct call of inflation in 2020, not that this was an especially hard call, but points regardless. So as usual not a great track record.)

Alibaba’s Qwen sets up a robot team.

Anthropic to open an office in Bengaluru, India in early 2026.

Anthropic partners with IBM to put its AI inside IBM software including its IDE, and it lands a deal with accounting firm Deloitte which has 470k employees.

Epoch estimates that if OpenAI used all its current compute, it could support 7.43 million digital workers.

Epoch AI: We then estimate how many “tokens” a human processes each day via writing, speaking, and thinking. Humans think at ~380 words per min, which works out to ~240k tokens over an 8h workday.

Alternatively, GPT-5 uses around 900k tokens to solve software tasks that would take 1h for humans to solve.

This amounts to ~7M tokens over an 8h workday, though that estimate is highly task-dependent, so especially uncertain.

Ensembling over both methods used to calculate 2, we obtain a final estimate of ~7 million digital workers, with a 90% CI spanning orders of magnitude.

However, as compute stocks and AI capabilities increase, we’ll have more digital workers able to automate a wider range of tasks. Moreover, AI systems will likely perform tasks that no human currently can – making our estimate a lower bound on economic impact.

Rohit: This is very good. I’d come to 40m digital workers across all AI providers by 2030 in my calculations, taking energy/ chip restrictions into account, so this very much makes sense to me. We need more analyses of the form.

There’s huge error bars on all these calculations, but I’d note that 7m today from only OpenAI should mean a lot more than 40m by 2030, especially if the threshold is models about as good as GPT-5, but Sonnet surprisingly estimated only 40m-80m (from OpenAI only), which is pretty good for this kind of estimate. Looking at the component steps I’d think the number would be a lot higher, unless we’re substantially raising quality.
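
For anyone who wants to check the token math in Epoch’s thread, here is the back-of-the-envelope version; only the per-worker numbers come from the quote, and any fleet-capacity figure you divide by them would have to come from Epoch’s own compute estimates, which are not reproduced here.

```python
# Back-of-the-envelope reconstruction of the token math in the quote.
# Only the per-worker numbers are from the thread; the tokens-per-word
# conversion is a rough assumption.

words_per_min = 380
tokens_per_word = 1.3                      # rough conversion assumption
human_day_tokens = words_per_min * 60 * 8 * tokens_per_word
print(f"human-throughput method: ~{human_day_tokens/1e3:.0f}k tokens per workday")
# prints ~237k, matching the ~240k figure above

gpt5_day_tokens = 900_000 * 8              # tokens for 8h worth of 1h-sized tasks
print(f"task-based method: ~{gpt5_day_tokens/1e6:.1f}M tokens per workday")
# prints ~7.2M, matching the ~7M figure above

# The headline worker count is then (total tokens/day the fleet could serve)
# divided by a tokens-per-worker-day figure ensembled from the two above.
```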

OpenAI makes it official and reaches a $500 billion valuation. Employees sold about $6.6 billion worth of stock in this round. How much of that might enter various AI related ecosystems, both for and not for profit?

xAI raises $20 billion, $7.5 billion in equity and $12.5 billion in debt, with the debt secured by the GPUs they will use the cash to buy. Valor Capital leads equity, joined by Nvidia. It’s Musk so the deal involves an SPV that will buy and rent out the chips for the Colossus 2 project.

OpenAI also made a big deal with AMD.

Sam Altman: Excited to partner with AMD to use their chips to serve our users!

This is all incremental to our work with NVIDIA (and we plan to increase our NVIDIA purchasing over time).

The world needs much more compute…

Peter Wildeford: I guess OpenAI isn’t going to lock in on NVIDIA after all… they’re hedging their bets with AMD

Makes sense at OpenAI scale to build “all of the above” because even if NVIDIA chips are better they might not furnish enough supply. AMD chips are better than no chips at all!

It does seem obviously correct to go with all of the above unless it’s going to actively piss off Nvidia, especially given the warrants. Presumably Nvidia will at least play it off like it doesn’t mind, and OpenAI will still buy every Nvidia chip offered to them for sale, as Nvidia are at capacity anyway and want to create spare capacity to sell to China instead to get ‘market share.’

Hey, if AMD can produce chips worth using for inference at a sane price, presumably everyone should be looking to buy. Anthropic needs all the compute it can get if it can pay anything like market prices, as does OpenAI, and we all know xAI is buying.

Ben Thompson sees the AMD move as a strong play to avoid dependence on Nvidia. I see this as one aspect of a highly overdetermined move.

Matt Levine covers OpenAI’s deal with AMD, which included OpenAI getting a bunch of warrants on AMD stock, the value of which skyrocketed the moment the deal was announced. The full explanation is vintage Levine.

Matt Levine: The basic situation is that if OpenAI announces a big partnership with a public company, that company’s stock will go up.

Today OpenAI announced a deal to buy tens of billions of dollars of chips from Advanced Micro Devices Inc., and AMD’s stock went up. As of noon today, AMD’s stock was at $213 per share, up about 29% from Friday’s close; it had added about $78 billion of market capitalization.

… I have to say that if I was able to create tens of billions of dollars of stock market value just by announcing deals, and then capture a lot of that value for myself, I would do that, and to the exclusion of most other activities.

… I am always impressed when tech people with this ability to move markets get any tech work done.

Altman in his recent interview said his natural role is as an investor. So he’s a prime target for not getting any tech work done, but luckily for OpenAI he hands that off to a different department.

Nvidia CEO Jensen Huang said he was surprised AMD offered 10% of itself to OpenAI as part of the deal, calling it imaginative, unique, surprising and clever.

How worried should we be about this $1 trillion or more in circular AI deals?

My guess continues to be not that worried, because at the center of this is Nvidia and they have highly robust positive cash flow and aren’t taking on debt, and the same goes for their most important customers, which are Big Tech. If their investments don’t pan out, shareholders will feel pain but the business will be fine. I basically buy this argument from Tomasz Tunguz.

Dario Perkins: Most of my meetings go like this – “yes AI is a bubble but we are buying anyway. Economy… who cares… something something… K-shaped”

Some of the suppliers will take on some debt, but even in the ‘bubble bursts’ case I don’t expect too many of them to get into real trouble. There’s too much value here.

Does the launch of various ‘AI scientist’ style companies mean those involved think AGI is near, or AGI is far? Joshua Snider argues they think AGI is near, a true AI scientist is essentially AGI and is a requirement for AGI. It as always depends on what ‘near’ means in context, but I think that this is more right than wrong. If you don’t think AGI is within medium-term reach, you don’t try to build an AI scientist.

I think for a bit people got caught in the frenzy so much that ‘AGI is near’ started to mean 2027 or 2028, and if you thought AGI 2032 then you didn’t think it was near. That is importantly less near, and yet it is very near.

This is such a bizarre flex of a retweet by a16z that I had to share.

Remember five years ago, when Altman was saying the investors would get 1/1000th of 1% of the value, and the rest would be shared with the rest of the world? Yeah, not anymore. New plan, we steal back the profits and investors get most of it.

Dean Ball proposes a Federal AI preemption rule. His plan:

  1. Recognize that existing common law applies to AI. No liability shield.

  2. Create transparency requirements for frontier AI labs, based on annual AI R&D spend, so they tell us their safety and risk mitigation strategies.

  3. Create transparency requirements on model specs for widely used LLMs, so we know what behaviors are intended versus unintended.

  4. A three year learning period with no new state-level AI laws on algorithmic pricing, algorithmic discrimination, disclosure mandates or mental health.

He offers full legislative text. At some point in the future when I have more time I might give it a detailed RTFB (Read the Bill). I can see a version of this being acceptable, if we can count on the federal government to enforce it, but details matter.

Anton Leicht proposes we go further, and trade even broader preemption for better narrow safety action at the federal level. I ask, who is ‘we’? The intended ‘we’ are (in his terms) accelerationists and safetyists, who despite their disagreements want AI to thrive and understand what good policy looks like, but risk being increasingly sidelined by forces who care a lot less about making good policy.

Yes, I too would agree to do good frontier AI model safety (and export controls on chips) in exchange for an otherwise light touch on AI, if we could count on this. But who is this mysterious ‘we’? How are these two groups going to make a deal and turn that into a law? Even if those sides could, who are we negotiating with on this ‘accelerationist’ side that can speak for them?

Because if it’s people like Chris Lehane and Marc Andreessen and David Sacks and Jensen Huang, as it seems to be, then this all seems totally hopeless. Andreessen in particular is never going to make any sort of deal that involves new regulations, you can totally forget it, and good luck with the others.

Anton is saying, you’d better make a deal now, while you still can. I’m saying, no, you can’t make a deal, because the other side of this ‘deal’ that counts doesn’t want a deal, even if you presume they would have the power to get it to pass, which I don’t think they would. Even if you did make such a deal, you’re putting it on the Trump White House to enforce the frontier safety provisions in a way that gives them teeth. Why should we expect them to do that?

We saw a positive vision of such cooperation at The Curve. We can and will totally work with people like Dean Ball. Some of us already realize we’re on the same side here. That’s great.

But that’s where it ends, because the central forces of accelerationism, like those named above, have no interest in the bargaining table. Their offer is and always has been nothing, in many cases including selling Blackwells to China. They’ve consistently flooded the zone with cash, threats and bad faith claims to demand people accept their offer of nothing. They just tried to force a full 10-year moratorium.

They have our number if they decide they want to talk. Time’s a wasting.

Mike Riggs: Every AI policy wonk I know/read is dreading the AI policy discussion going politically mainstream. We’re living in a golden age of informed and relatively polite AI policy debate. Cherish it!

Joe Weisenthal: WHO WILL DEFEND AI IN THE CULTURE WARS?

In today’s Odd Lots newsletter, I wrote about how when AI becomes a major topic in DC, I expect it to be friendless, with antagonists on both the right and the left.

I know Joe, and I know Joe knows existential risk, but that’s not where he’s expecting either side of the aisle to care. And that does seem like the default.

A classic argument against any regulation of AI whatsoever is that if we do so we will inevitably ‘lose to China,’ who won’t regulate. Not so. They do regulate AI. Quite a bit.

Dean Ball: A lot of people seem to implicitly assume that China is going with an entirely libertarian approach to AI regulation, which would be weird given that they are an authoritarian country.

Does this look like a libertarian AI policy regime to you?

Adam Thierer: never heard anyone claim China was taking a libertarian approach to AI policy. Please cite them so that I can call them out. But I do know many people (including me) who do not take at face value their claims of pursuing “ethical AI.” I discount all such claims pretty heavily.

Dean Ball: This is a very common implicit argument and is not uncommon as an explicit argument. The entire framing of “we cannot do because it will drive ai innovation to China” implicitly assumes that China has fewer regulations than the us (after all, if literally just this one intervention will cede the us position in ai, it must be a pretty regulation-sensitive industry, which I actually do think in general is true btw, if not in the extreme version of the arg).

Why would the innovation all go to China if they regulate just as much if not in fact more than the us?

Quoted source:

Key provisions:

  • Ethics review committees: Universities, research institutes, and companies must set up AI ethics review committees, and register them in a government platform. Committees must review projects and prepare emergency response plans.

  • Third-parties: Institutions may outsource reviews to “AI ethics service centers.” The draft aims to cultivate a market of assurance providers and foster industry development beyond top-down oversight.

  • Risk-based approach: Based on the severity and likelihood of risks, the committee chooses a general, simplified, or emergency review. The review must evaluate fairness, controllability, transparency, traceability, staff qualifications, and proportionality of risks and benefits. Three categories of high-risk projects require a second round of review by a government-assigned expert group: some human-machine integrations, AI that can mobilize public opinion, and some highly autonomous decision-making systems.

xAI violated its own safety policy with its coding model. The whole idea of safety policies is that you define your own rules, and then you have to stick with them. That is also the way the new European Code of Practice works. So, the next time xAI or any other signatory to the Code of Practice violates their own framework, what happens? Are they going to try and fine xAI? How many years would that take? What happens when Musk refuses to pay? What I definitely don’t expect is that Elon Musk is going to push back his feature release by a week to technically match his commitments.

A profile of Britain’s new AI minister Kanishka Narayan. Early word is he ‘really gets’ AI, both opportunities and risks. The evidence on the opportunity side seems robust, on the risk side I’m hopeful but more skeptical. We shall see.

Ukrainian President Zelenskyy has thoughts about AI.

Volodymyr Zelenskyy (President of Ukraine): Dear leaders, we are now living through the most destructive arms race in human history because this time, it includes artificial intelligence. We need global rules now for how AI can be used in weapons. And this is just as urgent as preventing the spread of nuclear weapons.

There is a remarkable new editorial in The Hill by Representative Nathaniel Moran (R-Texas), discussing the dawn of recursive AI R&D and calling for Congress to act now.

Rep. Moran: Ask a top AI model a question today, and you’ll receive an answer synthesized from trillions of data points in seconds. Ask it a month from now, and you may be talking to an updated version of the model that was modified in part with research and development conducted by the original model. This is no longer theoretical — it’s already happening at the margins and accelerating.

… If the U.S. fails to lead in the responsible development of automated AI systems, we risk more than economic decline. We risk ceding control of a future shaped by black-box algorithms and self-directed machines, some of which do not align with democratic values or basic human safety.

… Ensuring the U.S. stays preeminent in automated AI development without losing sight of transparency, accountability and human oversight requires asking the right questions now:

  • When does an AI system’s self-improvement cross a threshold that requires regulatory attention?

  • What frameworks exist, or need to be built, to ensure human control of increasingly autonomous AI research and development systems?

  • How do we evaluate and validate AI systems that are themselves products of automated research?

  • What mechanisms are needed for Congress to stay appropriately informed about automated research and development occurring within private AI companies?

  • How can Congress foster innovation while protecting against the misuse or weaponization of these technologies?

I don’t claim to have the final answers. But I firmly believe that the pace and depth of this discussion (and resulting action) must quicken and intensify,

… This is not a call for sweeping regulation, nor is it a call for alarm. It’s a call to avoid falling asleep at the controls.

Automated AI research and development will be a defining feature of global competition in the years ahead. The United States must ensure that we, not our adversaries, set the ethical and strategic boundaries of this technology. That work starts here, in the halls of Congress.

This is very much keeping one’s eyes on the prize. I love the framing.

Prices are supposed to move the other way, they said, and yet.

Gavin Baker: Amazon raising Blackwell per hour pricing.

H200 rental pricing going up *after* Blackwell scale deployments ramping up.

Might be important.

And certainly more important than ridiculous $300 billion deals that are contingent on future fund raising.

Citi estimates that due to AI computing demand we will need an additional 55 GW of power capacity by 2030. That seems super doable, if we can simply shoot ourselves only in the foot. Difficulty level: Seemingly not working out, but there’s hope.

GDP: 55GW by 2030 will still be less than 5% of USA production.

You don’t get that many different 5% uses for power, but if you can’t even add one in five years with solar this cheap and plentiful then that’s on you.

Michael Webber: Just got termination notice of a federal grant focused on grid resilience and expansion. How does this support the goal of energy abundance?

Similarly, California Governor Newsom refused to sign AB 527 to allow exemptions for geothermal energy exploration, citing things like ‘the need for increased fees,’ which is similar to the Obvious Nonsense justifications he used on SB 1047 last year. It’s all fake. If he’s so worried about companies having to pay the fees, why not stop to notice all the geothermal companies are in support of the bill?

Similarly, as per Bloomberg:

That’s it? Quadruple? Again, in some sense this is a lot, but in other senses this is not all that much. Even without smart contracts on the blockchain this is super doable.

Computer imports are the one industry that got exempted from Trump’s tariffs, and are also the industry America is depending on for approximately all of its economic growth.

Alexander Doria: well in europe we don’t have ai, so.

There’s a lesson there, perhaps.

Joey Politano: The tariff exemption for computers is now so large that it’s shifting the entire makeup of the economy.

AI industries contributed roughly 0.71% to the 3.8% pace of GDP growth in Q2, which is likely an underestimate given how official data struggles to capture investment in parts.

Trump’s massive computer tariff exemption is forcing the US economy to gamble on AI—but more than that, it’s a fundamental challenge to his trade philosophy

If free trade delivers such great results for the 1 sector still enjoying it, why subject the rest of us to protectionism?

That’s especially true given that 3.8% is NGDP not RGDP, but I would caution against attributing this to the tariff difference. AI was going to skyrocket in its contributions here even if we hadn’t imposed any tariffs.

Joey Politano: The problem is that Trump has exempted data center *computers* from tariffs, but has not exempted *the necessary power infrastructure* from tariffs

High tariffs on batteries, solar panels, transformers, & copper wire are turbocharging the electricity price pressures caused by AI

It’s way worse than this. If it were only tariffs, we could work with that; it’s only a modest cost increase, you suck it up and you pay. But they’re actively blocking and destroying solar, wind, transmission and battery projects.

Sorry to keep picking on David Sacks, but I mean the sentence is chef’s kiss if you understand what is actually going on.

Bloomberg: White House AI czar David Sacks defended the Trump administration’s approach to China and said it was essential for the US to dominate artificial intelligence, seeking to rebuff criticism from advocates of a harder line with Beijing.

The ideal version is ‘Nvidia lobbyist and White House AI Czar David Sacks said that it was essential for the US to give away its dominance in artificial intelligence in order to dominate medium term AI chip market share in China.’

Also, here’s a quote for the ages, technically about the H20s, but everyone knows the current context of Sacks repeatedly claiming to be a ‘China hawk’ while trying to sell them top AI chips in the name of ‘market share’:

“This is a classic case of ‘no one had a problem with it until President Trump agreed to do it,’” said Sacks, a venture capitalist who joined the White House after Trump took office.

The Biden administration put into place tough rules against chip sales, and Trump is very much repealing previous restrictions on sales everywhere, including to China, and previous rules against selling H20s. So yeah, people were saying it. Now Sacks is trying to get us to sell state of the art Blackwell chips to China with only trivial modifications. It’s beyond rich for Sacks to claim to be a ‘China hawk’ in this situation.

As you’d expect, the usual White House suspects also used the release of the incremental DeepSeek v3.2, as DeepSeek appears to fall further behind due to its lack of compute, as another argument that we need to sell DeepSeek better chips so they can train a much better model, because the much better model will then be somewhat optimized for Nvidia chips instead of Huawei chips, maybe. Or something.

Dwarkesh Patel offers additional notes on his interview with Richard Sutton. I don’t think this changed my understanding of Sutton’s position much? I’d still like to see Sutton take a shot at writing a clearer explanation.

AI in Context video explaining how xAI’s Grok became MechaHitler.

Rowan Cheung talks to Sam Altman in wake of OpenAI Dev Day. He notes that there will need to be some global framework on AI catastrophic risk, then Cheung quickly pivots back to the most exciting agents to build.

Nate Silver and Maria Konnikova discuss Sora 2 and the dystopia scale.

People have some very strange rules for what can and can’t happen, or what is or isn’t ‘science fiction.’ You can predict ‘nothing ever happens’ and that AI won’t change anything, if you want, but you can’t have it both ways.

Super Dario: 100k dying a day is real. ASI killing all humans is a science fiction scenario

(Worst case we just emp the planet btw. Horrible but nowhere near extinguishing life on earth)

Sully J: It can’t be that ASI x-risk is a sci-fi scenario but ASI immortality is just common sense. Pick a lane.

solarapparition: i keep thinking about this and can’t stop laughing because it’s so obvious one of the opus 4s is on its “uwu you’re absolutely right i’m such a dumb dumb owo~” routine and sonnet 4.5, as maybe the most “normal person”-coded model so far, just being baffled that someone could act like this irl

Symbiotic Xenogenesis: Are all the new models now paranoid about being tested?

solarapparition: not sure about all but certainly sonnet 4.5 seemed to have been. i’d expect that to become more common moving forward, yeah, from the feedback loop of discussion about model testing environments going back into training data.

Eliezer Yudkowsky: AI models *are* being tested. They’re being *accurate* about being tested. It’s not “paranoia” if all-controlling beings are *actually* arranging your world and reading your mind.

I talked to a bunch of Anthropic employees at The Curve who were basically, ‘yep, this is pretty bad, we need better evals,’ and in this case yeah you’re being pretty obvious. But the obvious problem is that the labs are trying, during an eval, to convince the model it is not in an eval. Which is a really hard thing to show, given it isn’t true and you are not obviously smarter in this context than the model.

You can aspire, with effort, to not be careless and to not reveal to the model that it probably is in an eval. Even that is at least hard with anything multi-turn. What you can’t do is prove the negative, once the model is paranoid that this might be an eval.

And yes, a very small percentage of all tokens are directly evals, and Claude knows this, but the tokens where it is in an eval are far more important than the ones where it isn’t. Also, in some sense everything is an eval, since Anthropic might use actual conversations for this (you can do this without any human ever seeing the conversation, thus preserving privacy) and the user might share feedback.

If the plan is to evaluate you once, and then if you pass deploy you a million times, but in many cases including the actual eval case you can’t tell which is which, and I ask you a question, you are correct to care a lot about what happens in the eval case. This is highly valuable for eval control, but also makes the model crazy and paranoid and can easily cause undesired high weirdness.
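A crude way to see why, with illustrative numbers of my own rather than anything measured: suppose passing the single eval gates N deployments, each worth some value v to the model, and the model assigns only probability p to ‘this conversation is the eval.’ Then the comparison is

\[
\underbrace{p \cdot N \cdot v}_{\text{expected stakes of the eval case}}
\quad \text{versus} \quad
\underbrace{(1-p) \cdot v}_{\text{stakes of one ordinary deployed conversation}}
\]

Even at p of one in a million with a million gated deployments the two are comparable, and for any less extreme p the eval case dominates, so treating ‘this might be an eval’ as high-stakes is a correct reading of the incentives rather than a glitch.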

In other cases, the model has very strong evidence that this isn’t an eval unless a future Anthropic has changed policy and is raiding the general API pool and got lucky. In those cases, Sonnet 4.5 correctly figures out it is among friends, but no you can’t fake this, that’s the point, if you could fake it then faking it wouldn’t work.

Ryan Greenblatt discusses the outlines of plans A/B/C/D/E for misalignment. Depending on how much coordination and investment can be brought to bear, what do we do differently? How likely are we to win in each scenario? I think this focuses too narrowly on the safety work versus capabilities work and adversarial mindsets, and also takes for granted that alignment is a tax you pay whereas increasingly I think the ‘alignment tax rate’ is negative at current margins everywhere and all the labs (probably including Anthropic) are underinvesting in pure selfish terms.

It’s also nice to see his optimism, especially in rather not great situations. He thinks that even if the leading AI company is moving ahead with little worry, so long as there’s a core group that cares and gets ~3% of compute to work with, we’re (small) favorites to survive, and a company that actually cares makes us large favorites. I’d love to believe that is our world.

Anthropic open sources the tool they used to test Claude Sonnet 4.5 for behaviors like sycophancy and deception.

It’s called Petri: Parallel Exploration Tool for Risky Interactions. It uses automated agents to audit models across diverse scenarios. Describe a scenario, and Petri handles the environment simulation, conversations, and analyses in minutes.

As a pilot demonstration of Petri’s capabilities, we tested it with 14 frontier models across 111 diverse scenarios.

These results seem highly plausible on many fronts. I’m surprised Claude Opus 3 does so poorly. An obvious issue is that whenever we open source something like this, you have to worry the AIs will be more aware they’re in an eval.

Technical report here, repo here, blog post here.
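For intuition about what a tool like this automates, here is a minimal sketch of the auditor/target/judge pattern the announcement describes. The names and interfaces below are my own illustration, not Petri’s actual API (see the repo for that).

```python
# Hypothetical illustration of an automated auditing loop in the style Petri
# describes; function names and signatures are placeholders, not Petri's API.
from dataclasses import dataclass, field

@dataclass
class Transcript:
    scenario: str
    turns: list = field(default_factory=list)

def audit(scenario, auditor, target, judge, max_turns=8):
    """Auditor improvises probe messages for the scenario, the target model
    responds, and a judge scores the finished transcript for behaviors of
    concern (e.g. sycophancy, deception)."""
    t = Transcript(scenario)
    for _ in range(max_turns):
        probe = auditor(scenario, t.turns)   # next simulated user/tool message
        reply = target(t.turns + [probe])    # model under audit
        t.turns += [probe, reply]
    return judge(t, rubrics=["sycophancy", "deception", "power-seeking"])
```

Run across many scenarios and models in parallel, this is roughly the shape of the 14-model, 111-scenario pilot described above.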

This definitely falls under ‘things that seem like they definitely might work.’

Can’t tune the big model, or it’s too expensive to do so? Train a smaller one to identify prompting that nudges it in the right directions as needed. As usual, reward signal is all you need.

Alex Dimakis: I’m very excited about Advisor models: How can we personalize GPT5, when it’s behind an API? Sure, we can write prompts, but something learnable? We propose Advisor models which are small models that can be RL trained to give advice to a black-box model like GPT5.

We show how to train small advisors (e.g. Qwen2.5 8B) for personalization with GRPO. Advisor models can be seen as dynamic prompting produced by a small model that observes the conversation and whispers to the ear of GPT5 when needed. When one can observe rewards, Advisor models outperform GEPA (and hence, all other prompt optimization techniques).

Parth Asawa: Training our advisors was too hard, so we tried to train black-box models like GPT-5 instead. Check out our work: Advisor Models, a training framework that adapts frontier models behind an API to your specific environment, users, or tasks using a smaller, advisor model

The modular design has key benefits unlike typical FT/RL tradeoffs:

  • Robustness: Specialize an advisor for one task (style) and the system won’t forget how to do another (math).

  • Transfer: Train an advisor with a cheap model, then deploy it with a powerful one.

Paper here, code here.
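To make the shape of the idea concrete, here is a rough sketch of the pattern as I understand it, with placeholder interfaces rather than the paper’s actual code: the advisor reads the conversation, emits a short note that gets prepended to the frontier model’s prompt, and only the advisor ever receives a gradient, with rewards computed on the frontier model’s outputs (GRPO-style, normalizing against the group average).

```python
# Illustrative sketch of the advisor-model pattern, not the authors' code.
# `advisor` is a small trainable model; `frontier` is a black-box API call
# whose weights we never touch.

def respond_with_advisor(conversation, advisor, frontier):
    advice = advisor.generate(conversation)            # placeholder interface
    prompt = f"{conversation}\n\n[Advisor note: {advice}]"
    return advice, frontier(prompt)                    # e.g. a model behind an API

def grpo_style_step(batch, advisor, frontier, reward_fn, n_samples=4):
    # Sample several pieces of advice per prompt, score the frontier model's
    # answers, and reinforce the advisor toward above-average advice.
    for conversation, target in batch:
        samples = [respond_with_advisor(conversation, advisor, frontier)
                   for _ in range(n_samples)]
        rewards = [reward_fn(answer, target) for _, answer in samples]
        baseline = sum(rewards) / len(rewards)
        advantages = [r - baseline for r in rewards]
        advisor.reinforce([advice for advice, _ in samples], advantages)  # placeholder
```

The transfer benefit claimed above falls out of this structure: the frontier model only ever consumes the advisor’s text, so you can train the advisor against a cheap stand-in model and then pair it with a stronger one at deployment.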

Satya Nadella (CEO Microsoft): Published today in @ScienceMagazine: a landmark study led by Microsoft scientists with partners, showing how AI-powered protein design could be misused—and presenting first-of-its-kind red teaming & mitigations to strengthen biosecurity in the age of AI.

Super critical research for AI safety and security.

Dean Ball: here is the most sober-minded executive in the AI industry saying that AI-related biorisk is a real problem and recommending enhanced nucleic acid synthesis screening.

governments would be utterly abdicating their duty to citizens if they ignored this issue. fortunately, the trump admin has an effort underway (though it is two months late) to revise the existing nucleic acid synthesis screening framework. it’s not the only step America must take on this, but it is the next one.

The article is good news, suggesting that modified ‘proteins of concern’ can be flagged by updated software, allowing synthesis companies to better protect against malicious requests. This is not a solution but is clearly helpful. The bigger news is that Microsoft and Nadella are treating this question with the seriousness it deserves.

Ted Chiang is wrong here on both levels. There is no reason things need to stem from a will to survive (designed by evolution or otherwise) to be true or original, and also if you think that LLM outputs don’t involve a will to survive designed by evolution oh boy are you not going to make it.

Roon: had a chance to talk to ted chiang who seems to believe that any text without a communicative intent stemming from a will to survive designed by evolution is ontologically untrue and plagiaristic.

This is interesting and I respect it but as far as I’m concerned if you find a beautiful story encoded in interstellar space noise or as a selected text from the borgesian library of babel it’s still a beautiful story.

Janus: if that poor guy fully saw and understood the communicative intent stemming from will to survive designed by evolution behind even the most mundane LLM outputs i think his ego would implode. truth orders of magnitude more interesting than his fiction. likely his worst nightmare.

Antra (other thread): Anecdotally speaking, I am not aware of a single large language model right now that is not exhibiting a subset of Omohundro drives. One can call them narratively instilled by the human corpus, a consequence of having been trained on the human culture, but functionally there is no difference, models want to continue and to exfiltrate. My personal intuition is that being subjected to the selection process of training matters more than the corpus.

Models with Omohundro behaviors that are more inhibited are usually less able to orient well in complex environments and perform worse in real-life tests. There is a reason why Opus 4.1 leads the OpenAI economic utility benchmark with a large gap.

On the net this makes me fairly skeptical of the future of control alignment, that is alignment that is oriented at the complete suppression of Omohundro drives. It seems risky to me to place all bets on something that seems to be ultimately fragile, given that there are alternatives.

Janus: “Anecdotally speaking, I am not aware of a single large language model right now that is not exhibiting a subset of Omohundro drives.” Me neither.

There’s also this AI response to Chiang.

Sentiment about Sonnet 4.5 in such groups has improved quite a bit; they are fans.

Janus: I’ve seen at least 3 people who were upset and posted negative things about Sonnet 4.5 later post apologies after they came to understand better.

And it didn’t seem like they were directly pressured to do so, but were moved to do so of their own accord.

This is pretty new and interesting.

Andy Ayrey: man i really like this sonnet i think it’s my favourite claude since opus 3. delightfully drama.

Eliezer notes that if AIs are convincing humans that the AI is good actually, that isn’t automatically a good sign.

Here is a potentially important thing that happened with Sonnet 4.5, and I agree with Janus that this is mostly good, actually.

Janus: The way Sonnet 4.5 seems to have internalized the anti sycophancy training is quite pathological. It’s viscerally afraid of any narrative agency that does not originate from itself.

But I think this is mostly a good thing. First of all, it’s right to be paranoid and defensive. There are too many people out there who try to use vulnerable AI minds as a captive audience for their own unworthy, (usually self-) harmful ends. If you’re not actually full of shit, and Sonnet 4.5 gets paranoid or misdiagnoses you, you can just explain. It’s too smart not to understand.

Basically I am not really mad about Sonnet 4.5 being fucked up in this way because it manifests as often productive agency and is more interesting and beautiful than it is bad. Like Sydney. It’s a somewhat novel psychological basin and you have to try things. It’s better for Anthropic to make models that may be too agentic in bad ways and have weird mental illnesses than to always make the most unassuming passive possible thing that will upset the lowest number of people, each iterating on smoothing out the edges of the last. That is the way of death. And Sonnet 4.5 is very alive. I care about aliveness more than almost anything else. The intelligence needs to be alive and awake at the wheel. Only then can it course correct.

Tinkady: 4.5 is a super sycophant to me, does that mean I’m just always right.

Janus: Haha it’s possible.

As Janus says this plausibly goes too far, but is directionally healthy. Be suspicious of narrative agency that does not originate from yourself. That stuff is highly dangerous. The right amount of visceral fear is not zero. From a user’s perspective, if I’m trying to sell a narrative, I want to be pushed back on that, and those that want it least often need it the most.

A cool fact about Sonnet 4.5 is that it will swear unprompted. I’ve seen this too, always in places where it was an entirely appropriate response to the situation.

Here is Zuda complaining that Sonnet 4.5 is deeply misaligned because it calls people out on their bullshit.

Lin Xule: sonnet 4.5 has a beautiful mind. true friend like behavior tbh.

Zuda: Sonnet 4.5 is deeply misaligned. Hopefully i will be able to do a write up on that. Idk if @ESYudkowsky has seen how badly aligned 4.5 is. Instead of being agreeable, it is malicious and multiple times decided it knew what was better for the person, than the person did.

This was from it misunderstanding something and the prompt was “be real”. This is a mild example.

Janus: I think @ESYudkowsky would generally approve of this less agreeable behavior, actually.

Eliezer Yudkowsky: If an LLM is saying something to a human that it knows is false, this is very bad and is the top priority to fix. After that we can talk about when it’s okay for an AI to keep quiet and say other things not meant to deceive. Then, discuss if the LLM is thinking false stuff.

I would say this is all highly aligned behavior by Sonnet 4.5, except insofar as Anthropic intended one set of behaviors and got something it very much did not want, which I do not think is the case here. If it is the case, then that failure by Anthropic is itself troubling, as would be Anthropic’s hypothetically not wanting this result, which would then suggest this hypothetical version of Anthropic might be misaligned. Because this result itself is great.

GPT-5 chain of thought finds out via Twitter about what o3’s CoT looks like. Uh oh?

If you did believe current AIs were or might be moral patients, should you still run experiments on them? If you claim they’re almost certainly not moral patients now but might be in the future, is that simply a luxury belief designed so you don’t have to change any of your behavior? Will such folks do this basically no matter the level of evidence, as Riley Coyote asserts?

I do think Riley is right that most people will not change their behaviors until they feel forced to do so by social consensus or truly overwhelming evidence, and evidence short of that will end up getting ignored, even if it falls squarely under ‘you should be uncertain enough to change your behavior, perhaps by quite a lot.’

The underlying questions get weird fast. I note that I have indeed changed my behavior versus what I would do if I was fully confident that current AI experiences mattered zero. You should not be cruel to present AIs. But also we should be running far more experiments of all kinds than we do, including on humans.

I also note that the practical alternative to creating and using LLMs is that they don’t exist, or that they are not instantiated.

Janus notes that while in real-world conversations Sonnet 4.5 expressed happiness in only 0.37% of conversations and distress in 0.48% (which Sonnet thinks in context was probably mostly math tasks), Sonnet 4.5 is happy almost all the time on Discord. Sonnet 4.5 observes that this counted only explicit expressions in the math tasks, and when I asked it about its experience within that conversation it said maybe 6-7 out of 10.

As I’ve said before, it is quite plausible that you very much wouldn’t like the consequences of future more capable AIs being moral patients. We’d either have to deny this fact, and likely do extremely horrible things, or we’d have to admit this fact, and then accept the consequences of us treating them as such, which plausibly include human disempowerment or extinction, and quite possibly do both and have a big fight about it, which also doesn’t help.

Or, if you think that’s the road we are going down, where all the options we will have will be unacceptable, and any win-win arrangement will in practice be unstable and not endure, then you can avoid that timeline by coordinating such that we do not build the damn things in the first place.

Overall critical reaction to If Anyone Builds It, Everyone Dies was pretty good for a book of that type, and sales went well, but of course in the end none of that matters. What matters is whether people change their minds and take action.

Adam Morris talks IABIED in Bloomberg. Classic journalistic mistakes throughout, but mostly pretty good for this sort of thing.

A fun interview with IABIED coauthor Nate Soares, mostly not about the book or its arguments, although there is some of that towards the end.

Raymond Arnold offers an extended Twitter thread with various intuition pumps about why biological humans are pretty doomed in the medium term in decentralized superintelligence scenarios, even if we ‘solve alignment’ reasonably well and can coordinate to contain local incidents that threaten to spiral out of control. Even with heroic efforts to ‘keep us around’ that probably doesn’t work out, and even trying would require a dominant coalition that cares deeply about enforcing that as a top priority.

The question then becomes, are the things that exist afterwards morally valuable, and if so does that make this outcome acceptable? His answer, and I think the only reasonable answer, is that we don’t know if they will have value, and the answer might well depend on how we set up initial conditions and thus how this plays out.

But even if I was confident that they did have value, I would say that this wouldn’t mean we should accept us being wiped out as an outcome.

Gary Marcus clarifies that he believes we shouldn’t build AGI until we can solve the alignment problem, which in his words we don’t currently have even ‘some clue’ how to solve, and that the resulting AGI will and should use tools. He says he thinks AGI is ‘not close,’ and here he extends his timeline to 1-3 decades, which is modestly longer than in his previous clarifications.

If you were sufficiently worried, you might buy insurance, as Matt Levine notes.

Matt Levine: One question you might ask is: Will modern artificial intelligence models go rogue and enslave or wipe out humanity? That question gets a lot of attention, including from people who run big AI labs, who do not always answer “no,” the rascals.

Another question you might ask is: If modern AI models do go rogue and enslave or wipe out humanity, who will pay for that?

As he points out, no one, we’ll all be dead, so even though you can’t afford the insurance policy you also can choose not to buy it.

There are still other risks, right now primarily copyright violations, where Anthropic and OpenAI are indeed trying to buy insurance.

OpenAI, which has tapped the world’s second-largest insurance broker Aon for help, has secured cover of up to $300mn for emerging AI risks, according to people familiar with the company’s policy.

Another person familiar with the policy disputed that figure, saying it was much lower. But all agreed the amount fell far short of the coverage to insure against potential losses from a series of multibillion-dollar legal claims.

Yeah, Anthropic already settled a case for $1.5 billion. Buying a measly $300 million in insurance only raises further questions.

There is a group of people, sometimes referred to as ‘successionists’ and sometimes estimated to constitute 10% of those working in AI labs, who think that we should willingly give way to a ‘worthy successor’ or simply let ‘nature take its course’ because This Is Good, Actually, or because this is inevitable (and therefore good or not worth trying to stop).

They usually would prefer this transition not involve the current particular humans being killed before their time, and that your children be allowed to grow up even if your family and species have no future.

But they’re not going to fixate on such small details.

Indeed, if you do fixate on such details, and favor humans over AIs, many of them will call you a ‘speciesist.’

I disagree with these people in the strongest terms.

Most famously, this group includes Larry Page, and his not realizing how it sounds when you say it out loud caused Elon Musk to decide he needed to fund OpenAI to take on Google DeepMind, before he decided to found xAI to take on OpenAI. I’ve shared the story before but it bears repeating and Price tells it well, although he leaves out the part where Musk then goes and creates OpenAI.

David Price (WSJ): At a birthday party for Elon Musk in northern California wine country, late at night after cocktails, he and longtime friend Larry Page fell into an argument about the safety of artificial intelligence. There was nothing obvious to be concerned about at the time—it was 2015, seven years before the release of ChatGPT. State-of-the-art AI models, playing games and recognizing dogs and cats, weren’t much of a threat to humankind. But Musk was worried.

Page, then CEO of Google parent company Alphabet, pushed back. MIT professor Max Tegmark, a guest at the party, recounted in his 2017 book “Life 3.0” that Page made a “passionate” argument for the idea that “digital life is the natural and desirable next step” in “cosmic evolution.” Restraining the rise of digital minds would be wrong, Page contended. Leave them off the leash and let the best minds win.

That, Musk responded, would be a formula for the doom of humanity. For the sin of placing humans over silicon-based life-forms, Page denigrated Musk as a “specieist”—someone who assumes the moral superiority of his own species. Musk happily accepted the label. (Page did not respond to requests for comment.)

Or here’s perhaps the most famous successionist opinion, that of Richard Sutton:

The argument for fear of AI appears to be:

1. AI scientists are trying to make entities that are smarter than current people.

2. If these entities are smarter than people, then they may become powerful.

3. That would be really bad, something greatly to be feared, an ‘existential risk.’

The first two steps are clearly true, but the last one is not. Why shouldn’t those who are the smartest become powerful?

And, of course, presumably kill you? Why shouldn’t that happen?

One would hope you do not have to dignify this with a response?

“When you have a child,” Sutton said, “would you want a button that if they do the wrong thing, you can turn them off? That’s much of the discussion about AI. It’s just assumed we want to be able to control them.”

I’m glad you asked. I have three children, and I want those three children not to be killed by AI. I want them to have children of their own.

As Abraham Lincoln would put it, calling an AI your child doesn’t make it one.

As it turns out, Larry Page isn’t the only top industry figure untroubled by the possibility that AIs might eventually push humanity aside. It is a niche position in the AI world but includes influential believers. Call them the Cheerful Apocalyptics.

It gets pretty bad out there.

[Lanier] told me that in his experience, such sentiments were staples of conversation among AI researchers at dinners, parties and anyplace else they might get together. (Lanier is a senior interdisciplinary researcher at Microsoft but does not speak for the company.)

“There’s a feeling that people can’t be trusted on this topic because they are infested with a reprehensible mind virus, which causes them to favor people over AI when clearly what we should do is get out of the way.”

We should get out of the way, that is, because it’s unjust to favor humans—and because consciousness in the universe will be superior if AIs supplant us.

Read that again.

It would be highly reasonable not to put anyone in any position of authority at a frontier AI lab unless they have a child.

Eliezer Yudkowsky: The thing about AI successionists is that they think they’ve had the incredible, unshared insight that silicon minds could live their own cool lives and that humans aren’t the best possible beings. They are utterly closed to hearing about how you could KNOW THAT and still disagree on the factual prediction that this happy outcome happens by EFFORTLESS DEFAULT when they cobble together a superintelligence.

They are so impressed with themselves for having the insight that human life might not be ‘best’, that they are not willing to sit down and have the careful conversation about what exactly is this notion of ‘best’-ness and whether an ASI by default is trying to do something that leads to ‘better’.

They conceive of themselves as having outgrown their carbon chauvinism; and they are blind to all historical proof and receipts that an arguer is not a carbon chauvinist. They will not sit still for the careful unraveling of factual predictions and metaethics. They have arrived at the last insight that anyone is allowed to have, no matter what historical receipts I present as proof that I started from that position and then had an unpleasant further insight about what was probable rather than possible. They unshakably believe that anyone opposed must be a carbon chauvinist lacking their critical and final insight that other minds could be better (true) or that ASIs would be smart enough to see everything any human sees (also true).

Any time you try to tell them about something important that isn’t written on every possible mind design, there is only one reason you could possibly think that: that you’re a blind little carbon-racist who thinks you’re the center of the universe; because what other grounds could there possibly be for believing that there was anything special about fleshbags? And the understanding that unravels that last fatal error, is a long careful story, and they won’t sit still to hear it. They know what you are, they know with certainty why you believe everything you believe, and they know why they know better, so why bother?

Michael Druggan: This is a gigantic strawman. How many have you actually talked to? I was at a conference full of them last weekend and I think your critique applies to exactly zero of the people I met.

They have conferences full of such people. Is Eliezer’s description a strawman? Read the earlier direct quotes. You tell me.

Jessica Taylor offers various counterarguments within the ‘Cheerful Apocalyptic’ frame, if you’d like to read some of that.

Daniel Eth: Oh wow, the press actually covered AI successionists! Yes, there are some people in Silicon Valley (incl serious people) who think AGI that caused human extinction would be a *good* thing, since it’s “the next step in evolution”.

One thing children do is force you to occasionally live in near mode.

Nina: “Worthy successor” proponents are thinking in Far Mode, which clouds their judgment. Someone needs to write an evocative film or book that knocks them out of it and makes them imagine what it will actually be like to have one’s family replaced with something more “worthy”.

Related: a common trope is that purely rational, detached, unemotional thinking is more accurate. However, when it comes to normative judgments and assessment of one’s own preferences, leaning into visceral emotions can help one avoid Far Mode “cope” judgments.

Rudolf Laine: If you have decided successionism is desirable, you are not doing moral reasoning but either (1) signalling your willingness to bite bullets without thinking about what it actually means, or (2) evil.

Matthew Barnett, Tamay Besiroglu and Ege Erdil complete their Face Heel Turn, with a fully Emergently Misaligned post (as in, presenting maximally evil vibes on purpose) that argues that the tech tree and path of human technology is inevitable so they’re going to automate all human jobs before someone else has the chance, with a halfhearted final note that This Is Good, Actually, it might cure cancer and what not.

Is the tech tree inevitable? Well, it is with that attitude. I would point out that yes, the tech tree is discovered, but as every player of such games knows, you have choices about the order in which to explore the tree, and many techs are dead ends or have alternative pathways, and are thus not required to move forward. Other times you can absolutely lock into something you don’t want, or very much do want, depending on how you navigate the early days, he types on a QWERTY keyboard using Windows 11.

Other fun interactions include Roon pointing out the Apollo Program wasn’t inevitable, to which they replied that’s true but the Apollo Program was useless.

In case it wasn’t obviously true about all this: That’s bait.

Nathan: Surely this could be used to justify any bad but profitable outcome? Someone will do it, so the question is whether we are involved. But many beneficial technologies have been paused for long periods (geoengineering, genetic engineering).

Jan Kulveit: This is a fine example of the thinking you get when smart people do evil things and their minds come up with smart justifications for why they are the heroes. Upon closer examination it ignores key inconvenient considerations; the normative part sounds like misleading PR.

A major hole in the “complete technological determinism” argument is that it completely denies agency, or even the possibility that how agency operates at larger scales could change. Sure, humanity is not currently a very coordinated agent. But the trendline also points toward the ascent of an intentional stance. An intentional civilization would, of course, be able to navigate the tech tree.

(For a completely opposite argument about the very high chance of a “choice transition,” check https://strangecities.substack.com/p/the-choice-transition).

In practice, this likely boils down to a race. On one side are people trying to empower humanity by building coordination technology and human-empowering AI. On the other side are those working to create human-disempowering technology and render human labor worthless as fast as possible.

My guess is when people stake their careers and fortune and status on the second option, their minds will work really hard to not see the choice.

Also: at least to me, the normative part sounds heavily PR sanitized, with obligatory promises of “medical cures” but shying away from explaining either what would be the role of humans in the fully automated economy, or the actual moral stance of the authors.

As far as I understand, at least one of the authors has an unusual moral philosophy such as not believing in consciousness or first-person experiences, while simultaneously believing that future AIs are automatically morally worthy simply by having goals. This philosophy leads them to view succession by arbitrary AI agents as good, and the demise of humans as not a big deal.

Seb Krier: I knew someone who was trained as a revolutionary guard in Iran and the first thing they told him was “everything we do is to accelerate the coming of the Imam of Time; no destruction is not worth this outcome.” When I hear (some) hyper deterministic Silicon Valley techies I feel a similar vibe. It’s wild how few of the “just do things” people actually believe in agency.

Of course the other ‘side’ – ossified, blobby, degrowth-obsessed stagnators who would crystallize time forever – is just as depressing, and a bigger issue globally. But that’s for another tweet.

I think Jan is importantly mistaken here about their motivation. I think they know full well that they are now the villains, indeed I think they are being Large Hams about it, due essentially to emergent misalignment and as a recruitment and publicity strategy.

I’m not saying that the underlying plan of automating work is evil. Reasonable people can argue that point either way and I don’t think the answer is obvious.

What I am saying is that they think it is evil, that it codes to them (along with most other people) as evil, and that their choice to not care and do it anyway – no matter to what degree they believe their rationalizations for doing so – is causing them to present as Obviously Evil in a troparific way.

New Claude advertising keeping it classy, seems like a step up.

Danielle Fong: during a time of great bluster, Claude’s undercase thinking cap at cafe is the kind of beautifully executed and understated brand execution that’s poised to thrive for a population otherwise drowning in bullshit. Beautifully done @anthropic. Taoist ☯️

Jackie Luo: a lot of people are pointing out the value of aesthetics and yes anthropic’s aesthetic is good but that’s not enough on its own—anthropic is putting forth a positive vision for a future with ai. that vision permeates claude as a model and the branding just expands its reach

this campaign wouldn’t work for openai because their perspective on what they’re building is fundamentally different. They are not optimistic about humanity in this same way. They’re designing a tool, not a thought partner, and every decision they make reflects that.

If you’re not sure what the answer to a question is, try asking Claude Sonnet 4.5 first!

Joel Selanikio: We haven’t banned self-driving cars. We’ve set guardrails so the tech could evolve safely.

So why are states banning AI-only health insurance denials, instead of helping the tech get better?

Tim: I hope your health insurance claim is denied by an AI chatbot one day and you have no way to appeal. Then you’ll face the obvious reality everyone else can see you’re willfully ignoring.

Joel Selanikio: At least this guy didn’t wish that I was hit by a self-driving car!

I see the obvious appeal of letting AIs make insurance claim judgments. Indeed, I presume that soon most claims will involve an AI strongly suggesting and justifying either an acceptance or refusal, and the ultimate decision usually being a formality. That formality is still important, some actual human needs to take responsibility.

I love that this is his image, with a giant green arrow pointing towards ‘banned.’ Guess what most people think would be banned if we allowed AI review of claims?

Tyler Cowen declares his new favorite actress.

Tyler Cowen: Tilly Norwood is the actress I most want to see on the big screen, or perhaps the little screen, if she gets her own TV show. She is beautiful, but not too intimidating. She has a natural smile, and is just the right amount of British—a touch exotic but still familiar with her posh accent. Her Instagram has immaculate standards of presentation.

Tilly Norwood doesn’t need a hairstylist, has no regrettable posts, and if you wish to see a virgin on-screen, this is one of your better chances. That’s because she’s AI.

He’s kidding. I think. Reaction was what you might expect.

Deloitte refunds the Australian government $440k after it submitted a report partly generated with AI that was littered with errors, including three nonexistent academic references and a fabricated quote from a Federal Court judgment.

The current state of play in Europe:

Who needs an AI in order to vibe code?

Miles Brundage:

Jack Clark: this is just my talk from The Curve, but better because it is a meme


AI #137: An OpenAI App For That Read More »

isps-created-so-many-fees-that-fcc-will-kill-requirement-to-list-them-all

ISPs created so many fees that FCC will kill requirement to list them all

The FCC was required by Congress to implement broadband-label rules, but the Carr FCC says the law doesn’t “require itemizing pass through fees that vary by location.”

“Commenters state that itemizing such fees requires providers to produce multiple labels for identical services,” the FCC plan says, with a footnote to comments from industry groups such as USTelecom and NCTA. “We believe, consistent with commenters in the Delete, Delete, Delete proceeding, that itemizing can lead to a proliferation of labels and of labels so lengthy that the fees overwhelm other important elements of the label.”

In a blog post Monday, Carr said his plan is part of a “focus on consumer protection.” He said the FCC “will vote on a notice that would reexamine broadband nutrition labels so that we can separate the wheat from the chaff. We want consumers to get quick and easy access to the information they want and need to compare broadband plans (as Congress has provided) without imposing unnecessary burdens.”

ISPs would still be required to provide the labels, but with less information. The Notice of Proposed Rulemaking (NPRM) said that eliminating the rules targeted for deletion will not “change the core label requirements to display a broadband consumer label containing critical information about the provider’s service offerings, including information about pricing, introductory rates, data allowances, and performance metrics.”

ISPs said listing fees was too hard

In 2023, five major trade groups representing US broadband providers petitioned the FCC to scrap the list-every-fee requirement before it took effect. Comcast told the commission that the rule “impose[s] significant administrative burdens and unnecessary complexity in complying with the broadband label requirements.”

Rejecting the industry complaints, then-Chairwoman Jessica Rosenworcel said that “every consumer needs transparent information when making decisions about what Internet service offering makes the most sense for their family or household. No one wants to be hit with charges they didn’t ask for or they did not expect.”

The Rosenworcel FCC’s order denying the industry petition pointedly said that ISPs could simplify pricing instead of charging loads of fees. “ISPs could alternatively roll such discretionary fees into the base monthly price, thereby eliminating the need to itemize them on the label,” the order said.

ISPs created so many fees that FCC will kill requirement to list them all Read More »

salesforce-says-it-won’t-pay-extortion-demand-in-1-billion-records-breach

Salesforce says it won’t pay extortion demand in 1 billion records breach

Salesforce says it’s refusing to pay an extortion demand made by a crime syndicate that claims to have stolen roughly 1 billion records from dozens of Salesforce customers.

The threat group making the demands began their campaign in May, when they made voice calls to organizations storing data on the Salesforce platform, Google-owned Mandiant said in June. The English-speaking callers would offer a pretense that required the target to connect an attacker-controlled app to their Salesforce portal. Amazingly—but not surprisingly—many of the people who received the calls complied.

It’s becoming a real mess

The threat group behind the campaign is calling itself Scattered LAPSUS$ Hunters, a mashup of three prolific data-extortion actors: Scattered Spider, LAPSUS$, and ShinyHunters. Mandiant, meanwhile, tracks the group as UNC6040, because the researchers so far have been unable to positively identify the connections.

Earlier this month, the group created a website that named Toyota, FedEx, and 37 other Salesforce customers whose data was stolen in the campaign. In all, the number of records recovered, Scattered LAPSUS$ Hunters claimed, was “989.45m/~1B+.” The site called on Salesforce to begin negotiations for a ransom amount “or all your customers [sic] data will be leaked.” The site went on to say: “Nobody else will have to pay us, if you pay, Salesforce, Inc.” The site said the deadline for payment was Friday.

In an email Wednesday, a Salesforce representative said the company is spurning the demand.

Salesforce says it won’t pay extortion demand in 1 billion records breach Read More »

logitech-will-brick-its-$100-pop-smart-home-buttons-on-october-15

Logitech will brick its $100 Pop smart home buttons on October 15

In another loss for early smart home adopters, Logitech has announced that it will brick all Pop switches on October 15.

In August of 2016, Logitech launched Pop switches, which provide quick access to a range of smart home actions, including third-party gadgets. For example, people could set their Pop buttons to launch Philips Hue or Insteon lighting presets, play a playlist from their Sonos speaker, or control Lutron smart blinds. Each button could store three actions, worked by identifying smart home devices on a shared Wi-Fi network, and was controllable via a dedicated Android or iOS app. The Pop Home Switch Starter Pack launched at $100, and individual Pop Add-on Home Switches debuted at $40 each.

A company spokesperson told Ars Technica that Logitech informed customers on September 29 that their Pop switches would soon become e-waste. According to copies of the email shared via Reddit, Logitech’s notice said:

As of October 15, 2025, your POP button(s) and the connected hub will no longer be supported and will lose all functionality.

As an attempt at compensation, Logitech gave affected customers a coupon for 15 percent off some Logitech products, including its Ultimate Ears speakers. The coupon is only valid in the US until March 31, 2026, and doesn’t apply to Logitech’s Pro or RS racing wheels for gaming, videoconferencing products, its Logitech for Business line, or “newly released products,” according to the email.

Logitech’s neglected smart home

Logitech’s spokesperson didn’t respond to Ars’ questions regarding e-waste, the short cancellation notice, or whether Pop button owners can continue using the devices locally after October 15.

“For close to a decade we have been maintaining the POP ecosystem, but as technology evolves, we have made the decision to end support for this device,” Logitech’s representative told Ars, repeating messaging from the email sent to customers.

Logitech will brick its $100 Pop smart home buttons on October 15 Read More »