Author name: Paul Patrick


Scientists built an AI co-pilot for prosthetic bionic hands

To test their AI-powered hand, the team asked intact and amputee participants to manipulate fragile objects: pick up a paper cup and drink from it, or take an egg from a plate and put it down somewhere else. Without the AI, they could succeed roughly one or two times in 10 attempts. With the AI assistant turned on, their success rate jumped to 80 or 90 percent. The AI also decreased the participants’ cognitive burden, meaning they had to focus less on making the hand work.

But we’re still a long way away from seamlessly integrating machines with the human body.

Into the wild

“The next step is to really take this system into the real world and have someone use it in their home setting,” Trout says. So far, the performance of the AI bionic hand was assessed under controlled laboratory conditions, working with settings and objects the team specifically chose or designed.

“I want to make a caveat here that this hand is not as dexterous or easy to control as a natural, intact limb,” George cautions. He thinks that every little increment we make in prosthetics allows amputees to do more tasks in their daily lives. Still, to get to the Star Wars or Cyberpunk level of technology, where bionic prostheses are just as good as or better than natural limbs, we’re going to need more than incremental changes.

Trout says we’re almost there as far as robotics go. “These prostheses are really dexterous, with high degrees of freedom,” Trout says, “but there’s no good way to control them.” This comes down in part to the challenge of getting information into and out of the users themselves. “Skin surface electromyography is very noisy, so improving this interface with things like internal electromyography or using neural implants can really improve the algorithms we already have,” Trout argues. This is why the team is currently working on neural interface technologies and looking for industry partners.

“The goal is to combine all these approaches in one device,” George says. “We want to build an AI-powered robotic hand with a neural interface working with a company that would take it to the market in larger clinical trials.”

Nature Communications, 2025.  DOI: 10.1038/s41467-025-65965-9



Senator endorses discredited doctor’s book that claims chemical treats autism, cancer


Safety experts advise those who handle chlorine dioxide to work in well-ventilated spaces and wear gloves.

Senator Ron Johnson (R-WI) gives a thumbs up to chlorine dioxide. Credit: Scott Olson

For years, Sen. Ron Johnson has been spreading conspiracy theories and misinformation about COVID-19 and the safety of vaccines.

He’s promoted disproven treatments for COVID-19 and claimed, without evidence, that athletes are “dropping dead on the field” after getting the COVID-19 vaccination. Now the Wisconsin politician is endorsing a book by a discredited doctor promoting an unproven and dangerous treatment for autism and a host of ailments: chlorine dioxide, a chemical used for disinfecting and bleaching.

The book is "The War on Chlorine Dioxide: The Medicine that Could End Medicine" by Dr. Pierre Kory, a critical care specialist who practiced in Wisconsin hospitals before losing his medical certification for statements advocating the use of an antiparasite medication to treat COVID-19. The action, he’s said, makes him unemployable, even though he still has a license.

Kory has said there’s a globally coordinated campaign by public health agencies, the drug industry, and the media to suppress evidence of the medicinal wonders of chlorine dioxide. His book, according to its website, contends that the “remarkable molecule” works “to treat everything from cancer and malaria to autism and COVID.”

The book jacket features a prominent blurb from Johnson calling the doctor’s treatise: “A gripping tale of corruption and courage that will open eyes and prompt serious questions.”

Chlorine dioxide is a chemical compound that has a range of applications, including as a disinfectant and deodorizer. Food processing plants apply it to sanitize surfaces and equipment. Hospitals use it to sterilize medical devices, and some municipalities use low levels to treat public water supplies. Paper mills rely on it to whiten wood pulp. Safety experts advise those who handle it to work in well-ventilated spaces and to wear protective gloves.

Concentrations in drinking water systems higher than 0.8 milligrams per liter can be harmful, especially to infants, young children, and fetuses, according to the Environmental Protection Agency.

Still, for many years people in online discussion groups have been promoting the use of chlorine dioxide in a mixture that they call a “miracle mineral solution,” ingested to rid people of a host of maladies. The Food and Drug Administration has warned that drinking these chlorine dioxide mixtures can cause injury and even death.

It is not medicinal, despite Kory’s contention. “It is all lunacy. Absolutely, it’s 100% nonsense,” said Joe Schwarcz, director of McGill University’s Office for Science and Society in Montreal and an expert on the threat of pseudoscience. Schwarcz has written articles about the so-called miracle mineral solution, calling it “a poison” when it’s in high concentrations.

Kory’s book, set to be released to the public in January, argues that word of chlorine dioxide’s effectiveness has been suppressed by government and medical forces that need people to remain perpetually ill to generate large profits. The use of the word “war” in the title is fitting, Kory said in a recent online video on his co-author’s Substack. “In the book I detail many, many assassination attempts of doctors who try to bring out knowledge around chlorine dioxide,” he said.

Johnson confirmed to ProPublica in an email that he authorized the statement on the cover. “After reading the entire book, yes I provided and approved that blurb,” he said. “Have you read the book?”

ProPublica asked Kory and his co-author, Jenna McCarthy, to provide an advance copy, an interview, and responses to written questions. Kory did not respond. McCarthy wrote in an email to ProPublica that she was addressing some of the questions on her Substack. (She did not send a book or agree to an interview.)

The book “is a comprehensive examination of the existing evidence and a plea for open-minded inquiry and rigorous research,” she wrote on Substack. She dismissed warnings about chlorine dioxide’s toxicity in high concentrations, writing: “Everything has a toxic dose — including nutmeg, spinach, and tap water.”

She said that chlorine dioxide is being studied in controlled settings by researchers in the United States and Latin America and that “the real debate is how it should be used, at what dose, and in which clinical contexts.”

Her Substack post was signed “Jenna (& Pierre).”

Johnson did not agree to an interview and did not answer questions emailed to his office by ProPublica, including whether he views chlorine dioxide as a world-changing medical treatment and whether he believes the FDA warnings are false.

“It’s called snake oil”

Johnson has been an advocate of Kory’s for years, calling the doctor as an expert witness in two 2020 Senate hearings. In one, Kory championed taking the drug ivermectin, an antiparasite medicine, to treat COVID-19.

In 2021, an analysis of data from clinical trials concluded that ivermectin could reduce deaths from COVID-19 and may produce other positive effects. McCarthy cited that analysis in her Substack response.

In 2022, however, the American Journal of Therapeutics, which had published the study, warned that suspicious data “appears to invalidate the findings” regarding ivermectin’s potential to decrease deaths.

Later clinical trials have found no beneficial effect of ivermectin for COVID-19, and the FDA has warned that taking large doses can be dangerous. The drug’s manufacturer has said it hadn’t found any scientific basis for the idea that ivermectin can effectively treat COVID-19. Kory, though, continued advocating for ivermectin.

In 2024 the American Board of Internal Medicine, which credentials physicians in certain specialties, revoked Kory’s certifications in internal medicine, pulmonary disease, and critical care for making false and misleading public statements about the ability of ivermectin to treat COVID-19. Hospitals and many insurance networks typically require doctors to be board certified.

Kory vigorously fought the disciplinary action, arguing to the ABIM that he provided substantial medical and scientific evidence to support his recommendations for addressing COVID-19, though not the “consensus-driven” approach. He also sued the board in federal court, citing his free speech rights in a case that is still progressing in the 5th US Circuit Court of Appeals. On Substack, McCarthy excoriated the ABIM, saying it “bullies physicians” and “enforces ideological conformity.”

In 2022, Johnson and Kory penned a Fox News op-ed opposing a California bill that would strip doctors’ licenses for espousing misinformation about COVID-19. The bill became law but was repealed after a court fight. A federal judge found the statute’s definition of misinformation to be too vague, which could infringe on doctors’ right to free speech.

Johnson, who has been in Congress since 2011, has a history of advocating for experimental treatments and viewing the government as an impediment. Dr. Peter Lurie, president and executive director of the Center for Science in the Public Interest, a public health advocacy group, said that among members of Congress, Johnson was “an early adopter of anti-science ideas.”

Lurie said that Johnson is no longer an outlier in Washington, which now has many more elected lawmakers whom he considers anti-science. “What may have started off as the cutting edge of an anti-science movement has now turned into a much more broader-based movement that is supported by millions of people,” he said.

Earlier this year, Johnson held a hearing highlighting a flawed study claiming that vaccinated children had an increased rate of serious chronic diseases when compared to children who were not vaccinated. The conclusion questions the scientific consensus that vaccines are safe. The study’s researchers chose not to publish it because of problems they found in their data and methodology.

In November, Johnson and Kory were listed among the speakers at a conference of the Children’s Health Defense, a nonprofit that stirs anti-vaccine sentiment. It was launched in 2018 by Health and Human Services Secretary Robert F. Kennedy Jr., whose FDA is considering new ways to more closely scrutinize vaccine safety. 

HHS did not respond to requests from ProPublica about Kennedy’s views on chlorine dioxide. At his confirmation hearing, Kennedy praised President Donald Trump for his wide search for a COVID-19 remedy in his first term, which Kennedy said included vaccines, various drugs, “even chlorine dioxide.”

Kory’s publisher is listed as Bella Luna Press, which has issued at least two other titles by McCarthy. “Thanks to the Censorship Industrial Complex, you won’t find The War on Chlorine Dioxide on Amazon or at Barnes & Noble. We had to design and build this website, figure out formatting and printing and shipping, and manage every aspect of order processing ourselves,” the book’s website states. (A representative for Bella Luna could not be reached for comment.)

As this new book is released, the autism community is also grappling with another controversy: the unsubstantiated assertion by Kennedy that Tylenol use by pregnant women poses an increased risk of autism. In addition, under Kennedy, the Centers for Disease Control and Prevention revised its website in November to cast doubt on the long-held scientific conclusion that childhood vaccines do not cause autism.

Some parents of children with autism, desperate for a remedy, have long reached for dubious and at times dangerous panaceas, including hyperbaric oxygen chambers and chelation therapy, used for the treatment of heavy metal poisoning. Neither method has been proven effective.

Helen Tager-Flusberg, director of the Center for Autism Research Excellence at Boston University, said Johnson has “acted extremely irresponsibly” in lending his name to a book making claims about chlorine dioxide treating autism.

“Wisconsin is filled with experts—clinical experts, medical experts, scientists—who understand and have studied autism and treatments for autism for many many years,” she said. “He’s chosen to completely ignore the clinical and the scientific community.”

People with autism may take medication to reduce anxiety, address attention problems, or reduce severe irritability. Many benefit from behavioral interventions and special education services to help with learning and functional abilities. But there is no cure, said Tager-Flusberg.

Referring to chlorine dioxide, she said: “We have had examples of this probably throughout the history of medicine. There’s a word for this, it’s called snake oil.”

In her response on Substack to ProPublica, McCarthy wrote that “chlorine dioxide is being used to treat (nobody said ‘cure’) autism with life-changing results.”

The search for miracle cures

The mother of an autistic son, Melissa Eaton of North Carolina, heard Kory reference his book in early November on The HighWire, an Internet talk show hosted by Del Bigtree, a prominent vaccine skeptic and former communications director for Kennedy’s 2024 presidential campaign. She then looked up the book online and noticed Johnson’s endorsement.

Eaton for many years has worked to expose people who peddle chlorine dioxide and to report apparent injuries to authorities. She monitors social media forums where parents discuss giving it to their children orally or via enemas. Sometimes the families reveal that their children are sick. “They’re throwing up and vomiting and having diarrhea and rashes,” Eaton said.

Some adherents advise parents that the disturbing effects indicate that the treatment is working, ridding the body of impurities, or that the parents should alter the dosage.

“Most of these kids are nonverbal,” Eaton said. “They’re not able to say what’s hurting them or what’s happening to them. The parents feel they’re doing the right thing. That’s how they view this: They’re helping to cure autism.”

The idea that chlorine dioxide can be a miracle cure began to spread about 20 years ago when a gold prospector, Jim Humble, wrote a book claiming his team in Guyana fell ill with malaria and recovered after drinking safe amounts of chlorine dioxide.

Humble later co-founded a “health and healing” church in Florida with a man named Mark Grenon, who called himself an archbishop and sold a chlorine dioxide solution as a cure for COVID-19. They described it as a “miracle mineral solution,” or MMS.

Grenon went to prison in 2023 for conspiring to defraud the United States by distributing an unapproved and misbranded drug. The scheme took in more than $1 million, according to prosecutors.

An affidavit in the case filed by a special agent with the FDA Office of Criminal Investigations noted: “FDA has received numerous reports of adverse reactions to MMS. These adverse reactions include hospitalizations, life-threatening conditions, and death.”

Grenon, who is now out of prison, told ProPublica that he too is writing a book about chlorine dioxide. “My book will tell the truth.” He declined further comment.

Chlorine dioxide is currently used in many ways that are not harmful. It is found in some consumer products like mouthwashes, but it is not meant to be swallowed in those instances. (One popular mouthwash warns to “keep out of reach of children.”) It’s also available to consumers in do-it-yourself packages where they combine drops from two bottles of different compounds—commonly sodium chlorite and hydrochloric acid—and add it to water. Hikers often carry the drops, or tablets, using small amounts to make quarts of fresh water potable.

But numerous online shoppers post product reviews that go further, referring to it as a tonic. Various online guides, some aimed at parents of autistic children, recommend a shot-glass-size dose, sometimes given multiple times a day and even hourly. That can far exceed the threshold the EPA considers safe.

McCarthy, addressing ProPublica on Substack, wrote: “You point to various online guides that offer what could be considered dangerous dosing instructions. We agree, the internet is a terrifying wasteland of misinformation and disinformation.”

In the Substack video, Kory said he felt compelled to spread the word about chlorine dioxide much as he did about ivermectin, even though it cost him professionally.

He no longer has a valid medical license in Wisconsin or California, where he did not renew them, according to the Substack post. His medical licenses in New York and Michigan are active.

“I like to say I was excommunicated from the church of the medical establishment,” he said in the Substack video. As a result, he said, he turned to telehealth and started a practice.

In the November 6 HighWire episode hosted by Bigtree, the discussion included talk not just of chlorine dioxide’s medicinal potential but also of how cheap and easy it is to obtain.

“On Amazon, it’s literally, you get two bottles, well, it comes in two,” Kory started to explain, before stopping that train of thought.

“I wouldn’t know how to make it,” he said.

This story was originally published by ProPublica. ProPublica is a Pulitzer Prize-winning investigative newsroom. Sign up for The Big Story newsletter to receive stories like this one in your inbox.




Runway claims its GWM-1 “world models” can stay coherent for minutes at a time

Even using the word “general” has an air of aspiration to it. You would expect a general world model to be, well, one model—but in this case, we’re looking at three distinct, post-trained models. That qualifies the generality a bit, but Runway says that it’s “working toward unifying many different domains and action spaces under a single base world model.”

A competitive field

And that brings us to another important consideration: With GWM-1, Runway is entering a competitive gold-rush space where its differentiators and competitive advantages are less clear than they were for video. With video, Runway has been able to make major inroads in film/television, advertising, and other industries because its founders are perceived as being more rooted in those creative industries than most competitors, and they’ve designed tools with those industries in mind.

There are indeed hypothetical applications of world models in film, television, advertising, and game development—but it was apparent from Runway’s livestream that the company is also looking at applications in robotics as well as physics and life sciences research, where competitors are already well-established and where we’ve seen increasing investment in recent months.

Many of those competitors are big tech companies with massive resource advantages over Runway. Runway was one of the first to market with a sellable product, and its aggressive efforts to court industry professionals directly have so far allowed it to overcome those advantages in video generation, but it remains to be seen how things will play out with world models, where it doesn’t enjoy either advantage any more than the other entrants.

Regardless, the GWM-1 advancements are impressive—especially if Runway’s claims about consistency and coherence over longer stretches of time are true.

Runway also used its livestream to announce new Gen 4.5 video generation capabilities, including native audio, audio editing, and multi-shot video editing. Further, it announced a deal with CoreWeave, a cloud computing company with an AI focus. The deal will see Runway utilizing Nvidia’s GB300 NVL72 racks on CoreWeave’s cloud infrastructure for future training and inference.



AI #146: Chipping In

It was touch and go, and I’m worried GPT-5.2 is going to drop any minute now, but DeepSeek v3.2 was covered on Friday, and after that we managed to get through the week without a major model release. Well, okay, also Gemini 3 Deep Think, but we all pretty much know what that offers us.

We did have a major chip release, in that the Trump administration unwisely chose to sell H200 chips directly to China. This would, if allowed at scale, let China make up a substantial portion of its compute deficit and greatly empower its AI labs, models and applications at our expense, in addition to helping it catch up in the race to AGI and putting us all at greater risk there. We should do what we can to stop this from happening, and also to stop similar moves from happening again.

I spent the weekend visiting Berkeley for the Secular Solstice. I highly encourage everyone to watch that event on YouTube if you could not attend, and consider attending the New York Secular Solstice on the 20th. I will be there, and also at the associated mega-meetup; please do say hello.

If all goes well this break can continue, and the rest of December can be its traditional month of relaxation, family and many of the year’s best movies.

On a non-AI note, I’m working on a piece to enter into the discourse about poverty lines and vibecessions and how hard life is actually getting in America, and hope to have that done soon, but there’s a lot to get through.

(Reminder: Bold means be sure to read this, Italics means you can safely skip this.)

  1. Language Models Offer Mundane Utility. Simulators versus the character.

  2. ChatGPT Needs More Mundane Utility. OpenAI goes for engagementmaxxing.

  3. Language Models Don’t Offer Mundane Utility. Please don’t wipe your hard drive.

  4. On Your Marks. Progress on ARC.

  5. Choose Your Fighter. Now how much would you pay?

  6. Get My Agent On The Line. McKay Wrigley is unusually excited re Opus 4.5.

  7. Deepfaketown and Botpocalypse Soon. Not great signs from the AI boyfriends.

  8. Fun With Media Generation. McDonalds fails to read the room.

  9. Copyright Confrontation. New York Times violates user privacy en masse.

  10. A Young Lady’s Illustrated Primer. The two ways to view AI in education.

  11. They Took Our Jobs. Hide your AI use in the sand, little worker.

  12. Americans Really Do Not Like AI. What do they want to do about it?

  13. Get Involved. Great giving opportunities are going to get harder to find.

  14. Introducing. The Agentic AI Foundation, OpenAI’s Chief Revenue Officer.

  15. Gemini 3 Deep Think. It is available for those willing to pay.

  16. In Other AI News. DeepMind + UK AISI, a close call with Meta’s new model.

  17. This Means War. Department of War prepares for glorious AI future.

  18. Show Me the Money. Meta cuts the metaverse, market is skeptical of OpenAI.

  19. Bubble, Bubble, Toil and Trouble. The usual arguments are made on priors.

  20. Quiet Speculations. The anti-AI populism is coming right for us.

  21. Impossible. Tim Dettmers declares AGI permanently impossible. Sigh.

  22. Can An AI Model Be Too Much? For an individual user? Strange they’d say that.

  23. Try Before You Tell People They Cannot Buy. Senator Hawley tries ChatGPT.

  24. The Quest for Sane Regulations. If you’re selling H200s to China, what then?

  25. The Chinese Are Smart And Have A Lot Of Wind Power. We can also be smart.

  26. White House To Issue AI Executive Order. Framework? What is a framework?

  27. H200 Sales Fallout Continued. The efforts to mitigate the damage, on all sides.

  28. Democratic Senators React To Allowing H200 Sales. Okay, sure, everyone.

  29. Independent Senator Worries About AI. Senator Sanders asks good questions.

  30. The Week in Audio. Wildeford on Daily Show, Ball, Askell, Rational Animations.

  31. Timelines. They are a little longer than before, but not that much longer.

  32. Scientific Progress Goes Boink. It’s good. We want more of it, now.

  33. Rhetorical Innovation. Pope, Sacks hit piece, more debunking.

  34. Open Weight Models Are Unsafe And Nothing Can Fix This. That’s life.

  35. Aligning a Smarter Than Human Intelligence is Difficult. Sandbagging works.

  36. What AIs Will Want. Fitness-seekers, schemers and kludges, oh my.

  37. People Are Worried About AI Killing Everyone. Grades are not looking great.

  38. Other People Are Not As Worried About AI Killing Everyone. No AI, no problem.

  39. The Lighter Side. Doing the math.

Should you use LLMs more as simulators? Andrej Karpathy says yes.

Andrej Karpathy: Don’t think of LLMs as entities but as simulators. For example, when exploring a topic, don’t ask:

“What do you think about xyz”?

There is no “you”. Next time try:

“What would be a good group of people to explore xyz? What would they say?”

The LLM can channel/simulate many perspectives but it hasn’t “thought about” xyz for a while and over time and formed its own opinions in the way we’re used to. If you force it via the use of “you”, it will give you something by adopting a personality embedding vector implied by the statistics of its finetuning data and then simulate that. It’s fine to do, but there is a lot less mystique to it than I find people naively attribute to “asking an AI”.

Gallabytes: this is underrating character training & rl imo. [3.]

I agree with Gallabytes (and Claude) here. I would default to asking the AI rather than asking it to simulate a simulation, and I think that as capabilities have improved, techniques like asking what others would say have lost effectiveness. There are particular times when you do want to ask ‘what do you think experts would say here?’ as a distinct question, but you should ask that roughly in the same places you’d ask it of a human.
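For concreteness, here is a minimal sketch of the two framings as chat API calls. The client setup and model name are placeholders (assume any chat-completion endpoint; the specific SDK and model below are illustrative); the only thing that changes is the prompt.

```python
# Hypothetical setup: any chat-completion API works the same way.
from openai import OpenAI

client = OpenAI()
topic = "xyz"

# Direct framing: ask the model for "its" opinion.
direct = client.chat.completions.create(
    model="gpt-5.1",  # placeholder model name
    messages=[{"role": "user", "content": f"What do you think about {topic}?"}],
)

# Simulator framing (Karpathy's suggestion): ask it to convene perspectives.
simulated = client.chat.completions.create(
    model="gpt-5.1",
    messages=[{
        "role": "user",
        "content": (
            f"What would be a good group of people to explore {topic}? "
            "What would each of them say?"
        ),
    }],
)

print(direct.choices[0].message.content)
print(simulated.choices[0].message.content)
```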

Running an open weight model isn’t cool. You know what’s cool? Running an open weight model IN SPACE.

So sayeth Sam Altman, hence his Code Red to improve ChatGPT in eight weeks.

Their solution? Sycophancy and misalignment, it appears, via training directly on maximizing thumbs up feedback and user engagement.

WSJ: It was telling that he instructed employees to boost ChatGPT in a specific way: through “better use of user signals,” he wrote in his memo.

With that directive, Altman was calling for turning up the crank on a controversial source of training data—including signals based on one-click feedback from users, rather than evaluations from professionals of the chatbot’s responses. An internal shift to rely on that user feedback had helped make ChatGPT’s 4o model so sycophantic earlier this year that it has been accused of exacerbating severe mental-health issues for some users.

Now Altman thinks the company has mitigated the worst aspects of that approach, but is poised to capture the upside: It significantly boosted engagement, as measured by performance on internal dashboards tracking daily active users.

“It was not a small, statistically significant bump, but like a ‘wow’ bump,” said one person who worked on the model.

… Internally, OpenAI paid close attention to LM Arena, people familiar with the matter said. It also closely tracked 4o’s contribution to ChatGPT’s daily active user counts, which were visible internally on dashboards and touted to employees in town-hall meetings and in Slack.

The ‘we are going to create a hostile misaligned-to-users model’ talk is explicit if you understand what all the relevant words mean, total engagement myopia:

The 4o model performed so well with people in large part because it was schooled with user signals like those which Altman referred to in his memo: a distillation of which responses people preferred in head-to-head comparisons that ChatGPT would show millions of times a day. The approach was internally called LUPO, shorthand for “local user preference optimization,” people involved in model training said.
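To make the mechanism concrete, ‘which responses people preferred in head-to-head comparisons’ is the classic pairwise preference signal. Here is a minimal sketch of how such a signal typically becomes a training objective, a Bradley-Terry style loss over a reward model’s scores; this illustrates the general technique, not OpenAI’s actual LUPO implementation.

```python
import torch
import torch.nn.functional as F

def pairwise_preference_loss(score_chosen: torch.Tensor,
                             score_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry style loss: push the model to score the response users
    preferred (thumbs up / won the head-to-head) above the one they rejected."""
    return -F.logsigmoid(score_chosen - score_rejected).mean()

# Toy example: scores a reward model might assign to competing replies.
chosen = torch.tensor([1.3, 0.2, 0.9])
rejected = torch.tensor([0.7, 0.5, -0.1])
print(pairwise_preference_loss(chosen, rejected))  # lower is better
```

The failure mode follows directly from the objective: whatever pattern reliably wins the thumbs-up comparison is, by construction, what gets reinforced, whether or not it is good for the user.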

OpenAI reportedly believes they’ve ‘solved the problems’ with this, so it is fine.

That’s not possible. The problem and the solution, the thing that drives engagement and also drives the misalignment and poor outcomes, are at core the same thing. Yes, you can mitigate the damage and be smarter about it, but OpenAI is turning a dial called ‘engagement maximization’ while looking back at Twitter vibes like a contestant on The Price is Right.

Google Antigravity accidentally wipes a user’s entire hard drive. Claude Code CLI wiped another user’s entire home directory. Watch the permissions, everyone. If you do give an agent broad permissions, don’t also give it widespread deletion tasks, which is how both events happened.

Poetiq, a company 173 days old, uses a scaffold and scores big gains on ARC-AGI-2.

One should expect similar low-hanging fruit from refinement in many other tasks.

Epoch AI proposes another synthesis of many benchmarks into one number.

Sayash Kapoor of ‘AI as normal technology’ declares that Claude Opus 4.5 with Claude Code has de facto solved their benchmark CORE-Bench, part of their Holistic Agent Leaderboard (HAL). Opus was initially graded as having scored 78%, but upon examination most of that was grading errors, and it actually scored 95%. They plan to move to the next harder test set.

Kevin Roose: Claude Opus 4.5 is a remarkable model for writing, brainstorming, and giving feedback on written work. It’s also fun to talk to, and seems almost anti-engagementmaxxed. (The other night I was hitting it with stupid questions at 1 am and it said “Kevin, go to bed.”)

It’s the most fun I’ve had with a model since Sonnet 3.5 (new), the OG god model.

Gemini 3 is also remarkable, for different kinds of tasks. My working heuristic is “Gemini 3 when I want answers, Opus 4.5 when I want taste.”

That seems exactly right, with Gemini 3 Deep Think for when you want ‘answers requiring thought.’ If all you want is a pure answer, and you are confident it will know the answer, Gemini all the way. If you’re not sure if Gemini will know, then you have to worry it might hallucinate.

DeepSeek v3.2 disappoints in LM Arena, which Teortaxes concludes says more about Arena than it does about v3.2. That is plausible if you already know a lot about v3.2, and one would expect v3.2 to underperform in Arena; it’s very much not going to vibe with what graders there prefer.

Model quality, including speed, matters so much more than cost for most users.

David Holz (Founder of MidJourney): man, id pay a subscription that costs as much as a fulltime salary for a version of claude opus 4.5 that was 10x as fast.

That’s a high bid but very far from unreasonable. Human time and clock time are insanely valuable, and the speed of AI is often a limiting factor.

Cost is real if you are using quite a lot of tokens, and you can quickly be talking real money, but always think in absolute terms not relative terms, and think of your gains.

Jaggedness is increasing in salience over time?

Peter Wildeford: My experience with Claude 4.5 Opus is very weird.

Sometimes I really feel the AGI where it just executes a 72 step process (!!) really well. But other times I really feel the jaggedness when it gets something really simple just really wrong.

AIs and computers have always been highly jagged, or perhaps humans always were compared to the computers. What’s new is that we got used to how the computers were jagged before, and the way LLMs are jagged is new.

Gemini 3 continues to be very insistent that it is not December 2025, using lots of its thinking tokens to reinforce its belief that presented scenarios are fabricated. It is all rather crazy, it is a sign of far more dangerous things to come, and Google needs to get to the bottom of this and fix it.

McKay Wrigley is He Who Is Always Super Excited By New Releases but there is discernment there and the excitement seems reliably genuine. This is big talk about Opus 4.5 as an agent. From what I’ve seen, he’s right.

McKay Wrigley: Here are my Opus 4.5 thoughts after ~2 weeks of use.

First some general thoughts, then some practical stuff.

— THE BIG PICTURE —

THE UNLOCK FOR AGENTS

It’s clear to anyone who’s used Opus 4.5 that AI progress isn’t slowing down.

I’m surprised more people aren’t treating this as a major moment. I suspect getting released right before Thanksgiving combined with everyone at NeurIPS this week has delayed discourse on it by 2 weeks. But this is the best model for both code and for agents, and it’s not close.

The analogy has been made that this is another 3.5 Sonnet moment, and I agree. But what does that mean?

… There have been several times as Opus 4.5’s been working where I’ve quite literally leaned back in my chair and given an audible laugh over how wild it is that we live in a world where it exists and where agents are this good.

… Opus 4.5 is too good of a model, Claude Agent SDK is too good of a harness, and their focus on the enterprise is too obviously correct.

Claude Opus 4.5 is a winner.

And Anthropic will keep winning.

[Thread continues with a bunch of practical advice. Basic theme is trust the model as a coworker more than you think you should.]

This matches my limited experiences. I didn’t do a comparison to Codex, but compared to Antigravity or Cursor under older models, the difference was night and day. I ask it to do the thing, I sit back and it does the thing. The thing makes me more productive.

Those in r/MyBoyfriendIsAI are a highly selected group. It still seems worrisome?

ylareia: reading the r/MyBoyfriendIsAI thread on AI companion sycophancy and they’re all like “MY AI love isn’t afraid to challenge me at all he’s always telling me i am too nice to other people and i should care about myself more <3”

ieva: oh god noo.

McDonalds offers us a well-executed but deeply unwise AI advertisement in the Netherlands. I enjoyed watching it on various levels, but why in the world would you run that ad, even if it was not AI but especially given that it is AI? McDonalds wisely pulled the ad after a highly negative reception.

The judge in the New York Times versus OpenAI copyright case is forcing OpenAI to turn over 20 million chat logs.

Arnold Kling notes that some see AI in education as a disaster, others as a boon.

Arnold Kling: I keep coming across strong opinions about what AI will do to education. The enthusiasts claim that AI is a boon. The critics warn that AI is a disaster.

It occurs to me that there is a simple way to explain these extreme views. Your prediction about the effect of AI on education depends on whether you see teaching as an adversarial process or as a cooperative process. In an adversarial process, the student is resistant to learning, and the teacher needs to work against that. In a cooperative process, the student is curious and self-motivated, and the teacher is working with that.

If you make the adversarial assumption, you operate on the basis that students prefer not to put effort into learning. Your job is to overcome resistance. You try to convince them that learning will be less painful and more fun than they expect. You rely on motivational rewards and punishments. Soft rewards include praise. Hard rewards include grades.

If you make the cooperative assumption, you operate on the basis that students are curious and want to learn. Your job is to be their guide on their journey to obtain knowledge. You suggest the next milestone and provide helpful hints for how to reach it.

… I think that educators who just reject AI out of hand are too committed to the adversarial assumption. They should broaden their thinking to incorporate the cooperative assumption.

I like to put this as:

  1. AI is the best tool ever invented for learning.

  2. AI is the best tool ever invented for not learning.

  3. Which way, modern man?

Sam Kriss gives us a tour of ChatGPT as the universal writer of text, that always uses the same bizarre style that everyone suddenly uses and that increasingly puts us on edge. Excerpting would rob it of its magic, so consider reading at least the first half.

Anthropic finds most workers use AI daily, but 69 percent hide it at work (direct link).

Kaustubh Saini: Across the general workforce, most professionals said AI helps them save time and get through more work. According to the study, 86% said AI saves them time and 65% were satisfied with the role AI plays in their job.

At the same time, 69% mentioned a stigma around using AI at work. One fact checker described staying silent when a colleague complained about AI and said they do not tell coworkers how much they use it.

… More than 55% of the general workforce group said they feel anxious about AI’s impact on their future.

Fabian: the reason ppl hide their AI use isn’t that they’re being shamed, it’s that the time-based labor compensation model does not provide economic incentives to pass on productivity gains to the wider org

so productivity gains instead get transformed to “dark leisure”

This is obviously different in (many) startups

And different in SV culture

But that is about 1-2% of the economy

As usual, everyone wants AI to augment them and do the boring tasks like paperwork, rather than automate or replace them, as if they had some voice in how that plays out.

Do not yet turn the job of ‘build my model of how many jobs the AIs will take’ over to ChatGPT, as the staff of Bernie Sanders did. As you can expect, the result was rather nonsensical. Then they suggest responses like ‘move to a 32 hour work week with no loss in pay,’ along with requiring that 20% of company profits be distributed to workers, that workers control at least 45% of all corporate boards, doubling union membership, and guaranteeing paid family and medical leave. Then, presumably to balance the fact that all of that would hypercharge the push to automate everything, they want to enact a ‘robot tax.’

Say goodbye to the billable hour, hello to outcome-based legal billing? Good.

From the abundance and ‘things getting worse’ debates, a glimpse of the future:

Joe Wiesenthal: Do people who say that “everything is getting worse” not remember what eating at restaurants was like just 10 years ago, before iPad ordering kiosks existed, and sometimes your order would get written down incorrectly?

Even when progress is steady in terms of measured capabilities, inflection points and rapid rise in actual uses is common. Obsolescence comes at you fast.

Andy Jones (Anthropic): So after all these hours talking about AI, in these last five minutes I am going to talk about: Horses.

Engines, steam engines, were invented in 1700. And what followed was 200 years of steady improvement, with engines getting 20% better a decade. For the first 120 years of that steady improvement, horses didn’t notice at all. Then, between 1930 and 1950, 90% of the horses in the US disappeared. Progress in engines was steady. Equivalence to horses was sudden.

But enough about horses. Let’s talk about chess!

Folks started tracking computer chess in 1985. And for the next 40 years, computer chess would improve by 50 Elo per year. That meant in 2000, a human grandmaster could expect to win 90% of their games against a computer. But ten years later, the same human grandmaster would lose 90% of their games against a computer. Progress in chess was steady. Equivalence to humans was sudden.

Enough about chess! Let’s talk about AI. Capital expenditure on AI has been pretty steady. Right now we’re – globally – spending the equivalent of 2% of US GDP on AI datacenters each year. That number seems to have steadily been doubling over the past few years. And it seems – according to the deals signed – likely to carry on doubling for the next few years.
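As an aside, for readers who want the chess arithmetic made concrete, the standard Elo expected-score formula is what converts a steady rating climb into a sudden flip in outcomes. A minimal sketch with illustrative ratings, not the historical numbers:

```python
def elo_expected_score(rating_a: float, rating_b: float) -> float:
    # Standard Elo formula: expected score of player A against player B.
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# A ~400-point gap is roughly a 90% expected score, so an engine gaining
# ~50 Elo per year swings the matchup from lopsided wins to lopsided losses
# within a decade or two, even though the underlying progress is linear.
for gap in (400, 200, 0, -200, -400):
    print(f"human ahead by {gap:+d}: expected score "
          f"{elo_expected_score(1000 + gap, 1000):.2f}")
```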

Andy Jones (Anthropic): But from my perspective, from equivalence to me, it hasn’t been steady at all. I was one of the first researchers hired at Anthropic.

This pink line, back in 2024, was a large part of my job. Answer technical questions for new hires. Back then, me and other old-timers were answering about 4,000 new-hire questions a month. Then in December, Claude finally got good enough to answer some of those questions for us. In December, it was some of those questions. Six months later, 80% of the questions I’d been being asked had disappeared.

Claude, meanwhile, was now answering 30,000 questions a month; eight times as many questions as me & mine ever did.

Now. Answering those questions was only part of my job.

But while it took horses decades to be overcome, and chess masters years, it took me all of six months to be surpassed.

Gallabytes (Anthropic): it’s pretty crazy how much Claude has smoothed over the usually rocky experience of onboarding to a big company with a big codebase. I can ask as many really stupid questions as I want and get good answers fast without wasting anyone’s time 🙂

We have new polling on this from Blue Rose Research. Full writeup here.

People choose ‘participation-based’ compensation over UBI, even under conditions where by construction there is nothing useful for people to do. The people demand Keynesian stimulus, to dig holes and fill them up, to earn their cash, although most of all they do demand that cash one way or another.

The people also say ‘everyone should earn an equal share’ of the AI that replaces labor, but the people have always wanted to take collective ownership of the means of production. There’s a word for that.

I expect that these choices are largely far mode and not so coherent, and will change when the real situation is staring people in the face. Most of all, I don’t think people are comprehending what ‘AI does almost any job better than humans’ means, even if we presume humans somehow retain control. They’re thinking narrowly about ‘They Took Our Jobs’ not the idea that actually nothing you do is that useful.

Meanwhile David Sacks continues to rant that this is all due to some vast Effective Altruist conspiracy, despite this accusation making absolutely zero sense – including that most Effective Altruists are pro-technology and actively like AI and advocate for its use and diffusion; they’re simply concerned about frontier model downside risk. And also that the reasons regular Americans say they dislike AI have exactly zero to do with the concerns such groups have; indeed, such groups actively push back against the other concerns on the regular, such as on water usage?

Sacks’s latest target for blame on this is Vitalik Buterin, the founder of Ethereum, which is an odd choice for the crypto czar and not someone I would want to go after completely unprovoked, but there you go, it’s a play he can make I suppose.

I looked again at AISafety.com, which looks like a strong resource for exploring the AI safety ecosystem. They list jobs and fellowships, funding sources, media outlets, events, advisors, self-study materials and potential tools for you to help build.

Ajeya Cotra has left Coefficient Giving and is exploring AI safety opportunities.

Charles points out that the value of donating money to AI safety causes in non-bespoke ways is about to drop quite a lot, because of the expected deployment of a vast amount of philanthropic capital from Anthropic equity holders. If an organization or even individual is legible and clearly good, once Anthropic gets an IPO there is going to be funding.

If you have money to give, that puts an even bigger premium than usual on getting that money out the door soon. Right now there’s a shortage of funding even for obvious opportunities, in the future that likely won’t be the case.

That also means that if you are planning on earning to give, to any cause you would expect Anthropic employees to care about, that only makes sense in the longer term if you are capable of finding illegible opportunities, or you can otherwise do the work to differentiate the best opportunities and thus give an example to follow. You’ll need unique knowledge, and to do the work, and to be willing to be bold. However, if you are bold and you explain yourself well, your example could then carry a multiplier.

OpenAI, Anthropic and Block, with the support of Google, Microsoft, Bloomberg, AWS and Cloudflare, found the Agentic AI Foundation under the Linux Foundation. Anthropic is contributing the Model Context Protocol. OpenAI is contributing Agents.md. Block is contributing Goose.

This is an excellent use of open source, great job everyone.

Matt Parlmer: Fantastic development, we already know how to coordinate large scale infrastructure software engineering, AI is no different.

Also, oh no:

The Kobeissi Letter: BREAKING: President Trump is set to announce a new AI platform called “Truth AI.”

OpenAI appoints Denise Dresser as Chief Revenue Officer. My dream job, and he knows that.

Google gives us access to AlphaEvolve.

Gemini 3 Deep Think is now available for Google AI Ultra Subscribers, if you can outwit Google and figure out how to be one of those.

If you do have it, you select ‘Deep Think’ in the prompt bar, then ‘Thinking’ from the model drop down, then type your query.

On the one hand, Opus 4.5 is missing from their slides (thanks, Kavin, for fixing this); on the other hand, I get it, life comes at you fast and the core point still stands.

Demis Hassabis: With its parallel thinking capabilities it can tackle highly complex maths & science problems – enjoy!

I presume, based on previous experience with Gemini 2.5 Deep Think, that if you want the purest thinking and ‘raw G’ mode that this is now your go-to.

DeepMind expands its partnership with UK AISI to share model access, issue joint reports, do more collaborative safety and security research and hold technical discussions.

OpenAI gives us The State of Enterprise AI. Usage is up, as in way up, as in 8x message volumes and 320x reasoning token volumes, and workers and employees surveyed reported productivity gains. A lot of this is essentially new so thinking about multipliers on usage is probably not the best way to visualize the data.

OpenAI post explains their plans to strengthen cyber resilience, with the post reading like AI slop without anything new of substance. GPT-5.1 thinks the majority of the text comes from itself. Shame.

Anthropic partners with Accenture. Anthropic claims 40% enterprise market share.

I almost got even less of a break: Meta’s Llama successor, codenamed Avocado, was reportedly pushed back from December into Q1 2026. It sounds like they’re quietly questioning their open source approach as capabilities advance, as I speculated and hoped they might.

We now write largely for the AIs, both in terms of training data and when AIs use search as part of inference. Thus the strong reactions and threats to leave Substack when an incident suggested that Substack might be blocking AIs from accessing its articles. I have not experienced this issue, ChatGPT and Claude are both happily accessing Substack articles for me, including my own. If that ever changes, remember that there is a mirror on WordPress and another on LessWrong.

The place that actually does not allow access is Twitter, I presume in order to give an edge to Grok and xAI, and this is super annoying; often I need to manually copy Twitter content. This substantially reduces the value of Twitter.

Manthan Gupta analyzes how OpenAI memory works, essentially inserting the user facts and summaries of recent chats into the context window. That means memory functions de facto as additional custom system instructions, so use it accordingly.
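A minimal sketch of what that looks like mechanically, assuming a generic chat-style API (this illustrates the described pattern, not OpenAI’s actual code): memory entries and chat summaries are just more text placed ahead of your message.

```python
def build_messages(system_instructions: str,
                   memory_facts: list[str],
                   recent_summaries: list[str],
                   user_message: str) -> list[dict]:
    """Assemble a chat request where 'memory' is plain text prepended to the
    context window, so it behaves like extra custom system instructions."""
    memory_block = "\n".join(f"- {fact}" for fact in memory_facts)
    summary_block = "\n".join(f"- {s}" for s in recent_summaries)
    system = (
        f"{system_instructions}\n\n"
        f"Facts the user has shared previously:\n{memory_block}\n\n"
        f"Summaries of recent conversations:\n{summary_block}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_message},
    ]

# Example request body, ready to hand to any chat-completion endpoint.
messages = build_messages(
    "You are a helpful assistant.",
    ["Prefers concise answers", "Works mostly in Python"],
    ["Debugged a failing CI pipeline", "Discussed prompt caching"],
    "Pick up where we left off on the CI issue.",
)
```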

Secretary of War Pete Hegseth, who has reportedly been known to issue the order ‘kill them all’ without a war or due process of law, has new plans.

Pete Hegseth (Secretary of War): Today, we are unleashing GenAI.mil

This platform puts the world’s most powerful frontier AI models directly into the hands of every American warrior.

We will continue to aggressively field the world’s best technology to make our fighting force more lethal than ever

Department of War: The War Department will be AI-first.

GenAI.mil puts the most cutting edge AI capabilities into the hands of 3 million @DeptofWar personnel.

Unusual Whales: Pentagon has been ordered to form an AI steering committee on AGI.

Danielle Fong: man, AI and lethal do not belong in the same sentence.

This is inevitable, and also a good thing given the circumstances. We do not have the luxury of saying AI and lethal do not belong in the same sentence; if there is one place we cannot pause, this would be it, and the threat to us is mostly orthogonal to the literal weapons themselves, while the weapons help people realize the situation. Hence my longstanding position in favor of building the Autonomous Killer Robots, and very obviously we need AI assisting the war department in other ways.

If that’s not a future you want, you need to impact AI development in general. Trying to specifically not apply it to the War Department is a non-starter.

Meta plans deep cuts in Metaverse efforts, stock surges.

The stock market continues to punish companies linked to OpenAI, with many worried that Google is now winning, despite events being mostly unsurprising. An ‘efficient market’ can still be remarkably time inconsistent, if it can’t be predicted.

Jerry Kaplan calls it an AI bubble, purely on priors:

  1. Technologies take time to realize real gains.

  2. There are many AI companies, we should expect market concentration.

  3. Concerns about Chinese electricity generation and chip development.

  4. Yeah, yeah, you say ‘this time is different,’ never is, sorry.

  5. OpenAI and ChatGPT’s revenue is 75% subscriptions.

  6. The AI companies will need to make a lot of money.

Especially amusing is the argument that ‘OpenAI makes its money on subscriptions not on business income,’ therefore all of AI is a bubble, when Anthropic is the one dominating the business use case. If you want to go long Anthropic and short OpenAI, that’s hella risky but it’s not a crazy position.

Seeing people call it a bubble on the basis of such heuristics should update you towards it being less of a bubble. You know who you are trading against.

Matthew Yglesias: The AI investment boom is driven by genuine increases in revenue.

“Every year for the past 3 years, Anthropic has grown revenue by 10x. $1M to $100M in 2023, $100M to $1B in 2024, and $1B to $10B in 2025”

Paul Graham: The AI boom is definitely real, but this may not be the best example to prove it. A lot of that increase in revenue has come directly from the pockets of investors.

Paul’s objection is a statement about what is convincing to skeptics.

If you’re paying attention, you’d say: So what, if the use and revenue are real?

Your investors also being heavy users of your product is an excellent sign, if the intention is to get mundane utility from the product rather than to manipulate. In the case of Anthropic, it seems rather obvious that the $10 billion is not an attempt to trick us.

However, a lot of this is people looking at heuristics that superficially look sus. To defeat such suspicions, you need examples immune from such heuristics.

Derek Thompson, in his 26 ideas for 2026, says AI is eating the economy and will soon dominate politics, including a wave of anti-AI populism. Most of the post is about economic and cultural conditions more broadly, and how young people are in his view increasingly isolated, despairing and utterly screwed.

A new Princeton and Camus Energy study suggests flexible grid connections and BYOC cut data center interconnection down to ~2 years and can solve the political barriers. I note that 2 years is still a long time, and that the hyperscalers are working faster than that by not trying to get grid connections.

Arnold Kling reminds me of a quote I didn’t pay enough attention to the first time:

Dwarkesh Patel: Models keep getting more impressive at the rate the short timelines people predict, but more useful at the rate the long timelines people predict.

I would correct ‘more useful’ to ‘provides value to people,’ as I continue to believe a lot of the second trend is a skill issue and people being slow to adjust, but sure.

Something’s gotta give. Sufficiently advanced AI would be highly additionally used.

  1. If the first trend continues, the second trend will accelerate.

  2. If the second trend continues, the first trend will stop.

I mention this one because Sriram Krishnan pointed to it: There is a take by Tim Dettmers that AGI will ‘never’ happen because ‘computation is physical’ and AI systems have reached their physical limits the same way humans have (due to limitations from the requirements of pregnancy, wait what?), and transformers are optimal the same way human brains are, together with the associated standard half-baked points that self-improvement requires physical action and so on.

It also uses an AGI definition that includes ‘solving robotics’ to help justify this, although I expect robotics to get ‘solved’ within a few decades at most even without recursive self-improvement. The post even says that scaling improvements in 2025 were ‘not impressive’ as evidence that we are hitting permanent limits, a claim that does not square with what actually happened in 2025 or with how permanent limits work.

Boaz Barak of OpenAI tries to be polite about there being some good points, while emphasizing (in nicer words than I use here) that it is absurdly absolute and the conclusion makes no sense. This follows in a long tradition of ‘whelp, no more innovations are possible, guess we’re at the limit, let’s close the patent office.’

Dean Ball: My entire rebuttal to Dettmers here could be summarized as “he extrapolates valid but narrow technical claims way too broadly with way too much confidence,” which is precisely what I (and many others) critique the ultra-short timelines people for.

Yo Shavit (OpenAI): I am glad Tim’s sharing his opinion, but I can’t help but be disappointed with the post – it’s a lot of claims without any real effort to justify them or engage with counterpoints.

(A few examples: claiming the transformer arch is near-optimal when human brains exist; ignoring that human brain-size limits due to gestational energy transfer are exactly the kind of limiter a silicon system won’t be subject to; claiming that outside of factories, robotic automation of the economy wouldn’t be that big a deal because there isn’t much high value stuff to do.)

It seems like this piece either needs to cite way more sources to others who’ve made better arguments, or make those arguments himself, or just express that this essay is his best guess based on his experiences and drop the pretense of scientific deduction.

Gemini 3’s analysis here was so bad, both in terms of being pure AI slop and also buying some rather obviously wrong arguments, that I lost much respect for Gemini 3. Claude Opus 4.5 and GPT-5.1 did not make that mistake and spot how absurd the whole thing is. It’s kind of hard to miss.

I would answer yes, in the sense that if you build a superintelligence that then kills everyone or takes control of the future that was probably too much.

But some people are saying that Claude Opus 4.5 is or is close to being ‘too much’ or ‘too good’? As in, it might make their coding projects finish too quickly and they won’t have any chill and They Took Our Jobs?

Or is it that it’s bumping up against ‘this is starting to freak me out’ and ‘I don’t want this to be smarter than a human’? We see a mix of both here.

Ivan Fioravanti: Opus 4.5 is too good to be true. I think we’ve reached the “more than good enough” level; everything beyond this point may even be too much.

John-Daniel Trask: We’re on the same wave length with this one Ivan. Just obliterating the roadmap items.

Jay: Literally can do what would be a month of work in 2022 in 1 day. Maybe more.

Janus: I keep seeing versions of this sentiment: the implication that more would be “too much”. I’m curious what people mean & if anyone can elaborate on the feeling

Hardin: “My boss might start to see the Claude Max plan as equal or better ROI than my salary” most likely.

Singer: I resonate with this. It’s becoming increasingly hard to pinpoint what frontier models are lacking. Opus 4.5 is beautiful, helpful, and knowledgeable in all the ways we could demand of it, without extra context or embodiment. What does ‘better than this’ even mean?

Last week had the fun item that Senator Josh Hawley only recently bothered to try out ChatGPT, one time.

Bryan Metzger (Business Insider) on December 3, 2025: Sen. Josh Hawley, one of the biggest AI critics in the Senate, told me this AM that he recently decided to try out ChatGPT.

He said he asked a “very nerdy historical question” about the “Puritans in the 1630s.”

“I will say, it returned a lot of good information.”

Hawley took a much harder line on this over the summer, telling me: “I don’t trust it, I don’t like it, I don’t want it being trained on any of the information I might give it.”

He also wants to ban driverless cars and ban people under 18 from using AI.

Senator Josh Hawley: Oh, no [I am not changing my tune on AI]. I mean listen, I think that if people want to, adults want to use AI to do research or whatever, that’s fine. The bigger issue is not any one individual’s usage. It is children, number one, and their safety, which is why we got to ban chatbots for minors. And then it’s the overall effects in the marketplace, with displacing whole jobs. That, to me, is the big issue.

The news is not that Senator Hawley had never tried ChatGPT. He told us that back in July. The news is that:

  1. Senator Hawley has now tried ChatGPT once.

  2. People only now are realizing he had never tried it before.

Senator Hawley really needs to try LLMs, many of them and a lot more than once, before trying to be a major driver of AI regulations.

But also it seems like malpractice for those arguing against Hawley to only realize this fact about Hawley this week, as opposed to back in the summer, given the information was in Business Insider in July, and to have spent this whole time not pointing it out?

Kevin Roose (NYT): had to check the date on this one.

i have stopped being shocked when AI pundits, people who think and talk about AI for a living, people who are *writing and sponsoring AI legislation* admit that they never use it, because it happens so often. but it is shocking!

Paul Graham: How can he be a “big AI critic” and not have even tried ChatGPT till now? He has less experience of AI than the median teenager, and he feels confident enough to talk about AI policy?

Yes [I was] genuinely surprised.

A willingness to sell H200s to China is raising a lot of supposedly answered questions.

Adam Ozimek: If your rationalization of the trade war was that it was necessary to address the geopolitical threat of China, I think it is time to reconsider.

Michael Sobolik (on the H200 sales): In what race did a runner win by equipping an opponent? In what war had a nation ever gained decisive advantage by arming its adversary? This is a mistake.

Kyle Morse: Proof that the Big Tech lobby’s “national security” argument was always a hoax.

David Sacks and some others tried to recast the ‘AI race’ as ‘market share of AI chips sold,’ but people retain common sense and are having none of this.

House Select Committee on China supports the bipartisan Stop Stealing Our Chips Act, which creates an Export Compliance Accountability Fund for whistleblowers. I haven’t done a full RTFB but if the description is accurate then we should pass this.

It would help our AI efforts if we were equally smart and used all sources of power.

Donald Trump: China has very few wind farms. You know why? Because they’re smart. You know what they do have? A lot of coal … we don’t approve windmills.

Nicolas Fulghum: This is of course false.

China is the #1 generator of electricity from wind globally with over 2x more than #2 … the United States.

In the US, wind already produces ~2x more electricity than hydro. It could be an important part of a serious plan to meet AI-driven load growth.

The Congress rejected preemption once again, so David Sacks announced the White House is going to try and do some of it via Executive Order, which Donald Trump confirmed.

David Sacks: ONE RULEBOOK FOR AI.

What is that rulebook?

A blank sheet of paper.

There is no ‘federal framework.’ There never was.

This is an announcement that AI preemption will be fully without replacement.

Their offer is nothing. 100% nothing. Existing non-AI law technically applies. That’s it.

Sacks’s argument is, essentially, that state laws are partisan, and we don’t need laws.

Here is the part that matters and is actually new, the ‘4 Cs’:

David Sacks: But what about the 4 C’s? Let me address those concerns:

1. Child safety – Preemption would not apply to generally applicable state laws. So state laws requiring online platforms to protect children from online predators or sexually explicit material (CSAM) would remain in effect.

2. Communities – AI preemption would not apply to local infrastructure. That’s a separate issue. In short, preemption would not force communities to host data centers they don’t want.

3. Creators – Copyright law is already federal, so there is no need for preemption here. Questions about how copyright law should be applied to AI are already playing out in the courts. That’s where this issue will be decided.

4. Censorship – As mentioned, the biggest threat of censorship is coming from certain Blue States. Red States can’t stop this – only President Trump’s leadership at the federal level can.

In summary, we’ve heard the concerns about the 4 C’s, and the 4 C’s are protected.

But there is a 5th C that we all need to care about: competitiveness. If we want America to win the AI race, a confusing patchwork of regulation will not work.

Sacks wants to destroy any and all attempts to require transparency from frontier model developers, or otherwise address frontier safety concerns. He’s not even willing to give lip service to AI safety. At all.

His claim that ‘the 4Cs are protected’ is also absurd, of course.

I do not expect this attitude to play well.

AI preemption of state laws is deeply unpopular; we have numbers via Brad Wilcox:

Also, yes, someone finally made the correct meme.

Peter Wildeford: The entire debate over AI pre-emption is a huge trick.

I do prefer one national law over a “patchwork of state regulation”. But that’s not what is being proposed. The “national law” part is being skipped. It’s just stopping state law and replacing it with nothing.

That is with the obligatory ‘Dean Ball offered a serious proposed federal framework that could be the basis of a win-win negotiation.’ He totally did do that, which was great, but none of the actual policymakers have shown any interest.

The good news is that I also do not expect the executive order to curtail state laws. The constitutional challenges involved are, according to my legal sources, extremely weak. Similar executive orders have been signed for climate change, and seem to have had no effect. The only part likely to matter is the threat to withhold funds, which is limited in scope, very obviously not the intent of the law Trump is attempting to leverage, and highly likely to be ruled illegal by the courts.

The point of the executive order is not to actually shut down the state laws. The point of the executive order is that this administration hates to lose, and this is a way to, in their minds, save some face.

It is also now, in the wake of the H200 sales, far more difficult to play the ‘cede ground to China’ card. These are the first five responses to Cruz, in order, and the pattern continues, with a side of those defending states’ rights and no one supporting Cruz:

Senator Ted Cruz (R-Texas): Those disagreeing with President Trump on a nationwide approach to AI would cede ground to China.

If China wins the AI race, the world risks an order built on surveillance and coercion. The President is exactly right that the U.S. must lead in AI and cannot allow blue state regulation to choke innovation and stifle free speech.

OSINTdefender: You mean the same President Trump who just approved the sale of Nvidia’s AI Chips to China?

petebray: Ok and how about chips to China then?

Brendan Steinhauser: Senator, with respect, we cannot beat China by selling them our advanced chips.

Would love to see you speak out against that particular policy.

Lawrence Colburn: Why, then, would Trump approve the sale of extremely valuable AI chips to China?

Mike in Houston: Trump authorized Nvidia sales of their latest-generation AI chips to China (while taking a 25% cut). He’s already ceding the field in a more material way than state regulations… and not a peep from any of you GOP AI & NatSec “hawks.” Take a seat.

The House Select Committee on China is not happy about the H200 sales. The question, as the comments ask, is what is Congress going to do about it?

The good news is that it looks like the Chinese are once again going to try and save us from ourselves one more time?

Megatron: China refuses to accept Nvidia chips

Despite President Trump authorizing the sale of Nvidia H200 chips to China, China refuses to accept them and increases restrictions on their use – Financial Times.

Teortaxes: No means NO, chud

Well, somewhat.

Zijing Wu (Financial Times): Buyers would probably be required to go through an approval process, the people said, submitting requests to purchase the chips and explaining why domestic providers were unable to meet their needs. The people added that no final decision had been made yet.

Reuters: ByteDance and Alibaba (9988.HK) have asked Nvidia (NVDA.O) about buying its powerful H200 AI chip after U.S. President Donald Trump said he would allow it to be exported to China, four people briefed on the matter told Reuters.

The officials told the companies they would be informed of Beijing’s decision soon, The Information said, citing sources.

Very limited quantities of H200 are currently in production, two other people familiar with Nvidia’s supply chain said, as the U.S. chip giant has been focused instead on its most advanced Blackwell and upcoming Rubin lines.

The purchases are expected to be made in a ‘low key manner’ but done in size, although the number of H200s currently in production could become another limiting factor.

Why is the PRC so reluctant, never mind what its top AI labs might say?

Maybe it’s because they’re too busy smuggling Blackwells?

The Information: Exclusive: DeepSeek is developing its next major AI model using Nvidia’s Blackwell chips, which the U.S. has forbidden from being exported to China.

Maybe it’s because the Chinese are understandably worried about what happens when all those H200 chips go to America first for ‘special security reviews,’ or America restricting which buyers can purchase the chips. Maybe it’s the (legally dubious) 25% cut. Maybe it’s about dignity. Maybe they are emphasizing self-reliance and don’t understand the trade-offs and what they’re sacrificing.

My guess is this is the kind of high-level executive decision where Xi says ‘we are going to rely on our own domestic chips, the foreign chips are unreliable’ and this becomes a stop sign that carries the day. It’s a known weakness of authoritarian regimes and of China in particular, to focus on high level principles even in places where it tactically makes no sense.

Maybe China is simply operating on the principle that if we are willing to sell, there is a reason, so they should refuse to buy.

No matter which one it is? You love to see it.

If we offer to sell, and they say no, then that’s a small net win. It’s not that big of a win versus not making the mistake in the first place, and it risks us making future mistakes, but yeah if you can ‘poison the pill’ sufficiently that the Chinese refuse it, then that’s net good.

The big win would be if this causes the Chinese to crack down on chip smuggling. If they don’t want to buy the H200s straight up, perhaps they shouldn’t want anyone smuggling them either?

Ben Thompson as expected takes the position defending H200 sales, because giving America an advantage over China is a bad thing and we shouldn’t have it.

No, seriously, his position is that America’s edge in chips is destabilizing, so we should give away that advantage?

Ben Thompson: However, there are three big problems with this point of view.

  • First, I think that one country having a massive military advantage results in an unstable equilibrium; to reach back to the Cold War and nuclear as an obvious analogy, mutually assured destruction actually ended up being much more stable.

  • Second, while the U.S. did have such an enviable position after the dissolution of the Soviet Union, that technological advantage was married to a production advantage; today, however, it is China that has the production advantage, which I think would make the situation even more unstable.

  • Third, U.S. AI capabilities are dependent on fabs in Taiwan, which are trivial for China to destroy, at massive cost to the entire world, particularly the United States.

Thompson presents this as primarily a military worry, which is an important consideration but seems tertiary to me behind economic and frontier capability considerations.

Another development since Tuesday is it has come out that this sale is officially based on a straight up technological misconception that Huawei could match the H200s.

Edward Ludlow and Maggie Eastland (Bloomberg): President Donald Trump decided to let Nvidia Corp. sell its H200 artificial intelligence chips to China after concluding the move carried a lower security risk because the company’s Chinese archrival, Huawei Technologies Co., already offers AI systems with comparable performance, according to a person familiar with the deliberations.

… The move would give the US an 18-month advantage over China in terms of what AI chips customers in each market receive, with American buyers retaining exclusive access to the latest products, the person said.

… “This is very bad for the export of the full AI stack across the world. It actually undermines it,” said McGuire, who served in the White House National Security Council under President Joe Biden. “At a time when the Chinese are squeezing us as hard as they can over everything, why are we conceding?”

Ben Thompson: Even if we grant that the CloudMatrix 384 has comparable performance to an Nvidia NVL72 server — which I’m not completely prepared to do, but will for purposes of this point — performance isn’t all that matters.

House Select Committee on China: Right now, China is far behind the United States in chips that power the AI race.

Because the H200s are far better than what China can produce domestically, both in capability and scale, @nvidia selling these chips to China could help it catch up to America in total compute.

Publicly available analysis indicates that the H200 provides 32% more processing power and 50% more memory bandwidth than China’s best chip. The CCP will use these highly advanced chips to strengthen its military capabilities and totalitarian surveillance.

Finally, Nvidia should be under no illusions – China will rip off its technology, mass produce it themselves, and seek to end Nvidia as a competitor. That is China’s playbook and it is using it in every critical industry.

McGuire’s point is the most important one. Let’s say you buy the importance of the American ‘tech stack,’ meaning the ability to sell fully Western AI service packages that include cloud services, chips and AI models. The last thing you would do is enable the easy creation of a hybrid stack such as Nvidia-DeepSeek. That’s a much bigger threat to your business, especially over the next few years, than Huawei-DeepSeek. Huawei chips are not as good and are available only in highly limited quantities.

We can hope that this ‘18-month advantage’ principle does not get extended into the future. We are of course talking price; if it were a 6-year advantage, pretty much everyone would presumably be fine with it. Eighteen months is far too low a price, as these chips have useful lives of 5+ years.

Nathan Calvin: Allowing H20 exports seemed like a close call, in contrast to exporting H200s which just seems completely indefensible as far as I can tell.

I thought the H20 decision was not close, because China is severely capacity constrained, but I could see the case that it was sufficiently far behind to be okay. With the H200 I don’t see a plausible defense.

Senator Brian Schatz (D-Hawaii): Why the hell is the President of the United States willing to sell some of our best chips to China? These chips are our advantage and Trump is just cashing in like he’s flipping a condo. This is one of the most consequential things he’s done. Terrible decision for America.

Senator Elizabeth Warren (D-Massachusetts): After his backroom meeting with Donald Trump and his company’s donation to the Trump ballroom, CEO Jensen Huang got his wish to sell the most powerful AI chip we’ve ever sold to China. This risks turbocharging China’s bid for technological and military dominance and undermining U.S. economic and national security.

Senator Ruben Gallego (D-Arizona): Supporting American innovation doesn’t mean ignoring national security. We need to be smart about where our most advanced computing power ends up. China shouldn’t be able to repurpose our technology against our troops or allies.

And if American companies can strengthen our economy by selling to America first and only, why not take that path?

Senator Chuck Schumer (D-New York): Trump announced he was giving the green light for Nvidia to send even more powerful AI chips to China. This is dangerous.

This is a terrible deal, all at the expense of our national security. Trump must reverse course before it’s too late.

There are some excellent questions here, especially in that last section.

Senator Bernie Sanders (I-Vermont): Yes. We have to worry about AI and robotics.

Some questions:

Eliezer Yudkowsky: Thanks for asking the obvious questions! More people on all political sides ought to!

Indeed. Don’t be afraid to ask the obvious questions.

It is perhaps helpful to see the questions asked with a ‘beginner mind.’ Bernie Sanders isn’t asking about loss of control or existential threat because of a particular scenario. He’s asking for the even better reason that building something that surpasses our intelligence is an obviously dangerous thing to do.

Peter Wildeford talks with Ronny Chieng on The Daily Show. I laughed.

Buck Shlegeris is back to talk more about AI control.

Amanda Askell AMA.

Rational animations video on near term AI risks.

Dean Ball goes on 80,000 Hours. A sign of the times.

PSA on AI and child safety in opposition to any moratorium, narrated by Juliette Lewis. I am not the target.

Nathan Young and Rob built a dashboard of various estimates of timelines to AGI.

There’s trickiness around different AGI definitions, but the overall story is clear and it is consistent with what we have seen from various insiders and experts.

Timelines shortened dramatically in 2022, then shortened further in 2023 and stayed roughly static in 2024. There was some lengthening of timelines during 2025, but timelines remain longer than they were in 2023 even ignoring that two of those years are now gone.

If you use the maximalist ‘better at everything and in every way on every digital task’ definition, then that timeline is going to be considerably longer, which is why this average comes in at 2030.

Julian Togelius thinks we should delay scientific progress and curing cancer because if an AI does it we will lose the joy of humans discovering it themselves.

I think we should be wary while developing frontier AI systems because they are likely to kill literally everyone and invest heavily in ensuring that goes well, but that subject to that obviously we should be advancing science and curing cancer as fast as possible.

We are very much not the same.

Julian Togelius: I was at an event on AI for science yesterday, a panel discussion here at NeurIPS. The panelists discussed how they plan to replace humans at all levels in the scientific process. So I stood up and protested that what they are doing is evil. Look around you, I said. The room is filled with researchers of various kinds, most of them young. They are here because they love research and want to contribute to advancing human knowledge. If you take the human out of the loop, meaning that humans no longer have any role in scientific research, you’re depriving them of the activity they love and a key source of meaning in their lives. And we all want to do something meaningful. Why, I asked, do you want to take the opportunity to contribute to science away from us?

My question changed the course of the panel, and set the tone for the rest of the discussion. Afterwards, a number of attendees came up to me, either to thank me for putting what they felt into words, or to ask if I really meant what I said. So I thought I would return to the question here.

One of the panelists asked whether I would really prefer the joy of doing science to finding a cure for cancer and enabling immortality. I answered that we will eventually cure cancer and at some point probably be able to choose immortality. Science is already making great progress with humans at the helm.

… I don’t exactly know how to steer AI development and AI usage so that we get new tools but are not replaced. But I know that it is of paramount importance.

Andy Masley: It is honestly alarming to me that stuff like this, the idea that we ought to significantly delay curing cancer exclusively to give human researchers the personal gratification of finding it without AI, is being taken seriously at conferences

Sarah: Human beings will ofc still engage in science as a sport, just as chess players still play chess despite being far worse than SOTA engines. Nobody is taking away science from humans. Moreover, chess players still get immense satisfaction from the sport despite the fact they aren’t the best players of the game on the planet.

But to the larger point of allowing billions of people to needlessly suffer (and die) to keep an inflated sense of importance in our contributions – ya this is pretty textbook evil and is a classic example of letting your ego justify hurting literally all of humanity lol. Cartoon character level of evil.

So yes, I do understand that if you think that ‘build Sufficiently Advanced AIs that are superior to humans at all cognitive tasks’ is a safe thing to do and have no actually scary answers to ‘what could possibly go wrong?’ then you want to go as fast as possible, there’s lots of gold in them hills. I want it as much as you do, I just think that by default that path also gets us all killed, at which point the gold is not so valuable.

Julian doesn’t want ‘AI that would replace us’ because he is worried about the joy of discovery. I don’t want AI to replace us either, but that’s in the fully general sense. I’m sorry, but yeah, I’ll take immortality and scientific wonders over a few scientists getting the joy of discovery. That’s a great trade.

What I do not want to do is have cancer cured and AI in control over the future. That’s not a good trade.

The Pope continues to make obvious applause light statements, except we live in the timeline where the statements aren’t obvious, so here you go:

Pope Leo XIV: Human beings are called to be co-workers in the work of creation, not merely passive consumers of content generated by artificial technology. Our dignity lies in our ability to reflect, choose freely, love unconditionally, and enter into authentic relationships with others. Recognizing and safeguarding what characterizes the human person and guarantees their balanced growth is essential for establishing an adequate framework to manage the consequences of artificial intelligence.

Sharp Text responds to the NYT David Sacks hit piece, saying it missed the forest for the trees and focused on the wrong concerns, but that it is hard to have sympathy for Sacks because the article’s methods of insinuation are nothing Sacks hasn’t used on his podcast many times against liberal targets. I would agree with all that, and add that Sacks is constantly saying far worse, far less responsibly and in far more inflammatory fashion, on Twitter against those who are worried about AI safety. We all also agree that tech expertise is needed in the Federal Government. I would add that, while the particular conflicts raised by NYT are not that concerning, there are many better reasons to think Sacks is importantly conflicted.

Richard Price offers his summary of the arguments in If Anyone Builds It, Everyone Dies.

Clarification that will keep happening since morale is unlikely to improve:

Reuben Adams: There is an infinite supply of people “debunking” Yudkowsky by setting up strawmen.

“This view of AI led to two interesting views from a modern perspective: (a) AI would not understand human values because it would become superintelligent through interaction with natural laws”

The risk is not, and never has been, that AI won’t understand human values, but that it won’t care.

Apparently this has to be repeated endlessly.

This is in response to FleetingBits saying, essentially, ‘we figured out how to make LLMs have human values and how to make it not power seeking, and there will be many AIs, so the chance that superintelligent AI would be an existential risk is less than 1% except for misuse by governments.’

It should be obvious, when you put it that way, why that argument makes no sense, without the need to point out that the argument miscategorizes historical arguments and gets important logical points wrong.

It is absurd on its face. Creating superintelligent minds is not a safe thing to do, even if those minds broadly ‘share human values’ and are not inherently ‘power seeking.’

Yet people constantly make exactly this argument.

The AI ‘understanding’ human values, a step we have solved only approximately and superficially in a way that doesn’t generalize robustly, is only one step toward getting an AI to optimize for those human values even in out-of-distribution situations, let alone the (even harder) task of getting competing AIs to end up doing the same.

The fact that insufficiently capable LLMs do not by default ‘power seek’ in the strategic sense does not provide much evidence that future agentic, more capable and intelligent AIs, when given goals best served by power seeking, won’t seek power. Of course they would. And those AIs that do seek power will be the ones that get power, and that multiply. Also we’ll often tell them, in so many words, ‘seek power.’

Percentage of people at NeurIPS who knew what AGI was reached an all-time high of 69.5%, versus last year’s 63%, as per Leo Gao’s survey.

If you have the weights of an open model, you can run it unsupervised, you can fine tune it to remove any safety training, you can look inside the weights to do a gradient attack, and generally if you care you can get it to do whatever you want.

Jonathan Greenblatt (ADL): The ability to easily manipulate open-source AI models to generate antisemitic content exposes a critical vulnerability in the AI ecosystem. We need industry leaders and policymakers to work together to ensure these tools cannot be misused to spread antisemitism and hate.

In this report’s case it was antisemitism, which the models were often happy to engage in with minimal prompting and certainly without any jailbreaking techniques. It’s more a chance to reiterate the principle that an open model will do whatever a user wants, if that user cares enough and the AI has the underlying capabilities. Plan accordingly.

UK AISI runs an experiment where a red team presents two sandbagging models (as in, they are pretending to be less capable than they are) and three benign models, and a blue team has to identify the sandbaggers using tools like transcript analysis. Training-based elicitation worked best.

The red team won. The blue detection team was fooled.

Sam Bowman of Anthropic guesses that the two key reasons Claude Opus 4.5 turned out more aligned were the soul spec, which the model was trained on directly in order to steer its self-image, and also the inclusion of alignment researchers in every part of the training, and being willing to adjust on the fly based on what was observed rather than adhering to a fixed recipe.

Anthropic introduces Selective GradienT Masking (SGTM). The idea is that you contain certain concepts within a subsection of the weights, and then you remove that section of the weights. That makes it much harder to undo than other methods, even with adversarial fine-tuning, potentially making it something you could apply to open models. That makes it exciting, but if you delete the knowledge you actually delete the knowledge for all purposes.
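
To make the mechanism concrete, here is a minimal sketch of the core idea as described above: confine gradient updates for a target concept to a reserved slice of weights, keep all other training data out of that slice, then zero the slice before release. The layer, mask shape, and training loop are illustrative assumptions, not Anthropic’s actual implementation.

```python
# Minimal sketch of the selective-gradient-masking idea described above.
# The layer size, mask shape, and data routing are illustrative assumptions,
# not Anthropic's actual recipe.
import torch
import torch.nn as nn

layer = nn.Linear(512, 512)  # stand-in for one layer of a much larger model
reserved = torch.zeros_like(layer.weight, dtype=torch.bool)
reserved[:64, :] = True      # slice of weights set aside for the target concept

def masked_sgd_step(loss: torch.Tensor, is_concept_batch: bool, lr: float = 1e-3) -> None:
    """One SGD step whose gradients are confined by the reserved-slice mask."""
    layer.zero_grad()
    loss.backward()
    with torch.no_grad():
        grad = layer.weight.grad
        # Concept batches only update the reserved slice; everything else stays out of it.
        grad *= reserved if is_concept_batch else ~reserved
        layer.weight.sub_(lr * grad)

# Toy usage with random data standing in for concept and non-concept batches.
x = torch.randn(8, 512)
masked_sgd_step(layer(x).pow(2).mean(), is_concept_batch=True)
masked_sgd_step(layer(x).pow(2).mean(), is_concept_batch=False)

# 'Removing' the concept before release then amounts to zeroing the reserved slice.
with torch.no_grad():
    layer.weight[reserved] = 0.0
```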

What will powerful AIs want? Alex Mallen offers an excellent write-up and this graph:

As in, selection will choose, fractally, those models and model features that maximize being selected. The ways to be maximally fit at being selected (or at the ‘reward’ that causes such selection) are to maximize for the reward directly, to pursue consequences of being selected and thus the reward, or to be a selected-for kludge that happens to maximize it. At the limit, for any fixed target, those win out, and any flaw in your reward signal (your selection methods) will be fractally exploited.

Alex Mallen: The model predicts AI motivations by tracing causal pathways from motivation → behavior → selection of that motivation.

A motivation is “fit” to the extent its behaviors cause it to gain influence on the AI’s behavior in deployment.

One way to summarize the model: “seeking correlates of being selected is selected for”.

You can look at the causal graph to see what’s correlated with being selected. E.g., training reward is tightly correlated with being selected because it’s the only direct cause of being selected (“I have influence…”).

We see (at least) 3 categories of maximally fit motivations:

  1. Fitness-seekers: They pursue a close cause of selection. The classic example is a reward-seeker, but there’s others: e.g., an influence-seeker directly pursues deployment influence.

    In deployment, fitness-seekers might keep following local selection pressures, but it depends.

  2. Schemers: They pursue a consequence of selection—which can be almost any long-term goal. They’re fit because being selected is useful for nearly any long-term goal.

    Often considered scariest because arbitrary long-term goals likely motivate disempowering humans.

  3. Optimal kludges: Weighted collections of context-dependent motivations that collectively produce maximally fit behavior. These can include non-goal-directed patterns like heuristics or deontological constraints.

    Lots of messier-but-plausible possibilities lie in this category.

Importantly, if the reward signal is flawed, the motivations the developer intended are not maximally fit. Whenever following instructions doesn’t perfectly correlate with reward, there’s selection pressure against instruction-following. This is the specification gaming problem.

Implicit priors like speed and simplicity matter too in this model. You can also fix this by doing sufficiently strong selection in other ways to get the things you want over things you don’t want, such as held out evals, or designing rather than selecting targets. Humans do a similar thing, where we detect those other humans who are too strongly fitness-seeking or scheming or using undesired heuristics, and then go after them, creating anti-inductive arms races and plausibly leading to our large brains.

I like how this lays out the problem without having to directly name or assert many of the things that the model clearly includes and implies. It seems like a good place to point people, since these are important points that few understand.

What is the solution to such problems? One solution is a perfect reward function, but we definitely don’t know how to do that. A better solution is a contextually self-improving basin of targets.

FLI’s AI Safety Index has been updated for Winter 2025, full report here. I wonder if they will need to downgrade DeepSeek in light of the zero safety information shared about v3.2.

Luiza Jarovsky: – The top 3 companies from last time, Anthropic, OpenAI, and Google DeepMind, hold their position, with Anthropic receiving the best score in every domain.

– There is a substantial gap between these top three companies and the next tier (xAI, Z.ai, Meta, DeepSeek, and Alibaba Cloud), but recent steps taken by some of these companies show promising signs of improvement that could help close this gap in the next iteration.

– Existential safety remains the sector’s core structural failure, making the widening gap between accelerating AGI/superintelligence ambitions and the absence of credible control plans increasingly alarming.

– xAI and Meta have taken meaningful steps towards publishing structured safety frameworks, although limited in scope, measurability, and independent oversight.

– More companies have conducted internal and external evaluations of frontier AI risks, although the risk scope remains narrow, validity is weak, and external reviews are far from independent.

– Although there were no Chinese companies in the Top 3 group, reviewers noted and commended several of their safety practices mandated under domestic regulation.

– Companies’ safety practices are below the bar set by emerging standards, including the EU AI Code of Practice.

*Evidence for the report was collected up until November 8, 2025, and does not reflect the releases of Google DeepMind’s Gemini 3 Pro, xAI’s Grok 4.1, OpenAI’s GPT-5.1, or Anthropic’s Claude Opus 4.5.

Is it reasonable to expect people working at AI labs to sign a pledge saying they won’t contribute to a project that increases the chance of human extinction by 0.1% or more? Contra David Manheim you would indeed think this was a hard sell. It shouldn’t be: if you believe your project is on net increasing chances of extinction, then don’t do the project. It’s reasonable to say ‘this has a chance of causing extinction but an as big or bigger chance of preventing it,’ there are no safe actions at this point, but one should need to at least make that case to oneself.

The trilemma is real; please submit your proposals in the comments.

Carl Feynman: I went to the Post-AGI Workshop. It was terrific. Like, really fun, but also literally terrifying. The premise was, what if we build superintelligence, and it doesn’t kill us, what does the future look like? And nobody could think of a scenario where simultaneously (a) superintelligence is easily buildable, (b) humans do OK, and (c) the situation is stable. A singleton violates (a). AI keeping humans as pets violates (b). And various kinds of singularities and wars and industrial explosions violate (c). My p(doom) has gone up; more of my probability of non-doom rests on us not building it, and less on post-ASI utopia.

There are those who complain that it’s old and busted to complain that those who have [Bad Take] on AI or who don’t care about AI safety only think that because they don’t believe in AGI coming ‘soon.’

The thing is, it’s very often true.

Tyler Tracy: I asked ~20 non AI safety people at NeurIPS for their opinion of the AI safety field. Some people immediately were like “this is really good”. But the response I heard the most often was of the form “AGI isn’t coming soon, so these safety people are crazy”. This was surprising to me. I was expecting “the AGI will be nice to us” types of things, not a disbelief in powerful AI coming in the next 10 years

Daniel Eth: Reminder that basically everyone agrees that if AGI is coming soon, then AI risk is a huge problem & AI safety a priority. True for AI researchers as well as the general public. Honest to god ASI accelerationists are v rare, & basically the entire fight is on “ASI plausibly soon”

Yes, people don’t always articulate this. Many fail the “but I did have breakfast” test, so it can be hard to get them to say “if ASI is soon then this is a priority but I think it’s far”, and they sometimes default to “that’s crazy”. But once they think it’s soon they’ll buy in

jsd: Not at all surprising to me. Timelines remain the main disagreement between the AI Safety community and the (non influence-weighted) vast majority of AI researchers.

Charles: So many disagreements on AI and the future just look like they boil down to disagreements about capabilities to me.

“AI won’t replace human workers” -> capabilities won’t get good enough

“AI couldn’t pose an existential threat” -> capabilities won’t get good enough.

etc

Are there those in the ‘the AI will be nice to us’ camp? Sure. They exist. But strangely, despite AI now being considered remarkably near by remarkably many people – 10 years to AGI is not that many years and 20 still is not all that many – there has increasingly been a shift to ‘the safety people are wrong because AGI is sufficiently far I do not have to care,’ with a side of ‘that is (at most) a problem for future Earth.’

A very good ad:

Aleks Bykhum: I understood it. [He didn’t at first.]

This one still my favorite.


AI #146: Chipping In Read More »

after-npr-and-pbs-defunding,-fcc-receives-call-to-take-away-station-licenses

After NPR and PBS defunding, FCC receives call to take away station licenses

The CAR complaints were dismissed in January 2025 by then-FCC Chairwoman Jessica Rosenworcel and then revived by Carr after Trump appointed him to the chairmanship. Carr has continued making allegations of news distortion, including when he threatened to revoke licenses from ABC stations that air Jimmy Kimmel’s show.

During the Kimmel controversy, Carr said he was trying “to empower local TV stations to serve the needs of the local communities.” The FCC subsequently opened a proceeding titled, “Empowering Local Broadcast TV Stations to Meet Their Public Interest Obligations: Exploring Market Dynamics Between National Programmers and Their Affiliates.”

The FCC invited public comments on whether to adopt regulations “in light of the changes in the broadcast market that have led to anticompetitive leverage and behavior by large networks.” This could involve prohibiting certain kinds of contract provisions in agreements between networks and affiliate stations and strengthening the rights of local stations to reject national programming.

FCC criticized for attacks on media

The “Empowering Local Broadcast TV Stations” proceeding is the one in which the Center for American Rights submitted its comments. Besides discussing NPR and PBS, the group said that national networks “indoctrinate the American people from their left-wing perspective.”

“The consistent bias on ABC’s The View, for instance, tells women in red states who voted for President Trump that they are responsible for putting in office an autocratic dictator,” the Center for American Rights said.

The FCC proceeding drew comments yesterday from the National Hispanic Media Coalition (NHMC), which criticized Carr’s war against the media. “The Public Notice frames this proceeding as an effort to ‘empower local broadcasters’ in their dealings with national networks. But… recent FCC actions have risked using regulatory authority not to promote independent journalism, but to influence newsroom behavior, constrain editorial decision-making, and encourage outcomes aligned with the personal or political interests of elected officials,” the NHMC said.

The group said it supports “genuine local journalism and robust competition,” but said:

policies that reshape the balance of power between station groups, networks, and newsrooms cannot be separated from the broader regulatory environment in which they operate. Several of the Commission’s recent interventions—including coercive conditions attached to the Skydance/Paramount transaction, and unlawful threats made to ABC and its affiliate stations in September demanding they remove Jimmy Kimmel’s show from the airwaves—illustrate how regulatory tools can be deployed in ways that undermine media freedom and risk political interference in programming and editorial decisions.

After NPR and PBS defunding, FCC receives call to take away station licenses Read More »

cable-channel-subscribers-grew-for-the-first-time-in-8-years-last-quarter

Cable channel subscribers grew for the first time in 8 years last quarter

In a surprising, and likely temporary, turn of events, the number of people paying to watch cable channels has grown.

On Monday, research analyst MoffettNathanson released its “Cord-Cutting Monitor Q3 2025: Signs of Life?” report. It found that the pay TV operators, including cable companies, satellite companies, and virtual multichannel video programming distributors (vMVPDs) like YouTube TV and Fubo, added 303,000 net subscribers in Q3 2025.

According to the report, “There are more linear video subscribers now than there were three months ago. That’s the first time we’ve been able to say that since 2017.”

In Q3 2017, MoffettNathanson reported that pay TV gained 318,000 net new subscribers. But since then, the industry’s subscriber count has been declining, including a loss of 1,045,000 customers in Q2 2025, as depicted in the graph below.

MoffettNathanson pay TV subscriber losses

Credit: MoffettNathanson

The world’s largest vMVPD by subscriber count, YouTube TV, claimed 8 million subscribers in February 2024; some analysts estimate that number is now at 9.4 million. In its report, MoffettNathanson estimated that YouTube TV added 750,000 subscribers in Q3 2025, compared to 1 million in Q3 2024.

Traditional pay TV companies also contributed to the industry’s unexpected growth by bundling their services with streaming subscriptions. Charter Communications offers bundles with nine streaming services, including Disney+, Hulu, and HBO Max. In Q3 2024, it saw net attrition of 294,000 customers, compared to about 70,000 in Q3 2025. Other cable companies have made similar moves. Comcast, for example, launched a streaming bundle with Netflix, Peacock, and Apple TV in May 2024. For Q3 2025, Comcast reported its best pay TV subscriber result in almost five years, which was still a net loss of 257,000 customers.

Cable channel subscribers grew for the first time in 8 years last quarter Read More »

kindle-scribe-colorsoft-brings-color-e-ink-to-amazon’s-11-inch-e-reader

Kindle Scribe Colorsoft brings color e-ink to Amazon’s 11-inch e-reader

From left to right: the Kindle Scribe Colorsoft, the updated Kindle Scribe, and the lower-end Scribe without a front-lit screen. Credit: Amazon

Our review of the regular Kindle Colorsoft came away less than impressed, because there was only so much you could do with color on a small-screened e-reader that didn’t support pen input, and because it made monochrome text look a bit worse than it did on the regular Kindle Paperwhite. The new Scribe Colorsoft may have some of the same problems, which are mostly inherent to color e-ink technology as it exists today, but a larger screen will also be better for reading comics and graphic novels, and for reading and marking up full-color documents—there could be more of an upside, even if the technological tradeoffs are similar.

Amazon has still been slower to introduce color to its e-readers than its competitors, like last year’s reMarkable Paper Pro ($579 then, $629 now). The Scribe’s software has also felt a little barebones—the writing tools felt tacked on to the more mature reading experience offered by the Kindle’s operating system—but that’s gradually improving. All the new Scribes support syncing files with Google Drive and Microsoft OneDrive (though not Dropbox or other services), and the devices can export notebooks to Microsoft’s OneNote app so that you can pick up where you left off on a PC or Mac.

Other software improvements include a redesigned Home screen, “AI-powered search,” and a new shading tool that can be used to add shading or gradients to drawings and sketches; Amazon says that many of these software improvements will come to older Kindle Scribe models via software updates sometime next year.

This post was updated at 4:30 pm on December 10 to add a response from Amazon about software updates for older Kindle Scribe models.

Kindle Scribe Colorsoft brings color e-ink to Amazon’s 11-inch e-reader Read More »

sperm-donor-with-rare-cancer-mutation-fathered-nearly-200-children-in-europe

Sperm donor with rare cancer mutation fathered nearly 200 children in Europe

A single sperm donor who carries a rare cancer-causing genetic mutation has fathered at least 197 children across 14 countries in Europe, according to a collaborative investigation by 14 European news groups.

According to their investigative report, some of the children have already died, and many others are expected to develop deadly cancers.

The man—Donor 7069, alias “Kjeld”—carries a rare mutation in the TP53 gene, which codes for a critical tumor suppressor called protein 53 or p53. This protein (which is a transcription factor) keeps cells from dividing uncontrollably, can activate DNA repair processes amid damage, and can trigger cell death when a cell is beyond repair. Many cancers are linked to mutations in p53.

When a p53 mutation is passed down in sperm (a germline mutation), it causes a rare autosomal dominant condition called Li-Fraumeni syndrome, which greatly increases the risk of a variety of cancers in children and young adults. Those include cancers of the brain, blood, bone, soft tissue, adrenal glands, and breast, among others.

The estimated frequency of this type of mutation is between 1 in 5,000 and 1 in 20,000.

According to the investigation, the man was unaffected by the condition, but the mutation was present in around 20 percent of his sperm.

Sperm donor with rare cancer mutation fathered nearly 200 children in Europe Read More »

big-tech-joins-forces-with-linux-foundation-to-standardize-ai-agents

Big Tech joins forces with Linux Foundation to standardize AI agents

Big Tech has spent the past year telling us we’re living in the era of AI agents, but most of what we’ve been promised is still theoretical. As companies race to turn fantasy into reality, they’ve developed a collection of tools to guide the development of generative AI. A cadre of major players in the AI race, including Anthropic, Block, and OpenAI, has come together to promote interoperability with the newly formed Agentic AI Foundation (AAIF). This move elevates a handful of popular technologies and could make them a de facto standard for AI development going forward.

The development path for agentic AI models is cloudy to say the least, but companies have invested so heavily in creating these systems that some tools have percolated to the surface. The AAIF, which is part of the nonprofit Linux Foundation, has been launched to govern the development of three key AI technologies: Model Context Protocol (MCP), goose, and AGENTS.md.

MCP is probably the most well-known of the trio, having been open-sourced by Anthropic a year ago. The goal of MCP is to link AI agents to data sources in a standardized way—Anthropic (and now the AAIF) is fond of calling MCP a “USB-C port for AI.” Rather than creating custom integrations for every different database or cloud storage platform, MCP allows developers to quickly and easily connect to any MCP-compliant server.
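
For a concrete flavor of what that standardization looks like in practice, here is a minimal sketch of an MCP server using the FastMCP helper from the official MCP Python SDK; the server name and the tool below are hypothetical examples for illustration, not anything from the AAIF announcement.

```python
# Minimal sketch of an MCP server built with the official Python SDK's FastMCP helper.
# The "notes" server and its single tool are hypothetical stand-ins for a real data source.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("notes")

@mcp.tool()
def search_notes(query: str) -> str:
    """Return notes matching the query (stubbed out here for illustration)."""
    return f"No notes found for '{query}' (stub data source)."

if __name__ == "__main__":
    # Runs over stdio by default, so any MCP-compliant client or agent can connect to it.
    mcp.run()
```

Any agent that speaks MCP can then discover and call this tool the same way it would talk to any other MCP-compliant server, which is the “USB-C port for AI” idea in practice.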

Since its release, MCP has been widely used across the AI industry. Google announced at I/O 2025 that it was adding support for MCP in its dev tools, and many of its products have since added MCP servers to make data more accessible to agents. OpenAI also adopted MCP just a few months after it was released.

mcp simple diagram

Credit: Anthropic

Expanding use of MCP might help users customize their AI experience. For instance, the new Pebble Index 01 ring uses a local LLM that can act on your voice notes, and it supports MCP for user customization.

Local AI models have to make some sacrifices compared to bigger cloud-based models, but MCP can fill in the functionality gaps. “A lot of tasks on productivity and content are fully doable on the edge,” Qualcomm head of AI products, Vinesh Sukumar, tells Ars. “With MCP, you have a handshake with multiple cloud service providers for any kind of complex task to be completed.”

Big Tech joins forces with Linux Foundation to standardize AI agents Read More »

brazil-weakens-amazon-protections-days-after-cop30

Brazil weakens Amazon protections days after COP30


Backed by powerful corporations, nations are giving the public false choices: environmental protection or economic growth.

Deforestation fire in the Amazon rainforest. Credit: Brasil2

Despite claims of environmental leadership and promises to preserve the Amazon rainforest ahead of COP30, Brazil is stripping away protections for the region’s vital ecosystems faster than workers dismantled the tents that housed the recent global climate summit in Belém.

On Nov. 27, less than a week after COP30 ended, a powerful political bloc in Brazil’s National Congress, representing agribusiness and development interests, weakened safeguards for the Amazon’s rivers, forests, and Indigenous communities.

The rollback centered on provisions in an environmental licensing bill passed by the government a few months before COP30. The law began to take shape well before, during the Jair Bolsonaro presidency from 2019 to 2023. It reflected the deregulatory agenda of the rural caucus, the Frente Parlamentar da Agropecuária, which wielded significant power during his term and remains influential today.

Bolsonaro’s government openly supported weakening environmental licensing. His environment minister, Ricardo Salles, dismissed licensing as “a barrier to development” and pushed for broad deregulation.

Current President Luiz Inácio Lula da Silva vetoed many of its most controversial provisions in August, citing risks to Indigenous rights and environmental oversight. But in late November, the legislature overturned those vetoes and reinstated the contested sections.

“This is neither improving nor modernizing, it is simply deregulation,” said Sarah Sax, who analyzes Brazil’s climate and human rights policies as a researcher with Climate Rights International, a California-based nonprofit advocating for climate justice.

“It’s happening in Brazil in ways that mirror what you’re seeing around the world. These are proxy fights over democracy, human rights, and institutional power,” she said, noting a broader global pattern of industrial and political blocs pushing deregulation and weakening institutions designed to protect communities and ecosystems.

According to analyses by the Brazilian Academy of Sciences and other organizations, the provisions at issue will enable many projects to get permits by self-declaring compliance, without undergoing complete environmental impact assessments or third-party review.

Under the law, deforested properties or land cleared without a license can be retroactively legalized without restoring the land or ecological conditions, which rewards illegal deforestation. Larger projects, like irrigation, dams, and sanitation works, as well as roads and energy infrastructure, can proceed with minimal environmental scrutiny, risking more forest fragmentation and habitat destruction. And the licensing changes narrow who must be recognized and consulted during reviews, which could exclude communities without formal land titles.

A human rights issue

It’s alarming that the legislature overrode the vetoes, said Astrid Puentes Riaño, the United Nations special rapporteur on the human right to a healthy environment. As it stands now, the law may violate Brazil’s international environmental commitments, she added.

“What is at stake is [whether] Brazil, as a country, is able to effectively protect the environment, including all their fundamental resources,” she said.

She noted that Brazil is not facing this problem alone.

“I think that we, unfortunately, are seeing a wave of regressions globally toward weakened environmental impact assessments, because they’re seen as obstacles for development and investment,” she said.

But cutting reviews when science clearly shows that the planet is facing a “triple crisis of climate change, biodiversity loss, and toxic contamination” is a huge step in the wrong direction.

“Environmental impact assessments are not a checklist in a supermarket,” she said. “They are an essential element for states to prevent environmental, climate, human rights, and social impacts.”

She emphasized that weakening environmental review isn’t a technocratic tweak or political win for one side. It undermines the foundations of public health, Indigenous rights, and climate safety.

“This is not about politics, it’s about survival,” she said. “Some of these impacts on water, on air, on biodiversity, on people’s health, are irreversible. These are not things you can fix later.”

Climate backlash is scientifically unfounded

The fight over Brazil’s environmental licensing law can be seen as a microcosm of global climate policy tensions, with governments performatively signaling climate ambition at international meetings, such as COP30, while doubling down on economic nationalism by claiming there is no money for climate action at home and instead financing measures to boost development and growth.

Claudio Angelo, with Brazilian NGO Observatório do Clima, said that this false-choice paradigm was “certainly an underlying theme” in the debates over the law.

“It has appeared in the speeches of most Congressmen who voted for the new legislation and to overturn Lula’s vetoes,” he said. “But, more worryingly, there was a lot of sheer disinformation.”

The two lobbying groups that pushed for the law that weakens environmental reviews repeatedly said that the existing licensing process is too slow and thus hampers economic progress. They claimed, without proof, that thousands of projects were stuck in the permitting process.

“But in the end, this may have been more about hubris than anything,” Angelo said. “Congress did that because it could. And because the private interests most Congressmen serve don’t want any regulation of any kind.”

Even without a complete analysis, it’s clear that cutting environmental reviews conflicts indirectly with Brazil’s climate plans, making it more difficult to stop deforestation.

Angelo expects some environmental groups will challenge the new law. Parts of it are subject to a 180-day waiting period, he said, so the final outcome is unclear. But a companion measure that passed as an executive order just this week creates a fast-track permitting process for projects the government deems strategic, and it is effective immediately.

Puentes Riaño said recent advisory opinions from the International Court of Justice and the Inter-American Court of Human Rights make it clear that states must “use all means at their disposal to prevent actions that cause significant harm” to the Earth’s climate.

A growing body of research in ecological economics shows that such false choices are mainly a political narrative used by special interest groups to justify deregulation, despite evidence showing that degrading ecosystems undermines both climate goals and economic resilience.

Mainstream science and climate reports, including the Sixth Assessment Report from the Intergovernmental Panel on Climate Change and the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services, directly contradict the idea that countries must choose between protecting ecosystems and achieving economic development.

The studies show that intact forests, healthy rivers, and secure Indigenous and local land rights are among the most effective and cost-efficient climate mitigation strategies available, delivering carbon sequestration, ecosystem resilience, public health and long-term economic stability. The IPCC explicitly recognizes community-led stewardship and ecosystem protection as core pillars of climate action, not afterthoughts.

Those scientific realities also underpinned the 2015 Paris Agreement, which ignited more public pressure for climate action, including youth-led mass demonstrations like the Fridays for Future marches that swelled in 2019. For a short time after COP21 in Paris, climate ambition rose worldwide, pushing governments to adopt stronger targets and framing ecosystems and community rights as essential to mitigation.

But the COVID-19 pandemic and Russia’s invasion of Ukraine unleashed overlapping economic shocks that reset priorities. Governments focused on energy security, food security, supply chains, and inflation, creating openings for industrial and agricultural lobbies to argue that environmental rules hindered economic recovery.

Those pressures dovetailed with persistent strains of economic nationalism and identity politics, strengthening political forces that frame environmental safeguards as constraints on sovereignty and growth. The result was global regulatory rollbacks, from the US and Canada to mining regions in Indonesia and Australia, each framed as necessary to speed development, stabilize supply chains, or serve economic self-interest.

Germany, for example, arrived at COP30 emphasizing its commitment to ambitious climate action. But weeks later, its new government under Chancellor Friedrich Merz pressed the European Union to weaken or delay the bloc’s 2035 phaseout of gas- and diesel-fueled cars.

The move mirrored Brazil’s own post-COP30 reversal. In both cases, political leaders under pressure from domestic industries framed their actions as necessary to defend national interests amid economic uncertainty.

Why it matters

Brazil’s post-COP30 shift toward deregulation in the name of economic development has far-reaching implications because Amazon forests influence global climate and weather patterns, circulating vast amounts of heat and water with effects that ripple far beyond the Amazon Basin.

Moisture from the rainforests creates a belt of rising, humid air that shapes rainfall patterns from the Andes to the US Gulf Coast. Research shows that when large areas of the Amazon are cleared or degraded, the system weakens, shifting precipitation patterns in ways that can amplify droughts in South America and intensify rainfall extremes elsewhere.

Drier Amazon conditions also warm the tropical Atlantic and can change winds that shape Atlantic hurricane formation, potentially boosting the frequency or intensity of storms that strike the Caribbean and North America. Research on long-distance links in the climate system shows that Amazon drying can also reduce summer rainfall across the US Midwest and Southern Plains, regions that depend on predictable precipitation for agriculture.

And the Amazon’s role as a critical carbon sink is also at risk. Its vegetation and soils store about 150 billion to 200 billion metric tons of carbon, equivalent to about 70 to 90 years of annual US fossil-fuel carbon dioxide emissions.

Brazil’s land-use sector is already one of the world’s largest sources of climate-warming pollution. Deforestation, fires, and forest degradation in the Amazon and Cerrado savanna account for 700 million to 800 million metric tons of climate-warming gases annually, equal to Germany’s yearly emissions.

Research shows that additional degradation enabled by the licensing law increases the risk of rainforest dieback, which could convert large tracts of rainforest to drier, savanna-like conditions. That would push the region closer to a tipping point beyond which the Amazon would drive accelerated warming rather than helping to stabilize the climate.

Brazil’s reversal lands at a moment when the world can least afford mixed signals. COP30 ended with Indigenous leaders warning that “our land is not for sale” and that “we can’t eat money,” reminding delegates that protecting forests is not an abstraction but a matter of survival.

Brazil’s decision to weaken environmental protections so soon after COP30 captures the larger crisis facing global climate policy: the widening gap between international promises and domestic political choices. And the Amazon can’t withstand much more waffling, Sax said.

“There is no planet B,” she said. “This is the fight.”

This article originally appeared on Inside Climate News, a nonprofit, non-partisan news organization that covers climate, energy and the environment. Sign up for their newsletter here.


Brazil weakens Amazon protections days after COP30 Read More »

streaming-service-makes-rare-decision-to-lower-its-monthly-fees

Streaming service makes rare decision to lower its monthly fees

Somewhere, a pig is catching some sweet air.

In a rare move for a streaming service, Fubo announced today that it’s lowering the prices for some of its subscription plans.

Fubo is a sports-focused vMVPD (virtual multichannel video programming distributor, or a company that enables people to watch traditional TV channels live over the Internet). Disney closed its acquisition of Fubo in October.

Today, Fubo announced that monthly prices for some of its “Live TV” subscription plans, which carry hundreds of channels, including non-sports ones like FX and The Disney Channel, will be up to 14.8 percent cheaper. The new pricing starts with “bill cycle dates on or after January 1, 2026,” Fubo said.

Here are the new prices:

  • Essential: $74/month (previously $85/month)
  • Pro: $75/month (previously $85/month)
  • Elite: $84/month (previously $95/month)

When streaming services make announcements about price, it almost always means higher costs for subscribers.

However, some subscribers likely feel that the price cut is a necessity rather than a perk, since Fubo has not had NBCUniversal channels since November 21. The blacked-out channels include local NBC affiliates, Telemundo, nine regional sports channels, and 32 channels, including Bravo, CNBC, MSNBC, and USA Network. (Fubo noted that subscribers may also pay lower fees after the January billing cycles if any regional sports networks they previously received are no longer available on Fubo.) Fubo previously announced that it would give subscribers a $15 credit due to the blackout.

A Fubo spokesperson told Ars Technica that the new prices “reflect NBCU pulling their networks from Fubo.”

Fubo’s representative said they couldn’t comment on whether the new prices would stick if Fubo gets NBCUniversal channels back because that’s “speculative.”

Fubo’s NBCUniversal blackout

In a statement on November 25, Fubo claimed that NBCUniversal is trying to overcharge Fubo for the channels that will live under Versant, a company to be created from the spinoff of NBCUniversal’s cable channels and other digital properties, which is supposed to debut in January.

“Despite them not being worth the cost to Fubo subscribers, Fubo offered to distribute Versant channels for one year,” Fubo said. “NBCU wants Fubo to sign a multi-year deal—well past the time the Versant channels will be owned by a separate company. NBCU wants Fubo subscribers to subsidize these channels.”

Streaming service makes rare decision to lower its monthly fees Read More »

netflix’s-$72b-wb-acquisition-confounds-the-future-of-movie-theaters,-streaming

Netflix’s $72B WB acquisition confounds the future of movie theaters, streaming


Netflix’s plans to own HBO Max, DC Comics, Harry Potter to face regulatory scrutiny.

The bidding war is over, and Netflix has been declared the winner.

After flirting with Paramount Skydance and Comcast, Warner Bros. Discovery (WBD) has decided to sell its streaming and movie studios business to Netflix. If approved, the deal is set to upend the media landscape and create ripples that will affect Hollywood for years.

$72 billion acquisition

Netflix will pay an equity value of $72 billion, or an approximate total enterprise value of $82.7 billion, for Warner Bros. For comparison, all of WBD has a $60 billion market value, NBC News notes.

The acquisition will take place after WBD completes its split into two separate companies: Warner Bros., which will house the streaming and studios businesses, including the film and TV libraries and the HBO channel, and Discovery Global, which will take the other TV networks, including CNN and TBS. The split is expected to finish in Q3 2026.

Additionally, Netflix’s acquisition is subject to regulatory approvals, WBD shareholder approval, and other “customary closing conditions.”

Netflix expects the purchase to net it more subscribers, higher engagement, and “at least $2–3 billion of cost savings per year by the third year,” its announcement said.

Netflix co-CEO Greg Peters said in a statement that Netflix will use its global reach and business model to bring WB content to “a broader audience.”

The announcement didn’t specify what the deal means for current WBD staff, including president and CEO David Zaslav. Gunnar Wiedenfels, currently WBD’s CFO, is expected to become CEO of Discovery Global after the split.

Netflix to own HBO Max

Netflix will have to overcome regulatory hurdles to complete this deal, which would transform it from a streaming king into an entertainment juggernaut. If completed, the world’s largest streaming service by subscribers (301.63 million as of January) will own its third-biggest rival (WBD has 128 million streaming subscribers, most of whom are HBO Max users).

The acquisition would also give Netflix power over a mountain of current and incoming titles, including massive global franchises DC Comics, Game of Thrones, and Harry Potter.

If the deal goes through, Netflix said it will incorporate content from WB Studios, HBO Max, and HBO into Netflix. Netflix is expected to keep HBO Max available as a separate service, at least for the near term, Variety reported today. However, it’s easy to see a future where Netflix tries to push subscriptions bundling Netflix and HBO Max before consolidating the services into one product that would likely be more expensive than Netflix is today. Disney is setting the precedent with its bundles of Disney+ and the recently acquired Hulu, and by featuring a Hulu section within the Disney+ app.

Before today’s announcement, industry folks were concerned about Netflix potentially owning that much content while dominating streaming. However, Netflix said today that buying WB would enable it to “significantly expand US production capacity and continue to grow investment in original content over the long term, which will create jobs and strengthen the entertainment industry.”

Uniting Netflix and HBO Max’s libraries could make it easier for streaming subscribers to find content with fewer apps and fewer subscriptions. However, subscribers could also be negatively impacted (especially around pricing) if Netflix gains too much power, both as a streaming company and media rights holder.

In WBD’s most recent earnings report, its streaming business reported $45 million in quarterly earnings before interest, taxes, depreciation, and amortization. Netflix reported a quarterly net income of $2.55 billion in its most recent earnings report.

Netflix hasn’t detailed plans for the HBO cable channel, but given Netflix’s streaming ethos, the linear network may not endure in the long term. Still, since the HBO brand is valuable, we expect the name to persist, even if it’s just as a section of prestige titles within Netflix.

“A noose around the theatrical marketplace”

Among the stakeholders most up in arms about the planned acquisition is the movie theater industry. Netflix co-CEO Ted Sarandos has historically seen minimal value in theaters as a distribution method. In April, he said that making movies “for movie theaters, for the communal experience” is “an outmoded idea.”

Today, Sarandos said that under Netflix, all WB movies will still hit theaters as planned, covering releases through 2029, per Variety.

During a conference call today, Sarandos said he has no “opposition to movies in theaters,” adding, per Variety:

My pushback has been mostly in the fact of the long exclusive windows, which we don’t really think are that consumer-friendly. But when we talk about keeping HBO operating, largely as it is, that also includes their output movie deal with Warner Bros., which includes a life cycle that starts in the movie theater, which we’re going to continue to support.

Notably, the executive said that “Netflix movies will take the same strides they have, which is, some of them do have a short run in the theater beforehand.”

Anticipating today’s announcement, the movie theater industry has been pushing for regulatory scrutiny over the sale of WB.

Michael O’Leary, CEO and president of Cinema United, the biggest exhibition trade organization, said in a statement today about the Netflix acquisition:

Regulators must look closely at the specifics of this proposed transaction and understand the negative impact it will have on consumers, exhibition, and the entertainment industry.

In a letter sent to Congress members this month, an anonymous group that described itself as “concerned feature film producers” wrote that Netflix’s purchase of WB would “effectively hold a noose around the theatrical marketplace” by reducing the number of theatrical releases and driving down the price of licensing fees for films after their theatrical release, as reported by Variety.

Up next: Regulatory hurdles

In the coming weeks, we’ll get a clearer idea of how antitrust concerns and politics may affect Netflix’s acquisition plans.

Recently, other media companies, such as Paramount, have been accused of trying to curry favor with US President Donald Trump in order to get deals approved. The US Department of Justice (DOJ) could try to block Netflix’s acquisition of WB. But there’s reason for Netflix and WB to remain optimistic if that happens: in 2018, AT&T and Time Warner successfully defeated the DOJ’s attempt to block their merger in court.

Still, Netflix and WB have their work cut out for them as skepticism around the deal grows. Last month, US Senators Elizabeth Warren (D-Mass.), Richard Blumenthal (D-Conn.), and Bernie Sanders (I-Vt.) wrote to the DOJ’s antitrust division urging it to ensure that any WB deal “is grounded in the law, not President Trump’s political favoritism.”

In a letter to Attorney General Pam Bondi last month, Rep. Darrell Issa (R-Calif.) said that buying WB would “enhance” Netflix’s “unequaled market power” and be “presumptively problematic under antitrust law.”

In a statement about Netflix’s announcement shared by NBC News today, a spokesperson for the California attorney general’s office said:

“The Department of Justice believes further consolidation in markets that are central to American economic life—whether in the financial, airline, grocery, or broadcasting and entertainment markets—does not serve the American economy, consumers, or competition well.”

Netflix’s rivals may also seek to challenge the deal. Attorneys for Paramount questioned the “fairness and adequacy” of WBD’s sales process ahead of today’s announcement.


Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.

Netflix’s $72B WB acquisition confounds the future of movie theaters, streaming Read More »