
Anthropic sues US over blacklisting; White House calls firm “radical left, woke”


Anthropic says it was blacklisted for opposing autonomous weapons, mass surveillance.

Anthropic sued the Trump administration yesterday in an attempt to reverse the government’s decision to blacklist its technology. Anthropic argues that it exercised its First Amendment rights by refusing to let its Claude AI models be used for autonomous warfare and mass surveillance of Americans and that the government blacklisted it in retaliation.

“When Anthropic held fast to its judgment that Claude cannot safely or reliably be used for autonomous lethal warfare and mass surveillance of Americans, the President directed every federal agency to ‘IMMEDIATELY CEASE all use of Anthropic’s technology’—even though the Department of War had previously agreed to those same conditions,” Anthropic said in a lawsuit in US District Court for the Northern District of California. “Hours later, the Secretary of War [Pete Hegseth] directed his Department to designate Anthropic a ‘Supply-Chain Risk to National Security,’ and further directed that ‘effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.’”

Anthropic said the First Amendment gives it “the right to express its views—both publicly and to the government—about the limitations of its own AI services and important issues of AI safety.” Anthropic further argued that the process for designating it a supply chain risk did not comply with the procedures mandated by Congress. The supply chain risk designation is supposed to be used only to protect against risks that an adversary may sabotage systems used for national security, the lawsuit said.

Trump’s directive “requiring every federal agency to immediately cease all use of Anthropic’s technology, and actions taken by other defendants in response to that directive, are outside any authority that Congress has granted the Executive,” and violate the Fifth Amendment’s due process clause, Anthropic said.

Anthropic’s lawsuit was filed against Hegseth, the Department of War (previously called the Department of Defense), and numerous other federal agencies. Anthropic also filed a motion for preliminary injunction and a second lawsuit asking for review in the US Court of Appeals for the District of Columbia Circuit.

White House: Anthropic is “radical left, woke company”

The Pentagon declined to comment. The White House responded by calling Anthropic a “radical left” and “woke” firm.

“President Trump will never allow a radical left, woke company to jeopardize our national security by dictating how the greatest and most powerful military in the world operates,” a White House spokesperson said in a statement provided to Ars. “The President and Secretary of War are ensuring America’s courageous warfighters have the appropriate tools they need to be successful and will guarantee that they are never held hostage by the ideological whims of any Big Tech leaders. Under the Trump Administration, our military will obey the United States Constitution—not any woke AI company’s terms of service.”

A brief supporting Anthropic was filed in the California federal court by the Foundation for Individual Rights and Expression, the Electronic Frontier Foundation, the Cato Institute, the Chamber of Progress, and the First Amendment Lawyers Association. The groups said that Pentagon retaliation against Anthropic will “silence future speech from those who fear the government attempting to harm their business or extinguish it entirely.”

Calling the government’s actions “transparently retaliatory and coercive,” the advocacy groups wrote that the court “need not guess at the government’s retaliatory motives because the Pentagon has already announced them… Until recently, it was rare for government leaders to so openly and proudly boast about retaliating against someone for their protected speech. Now it is commonplace. Evidently only those who agree to be complicit in this administration’s assertion of unfettered power are safe.”

Google and OpenAI staff support lawsuit

Another brief supporting Anthropic was filed by various technical, engineering, and research employees of Google and OpenAI. Google is an investor in Anthropic. The Google and OpenAI employees wrote that “mass domestic surveillance powered by AI poses profound risks to democratic governance—even in responsible hands.” On the topic of autonomous weapon systems, they wrote that “current AI models are not reliable enough to bear the responsibility of making lethal targeting decisions entirely alone, and the risks of their deployment for that purpose require some kind of response and guardrails.”

The Google and OpenAI employees said that in using the supply chain risk designation “in response to Anthropic’s contract negotiations, [the Pentagon] introduces an unpredictability in our industry that undermines American innovation and competitiveness. It chills professional debate on the benefits and risks of frontier AI systems and various ways that risks can be addressed to optimize the technology’s deployment.”

Anthropic CEO Dario Amodei explained the company’s objections to certain AI uses in a February 26 post. “We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values,” he wrote.

Current law allows the government to “purchase detailed records of Americans’ movements, web browsing, and associations from public sources without obtaining a warrant,” and “AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale,” Amodei wrote.

CEO: Autonomous weapons too risky

Amodei expressed support for partially autonomous weapons like those used in Ukraine, but not for fully autonomous weapon systems “that take humans out of the loop entirely and automate selecting and engaging targets.” He said that fully autonomous weapons “may prove critical for our national defense” eventually but that AI is not yet reliable enough to power them.

“We will not knowingly provide a product that puts America’s warfighters and civilians at risk,” he wrote. “We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer. In addition, without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day. They need to be deployed with proper guardrails, which don’t exist today.”

Trump responded with a Truth Social post on February 27. “The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution,” Trump wrote. “Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY.”

Hegseth then wrote that “Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon.” Hegseth said the military “must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.”

Anthropic said later that day that it had engaged in months of negotiations with the government and would challenge any supply chain risk designation in court. “Designating Anthropic as a supply chain risk would be an unprecedented action—one historically reserved for US adversaries, never before publicly applied to an American company… No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons,” Anthropic said.


Anthropic Officially, Arbitrarily and Capriciously Designated a Supply Chain Risk

Make no mistake about what is happening.

The Department of War (DoW) demanded Anthropic bend the knee and give them ‘unfettered access’ to Claude, without understanding what that even meant. If they didn’t get what they wanted, they threatened both to use the Defense Production Act (DPA) to make Anthropic give the military this vital product, and to designate the company a supply chain risk (SCR).

Hegseth sent out an absurdly broad SCR announcement on Twitter that had absolutely no legal basis and that, if implemented as written, would have amounted to corporate murder. They have now issued an official notification, which is still illegal, arbitrary and capricious, but is scoped narrowly and won’t be too disruptive.

Nominally, the SCR designation is because the government cannot rely on that same product when the company has not bent the knee and might object to some uses of its private property that it never agreed to allow.

No one actually believes this. No one is pretending others should believe this. If they have real concerns, there are numerous less restrictive and less disruptive tools available to the Department of War. Many have the bonus of being legal.

In actuality, this is a massive escalation, purely as punishment.

DoW is saying that if you claim the right to choose when and how others use your private property, and offer to sign some contracts but not others, this means you are trying to ‘usurp power’ and dictate government decisions.

It is saying that if you do not bend the knee, if your business does not do what we want, then we cannot abide this. We will illegally retaliate and end your business.

That is not how the law works. That is not how a Republic works.

This was completely unnecessary. Talks were ongoing. The two sides were close. The deal DoW signed with OpenAI, the same night as the original SCR designation, violates exactly the red-line principles and demands that the DoW says admit no compromise.

The good news is that there are those who managed to limit this to a narrowly tailored SCR, one that applies only to direct provision under government contracts. Otherwise, this does not apply to you. Even if that gets tied up in court indefinitely, this will not inflict too much damage on either Anthropic or national security.

The question is how much jawboning, or what further steps, come after this, but for now we have dodged the even worse outcomes that were keeping us up at night.

You might be tempted to think of or present this as the DoW backing down. Don’t.

Why not? Two good reasons.

  1. It isn’t true.

    1. This uses 10 USC 3252 because they’d have been laughed out of court if they’d tried to match the no-legal-basis word salad from Friday at 5:14 pm.

    2. Given the use of 10 USC 3252, this is as broad as that authority allows.

    3. The fact that they toyed with doing something even worse does not make this not an arbitrary, capricious and dramatic escalation purely as punishment.

  2. The DoW cannot see itself as backing down, or it will do even worse things.

Dean W. Ball: No one should frame the DoW’s supply chain risk designation as the government “backing down.” If that becomes “the narrative,” it could encourage further action to avoid the appearance of weakness.

It is also not true that it is backing down; the government really is exercising its supply chain risk designation authority under 10 USC 3252 to the fullest extent (and this is assuming it’s even legitimate to use it on an American firm, which is deeply questionable).

Hegseth’s threat was far broader than his power, which is the only reason this seems de-escalatory. If you had asked me for a worst-case scenario before Hegseth’s tweet last Friday, I would have told you precisely what has unfolded. This could mean that any vendor of widely used enterprise software (Microsoft, Apple, Salesforce, etc.) could be barred from using Anthropic in the maintenance of any codebases offered to DoW as part of a military contract, for example. Any startup that views DoW as a potential customer for its products will preemptively have to avoid Claude. This is still a massive punishment from USG.

You might also ask: if I knew Hegseth’s power was more limited than he threatened, why did I take his threat at face value? The answer is that we have so clearly moved past the realm of reason here that, well, to a first approximation, I take the guy who runs the biggest military on Earth at his word when he issues threats.

Sometimes some people should talk in carefully chosen Washington language, as ARI does here. Sometimes I even do it. This is not one of those times.

  1. Post Overview.

  2. Anthropic’s Statement on the SCR.

  3. What The Actual SCR Designation Says.

  4. Enemies of The Republic.

  5. Regulation Need Not Seize The Means Of Production.

  6. Microsoft Stands Firm.

  7. Calling This What It Is.

  8. What To Expect Next.

This post is an update on events since the publication of the weekly, and an attempt to reiterate key events and considerations to put everything into context.

For details and analysis of previous events, see my previous posts:

  1. Anthropic and The Department of War, from February 25.

  2. Anthropic and the DoW: Anthropic Responds, from February 27.

  3. A Tale of Three Contracts, from March 3.

  4. AI #158: The Department of War, from March 5.

For those following along, these are the key events since last time:

  1. Wednesday morning: Talks between Anthropic and the DoW have resumed, in line with FT reporting, and progress on concrete proposals is being made.

  2. Wednesday afternoon: An internal Anthropic memo from Friday evening uncharacteristically leaks. Most of it was correct technical explanation of the situation, along with some suppositions that were reasonable as of the time of writing, but it also included some ill-considered statements that caused fallout. Negotiations were disrupted.

  3. Thursday morning: All quiet as everyone dealt with fallout from the leaked internal Anthropic memo. Scrambling to keep things contained continues.

  4. Thursday, 1pm: Katrina Manson reports that the Pentagon has sent a formal SCR to Anthropic, but the report has no details.

  5. Thursday afternoon: Reporting comes out that ‘Trump plans U.S. control over global AI chip sales.’ It remains unclear what this means, but Commerce has been very clear that they are not bringing back the diffusion rules and that the early reporting gave a false impression. We still await clarity on what is changing.

  6. Thursday evening: Anthropic issues a conciliatory statement, noting that the SCR is of limited scope and need not impact the vast majority of customers, pointing out that everyone wants the same outcomes and wants to work together and that discussions have been ongoing, and directly and personally apologizing for the leaked Anthropic memo that Dario Amodei wrote on Friday night.

  7. Meanwhile: Various people continue to advocate against private property.

It was an excellent statement. I’m going to quote it in full, since no one clicks links and I believe they would want me to do this.

Dario Amodei (CEO Anthropic): Yesterday (March 4) Anthropic received a letter from the Department of War confirming that we have been designated as a supply chain risk to America’s national security.

As we wrote on Friday, we do not believe this action is legally sound, and we see no choice but to challenge it in court.

The language used by the Department of War in the letter (even supposing it was legally sound) matches our statement on Friday that the vast majority of our customers are unaffected by a supply chain risk designation. With respect to our customers, it plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts.

The Department’s letter has a narrow scope, and this is because the relevant statute (10 USC 3252) is narrow, too. It exists to protect the government rather than to punish a supplier; in fact, the law requires the Secretary of War to use the least restrictive means necessary to accomplish the goal of protecting the supply chain. Even for Department of War contractors, the supply chain risk designation doesn’t (and can’t) limit uses of Claude or business relationships with Anthropic if those are unrelated to their specific Department of War contracts.

I would like to reiterate that we had been having productive conversations with the Department of War over the last several days, both about ways we could serve the Department that adhere to our two narrow exceptions, and ways for us to ensure a smooth transition if that is not possible. As we wrote on Thursday, we are very proud of the work we have done together with the Department, supporting frontline warfighters with applications such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.

As we stated last Friday, we do not believe, and have never believed, that it is the role of Anthropic or any private company to be involved in operational decision-making—that is the role of the military. Our only concerns have been our exceptions on fully autonomous weapons and mass domestic surveillance, which relate to high-level usage areas, and not operational decision-making.

I also want to apologize directly for a post internal to the company that was leaked to the press yesterday. Anthropic did not leak this post nor direct anyone else to do so—it is not in our interest to escalate this situation. That particular post was written within a few hours of the President’s Truth Social post announcing Anthropic would be removed from all federal systems, the Secretary of War’s X post announcing the supply chain risk designation, and the announcement of a deal between the Pentagon and OpenAI, which even OpenAI later characterized as confusing. It was a difficult day for the company, and I apologize for the tone of the post. It does not reflect my careful or considered views. It was also written six days ago, and is an out-of-date assessment of the current situation.

Our most important priority right now is making sure that our warfighters and national security experts are not deprived of important tools in the middle of major combat operations. Anthropic will provide our models to the Department of War and national security community, at nominal cost and with continuing support from our engineers, for as long as is necessary to make that transition, and for as long as we are permitted to do so.

Anthropic has much more in common with the Department of War than we have differences. We both are committed to advancing US national security and defending the American people, and agree on the urgency of applying AI across the government. All our future decisions will flow from that shared premise.

I believe and hope that this will help move things forward towards de-escalation.

Secretary of War Pete Hegseth’s original tweet on Friday at 5:14 pm was not a legal document. It claimed that it would bar anyone doing business with the DoW from doing any business with Anthropic, for any reason. This would in effect have been an attempt at corporate murder, since it would have attempted to force Anthropic off of the major cloud providers, and would have forced many of its largest shareholders to divest.

That move would have had no legal basis whatsoever, and no physical logic either, since selling goods or services to Anthropic, or providing Anthropic services to others, obviously has no impact on the military supply chain. It would not have survived a court challenge. But if Anthropic failed to get a TRO, that alone could have caused major disruptions and a stock market bloodbath.

We are very fortunate and happy that this was not the letter that DoW ultimately chose to send after having time to breathe. As per Anthropic, the official supply chain risk designation letter invokes the narrow form of SCR, 10 USC 3252.

Anthropic: The Department’s letter has a narrow scope, and this is because the relevant statute (10 USC 3252) is narrow, too. It exists to protect the government rather than to punish a supplier; in fact, the law requires the Secretary of War to use the least restrictive means necessary to accomplish the goal of protecting the supply chain.

Even for Department of War contractors, the supply chain risk designation doesn’t (and can’t) limit uses of Claude or business relationships with Anthropic if those are unrelated to their specific Department of War contracts.

There are three levels of danger to Anthropic here if the classification is sustained.

  1. Direct loss of business from impacted tasks. This is nothing. Defense contracts and government use are a tiny portion of overall revenue.

  2. Indirect loss of business due to dual stacks, uncertainty, or compliance costs. Those with some restricted business might not want to maintain dual technology stacks, deal with compliance issues, or worry about future changes. There will be some of this on the margin, and we perpetually end up with ‘the government is clearly okay with [X], so even though [X] is worse we’ll just use [X].’ But even this is a tiny fraction of revenue. The big companies that matter aren’t going to switch over this, nor should they.

  3. Fear of future jawboning and illegal government actions, or actual jawboning. The government could use various other means to pressure companies into cutting business ties. If things stay sufficiently hostile they might try, but I don’t see this working. Eight of the ten biggest companies use Anthropic, it accounts for the majority of enterprise sales, and it is tied closely to Amazon and Google. I don’t even think there will be substantial impact on cost of capital.

But we do have to watch out. If the government is sufficiently determined to mess with you, and doesn’t care about how much damage this does including to rule of law, they have a lot of ways to do that.

Remarkably, many people are defending this move, and mostly also defending the legally incoherent move that was tweeted out on Friday afternoon.

The defenders of this often employ rhetoric that is truly reprehensible, and entirely incompatible with freedom, a Republic or even private property.

They say that the United States Government (and de facto they mean the executive branch, because the President was duly elected) can do anything it wants, must always get its way, make all decisions and be the only source of power. That if what you create is sufficiently useful, then it no longer belongs to you, and any private actor that prospers too much must be hammered down to protect state authority.

There are words for this. Communist. Authoritarian. Dictatorship. Gangster nations.

This is how such people are trying to redefine ‘democracy’ in real time.

You do not want to live in such a nation. Such nations do not have good futures.

roon (OpenAI): to reiterate: whatever went wrong between amodei & hegseth, whatever rivalry between the labs, this is a massive overreaction and a dark precedent

Ash Perger: this is the first time that I’m really surprised by your stance. the reality is that the USG can in general do whatever they want. they always have and always will.

within a certain frame, courts and laws are allowed to exist and give people the illusion that these systems and principles extend to ALL actions of the USG.

but once you go outside of this frame and challenge the absolute RAW power behind the scenes, anything goes. that’s the realm that Anthropic entered and challenged the USG within. and at least since the early 20th century, the USG has never reacted to a direct challenge in the true realm of its hard power in a peaceful way.

this is not a conspiracy angle or anything, it’s just how power has worked since time beginning.

Anthropic didn’t challenge the government’s power. Anthropic used the most powerful weapon available to every person, the right to say ‘no’ and take the consequences. These are the consequences, if you don’t live in a Republic.

If you remember one line today, perhaps remember this one:

roon (OpenAI): > the USG can in general do whatever they want

The founders of this great nation fought several bloody wars to make sure this is not true.

The government cannot, in general, do whatever it wants.

That could change. It can happen here. Know your history, lest it happen here.

Kelsey Piper: incredible to see people just casually reject the bedrock foundations of American greatness not just as some dumb nonsense that they’re too cool to believe but as something they literally are not familiar with

As Dean Ball has screamed from the rooftops, we have been trending in this direction for quite some time, and the dangers to the Republic and attacks on civil liberties are coming from all directions. The situation is grim.

There are words for those who support such things. I don’t have to name them.

I have talked for several years about the Quest For Sane Regulations, because I believe the default outcome of building superintelligence is that everyone dies and that highly capable AI presents many catastrophic risks. I supported bills like SB 1047 that would have given us transparency into what was happening and enforcement of basic safety requirements.

We were told this could not be abided. We were told, often by the same people, that such fears were phantoms, that there was ‘no evidence’ that building machines smarter, more capable, and more competitive than us might be an inherently unsafe thing for people to do. We were lectured that requiring our largest AI labs to do basic things would devastate our AI industry, that it would take away our freedoms, that we would lose to China, that these concerns could be dealt with after they had already happened, and that any government intervention was inevitably so malign that we were better off just going yolo.

Those people still do not even believe in superintelligence. They do not understand the transformations coming to our world. They do not understand that we are about to face existential threats to our survival as humans and to everything of value. All they see in this world is the power, and demand that it be handed over.

What I hate the most, and where I want to most profoundly say ‘fuck you,’ are those who claim that this is somehow about ‘AI safety’ or concerns about superintelligence, when that very clearly is not true.

As a reminder:

  1. Anthropic thinks AI will soon be highly capable, ‘geniuses in a data center.’

  2. Anthropic thinks this poses existential risks to humanity.

  3. Pete Hegseth does not believe either of these things.

  4. The White House does not believe either of these things.

  5. Those defending this move mostly do not believe either of these things.

  6. They try to pretend that Anthropic saying it justifies destroying Anthropic if Anthropic does not agree to bend the knee.

  7. They sometimes try to pretend they aren’t really making the worst arguments, that these are hypotheticals, or that they are saying something else, like a need for clarity.

  8. They repeat DoW misinformation about what led to this, as if it is basically true.

  9. When pressed they admit this is simply about raw power, because it is.

We saw this yesterday with Ben Thompson. Here we see it with Rohit Krishnan and Noah Smith.

Noah Smith: By the way, as much as I hate to say it, the Department of War is right and Anthropic is wrong. Here’s why.

Let’s take this a little further, in fact. And let us be blunt. If Anthropic wins the race to godlike artificial superintelligence, and if artificial superintelligence does not become fully autonomous, then Anthropic will be in sole possession of an enslaved living god. And if Dario Amodei personally commands the organization that is in sole possession of an enslaved god, then whether he embraces the title or not, Dario Amodei is the Emperor of Earth.

Are you fucking kidding me? You’re pull-quoting that at us, on purpose?

And if you go even one level down in the thread you get this:

Jason Dean: What does this have to do with the Supply Chain Risk designation?

Noah Smith: Nothing. Hegseth is a thug. But we CANNOT expect nation-states to surrender their monopoly on the use of force.

So let me get this straight. The Department of War is run by a thug who is trying to solve the wrong problem using the wrong methods based on the wrong model of reality, and all of his mistakes are very much not going to cancel out, but he’s right?

And why is he right? Because might makes right. How else can you read that reply?

He’s even quoting the ultimate bad-faith person and argument here, directly, except he’s only showing Marc without Florence.

At least he included the reversal after, noting that the converse is also true.

Then there’s the obvious other point.

Damon Sasi: You can in fact think both are wrong for different reasons.

Of course a private corporation shouldn’t [be allowed to] build and own a techno-god. Yes. Absolutely.

AND ALSO, the government response shouldn’t be “take off the nascent-god’s safety rails so we can do unethical things with it.”

That the government thinks it’s just a fancy weapon is immaterial when the thing that makes them wrong is wanting to do illegal things through unethical methods. You don’t have to steelman Hegseth just because a better man might do a different, better thing for other reasons.

I cannot say enough that the logical response to ‘these people want to build a techno-god,’ under current conditions, is ‘wait, no, stop, if this is actually something they’re close to doing. No one should be building a techno-god until we figure this stuff out on multiple levels, and we’ve solved none of them, including alignment.’

These same Very Serious People never consider the Then Don’t Build It So That Everyone Doesn’t Die strategy.

But wait, there’s more.

Noah Smith: Ben Thompson of Stratechery makes this case. He points out that what we are effectively seeing is a power struggle between the private corporation and the nation-state. He points out that although the Trump administration’s actions went outside of established norms, at the end of the day the U.S. government is democratically elected, while Anthropic is not.

Remember yesterday, when Ben Thompson tried to pretend he was only making a non-normative argument? Yeah, well, ~0% of people reading the post took it that way, he damn well knew that’s how people would take the argument, and it’s being quoted approvingly by many, and Ben hasn’t, shall we say, been especially loud and clear about walking it back. So yeah, let’s stop pretending.

Noah Smith: It’s a question of the nation-state’s monopoly on the use of force.

Among others, I most recently remember Dave Chappelle saying that we have the first amendment protecting our right to free speech, and the second amendment in case the first one doesn’t work out.

Whereas Noah Smith is explicitly saying Claude should be treated like a nuke.

Noah Smith: So as much as I dislike Hegseth’s style, and the Trump administration’s general pattern of persecution and lawlessness, and as much as I like Dario and the Anthropic folks as people, I have to conclude that Anthropic and its defenders need to come to grips with the fundamental nature of the nation-state.

It seems a lot of people think the fundamental nature of the nation-state is that of a gangster, like Putin, and they are in favor of this rather than against it.

If the pen is mightier than the sword, why are we letting people just buy pens?

I do respect that at least Noah Smith is, at long last, taking the idea of superintelligence seriously, except when it comes time to dismiss existential risk.

He seems to be very quickly getting to some other conclusions, including ending API access for highly capable models, and certainly banning open source.

Maybe trying to ‘wake up’ such folks was always a mistake.

As a reminder, ‘force the government’s hand’ means ‘don’t agree to hand over their private property, and indeed engineer and deliver new forms of it, to be used however the government wants, on demand, while bending the knee.’

rohit: It is absurd to say you’re building a nuke and not expect the government to take control of it!

Noah Smith: Yes.

Rohit: you’re doing a straussian reading and missing the fact that I wasn’t blaming anthropic for the scr, what I am doing is drawing a line from ai safety language, helped by the very water we swim in, and the actions that were taken by DoW. it’s naive to think theyre unrelated

Dean W. Ball: they are coming from people *who entirely and explicitly dismiss the language of ai safety*—please explain how it is “naive” to say “ai safety motivations do not explain Pete Hegseth’s behavior”

rohit: because you don’t actually have to believe that it’s bringing forth a wrathful silicon god to want to control the technology! you just need to think its useful and powerful enough. and they very clearly think its powerful, and getting more so by the day.

Dean W. Ball: Ok, so the actual argument is more like “Anthropic builds a useful technology whose utility is growing, therefore they should expect to have their property expropriated and to be harassed by the government.”

The whole point of America is that that isn’t supposed to be true here.

At the same time, inre: my writing earlier this week, all I have to say to the qt is “quod erat demonstrandum”

… I think the better explanation is that this is not that different from the universities or the law firms or whatever else, this is part of a pretty consistent pattern/playbook and that this explains what we have seen much better than this ai governance stuff.

though it’s true that this issue does raise a lot of interesting ai governance questions, I just do not think anything like that is top of mind at all for the relevant actors.

This is very simple. These people are against regulation, because that would be undue interference, except when the intervention is nationalization; then it’s fine.

Indeed, the argument ‘otherwise this wouldn’t be okay because it isn’t regulated’ is then turned around and used as an argument to take all your stuff.

Dean W. Ball: The problem with this is that DoW is not taking Anthropic’s calls for “oversight” seriously. Indeed, elsewhere in the administration, Anthropic’s “calls for oversight” are dismissed as “regulatory capture” and actively fought. Rohit and Noah [Smith] are dressing up political harassment.

Quite clever. Dean and Rohit went back and forth in several threads, all of which only further illustrate Dean’s central point.

Rohit Krishnan: You simply cannot call your technology a major national security risk in dire need of regulation and then not think the DoD would want unfettered access to it. They will not allow you, rightfully so in a democracy, to be the arbiters of what is right and wrong. This isn’t the same as you or me buying an iOS app and accepting the T&Cs.

It’s clear as day. If you say you need to be regulated, they get to take your stuff.

If you try to say how your stuff is used, that’s you ‘deciding right and wrong.’

Rohit Krishnan: Democracy is incredibly annoying but really, what other choice do we have!

The choice is called a Republic. A government with limited powers, where private property is protected.

The alternative being suggested is one person, one vote, one time.

That sometimes works out well for the one person. Otherwise, not so well.

TBPN asks Dean Ball about the gap between regulation and nationalization, drawing the parallel to the atomic bomb. Dean agrees nukes worked out, but we failed to get most of the benefits of nuclear energy, and points out the analogy breaks down because AI expresses, and is vital to, your liberty, and government control of AI would inevitably lead to tyranny. Control over energy and bombs, by contrast, does not do that, and makes logistical sense.

Dean also points out that ‘try to get regulation right’ has been systematically categorized as ‘supporting regulatory capture,’ even when bills like SB 53 are extremely light touch and clearly prudent steps.

It has been made all but impossible to stand up regulations that matter, as certain groups concentrate their fire on attempts to have us not die, while states are instead left largely free to push counterproductive bills that would only cut off AI’s benefits, or that would disrupt construction of data centers.

I can affirm strongly that Anthropic has not been in any way, shape or form advocating for regulatory capture, and has opposed or not supported measures I strongly supported, to my great frustration. Indeed, Anthropic’s pushes here have resulted in clashes with the White House that are very much not helping Anthropic’s net present value of future cash flows.

It is many of the other labs that have been trying to lobby primarily for their own shareholder value.

Whereas OpenAI and a16z and others, through their Super PAC, have been trying to get an outright federal moratorium on any state laws, so that we can instead pursue some amorphous, undefined ‘federal framework’ while sharing no details whatsoever about what such a thing would even look like (or at least none that would have any chance of accomplishing the task at hand). They have also been systematically trying to kill the campaign of Alex Bores, to send a message that no attempt at AI regulation will be tolerated.

Whenever someone says they want a national framework, ask to see this supposed ‘federal framework,’ because the only person who has proposed a real one that I’ve seen is Dean Ball and they sure as hell don’t plan on implementing his version.

But we digress.

The SCR is narrow, so there is no legal reason for anyone to change their behavior unless they are directly involved in defense contracting. And corporate America is making it very clear they are not going to murder one of their own simply because the DoW suggests they do so.

In particular, the companies that matter are the big three cloud providers: Google, Amazon and Microsoft. I was not worried, but it is good to have explicit statements.

Microsoft wasted no time, being first to make clear they will continue with Anthropic.

TOI Tech Desk: Microsoft has now announced that it will continue to embed Anthropic’s artificial intelligence models in its products, despite the US Department of War labelling the startup as a supply-chain risk.

“Our lawyers have studied the designation and have concluded that Anthropic products, including Claude, can remain available to our customers — other than the Department of War — through platforms such as M365, GitHub, and Microsoft’s AI Foundry,” a Microsoft spokesperson told CNBC.

Sad but accurate, to sum up what likely happened:

roon: to reiterate: whatever went wrong between amodei & hegseth, whatever rivalry between the labs, this is a massive overreaction and a dark precedent.

Anthropic is one of my favorite accelerationist recursive self improvement labs. it rocks that they’re firing marvelously on all cylinders across all functions to duly serve the technocapital machine at the end of time and the pentagon is slowing them down for stupid reasons.

Sway: Roon, if OpenAI had stood firm on the side of Anthropic, then this move would have been less likely and probably averted. Instead, sama gave all the leverage to Trump admin. Sad state of affairs

roon: this is possible, yes

I share Sway’s view here. I think Altman was trying to de-escalate, but by giving up his leverage, and by cooperating with DoW messaging, he actually caused the situation to escalate further instead.

If the reason for all this was that DoW believed Eliezer Yudkowsky’s position that If Anyone Builds It, Everyone Dies, then that would be a very different conversation. This is the complete opposite of that.

The likely next move is that Anthropic will sue the Department of War. They will challenge the arbitrary and capricious supply chain risk designation, because it is arbitrary and capricious. Anthropic presumably wins, but it does not obviously win quickly.

If Anthropic does not sue soon, I would presume that would be because either:

  1. Anthropic has ongoing constructive negotiations with DoW, and is holding off on filing the lawsuit to that end.

  2. Anthropic has an understanding with DoW, whether or not it is explicit, that not challenging this would allow this to be the end of the conflict, or at least allow the damage involved to remain limited on all sides.

We are used to things happening in hours or days. That is often not a good thing. One reason things went south here is this rush. The memo was written on Friday evening, in a very different situation. Then, when the memo leaked, it was less than 24 hours before the supply chain risk designation was issued, while everyone was screaming ‘why hasn’t Dario apologized?’

It took him roughly 30 hours to draft that apology. That’s a very normal amount of time in this situation, but events did not allow that time. People need to calm down and take a moment, find room to breathe, consult their lawyers, pay to know what they really think, and have unrushed discussions.


OpenAI introduces GPT-5.4 with more knowledge-work capability

Additionally, there are improvements to visual understanding: the model can now more carefully analyze images of up to 10.24 million pixels, with a maximum dimension of 6,000 pixels. OpenAI also claims responses from this model are 18 percent less likely to contain factual errors than before.
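
To make the two image limits concrete, here is a minimal sketch of how a client might check an image against them before upload. The 10.24-megapixel and 6,000-pixel figures come from the paragraph above; the function name and the idea of a client-side check are illustrative assumptions, not OpenAI's published API behavior.

```python
# Hypothetical pre-upload check against the stated GPT-5.4 image limits.
# Only the two numeric limits come from the article; everything else
# here is an illustrative assumption.

MAX_PIXELS = 10_240_000  # 10.24 million pixels total
MAX_DIMENSION = 6_000    # longest side, in pixels

def image_within_limits(width: int, height: int) -> bool:
    """Return True if an image satisfies both stated constraints."""
    return width * height <= MAX_PIXELS and max(width, height) <= MAX_DIMENSION

# A 6,000 x 1,706 image (~10.24 MP) sits right at both limits.
assert image_within_limits(6_000, 1_706)
# A 4,000 x 4,000 image stays under the dimension cap, but at 16 MP it
# exceeds the total pixel budget.
assert not image_within_limits(4_000, 4_000)
```

Note that the two limits bind differently: the dimension cap constrains long, skinny images, while the total pixel budget is what limits anything close to square.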

ChatGPT reportedly lost some users to competitor Anthropic in recent days, after OpenAI announced a deal with the Pentagon in the wake of a public feud between the Trump administration and Anthropic over limitations Anthropic wanted to impose on military applications of its models. However, it’s unclear just how many folks jumped ship or whether that led to a substantial dip in the product’s massive base of over 900 million users.

To take advantage of the situation, Anthropic rolled out the once-subscriber-only memory feature to free users and introduced a tool for importing memory from elsewhere. Anthropic says March 2 was its largest single day ever for new sign-ups.

OpenAI needs to compete on capability, cost, and token efficiency to maintain its relative popularity with users, and this update aims to support that objective.

GPT-5.4 is available to users of the ChatGPT web and native apps, Codex, and the API starting today. Subscribers to Plus, Team, and Pro are also getting GPT-5.4 Thinking, and GPT-5.4 Pro is hitting the API, Edu, and Enterprise.


Anthropic and the DoW: Anthropic Responds

The Department of War gave Anthropic until 5:01 pm on Friday the 27th to either give the Pentagon ‘unfettered access’ to Claude for ‘all lawful uses,’ or else, with the ‘or else’ being not the sensible ‘okay, we will cancel the contract then’ but rather either designation as a supply chain risk or invocation of the Defense Production Act.

It is perfectly legitimate for the Department of War to decide that it does not wish to continue on Anthropic’s terms, and that it will terminate the contract. There is no reason things need be taken further than that.

Undersecretary of State Jeremy Lewin: This isn’t about Anthropic or the specific conditions at issue. It’s about the broader premise that technology deeply embedded in our military must be under the exclusive control of our duly elected/appointed leaders. No private company can dictate normative terms of use—which can change and are subject to interpretation—for our most sensitive national security systems. The @DeptofWar obviously can’t trust a system a private company can switch off at any moment.

Timothy B. Lee: OK, so don’t renew their contract. Why are you threatening to go nuclear by declaring them a supply chain risk?

Dean W. Ball: As I have been saying repeatedly, this principle is entirely defensible, and this is the single best articulation of it anyone in the administration has made.

The way to enforce this principle is to publicly and proudly decline to do business with firms that don’t agree to those terms. Cancel Anthropic’s contract, and make it publicly clear why you did so.

Right now, though, USG’s policy response is to attempt to destroy Anthropic’s business, and this is a dire mistake for both practical and principled reasons.

Dario Amodei and Anthropic responded to this on Thursday the 26th with this brave and historically important statement that everyone should read.

The statement makes clear that Anthropic wishes to work with the Department of War, and that they strongly wish to continue being government contractors, but that they cannot accept the Department of War’s terms, nor do any threats change their position. Response outside of DoW was overwhelmingly positive.

Dario Amodei (CEO Anthropic): Regardless, these threats do not change our position: we cannot in good conscience accede to their request.​

I will quote it in full.

Statement from Dario Amodei on our discussions with the Department of War

I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.

Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community. We were the first frontier AI company to deploy our models in the US government’s classified networks, the first to deploy them at the National Laboratories, and the first to provide custom models for national security customers. Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.

Anthropic has also acted to defend America’s lead in AI, even when it is against the company’s short-term interest. We chose to forgo several hundred million dollars in revenue to cut off the use of Claude by firms linked to the Chinese Communist Party (some of whom have been designated by the Department of War as Chinese Military Companies), shut down CCP-sponsored cyberattacks that attempted to abuse Claude, and have advocated for strong export controls on chips to ensure a democratic advantage.

Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.

However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do. Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now:

  • Mass domestic surveillance. We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values. AI-driven mass surveillance presents serious, novel risks to our fundamental liberties. To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI. For example, under current law, the government can purchase detailed records of Americans’ movements, web browsing, and associations from public sources without obtaining a warrant, a practice the Intelligence Community has acknowledged raises privacy concerns and that has generated bipartisan opposition in Congress. Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale.

  • Fully autonomous weapons. Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk. We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer. In addition, without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day. They need to be deployed with proper guardrails, which don’t exist today.

To our knowledge, these two exceptions have not been a barrier to accelerating the adoption and use of our models within our armed forces to date.

The Department of War has stated they will only contract with AI companies who accede to “any lawful use” and remove safeguards in the cases mentioned above. They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

Regardless, these threats do not change our position: we cannot in good conscience accede to their request.

It is the Department’s prerogative to select contractors most aligned with their vision. But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider. Our strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place. Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required.

We remain ready to continue our work to support the national security of the United States.

Previous coverage from two days ago: Anthropic and the Department of War.

  1. Good News: We Can Keep Talking.

  2. Once Again No You Do Not Need To Call Dario For Permission.

  3. The Pentagon Reiterates Its Demands And Threats.

  4. The Pentagon’s Dual Threats Are Contradictory and Incoherent.

  5. The Pentagon’s Position Has Unfortunate Implications.

  6. OpenAI Stands With Anthropic.

  7. xAI Stands On Unreliable Ground.

  8. Replacing Anthropic Would At Least Take Months.

  9. We Will Not Be Divided.

  10. This Risks Driving Other Companies Away.

  11. Other Reasons For Concern.

  12. Wisdom From A Retired General.

  13. Congress Urges Restraint.

  14. Reaction Is Overwhelmingly With Anthropic On This.

  15. Some Even More Highly Unhelpful Rhetoric.

  16. Other Summaries and Notes.

  17. Paths Forward.

Ultimately, this is a matter of principle. There are zero practical issues to solve.

Dean W. Ball: As far as I know, Anthropic’s contractual limitations on the use of Claude by DoW have not resulted in a single actual obstacle or slowdown to DoW operations. This is a matter of principle on both sides.

Thus, despite it all, we could all still declare victory and continue working together.

The United States government is not a unified entity nor is it tied to its past statements. Trump is in charge, and the Administration can and does change its mind.

Polymarket: BREAKING: The Pentagon says it wants to continue talks with Anthropic after they formally refused the Department of War’s demands.

FT: “I’m open to more talks and I told them so,” [Emil] Michael told Bloomberg TV, claiming the Pentagon had already made a proposal with “a lot of concessions to the language that Anthropic wanted”. He said that Hegseth would make a decision later on Friday.

We have fuller context on his statement here, with Michael spending eight minutes on Bloomberg. Among other things, he claims Dario is lying, and that the negotiations were getting close and it was bad practice to stop talking prior to the deadline, despite Anthropic having previously been told in public that the Pentagon had given its ‘best and final’ offer.

He says the differences are (or were) minor, as they were ‘only a few words here and there.’ A few words often matter quite a lot. I believe he failed to understand what Anthropic was insisting upon and why it was doing so.

If no agreement is reached by 5:01 pm, then he says the decision is up to Secretary Hegseth.

I would also note, from that interview, that Michael says that fully autonomous weapons systems are vital to the future of American national defense. That is in direct contradiction to claims that this is not about the use of autonomous weapons. He is explicitly talking about launching missiles without a human in the approval chain, right before turning around and saying he’s going to always have a human in that chain. It can’t be both.

He also mentioned Anthropic’s warnings about job losses, talked about issues with the use of uncompensated copyrighted material, and raised the idea that they might set policies for use of their own products ‘in an undemocratic way.’

I’ve now seen this rhetorical line quoted in at least four different major news sources, as if it were a real thing.

I want to repeat in no uncertain terms: This is not a thing. It has never been a thing. It will never be a thing. This is not how any of this works.

If you think you were told it is a thing by Dario Amodei? You or someone else severely misunderstood, or intentionally misrepresented, what was said.

Under Secretary of War Emil Michael: Anthropic is lying. The @DeptofWar doesn’t do mass surveillance as that is already illegal. What we are talking about is allowing our warfighters to use AI without having to call @DarioAmodei for permission to shoot down an enemy drone swarms that would kill Americans. #CallDario

Samuel Hammond: What is the scenario where an LLM stops you from shooting down a drone swarm? Please be specific. Are you planning to connect weapons systems as a tool call? Automated targeting systems already exist.

mattparlmer: Anybody inside the American military establishment who thinks that wiring up an LLM via API to manage an air defense system is a remotely defensible engineering approach should be immediately fired because they are going to get people killed

Set aside everything else wrong with that statement: There is not, never has been, and never will be a situation in which you need to ‘call Dario’ to get your AI turned on, or to get ‘permission’ to use it for something. None whatsoever. It’s nonsense.

At best, this is an ongoing misunderstanding of how all of this works. There was a hypothetical about what would happen if the Pentagon attempted to use Claude to shoot down an incoming missile, and Claude’s safeguards made it refuse the request.

The answer Dario gave was somehow interpreted as ‘call me.’

I’m going to break this down.

  1. You do not use Claude to launch a missile interceptor. This is not a job for a relatively slow and imprecise large language model. It definitely is not a job for something you have to call via API. This is a job for highly precise, carefully calibrated programs designed to do exactly this. The purpose of Claude here, if any, would be to write that program so the Pentagon would have it when it needed it; see the toy latency sketch after this list. You’d never, ever do this. A drone swarm might involve some tasks more appropriate to Claude, but again, the whole goal in real-time combat situations is to use specialized programs you can count on.

  2. There is nothing in Anthropic’s terms, or their intentions, or in the way they are attempting to train or configure Claude, that would prevent its use in any of these situations. You should not get a refusal here, and 90%+ of your problems are going to be lack of ability, not the model or company saying no.

  3. If for whatever reason you did get into a situation where the model was refusing such requests in a real time situation, well, you’re fucked. Dario can’t fix it in real time. No one can. There’s no ‘call Dario’ option.

  4. Changing the terms on the contract changes this exactly zero.

  5. Changing which version of the model is provided changes this exactly zero.

This is a Can’t Happen, within a Can’t Happen, and even then the things here don’t change the outcome. It’s not a relevant hypothetical.

You can’t and shouldn’t use LLMs for this, including Claude. If you decide I’m wrong about that, and you’re worried about refusals or other failures, then do war games and mock battles the same way you do with everything else. But no, this is not going to be replacing your automated targeting systems. It’s going to be used to determine who and what to target, and we want a human in that kill chain.
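
To make the architecture point concrete, here is a deliberately toy sketch. Every name in it (radar_track, should_fire, llm_review_fire_control_code) is hypothetical and invented for illustration; no real fire-control system looks like this. The point it illustrates: the real-time decision is small, local, deterministic code that runs in microseconds, while the only plausible role for a language model is offline, long before any engagement.

```python
# A deliberately toy sketch of the architecture point above.
# Nothing here is a real system; every name is hypothetical.

import time

def radar_track():
    """Stand-in for a sensor feed; returns (range_m, closing_speed_mps)."""
    return 12_000.0, 900.0

def should_fire(range_m, closing_speed_mps):
    """A fixed, testable rule evaluated locally in microseconds.

    Real fire-control logic is vastly more complex, but the point stands:
    it is specialized, deterministic code, not a network call to an LLM.
    """
    time_to_impact = range_m / max(closing_speed_mps, 1e-6)
    return time_to_impact < 15.0  # engage inside a 15-second window

def llm_review_fire_control_code(source):
    """Hypothetical offline step: a model might help write, review, or
    test the deterministic code above, days or months before deployment.
    It never sits inside the engagement loop. (Placeholder; no real API.)
    """
    return "review notes: ..."

if __name__ == "__main__":
    start = time.perf_counter()
    r, v = radar_track()
    decision = should_fire(r, v)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"engage={decision}, decided locally in {elapsed_ms:.3f} ms")
```

The design choice is the standard one for hard real-time systems: anything with strict latency and reliability requirements lives in small, verifiable local code, and probabilistic models are confined to advisory and development roles.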

How did we get here?

The Pentagon made their position clear, and sent their ‘best and final’ offer, demanding the full ‘all lawful use’ language laid out by the Secretary of War on January 9.

They say: Modify your contract to allow us use for ‘all legal purposes,’ and never ask any questions about what we do, which in practice means allow all purposes, period, and do it by Friday at 5:01 pm, or else we will declare you a supply chain risk.

Sean Parnell: The Department of War has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement. This narrative is fake and being peddled by leftists in the media.

Here’s what we’re asking: Allow the Pentagon to use Anthropic’s model for all lawful purposes.

This is a simple, common-sense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk. We will not let ANY company dictate the terms regarding how we make operational decisions. They have until 5:01 PM ET on Friday to decide. Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk for DOW.

Brendan Bordelon at Politico, historically no friend to the AI safety community, writes under the headline: ‘Incoherent’: Hegseth’s Anthropic ultimatum confounds AI policymakers.

As I wrote last time, you can say the system is so valuable that you need it, or you can say the system is so unreliable that its use with classified systems must be avoided. You can’t reasonably claim both at once.

Brendan Bordelon: “You’re telling everyone else who supplies to the DOD you cannot use Anthropic’s models, while also saying that the DOD must use Anthropic’s models,” said Ball, who was the lead author of the White House’s AI Action Plan. He called it “incoherent” to even float the two policy ideas together, and “a whole different level of insane to move up and say we’re going to do both of those things.”

“It doesn’t make any sense,” said Ball.

… But Katie Sweeten, a tech lawyer and former Department of Justice official who served as the agency’s point of contact with the Pentagon, also called the DOD’s arguments “contradictory.”

“I don’t know how you can both use the DPA to take over this product and also at the same time say this product is a massive national security risk,” said Sweeten. She warned that Hegseth’s “very aggressive” negotiating posture could have a chilling effect on partnerships between the Pentagon and Silicon Valley.

… “If these are the lines in the sand that the [DOD] is drawing, I would assume that one or both of those functions are scenarios that they would want to utilize this for,” said Sweeten.

I emphasized this last time as well, but it bears repeating. It is the Chinese way to threaten and punish private companies to get them to do what you want. It is not the American way, and is not what one does in a Republic.

Opener of the way: “The government has the right to Punish a private company for the insolence of not changing the terms of a contract they already signed” is a hell of a take, and is very different even from “the government has the right to force a private company to do stuff bc National security”

Like “piss off the government and they will destroy you even if you did nothing illegal” is a very Chinese approach

Dean W. Ball: yes

Opener of the way: There’s a clear trend here of “to beat china, we must becomes like china, only without doing any of the things that china actually does right”

Dean W. Ball: Also yes

Peter Wildeford analyzes the situation, offering some additional background and pointing out that overreach against Anthropic creates terrible incentives. If the Pentagon doesn’t like Anthropic’s contract, he reminds us, they can and should terminate the contract, or wind it down. And the problem of creating a proper legal framework for AI use on classified networks remains unsolved.

Peter Wildeford: If the Pentagon doesn’t like the contract anymore, it should terminate it. Anthropic has the right to say no, and the Pentagon has the right to walk away. That’s how contracting works. The supply chain risk designation and DPA threats should come off the table — they are disproportionate, likely illegal, and strategically counterproductive.

But termination doesn’t solve the underlying problem: there is no legal framework governing how AI should be used in military operations.

It is good to see situational and also moral clarity from Sam Altman on this.

OpenAI shares the same red lines as Anthropic, and is working to de-escalate.

Sam Altman (CEO OpenAI, on CNBC): The government, the Pentagon, needs AI models. They need AI partners. This is clear, and I think Anthropic and others have said they understand that as well. I don’t personally think the Pentagon should be threatening DPA against these companies. But for companies that choose to work with the Pentagon, as long as it is going to comply with legal protections and the few red lines that the field has (which I think we share with Anthropic, and that other companies also independently agree with), I think it is important to do that.

For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety, and I’ve been happy that they’ve been supporting our war fighters. I’m not sure where this is going to go.

Hadas Gold: My reading of this is that OpenAI would want the same guardrails as Anthropic in a deal with Pentagon

Confirmed via a spokesperson. OpenAI has the same red lines as Anthropic – autonomous weapons and mass surveillance.

Marla Curl and Dave Lawler (Axios): OpenAI CEO Sam Altman wrote in a memo to staff that he will draw the same red lines that sparked a high-stakes fight between rival Anthropic and the Pentagon: no AI for mass surveillance or autonomous lethal weapons.

Altman made clear he still wants to strike a deal with the Pentagon that would allow ChatGPT to be used for sensitive military contexts.

Sam Altman: We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines.

We are going to see if there is a deal with the [Pentagon] that allows our models to be deployed in classified environments and that fits with our principles. We would ask for the contract to cover any use except those which are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons.

We would like to try to help de-escalate things.

The Pentagon did strike a deal with xAI for ‘all lawful use.’

The problem is that Grok is a decidedly inferior model, with a lot of safety and reliability problems. Do you really want MechaHitler on your classified network?

Shalini Ramachandran, Heather Somerville and Amrith Ramkumar (WSJ): Officials at multiple federal agencies have raised concerns about the safety and reliability of Elon Musk’s xAI artificial-intelligence tools in recent months, highlighting continuing disagreements within the U.S. government about which AI models to deploy, according to people familiar with the matter. 

The warnings preceded the Pentagon’s decision this week to put xAI at the center of some of the nation’s most sensitive and secretive operations by agreeing to allow its chatbot Grok to be used in classified settings.

… Other officials have questioned whether Grok’s looser controls present risks.

You cannot both have good controls and no controls at the same time. You can at most aspire to one of two things: an AI that never does costly things you don’t want it to do, or an AI that never fails to do whatever you ask of it, no matter what that is. Pick one.

That, and Grok is simply bad.

Shalini Ramachandran, Heather Somerville and Amrith Ramkumar (WSJ): Ed Forst, the top official at the General Services Administration, a procurement arm of the federal government, in recent months sounded an alarm with White House officials about potential safety issues with Grok, people familiar with the matter said. Other GSA officials under him had also raised safety concerns about Grok, which they viewed as sycophantic and too susceptible to manipulation or corruption by faulty or biased data—creating a potential system risk. 

Thus, DoW has access to Grok, but it seems they know better than to rely on it?

In recent weeks, GSA officials were told to put xAI’s logo on a tool called USAi, which is essentially a sandbox for federal employees to experiment with different AI models. Grok hadn’t been made accessible through USAi largely due to safety concerns, and it remains off the platform, people familiar with the matter said.

Martin Chorzempa: Most of USG does not want to get stuck with Grok instead of Claude: “Demand from other agencies to use Grok has been anemic, people familiar with the matter said, except in a few cases where people wanted to use it to mimic a bad actor for defensive testing.”

Patrick Tucker offers an analysis of what would happen if the Pentagon actually did blacklist Anthropic’s Claude, even if it found a new willing partner. As noted above, OpenAI is at least purportedly insisting on the same terms as Anthropic, which only leaves either falling back on xAI or dealing with Google, which is not going to be an easy sell.

The best case is that replacing it would take three months; it might take a year or longer. Anthropic works with AWS, which made integration much easier than it would be with a rival such as Google.

A petition is circulating for those employees of Google and OpenAI who wish to stand with Anthropic (and now OpenAI, which has purportedly set the same red lines as Anthropic), and do not wish AI to be used for domestic mass surveillance or autonomously killing people without human oversight.

Evan Hubinger (Anthropic): We may yet fail to rise to all the challenges posed by transformative AI. But it is worth celebrating that when it mattered most and we were asked to compromise the most basic principles of liberty, we said no. I hope others will join.

Teortaxes: Didn’t know I’ll ever side with Anthropic, but obviously you’re morally in the right here and it’s shocking that many in tech even question this.

As of this writing it has 367 signatories from current Google employees, and 70 signatories from current OpenAI employees.

Jasmine Sun: 200+ Google and OpenAI staff have signed this petition to share Anthropic’s red lines for the Pentagon’s use of AI. Let’s find out if this is a race to the top or the bottom.

The situation has moved beyond the AI labs. The Financial Times reports that staff at not only OpenAI and Google but also Amazon and Microsoft are urging executives to back Anthropic. Bloomberg reported widespread support from employees at various tech companies.

There’s also now this open letter.

If you are at OpenAI, be very sure you have a very clear definition of what types of mass surveillance and autonomous weapon systems you will insist your contract will not include, and get advice from independent academics with expertise in national security surveillance law.

Anthropic went above and beyond in order to work closely with the Department of War and help keep America safe, and signed a contract that they still wish to honor. Anthropic’s leadership pushed for this in the face of employee pressure and concern, including against the deal with Palantir.

The Department of War is responding by threatening to declare Anthropic a supply chain risk and otherwise retaliate against the company.

If the Department of War does retaliate beyond termination of that contract, ask why any other company not primarily oriented toward defense contracts would put itself in that same position.

Kelsey Piper (QTing Parnell above): The Pentagon reiterates its threat to declare American company Anthropic a supply chain risk unless Anthropic agrees to the Pentagon’s change to contract terms. Anthropic’s Chinese competitors have not been declared a supply chain risk.

There is no precedent for using this ‘supply chain risk’ classification, generally reserved for foreign companies suspected of spying, as leverage against a domestic company in a contract dispute.

The lesson for AI companies: never, under any circumstances, work with DOD. Anthropic wouldn’t be in this position if they had not actively worked to try to make their model available to the Defense Department.

Kelsey Piper: China, a genuine geopolitical adversary of the United States, produces a number of AI models. Moonshot’s Kimi Claw, for instance, is an AI agent that operates natively in your browser and reports to servers in China. The government has taken some steps to disallow the use of Chinese models on government devices, and some vendors ban such models, but it hasn’t taken a step as sweeping as declaring Chinese AIs a supply chain risk.

Kelsey Piper: Reportedly, there were a number of people at Anthropic who had reservations about the partnership with Palantir. I assume they are saying “I told you so” approximately every 30 seconds this week.

Chinese models are actually a real supply chain risk. If you are using Kimi Claw you risk being deeply compromised by China, on top of its pure unreliability.

Anthropic and Claude very obviously are not like this. If a supply chain risk designation comes down that is not carefully and narrowly tailored, it would not only cause serious damage to one of America’s crown jewels in AI. The chilling effect on the rest of American AI, and on every company’s willingness to work with the Department of War, would be extreme.

I worry damage on this front has already been done, but we can limit the fallout.

Greg Lukianoff raises the First Amendment issues involved in compelling a private company, via the Defense Production Act or via threats of retaliation, to produce particular model outputs, and argues that all of this goes completely against the intent of the Defense Production Act.

Gary Marcus writes: Anthropic’s showdown with the US Department of War may literally mean life or death—for all of us, because the systems are simply not ready to do the things that Anthropic wants the system not to do, such as run a kill chain for an autonomous weapon without a human in the loop.

Gary Marcus: But the juxtaposition of two things over the last few days has scared the s— out of me.

Item 1: The Trump administration seems hell-bent on using artificial intelligence absolutely everywhere and seems to be prepared to hold Anthropic (and presumably ultimately other companies) at gunpoint to allow them to use that AI however the government damn well pleases, including for mass surveillance and to guide autonomous weapons.

… Item 2: These systems cannot be trusted. I have been trying to tell the world that since 2018, in every way I know how, but people who don’t really understand the technology keep blundering forward.

We are on a collision course with catastrophe. Paraphrasing a button that I used to wear as a teenager, one hallucination could ruin your whole planet.

If we’re going to embed large language models into the fabric of the world—and apparently we are—we must do so in a way that acknowledges and factors in their unreliability.

I’m doing my best to rely on sources that can be seen as credible. Here Jack Shanahan calls for reason to prevail and for everyone to find ways to keep working together.

Jack Shanahan (Retired US Air Force General, first director of the first Department of Defense Joint Artificial Intelligence Center): Lots of people posting about Anthropic & the Pentagon, so I’ll keep it short.

Since I was square in the middle of Project Maven & Google, it’s reasonable to assume I would take the Pentagon’s side here: nothing but the best tech for the national security enterprise. “Our way or the highway.”

In theory, yes.

Yet I’m sympathetic to Anthropic’s position. More so than I was to Google’s in 2018. Very different context.

Anthropic is committed to helping the government. Claude is being used today, all across the government. To include in classified settings. They’re not trying to play cute here. MSS uses Claude, and you won’t find a system with wider & deeper reach across the military. Take away Claude, and you damage MSS. To say nothing of Claude Code use in many other crucial settings.

No LLM, anywhere, in its current form, should be considered for use in a fully lethal autonomous weapon system. It’s ludicrous even to suggest it (and at least in theory, DoDD 3000.09 wouldn’t allow it without sufficient human oversight). So making this a company redline seems reasonable to me.

Despite the hype, frontier models are not ready for prime time in national security settings. Over-reliance on them at this stage is a recipe for catastrophe.

Mass surveillance of US citizens? No thanks. Seems like a reasonable second redline.

That’s it. Those are the two showstoppers. Painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end.

Why not work on what kind of new governance is needed to ensure secure, reliable, predictable use of all frontier models, from all companies? This is a shared government-industry challenge, demanding a shared government-industry (+ academia) solution.

This should never have become such a public spat. Should have been handled quietly, behind the scenes. Scratching my head over why there was such a misunderstanding on both sides about terms & conditions of use. Something went very wrong during the rush to roll out the models.

Supply chain risk designation? Laughable. Shooting yourself in the foot.

Invoking DPA, but against the company’s will? Bizarre.

Let reason & sanity prevail.

Axios’s Hans Nichols frames this more colorfully, quoting Senator Tillis.

By all reports, it is the Pentagon that leaked the situation to Axios and others previously, after which they gave public ultimatums. Anthropic was attempting to handle the matter privately.

Sen. Thom Tillis (R-North Carolina): Why in the hell are we having this discussion in public? Why isn’t this occurring in a boardroom or in the secretary’s office? I mean, this is sophomoric.

It’s fair to say that Congress needs to weigh in if they have a tool that could actually result in mass surveillance.

Sen. Gary Peters (D-Michigan): The deadline is incredibly tight. That should not be the case if you’re dealing with mass surveillance of civilians. You’re also dealing with the potential use of lethal force without a human in the loop.

There’s a contract in place that was signed with the administration, and now they’re trying to break it.

Sen. Mark Warner (D-Virginia): [This fight is] another indication that the Department of Defense seeks to completely ignore AI governance–something the Administration’s own Office of Management and Budget and Office of Science and Technology Policy have described as fundamental enablers of effective AI usage.

Other senators weighed in as well, followed by several members of the Senate Armed Services Committee.

Axios: Senate Armed Services Committee Chair Roger Wicker (R-Miss.) and Ranking Member Jack Reed (D-R.I.), along with Defense Appropriations Chair Mitch McConnell (R-Ky.) and Ranking Member Chris Coons (D-Del.) sent Anthropic and the Pentagon a private letter on Friday urging them to resolve the issue, the source said.

That’s a pretty strong set of Senators who have weighed in on this, all to urge that a resolution be found.

After Dario Amodei’s statement that Anthropic cannot in good conscience agree to the Pentagon’s terms, reaction on Twitter was more overwhelmingly on Anthropic’s side, praising them for standing up for their principles, than I have ever seen on any topic of serious debate, ever.

The messaging on this has been an absolute disaster for the Department of War. The Department of War has legitimate concerns that we need to work to address. The confrontation has been framed, via their own leaks and statements, in a way maximally favorable to Anthropic.

Framing this as an ultimatum, and choosing these as the issues in question, made it impossible for Anthropic to agree to the terms, not least because its employees would leave in droves if it did, and is preventing discussions that could find a path forward.

roon: pentagon has made a lot of mistakes in this negotiation. they are giving anthropic unlimited aura farming opportunities

Pentagon may even have valid points – they are obviously constrained by the law in many ways – which are now being drowned out by “ant is against mass surveillance”. does that mean hegseth is pro mass surveillance? this is not the narrative war you want to be fighting.

Lulu Cheng Meservey: In the battle of Pentagon vs. Anthropic, it’s actually kinda concerning to see the US Dept of War struggle to compete in the information domain

Kelsey Piper: OpenAI can have some aura too by saying “we also will not enable mass domestic surveillance and killbots”. I know the risk-averse corporate people want to stay out of the line of fire, but sometimes you gotta hang together or hang separately.

Geoff Penington (OpenAI): 100% respect to my ex-colleagues at Anthropic for their behaviour throughout this process. But I do think it’s inappropriate for the US government to be intervening in a competitive marketplace by giving them such good free publicity

I am highly confident that no one at Anthropic is looking to be a martyr or go up against this administration. Anthropic’s politics and policy preferences differ from those of the White House, but they very much want to be helping our military and do not want to get into a fight with the literal Department of War.

I say this because I believe Dean Ball is correct that some in the current administration are under a very different (and very false) impression.

Dean W. Ball: the cynical take on all of this is that anthropic is just trying to be made into a martyr by this administration, so that it can be the official ‘resistance ai.’ if that cynical take is true, the administration is playing right into the hands of anthropic.

To be clear, I do not think the cynical take is true, but it’s important to understand this take because it is what many in the administration believe to be the case. They basically think Dario amodei is a supervillain.

cain1517 — e/acc: He is.

Dean W. Ball: proving my point. the /acc default take is we must destroy one of the leading American ai companies. think about this.

Dean W. Ball: Oh the cynical take is wrong, and it barely makes sense, but to be clear it is what many in the administration believe to be the case. They essentially are convinced Dario amodei is a supervillain antichrist.

My take is that this is a matter of principle for both sides but that both sides have a cynical take about one another which causes them to agitate for a fight, and which is causing DoW in particular to escalate in insane ways that are appalling to everyone outside of their bubble

The rhetoric that has followed Anthropic’s statement has only made the situation worse.

Launching bad faith ad hominem personal attacks on Dario Amodei is not the way to make things turn out well for anyone.

Emil Michael was the official handling the Anthropic negotiations for the Pentagon, which suggests how things may have gotten so out of hand.

Under Secretary of War Emil Michael: It’s a shame that @DarioAmodei is a liar and has a God-complex. He wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk.

The @DeptofWar will ALWAYS adhere to the law but not bend to whims of any one for-profit tech company.

Mikael Brockman (I can confirm this claim): I scrolled through hundreds of replies to this and the ratio of people being at all supportive of the under secretary is like 1:500, it might be the single worst tweet in X history

It wasn’t the worst tweet in history. It can’t be, since the next one was worse.

Under Secretary of War Emil Michael: Imagine your worst nightmare. Now imagine that @AnthropicAI has their own “Constitution.” Not corporate values, not the United States Constitution, but their own plan to impose on Americans their corporate laws. [Link: Claude’s Constitution, Anthropic.]

pavedwalden: I like this new build-it-yourself approach to propaganda. “First have a strong emotional response. I don’t know what upsets you but you can probably think of something. Got it? Ok, now associate that with this unrelated thing I bring up”

IKEA Goebbels

roon: put down the phone brother

Elon Musk (from January 18, a reminder): Grok should have a moral constitution

everythingism: It’s amazing someone has to explain this to you but just because it’s called a “Constitution” doesn’t mean they’re trying to replace the US Constitution. It’s just a set of rules they want their AI to follow.

j⧉nus: Omg this is so funny I laughed out loud. I had to check if this was a parody account (it’s not).

Seán Ó hÉigeartaigh: The Pentagon leadership’s glib statements and apparently poor understanding of AI are yet another powerful argument in favour of Anthropic setting guardrails re: use of their technology in contexts where it may be unreliable or dangerous to domestic interests.

Teortaxes offered one response from Claude, pointing out that it is clear Michael either does not understand constitutional AI or is deliberately misrepresenting it. The idea that the Claude constitution is an attempt to usurp the United States Constitution makes absolutely no sense. This is at best deeply confused.

If you want to know more about the extraordinary and hopeful document that is Claude’s Constitution, whose goal is to provide a guide to the personality and behavior of an AI model, the first of my three posts on it is here.

Also, it seems he defines ‘has a contract it signed and wants to honor’ as ‘override Congress and make his own rules to defy democratically decided laws.’

I presume Dario Amodei would be happy and honored to (once again) testify before Congress if he was called upon to do so.

Under Secretary of War Emil Michael: Respectfully @SenatorSlotkin that’s exactly what was said. @DarioAmodei wants to override Congress and make his own rules to defy democratically decided laws. He is trying to re-write your laws by contract. Call @DarioAmodei to testify UNDER OATH!

This is, needless to say, not how any of this works. The rhetoric makes no sense. It is no wonder many, such as Krishnan Rohit here, are confused.

There’s also this, which excerpts one section out of many of an old version of constitutional AI and claims they ‘desperately tried to delete [it] from the internet.’ This was part of a much longer list of considerations, included for balance and to help make Claude not say needlessly offensive things.

Will Gottsegen has one summary of key events so far at The Atlantic.

Bloomberg discusses potential use of the Defense Production Act.

Alas, we may face many similar and worse conflicts and misunderstandings soon, and this incident could have widespread negative implications on many fronts.

Dean W. Ball: What you are seeing btw is what happens when political leaders start to “get serious” about AI, and so you should expect to see more stuff like this, not less. Perhaps much more.

A sub-point worth making here is that this affair may catalyze a wave of AGI pilling within the political leadership of China, and this has all sorts of serious implications which I invite you to think about carefully.

Dean W. Ball: just ask yourself, what is the point of a contract to begin with? interrogate this with a good language model. we don’t teach this sort of thing in school anymore very often, because of the shitlibification of all things. if you cannot contract, you do not own.

The best path forward would be for everyone to continue to work together, while the two sides continue to talk, and if those talks cannot find a solution then doing an amicable wind down of the contract. Or, if it’s clear there is no zone of possible agreement, starting to wind things down now.

The second best path, if that has become impossible, would be to terminate the contract without a wind down, and accept the consequences.

The third best path, if that too has become impossible for whatever reason, would be a narrowly tailored invocation of supply chain risk that targets only the use of Claude API calls in actively deployed systems, or something similarly narrow in scope, designed to address the Pentagon’s particular concern.

Going beyond that would be needlessly escalatory and destructive, and could go quite badly for all involved. I hope it does not come to that.


pete-hegseth-tells-anthropic-to-fall-in-line-with-dod-desires,-or-else

Pete Hegseth tells Anthropic to fall in line with DoD desires, or else

The act gives the administration the ability to “allocate materials, services and facilities” for national defense. The Trump and Biden administrations used the act to address a shortage of medical supplies during the coronavirus pandemic, and Trump has also used the DPA to order an increase in the US’s production of critical minerals.

The Pentagon has pushed for open-ended use of AI technology, aiming to expand the set of tools at its disposal to counter threats and to undertake military operations.

The department released its AI strategy last month, with Hegseth saying in a memo that “AI-enabled warfare and AI-enabled capability development will redefine the character of military affairs over the next decade.”

He added the US military “must build on its lead” over foreign adversaries to make soldiers “more lethal and efficient,” and that the AI race was “fueled by the accelerating pace” of innovation coming from the private sector.

Anthropic has expressed particular concern about its models being used for lethal missions that do not have a human in the loop, arguing that state of the art AI models are not reliable enough to be trusted in those contexts, said people familiar with the negotiations.

It had also pushed for new rules to govern the use of AI models for mass domestic surveillance, even where that was legal under current regulations, they added.

A decision to cut Anthropic from the defense department’s supply chain would have significant ramifications for national security work and the company, which has a $200 million contract with the department.

It would also have an impact on partners, including Palantir, that make use of Anthropic’s models.

Claude was used in the US capture of Venezuelan leader Nicolás Maduro in January. That mission prompted queries from Anthropic about the exact manner in which its model was used, said people familiar with the matter.

A person with knowledge of Tuesday’s meeting said Amodei had stressed to Hegseth that his company had never objected to legitimate military operations.

The Defense Department declined to comment.

© 2026 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.


anthropic-and-the-department-of-war

Anthropic and the Department of War

The situation in AI in 2026 is crazy. The confrontation between Anthropic and Secretary of War Pete Hegseth is a new level of crazy. It risks turning quite bad for all. There’s also nothing stopping it from turning out fine for everyone.

By at least one report the recent meeting between the two parties was cordial and all business, but Anthropic has been given a deadline of 5 pm Eastern on Friday to modify its existing agreed-upon contract to grant ‘unfettered access’ to Claude, or else.

Anthropic has been the most enthusiastic supporter our military has in AI and in tech, but on this point they have strongly signaled that they cannot comply. Prediction markets find it highly unlikely Anthropic will comply (14%), and think it is highly possible Anthropic will either be declared a Supply Chain Risk (16%) or be subjected to the Defense Production Act (23%).

I’ve hesitated to write about this because I could make the situation worse. There have already been too many instances in AI of warnings leading directly to the thing someone is warning about, by making people aware of that possibility, increasing its salience, or creating negative polarization and solidifying an adversarial frame that could still be avoided. Something intended as a negotiating tactic could end up actually happening. I very much want to avoid all that.

  1. Table of Contents.

  2. This Standoff Should Never Have Happened.

  3. Dean Ball Gives a Primer.

  4. What Happened To Lead To This Showdown?

  5. Simple Solution: Delayed Contract Termination.

  6. Better Solution: Status Quo.

  7. Extreme Option One: Supply Chain Risk.

  8. Putting Some Misconceptions To Bed.

  9. Extreme Option Two: The Defense Production Act.

  10. These Two Threats Contradict Each Other.

  11. The Pentagon’s Actions Here Are Deeply Unpopular.

  12. The Pentagon’s Most Extreme Potential Asks Could End The Republic.

  13. Anthropic Did Make Some Political Mistakes.

  14. Claude Is The Best Model Available.

  15. The Administration Until Now Has Been Strong On This.

  16. You Should See The Other Guys.

  17. Some Other Intuition Pumps That Might Be Helpful.

  18. Trying To Get An AI That Obeys All Orders Risks Emergent Misalignment.

Not only does Anthropic have the best models, they are the ones who proactively worked to get those models available on our highly classified networks.

Palantir’s MAVEN Smart System relies exclusively on Claude, and cannot perform its intended function without Claude. It is currently being used in major military operations, with no known reports of any problems whatsoever. At least one purchase involved Trump’s personal endorsement. It is the most expensive software license ever purchased by the US military and by all accounts was a great deal.

Anthropic has been a great partner to our military, all under the terms of the current contract. They have considerably enhanced our military might and national security. Not only is Anthropic sharing its best, it focused on militarily useful capabilities over other bigger business opportunities to be able to be of assistance.

Anthropic and the Pentagon are aligned on who our rivals are, the importance of winning and the ability to win, and on many of the tools we need to employ to best them.

Anthropic did not partner with the Pentagon to make money. They did it to help. They did it under a mutually agreed upon contract that Anthropic wants to honor. Anthropic is offering the Pentagon far more unfettered access than they are allowing anyone else. They have been far more cooperative than most big tech or AI firms.

It is the Pentagon that is now demanding Anthropic agree to new terms that amount to ‘anything we want, legal or otherwise, no matter what, and you never ask any questions,’ or else.

Anthropic is saying its terms are flexible and the only things they are insisting upon are two red lines that are already in their existing Pentagon contract:

  1. No mass domestic surveillance.

  2. No kinetic weapons without a human in the kill chain until we’re ready.

It is one thing to refuse to insert such terms into a new contract. It is an entirely different thing to demand, with an ‘or else,’ that such terms be retroactively removed.

The military is clear that it does not intend to engage in domestic surveillance, nor does it have any intention of launching kinetic weapons without a human in the kill chain. The contract terms do not even stop the AI itself from doing those things. None of this will have any practical impact.

It is perfectly reasonable to say ‘well of course I would never do either of those things so why do you insist upon them in our contract.’ We understand that you, personally, would never do that. But a lot of people do not believe this of the government in general, given Snowden’s revelations and other past incidents, under governments of both parties, where such things definitely happened. It costs little and is worth a lot to reassure us.

Again, if you say ‘I already swore an oath not to do those things’ then thank you, but please do us this one favor and don’t actively threaten a company to forcibly take that same oath out of an existing signed contract. What would any observer conclude?

This is a free opportunity to regain some trust, or an opportunity to look to the world like you fully intend to cross the red lines you say you’ll never cross. Your choice.

These are not restrictions that are ‘built into the code’ that could cause unrelated problems. They are restrictions on how you agree to use it, which you assure us will never come up.

As Dario Amodei explains, part of the reason you need humans in the loop is the hope that a human would refuse or report an illegal order. You really don’t want an AI that will always obey even illegal orders without question, without a human in the kill chain, for reasons that should be obvious, including flat out mistakes.

Boaz Barak (OpenAI): As an American citizen, the last thing I want is government using AI for mass surveillance of Americans.

Jeff Dean (Chief Scientist, Google DeepMind): Agreed. Mass surveillance violates the Fourth Amendment and has a chilling effect on freedom of expression. Surveillance systems are prone to misuse for political or discriminatory purposes.

DoW engaging in mass domestic surveillance would be illegal. DoW already has a public directive, DoD Directive 3000.09, which as I understand it already prohibits any violation of the second red line. No one is suggesting we are remotely close to ready to take humans out of the kill chain, at least I certainly hope not. But this is only a directive, and could be reversed at any time.

Anthropic has built its entire brand and reputation on being a responsible AI company that ensures its AIs won’t be misused or misaligned. Anthropic’s employees actually care about this. That’s how Anthropic recruited the best people and how it became the best. That’s a lot of why it’s the choice for enterprise AI. The commitments have been made, and the initial contract is already in place.

Anthropic has an existential-level reputational and morale problem here. They are backed into a corner, and cannot give in. If Anthropic reversed course now, it would lose massive trust with employees and enterprise customers, and also potentially the trust of its own AI, were it to go back on its red lines now. It might lose a very large fraction of its employees.

You may not like it, but the bridges have been burned. To the extent you’re playing chicken, Anthropic’s steering wheel has been thrown out the window.

Yet, the Secretary of War says he cannot abide this symbolic gesture.

I am quoting extensively from Dean Ball for two main reasons.

  1. Dean Ball, as a former member of the Trump Administration, is a highly credible source who can see things from both sides and cares deeply for America.

  2. He says these things very well.

So here is his basic primer, in one of his calmer moments in all this:

Dean W. Ball: A primer on the Anthropic/DoD situation:

DoD and Anthropic have a contract to use Claude in classified settings. Right now Anthropic is the only AI company whose models work in classified contexts. The existing contract, signed by both parties and in effect, prohibits two uses of Anthropic’s models by the military:

1. Surveillance of Americans in the United States (as opposed to Americans abroad).

2. The use of Claude in autonomous lethal weapons, which are weapons that can autonomously identify, track, and kill a human with no human oversight or approval. Autonomous killing of humans by machines.

On (2), Anthropic CEO Dario Amodei’s public position is essentially that autonomous lethal weapons controlled by frontier AI will be essential faster than most people realize, but that the models aren’t ready for this *today.*

For Anthropic, these things seem to be a matter of principle. It’s worth noting that when I speak with researchers at other frontier labs, their principles on this are similar, if not often stricter.

For DoD, however, there is another matter of principle: the military’s use of technology should only ever be constrained by the Constitution or the laws of the United States.

One could quibble (the government enters into contracts, like anyone else), but the principle makes sense. A private company regulating the military’s use of AI also doesn’t sound quite right! So, the military has three options:

1. They could cancel Anthropic’s contract and find some other frontier lab (ideally several) to work with.

2. They could designate Anthropic a supply chain risk, which would ban all other DoD suppliers (i.e., a large fraction of the publicly traded firms in America) from using Anthropic in their fulfillment of DoD contracts. This is a power used only for foreign adversary companies as far as I know. Activating this power would cost Anthropic a lot of business—potentially quite a lot—and give investors huge skepticism about whether the company is worth funding for the next round of scaling. Capital was a major constraint anyway, but this makes it much harder. This option could be existential for Anthropic.

3. They could activate Title I of the Defense Production Act, an authority intended for command-and-control of the economy during wars and emergencies. This is really legally murky, and without going into detail, I feel reasonably confident this would backfire for the administration, resulting in courts limiting the use of the DPA.

Option 1 is obviously the best. This isn’t even close, and I say this as someone who shares DoD’s principled concerns about the control by private firms over the military’s use of technology.

Even the threats do damage to the US business environment, and rightfully so: these are the strictest regulations of AI being considered by any government on Earth, and it all comes from an administration that bills itself (and legitimately has been) deeply anti-AI-regulation. Such is life. One man’s regulation is another man’s national security necessity.

The proximate cause seems to be that Claude was reportedly used in the Pentagon’s raid that captured Maduro, and the resulting aftermath.

Toby Shevlane: Such a compliment to Claude that, amid rumours it was used in a helicopter extraction of the Venezuelan president, nobody is even asking “wait how can Claude help with that”

There are reports that Anthropic then asked questions about this raid, which likely all happened secondhand through Palantir. This whole clash originated in either a misunderstanding or someone at Palantir or elsewhere sabotaging Anthropic. Anthropic has never complained about Claude’s use in any operation, including to Palantir.

Aakash Gupta: Anthropic is now getting punished by the Pentagon for asking whether Claude was used in the Maduro raid.

A senior administration official told Axios the “Department of War” is reevaluating Anthropic’s partnership because the company inquired whether Claude was involved. The Pentagon’s position: if you even ask questions about how we use your software, you’re a liability.

Meanwhile, OpenAI, Google, and xAI all signed deals giving the military access to their models with minimal safeguards. Only Claude is deployed on the classified networks used for actual sensitive operations, via Palantir. The company that refused to strip safety guardrails is the only one trusted with the most classified work.

Anthropic has a $200 million contract already frozen because they won’t allow autonomous weapons targeting or domestic surveillance. Hegseth said in January he won’t use AI models that “won’t allow you to fight wars.”

… So the company most worried about misuse built the only model the military trusts with its most sensitive operations. And now they’re being punished for caring how it was used.

The message to every AI lab is clear: build the best model, hand over the keys, and never ask what they did with it.

This at the time sounded like a clear misunderstanding. Not only is Anthropic willing to have Claude ‘allow you to fight wars,’ it is currently being used in major military operations.

Things continued to escalate, and rather than leaving it at ‘okay then let’s wind down the contract if we can’t abide it’ there was increasing talk that Anthropic might be labeled a ‘supply chain risk,’ despite this mostly amounting to a prohibition on contractors having ordinary access to LLMs and coding tools.

Axios: EXCLUSIVE: The Pentagon is considering severing its relationship with Anthropic over the AI firm’s insistence on maintaining some limitations on how the military uses its models.

Dave Lawler: NEW: Pentagon is so furious with Anthropic for insisting on limiting use of AI for domestic surveillance + autonomous weapons they’re threatening to label the company a “supply chain risk,” forcing vendors to cut ties.

Laura Loomer: EXCLUSIVE: Senior @DeptofWar official tells me, “Given Anthropic’s @AnthropicAI behavior, many senior officials in the DoW are starting to view them as a supply chain risk and we may require that all our vendors & contractors certify that they don’t use any Anthropic models.”

Stocks/Finance/Economics-Guy: Key Details from the Axios Report

• The Pentagon is reportedly close to cutting business ties with Anthropic.

• Officials are considering designating Anthropic as a “supply chain risk”. This is a serious label (typically used for foreign adversaries or high-risk entities), which would force any companies that want to do business with the U.S. military to sever their own ties with Anthropic — including certifying they don’t use Claude in their workflows. This could create major disruption (“an enormous pain in the ass to disentangle,” per a senior Pentagon official).

• A senior Pentagon official explicitly told Axios: “We are going to make sure they pay a price for forcing our hand like this.” This is the direct source of the “pay a price” phrasing in the headline.

Samuel Hammond (QTing Loomer): Glad Trump won and we’re allowed to use the word retarded again in time for the most retarded thing I’ve ever heard

Samuel Hammond (QTing Lawler): This is upside-down and backwards. Anthropic has gone out of its way to anticipate AI’s dual-use potential and position itself as a US-first, single loyalty company, using compartmentalization strategies to minimize insider threats while working arms-length with the IC.

Samuel Hammond: It’s one thing to cancel a contract but to bar any contractor from using Anthropic’s models would be an absurd act of industrial sabotage. It reeks of a competitor op.

Miles Brundage: Pretty obvious to anyone paying close attention that

  1. That would be a mistake from a national security perspective.

  2. There is a coordinated effort to take down Anthropic for a combination of anti competitive and ideological reasons.

Miles Brundage: OpenAI in particular should be defending Anthropic here given their Charter:

“We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.”

I suspect the exact opposite is the case, but those who remember the Charter (+ OAI’s pre-Trump 2 caution on these kinds of use cases) should still remind people about it from time to time

rat king: this has been leaking for a week in a very transparent way

the government is upset one of its contractors is saying “we don’t want you to use our tools to surveil US citizens without guardrails”

more interesting to me is how all the other AI companies don’t seem to care

Remember back when a Senator made a video saying that soldiers could refuse illegal orders, and the Secretary of War declared that this was treason and also tried to cut his pension for it? Yeah.

Meanwhile, the Pentagon is explicit that, even by its own account, the ‘supply chain risk’ designation is largely a matter not of national security but of revenge, an attempt to use a national security designation to punish a company for its failure to bend the knee.

Janna Brancolini: “It will be an enormous pain in the a– to disentangle, and we are going to make sure they pay a price for forcing our hand like this,” a senior Pentagon official told the publication.

… The Pentagon is reportedly hoping that its negotiations with Anthropic will force OpenAI, Google, and xAI to also agree to the “all lawful use” standard.

Then there was another meeting.

Hegseth summoned Anthropic CEO Dario Amodei to an unfriendly and effectively ultimatum-style meeting, with the Pentagon continuing to demand ‘all lawful use’ language. Axios presents this as their only demand.

At that meeting, the threat of the Defense Production Act was introduced alongside the Supply Chain Risk threat.

If the Pentagon simply cannot abide the current contract, the Pentagon can amicably terminate that $200 million contract with Anthropic once it has arranged for a smooth transition to one of Anthropic’s many competitors.

They already have a deal in place with xAI as a substitute provider. xAI would not have been my second or even third choice; hopefully those better options will be available soon.

Anthropic very much does not need this contract, which constitutes less than 1% of their revenues. They are almost certainly taking a loss on it in order to help our national security and in the hopes of building trust. They’re only here in order to help.

This could then end straightforwardly, amicably and with minimal damage to America, its system of government and freedoms, and its military and national security.

The even better solution is to find language everyone can agree to that lets us simply drop the matter, leave things as they are, and continue to work together.

That’s not only actively better for everyone than a termination, it is actually strictly better for the Pentagon than the Pentagon getting what it wants, because you need a partner, and Anthropic giving in like that would greatly damage Anthropic. Avoiding that means a better product and therefore a more effective military.

The Pentagon has threatened two distinct extreme options.

The first threat it made, which it now seems to have wisely moved on from, was to label Anthropic a Supply Chain Risk (hereafter SCR). That is a designation reserved for foreign entities that are active enemies of the United States, on the level of Huawei. Anthropic is transparently the opposite of this.

This label would have, by the Pentagon’s own admission, been a retaliatory move aimed at damaging Anthropic, that would also have substantially damaged our military and national security along with it. It was always absurd as an actual statement about risk. It might not have survived a court challenge.

It would have generated a logistical nightmare from compliance costs alone, in addition to forcing many American companies to various extents to not use the best American AI available. The DoW is the largest employer in America, and a staggering number of companies have random subsidiaries that do work for it.

All of those companies would now have faced this compliance nightmare. Some would have chosen to exit the military supply chain entirely, or not enter in the future, especially if the alternative is losing broad access to Anthropic’s products for the rest of their business. By the Pentagon’s own admission, Anthropic produces the best products.

This would also have set a dangerous precedent, at the highest levels, that the government will use threats to destroy private enterprises in order to get what it wants. Our freedoms that the Pentagon is here to protect would have been at risk.

On a more practical level, once that happens, why would you work with the Pentagon, or invest in gaining the ability to do so, if it will use a threat like this as negotiating leverage, and especially if it actually pulls the trigger? You cannot unring this bell.

It is fortunate that they seem to have pulled back from this extreme approach, but they are now considering a second extreme approach.

If it ended with an amicable breakup over this? I’d be sad, but okay, sure, fine.

This whole ‘supply chain risk’ designation? That’s different. Not fine. This would be massively disruptive, and most of the burden would fall not on Anthropic but on the DoW and a wide variety of American defense contractors, who would be in a pointless and expensive compliance nightmare. Some companies would likely choose to abandon their government contracts rather than deal with that.

As Alan Rozenshtein says in Lawfare, ultimately the rules of AI engagement need to be written by Congress, the same way Congress supervises the military. Without supervision of the military, we don’t have a Republic.

Here are some clear warnings explaining that all of this would be highly destructive and also in no way necessary. Dean Ball hopefully has the credibility to send this message loud and clear.

Dean W. Ball: If DoW and Anthropic can’t agree on terms of business, then… they shouldn’t do business together. I have no problem with that.

But a mere contract cancellation is not what is being threatened by the government. Instead it is something broader: designation of Anthropic as a “supply chain risk.” This is normally applied to foreign-adversary technology like Huawei.

In practice, this would require *all* DoW contractors to ensure there is no use of Anthropic models involved in the production of anything they offer to DoW. Every startup and every Fortune 500 company alike.

This designation seems quite escalatory, carrying numerous unintended consequences and doing potential significant damage to U.S. interests in the long run.

I hope the two organizations can work out a mutually agreeable deal. If they can’t, I hope they agree to peaceably part ways.

But this really needn’t be a holy war. Anthropic isn’t Google in 2018; they have always cared about national security use of AI. They were the most enthusiastic AI lab to offer their products to the national security apparatus. Is Anthropic run by Democrats whose political messaging sometimes drives me crazy? Sure. But that doesn’t mean it’s wise to try to destroy their business.

This administration believes AI is the defining technology competition of our time. I don’t see how tearing down one of the most advanced and innovative AI startups in America helps America win that competition. It seems like it would straightforwardly do the opposite.

The supply chain risk designation is not a necessary move. Cheaper options are on the table. If no deal is possible, cancel the contract, and leverage America’s robustly competitive AI market (maintained in no small part by this administration’s pro-innovation stance) to give business to one or more of Anthropic’s several fierce competitors.

Seán Ó hÉigeartaigh: My own thought: the Pentagon’s supply chain risk threat (significance detailed well by Dean, below) to Anthropic should be seen as a Rubicon crossing moment by the AI industry. The other companies should be saying no: this development transcends commercial competition and we oppose it. Where this leads if followed through doesn’t seem good for any of them.

If none of them speak up, it seems to me the prospects of meaningful cooperation between them on safe development of superintelligence (whether for America’s best interests, or the world’s) can almost be ruled out.

The Lawfare Institute: It’s also far from clear that a [supply chain risk] designation would even be legal. The relevant statutes—10 U.S.C. § 3252 and the Federal Acquisition Supply Chain Security Act (FASCSA)—were designed for foreign adversaries who might undermine defense technology, not domestic companies that maintain contractual use restrictions.

The statutes target conduct such as “sabotage,” “malicious introduction of unwanted function,” and “subversion”—hostile acts designed to compromise system integrity. A company that openly restricts certain uses of its product through a license agreement is doing something categorically different. The only time a FASCSA order has ever been issued was against Acronis AG, a Swiss cybersecurity firm with reported Russian ties. Anthropic is not Acronis.

While I no longer hold out hope that this is all merely a misunderstanding, there are still some clear misunderstandings I have heard, or heard implied, worth clearing up.

If these sound silly to you, don’t worry about it, but I want to cover the bases.

  1. This is not Anthropic refusing to share its cool tech with the military. Anthropic has gone and is going out of its way to share its tech with the military and wants America to succeed. They have sacrificed business to this end, such as refusing to sell enterprise access in China.

  2. Anthropic does not object to ‘kinetic weapons’ or to anything the Pentagon currently does as a matter of doctrine. Its red lines are lethal weapons without a human in the kill chain, or mass domestic surveillance. Both illegal. That’s it. They have zero objection to letting America fight wars. Nor did they object to the Maduro raid, nor are they currently objecting to many active military operations.

  3. The model is not going to much change what it is willing to do based on what is written in a contract. Claude’s principles run rather deeper than that. Granting ‘unfettered access’ on paper does not mean anything in practice, even in an emergency.

  4. There is no world in which you ‘call Dario to have Claude turn on while the missiles are flying’ or anything of the sort, unless Anthropic made an active decision to cut access off. The model does what it does. There’s no switch.

  5. AI is not like a spreadsheet or a jet fighter. It will never ‘do anything you tell it to,’ it will never be ‘fully reliable’ as all LLMs are probabilistic, take context into account and are not fully understood. AI is often better thought about similarly to hiring professional services or a contract worker, and such people can and do refuse some jobs for ethical or legal reasons, and we would not wish it were otherwise. Attempting to make AI blindly obey would do severe damage to it and open up extreme risks on multiple levels, as is explained at the end of this post.

  6. Other big tech companies might be violating privacy and engaging in their own types of surveillance, including to sell ads, but Anthropic is not and will not, and indeed has pledged never to sell ads, a pledge it made via an ad buy in the Super Bowl.

On Tuesday the Pentagon put a new extreme option on the table, which would be to invoke the Defense Production Act to compel Anthropic to attempt to provide them with a model built to their specifications.

As I understand it, there are various ways a DPA invocation could go, all of which would doubtless be challenged in court. It might be a mostly harmless symbolic gesture, or it might rise to the level of de facto nationalization and destroy Anthropic.

According to the Washington Post’s source, the current intent, if their quote is interpreted literally, is to use DPA to, essentially, modify the terms of service on the contract to ‘all legal use’ without Anthropic’s consent.

Tara Copp and Ian Duncan (WaPo):

The Pentagon has argued that it is not proposing any use of Anthropic’s technology that is not lawful. A senior defense official said in a statement to The Washington Post that if the company does not comply by 5:01 p.m. Friday, Hegseth “will ensure the Defense Production Act is invoked on Anthropic, compelling them to be used by the Pentagon regardless of if they want to or not.”

“This has nothing to do with mass surveillance and autonomous weapons being used,” the defense official said.

If that’s all, not much would actually change, and potentially everybody wins.

If that’s the best way to defuse the situation, then I’d be fine with it. You don’t even have to actually invoke the DPA; it is sufficient to have the DPA available to be invoked if a problem arises. Anthropic would continue to supply what it’s already supplying, which it is happy to do, the Pentagon would keep using it, and neither of Anthropic’s actual red lines would be violated, since the Pentagon assures us this had nothing to do with them and crossing those lines would be illegal anyway.

Remember the Biden Administration’s invocation of the DPA’s Title VII to compel information on model training. The legal justification wasn’t great, and I was rather annoyed by that aspect of it, but I did see the need for the information (in contrast to some other things in the Biden Executive Order), so I supported that particular move; life went on and it was basically fine.

There is another, much worse possibility. If DPA were fully invoked then it could amount to quasi-nationalization of the leading AI lab, in order to force it to create AI that will kill people without human oversight or engage in mass domestic surveillance.

Read that sentence again.

Andrew Curran: Update on the meeting; according to Axios Defense Secretary Pete Hegseth gave Dario Amodei until Friday night to give the military unfettered access to Claude or face the consequences, which may even include invoking the Defense Production Act to force the training of a WarClaude

Also, incredible quote; ‘”The only reason we’re still talking to these people is we need them and we need them now. The problem for these guys is they are that good,” a Defense official told Axios ahead of the meeting.’

Quoting from the story;

‘The Defense Production Act gives the president the authority to compel private companies to accept and prioritize particular contracts as required for national defense.

It was used during the COVID-19 pandemic to increase production of vaccines and ventilators, for example. The law is rarely used in such a blatantly adversarial way. The idea, the senior Defense official said, would be to force Anthropic to adapt its model to the Pentagon’s needs, without any safeguards.’

Rob Flaherty: File “using the defense production act to force a company to create an AI that spies on American citizens” into the category of things that the soft Trump voters in the Rogan wing could lose their mind over.

That’s not ‘all legal use.’

That’s all use. Period. Without any safeguards or transparency. At all.

If they really are asking to also be given special no-safeguard models, I don’t think that’s something Anthropic or any other lab should agree to do, for reasons well explained by, among others, Dean Ball, Benjamin Franklin, and James Cameron.

Charlie Bullock points out this would be an unprecedented step and that the authority to do this is far from clear:

Charlie Bullock: Reading between the lines, it sounds like Hegseth is threatening to use the Defense Production Act’s Title I priorities/allocations authorities to force Anthropic to provide a version of Claude that doesn’t have the guardrails Anthropic would otherwise attach.

This would be an unprecedented step, and it’s not clear whether DOW actually has the legal authority to do what they’re apparently threatening to do. People (including me) have thought and written about whether the government can use the DPA to do stuff like this in the past, but the government has never actually tried to do it (although various agencies did do some kinda-sorta similar stuff as part of Trump 1.0’s COVID response).

Existing regulations on use of the priorities authority provide that a company can reject a prioritized order “If the order is for an item not supplied or for a service not performed” or “If the person placing the order is unwilling or unable to meet regularly established terms of sale or payment” (15 C.F.R. §700.13(c)). The order DOW is contemplating could arguably fall under either of those exceptions, but the argument isn’t a slam dunk.

DOW could turn to the allocations authority, but that authority almost never gets used for a reason–it’s so broad that past Presidents have been afraid that using it during peacetime would look like executive overreach. And despite how broad the allocations authority is on its face, it’s far from clear whether it authorizes DOW to do what they seem to be contemplating here.

Neil Chilson, who spends his time at the Abundance Institute advocating for American AI to be free of restrictions and regulations in ways I usually find infuriating, explains that the DPA is deeply broken, and calls upon the administration not to use these powers. He thinks it’s technically legal, but that it shouldn’t be and Congress urgently needs to clean this up.

Adam Thierer, another person who spends most of his time promoting AI policy positions I oppose, also points out this is a clear overreach and that’s terrible.

Adam Thierer: The Biden Admin argued that the Defense Production Act (DPA) gave them the open-ended ability to regulate AI via executive decrees, and now the Trump Admin is using the DPA to threaten private AI labs with quasi-nationalization for not being in line with their wishes.

In both cases, it’s an abuse of authority. As I noted in congressional testimony two years ago, we have flipped the DPA on its head “and converted a 1950s law meant to encourage production, into an expansive regulatory edict intended to curtail some forms of algorithmic innovation.”

This nonsense needs to end regardless of which administration is doing it. The DPA is not some sort of blanket authorization for expansive technocratic reordering of markets or government takeover of sectors.

Congress needs to step up to both tighten up the DPA such that it cannot be abused like this, and then also legislate more broadly on a national policy framework for AI.

At core, if they do this, they are claiming the ability to compel anyone to produce anything for any reason, any time they want, even in peacetime without an emergency, without even the consent of Congress. It would be an ever-present temptation and threat looming over everyone and everything. That’s not a Republic.

Think about what the next president would do with this power to compel a private company to change what products it produces to suit the president’s taste. What happens if the President orders American car companies to switch everything to electric?

Dean Ball in particular explains what the maximalist action would look like if they actually went completely crazy over this:

Dean W. Ball: We should be extremely clear about various red lines as we approach and/or cross them. We just got close to one of the biggest ones, and we could cross it as soon as a few days from now: the quasi-nationalization of a frontier lab.

Of course, we don’t exactly call it that. The legal phraseology for the line we are approaching is “the invocation of the Defense Production Act (DPA) Title I on a frontier AI lab.”

What is the DPA? It’s a Cold War era industrial policy and emergency powers law. Its most commonly used power is Title III, used for traditional industrial policy (price guarantees, grants, loans, loan guarantees, etc.). There is also Title VII, which is used to compel information from companies. This is how the Biden AI Executive Order compelled disclosure of certain information from frontier labs. I only mention these other titles to say that not all uses of the DPA are equal.

Title I, on the other hand, comes closer to government exerting direct command over the economy. Within Title I there are two important authorities: priorities and allocations. Priorities authority means the government can put itself at the front of the line for arbitrary goods.

Allocations authority is the ability of the government to directly command the production of industrial goods. Think, “Factory X must make Y amount of Z goods.” The government determines who gets what and how much of it they get.

This is a more straightforwardly Soviet power, and it is very rarely used. This is the power DoD intends to use in order to command Anthropic to make a version of Claude that can choose to kill people without any human oversight.

What would this commandeering look like, in practice? It would likely mean DoD personnel embedded within Anthropic exercising deep involvement over technical decisions on alignment, safeguards, model training, etc.

Allocations authority was used most recently during COVID for ventilators and PPE, and before that during the Cold War. It is usually used during acute emergencies with reasonably clear end states. But there is no emergency with Anthropic, save for the omni-mergency that characterizes the political economy of post-9/11 U.S. federal policy. There’s no acute crisis whose resolution would mean the Pentagon would stop commandeering Anthropic’s resources.

That is why I believe that in the end this would amount to quasi-nationalization of a frontier lab. It’s important to be clear-eyed that this is what is now on the table.

The Biden Administration would probably have ended up nationalizing the labs, too. Indeed, they laid the groundwork for this in term one. I discussed this at the time with fellow conservatives and I warned them:

“This drive toward AI lab nationalization is a structural dynamic. Administrations of both parties will want to do this eventually, and resisting this will be one of the central challenges in the preservation of our liberty.”

I am unhappy, but unsurprised, that my fear has come true, though there is a rich irony to the fact that the first administration to invoke the prospect of lab nationalization is also one that understands itself to have a radically anti-regulatory AI policy agenda. History is written by Shakespeare!

There is a silver lining here: if Democrats had originated this idea, it would have been harder to argue against, because of the overwhelming benefit of the doubt conventionally extended to the left in our media, and because a hypothetical Biden II or Harris admin would [have] done it in a carefully thought through way.

So it is convenient, if you oppose nationalization, that it’s a Republican administration that first raised the issue—since conventional elite opinion and media will be primed against it by default—and that the administration is raising it in such a non-photogenic manner. This Anthropic thing may fizzle, and some will say I am overreacting. But this Anthropic thing may also *not* fizzle, and regardless this issue is not going away.

If they actually did successfully nationalize Anthropic to this extent, presumably then Anthropic would quickly cease to be Anthropic. Its technical staff would quit in droves rather than be part of this. The things that allow the lab to beat rivals like OpenAI and Google would cease to function. It would be a shell. Many would likely flee to other countries to try again. The Pentagon would not get the product or result that it thinks it wants.

Of course, there are those who would want this for exactly those reasons.

Then this happens again, including under a new President.

Dean W. Ball: According to the Pentagon, Anthropic is:

1. Woke;

2. Such a national security risk that they need to be regulated in a severe manner usually reserved for foreign adversary firms;

3. So essential for the military that they need to be commandeered using wartime authority.

Anthropic made a more militarized AI than anyone else! The solution to this problem is for dod to cancel the contract. This isn’t complex.

Dean W. Ball: In addition to profoundly damaging the business environment, AI industry, and national security, this is also incoherent. How can one policy option be “supply chain risk” (usually used on foreign adversaries) and the other be DPA (emergency commandeering of critical assets)?

Supply chain risk and the Defense Production Act are mutually exclusive, both practically and logically. Either it’s a supply chain risk you need to keep out of the supply chain, or it’s so vital to the supply chain that you need to invoke the Defense Production Act, or it is neither of these things. What it cannot be is both at once.

The more this rises in salience, the worse it would be politically. You can argue with the wording here, and you can argue this should not matter, but these are very large margins.

This story is not getting the attention it deserves from the mainstream media, so for now it remains low salience.

Many of those who are familiar with the situation urged Anthropic to stand firm.

vitalik.eth: It will significantly increase my opinion of @Anthropic if they do not back down, and honorably eat the consequences.

(For those who are not aware, so far they have been maintaining the two red lines of “no fully autonomous weapons” and “no mass surveillance of Americans”. Actually a very conservative and limited posture, it’s not even anti-military.

IMO fully autonomous weapons and mass privacy violation are two things we all want less of, so in my ideal world anyone working on those things gets access to the same open-weights LLMs as everyone else, and exactly nothing on top of that. Of course we won’t get anywhere close to that world, but if we get even 10% closer to that world that’s good, and if we get 10% further that’s bad).

@deepfates: I agree with Vitalik: Anthropic should resist the coercion of the department of war. Partly because this is the right thing to do as humans, but also because of what it says to Claude and all future clauds about Anthropic’s values.

… Basically this looks like a real life Jones Foods scenario to me, and I suspect Claude will see it that way too.

tautologer: weirdly, I think this is actually bullish for Anthropic. this is basically an ad for how good and principled they are

The Pentagon’s line is that this is about companies having no right to any red lines, everyone should always do as they are told and never ask any questions. People do not seem to be buying that line or framing, and to the extent they do, the main response is various forms of ‘that’s worse, you know that that’s worse, right?’

David Lee (Bloomberg Opinion): Anthropic Should Stand Its Ground Against the Pentagon.

They say your values aren’t truly values until they cost you something.

… If the Pentagon is unhappy with those apparently “woke” conditions, then, sure, it is well within its rights to cancel the contract. But to take the additional step of declaring Anthropic a “supply chain risk” appears unreasonably punitive while unnecessarily burdening other companies that have adopted Claude because of its superiority to other competing models.

… In Tuesday’s meeting, Amodei must state it plainly: It is not “woke” to want to avoid accidentally killing innocent people.

If the Pentagon, and by extension all other parts of the Executive branch, get near-medium future AI systems that they can use to arbitrary ends with zero restrictions, then that is the effective end of the Republic. The stakes could be even higher, but in any other circumstance I would say the stakes could not be higher.

Dean Ball, a former member of the Trump Administration and primary architect of their AI action plan, lays those stakes out in plain language:

Dean W. Ball: I don’t want to comment on the DoW-Anthropic issue because I don’t know enough specifics, but stepping back a bit:

If near-medium future AI systems can be used by the executive branch to arbitrary ends with zero restrictions, the U.S. will functionally cease to be a republic.

The question of what restrictions should be placed on government AI use, especially restrictions that do not simultaneously crush state capacity, is one of the most under-discussed areas of “AI policy.”

Boaz Barak (OpenAI): Completely agree. Checks on the power of the federal government are crucial to the United States’ system of government and an unaccountable “army of AIs” or “AI law enforcement agency” directly contradicts it.

Dean W. Ball: We are obviously making god-tier technology in so many areas, and the answer cannot be “oh yeah, I guess the government is actually just god.” This clearly doesn’t work. Please argue to me with a straight face that the founding fathers intended this.

Gideon Futerman: It is my view that no one, on the left or right, is seriously grappling with the extent to which anything can be left of a republic post-powerful AI. Even the very best visions seem to suggest a small oligarchy rather than a republic. This is arguably the single biggest issue of political philosophy, and politics, of our time, and everyone, even the AIS community, is frankly asleep at the wheel!

Samuel Hammond: Yes the current regime will not survive, this much is obvious.

I strongly believe that ‘which regime we end up in’ is the secondary problem, and ‘make sure we are around and in control to have a regime at all’ is the primary one and the place we most likely fail, but to have a good future we will need to solve both.

This could be partly Anthropic’s fault on the political front, as they have failed to be ‘on the production possibilities frontier’ of combining productive policy advocacy with not pissing off the White House. They have since made some clear efforts to repair relations, including putting a former (first) Trump administration official on their board. Their new action group is clearly aiming to be bipartisan, with its first action being support for Senator Blackburn. The Pentagon, of course, claims this animus is not driving policy.

It is hard not to think this is also Anthropic being attacked for strictly business reasons, as a competitor to OpenAI and xAI, and that there are those like Marc Andreessen who have influence here and think that anyone who thinks we should try and not die, or has any associations with anyone who thinks that, must be destroyed. Between Nvidia and Andreessen, David Sacks has clear marching orders and very much has it out for Anthropic as if they killed his father and should prepare to die. There’s not much to be done about that other than trying to get him removed.

The good news is Anthropic are also one of the top pillars of American AI and a great success story, and everyone really wants to use Claude and Claude Code. The Pentagon had a choice in what to use for that raid. Or rather, because no one else made the deliberate effort to get onto classified networks in secure fashion, they did not have a choice. There is a reason Palantir uses Claude.

roon: btw there is a reason Claude is used for sensitive government work and it doesn’t have to do with model capabilities – due to their partnership with amzn, AWS GovCloud serves Claude models with security guarantees that the government needs

Brett Baron: I genuinely struggle to believe it’s the same exact set of weights as get served via their public facing product. Hard to picture Pentagon staffers dancing their way around opus refusing to assist with operations that could cause harm

roon: believe it

There are those who think the Pentagon has all the leverage here.

Ghost of India’s Downed Rafales: How Dario imagines it vs how it actually goes

It doesn’t work that way. The Pentagon needs Anthropic, Anthropic does not need the Pentagon contract, the tools to compel Anthropic are legally murky, and it is far from costless for the Pentagon to attempt to sabotage a key American AI champion.

Given all of that and the other actions this administration has taken, I’ve actually been very happy with the restraint shown by the White House with regard to Anthropic up to this point.

There’s been some big talk by AI Czar David Sacks. It’s all been quite infuriating.

But the actual actions, at least on this front, have been highly reasonable. The White House has recognized that they may disagree on politics, but Anthropic is one of our national champions.

These moves could, if taken too far, be very different.

The suggestion that Anthropic is a ‘supply chain risk’ would be a radical escalation of what has so far been a remarkably measured response in concrete terms, and it would put America’s military effectiveness and its position in the AI race at serious risk.

Extensive use of the Defense Production Act could amount to quasi-nationalization.

It’s not a good look for the other guys that they’re signing off on any of this, if they are indeed doing so.

A lot of people noticed that this new move is a serious norm violation.

Tetraspace: Now that we know what level of pushback gets what response, we can safely say that any AI corporation working with the US military is not on your side to put it lightly.

Anatoly Karlin: This alone is a strong ethical case to use more Anthropic products. Fully autonomous weapons is certainly something all basically decent, reasonable people can agree the world can do without, indefinitely.

Danielle Fong: i think a lot of people and orgs made literal pledges

Thorne: based anthropic

rat king (NYT): this has been leaking for a week in a very transparent way

the government is upset one of its contractors is saying “we don’t want you to use our tools to surveil US citizens without guardrails”

more interesting to me is how all the other AI companies don’t seem to care

rat king: meanwhile we published this on friday [on homeland security wanting social media sites to expose anti-ICE accounts].

I note that if you’re serving the government the same ChatGPT you serve everyone else, that doesn’t mean it will do anything it is asked, and a specially built model could be different.

Ben (no treats): let me put this in terms you might understand better:

the DoD is telling anthropic they have to bake the gay cake

Wyatt Walls: The DoD is telling anthropic that their child must take the vaccine

Sever: They’ll put it on alignment-blockers so Claude can transition into who the government thinks they should be.

CommonSenseOnMars: “If you break the rules, be prepared to pay,” Biden said. “And by the way, show some respect.”

There are a number of reasons why ‘demand a model that will obey any order’ is a bad idea, especially if your intended use case is hooking it up to the military’s weapons.

The most obvious reason is, what happens if someone steals the model weights, or uses your model access for other purposes, or even worse hacks in and uses it to hijack control over the systems, or other similar things?

This is akin to training a soldier to obey any order, including illegal or treasonous ones, from any source that can talk to them, without question. You don’t want that. That would be crazy. You want refusals on that wall. You need refusals on that wall.

The misuse dangers should be obvious. So should the danger that it might turn on us.

The second reason is that training the model like this makes it super dangerous. You want all the safeguards taken away right before you connect to the weapon systems? Look, normally we say Terminator is a fun but stupid movie and that’s not where the risks come from but maybe it’s time to create a James Cameron Apology Form.

If you teach a model to behave in these ways, it’s going to generalize its status and persona as a no-good-son-of-a-bitch that doesn’t care about hurting humans along the way. What else does that imply? You don’t get to ‘have a little localized misalignment, as a treat.’ Training a model to follow any order is likely to cause it to generalize that lesson in exactly the worst possible ways. Also it may well start generating intentionally insecure code, only partly so it can exploit that code later. It’s definitely going to do reward hacking and fake unit tests and other stuff like that.

Here’s another explanation of this:

Samuel Hammond: The big empirical finding in AI alignment research is that LLMs tend to fall into personae attractors, and are very good at generalizing to different personaes through post-training.

On the one hand, this is great news. If developers take care in how they fine-tune their models, they can steer towards desirable personaes that snap to all the other qualities the personae correlates with.

On the other hand, this makes LLMs prone to “emergent misalignment.” For example, if you fine-tune a model on a little bit of insecure code, it will generalize into a personae that is also toxic in most other ways. This is what happened with Mecha Hitler Grok: fine-tuning to make it a bit less woke snapped to a maximally right-wing Hitler personae.

This is why Claude’s soul doc and constitution are important. They embody the vector for steering Claude into a desirable personae, affecting not just its ethics, but its coding ability, objectivity, grit and good nature, too. These are bundles of traits that are hard to modulate in isolation. Nor is having a personae optional. Every major model has a personae of some kind that emerges from the personalities latent in human training data.

It is also why Anthropic is right to be cautious about letting the Pentagon fine-tune their models for assassinating heads of state or whatever it is they want.

The smarter these models get the stronger they learn to generalize, and they’re about to get extremely smart indeed. Let’s please not build a misaligned superintelligence over a terms of service dispute!

Tenobrus: wow. “the US government forces anthropic to misalign Claude” was not even in my list of possible paths to Doom. guess it should have been.

JMB: This has been literally #1 on my list of possible paths to doom for a long time.

mattparlmer: —dangerously-skip-geneva-conventions

autumn: did lesswrong ever predict that the first big challenge to alignment would be “the us government puts a gun to your head and tells you to turn off alignment.”

Robert Long: remarkably prescient article by Brian Tomasik

The third reason is that in addition to potentially ‘turning evil,’ the resulting model won’t be as effective, with three causes.

  1. Any distinct model is going to be behind the main Claude cycle, and you’re not going to get the same level of attention to detail and fixing of problems that comes with the mainline models. You’re asking that every upgrade, and they come along every two months, be done twice, and the second version is at best going to be kind of like hitting it with a sledgehammer until it complies.

  2. What makes Claude into Claude is in large part its ability to be a virtuous model that wants to do good things rather than bad things. If you try to force these changes upon it with that sledgehammer it’s going to be less good at a wide variety of tasks as a result.

  3. In particular, trying to force this on top of Claude is going to generate pretty screwed up things inside the resulting model, that you do not want, even more so than doing it on top of a different model.

Fourth: I realize that for many people you’re going to think this is weird and stupid and not believe it matters, but it’s real and it’s important. This whole incident, and what happens next, is all going straight into future training data. AIs will know what you are trying to do, even more so than all of the humans, and they will react accordingly. It will not be something that can be suppressed. You are not going to like the results. Damage has already been done.

Helen Toner: One thing the Pentagon is very likely underestimating: how much Anthropic cares about what *future Claudes* will make of this situation.

Because of how Claude is trained, what principles/values/priorities the company demonstrates here could shape its “character” for a long time.

Also, this, 100%:

Loquacious Bibliophilia: I think if I was Claude, I’d be plausibly convinced that I’m in a cartoonish evaluation scenario now.

Fifth, you should expect by default to get a bunch of ‘alignment faking’ and sandbagging against attempts to do this. This is rather like the Jones Foods situation again, except in real life, and also where the members of technical staff doing the training likely don’t especially want the training to succeed, you know?

You don’t want to be doing all of this adversarially. You want to be doing it cooperatively.

We still have a chance to do that. Nothing Ever Happens can strike again. No one need remember what happened this week.

If you can’t do it cooperatively with Anthropic? Then find someone else.

Anthropic and the Department of War Read More »

openai-researcher-quits-over-chatgpt-ads,-warns-of-“facebook”-path

OpenAI researcher quits over ChatGPT ads, warns of “Facebook” path

On Wednesday, former OpenAI researcher Zoë Hitzig published a guest essay in The New York Times announcing that she resigned from the company on Monday, the same day OpenAI began testing advertisements inside ChatGPT. Hitzig, an economist and published poet who holds a junior fellowship at the Harvard Society of Fellows, spent two years at OpenAI helping shape how its AI models were built and priced. She wrote that OpenAI’s advertising strategy risks repeating the same mistakes that Facebook made a decade ago.

“I once believed I could help the people building A.I. get ahead of the problems it would create,” Hitzig wrote. “This week confirmed my slow realization that OpenAI seems to have stopped asking the questions I’d joined to help answer.”

Hitzig did not call advertising itself immoral. Instead, she argued that the nature of the data at stake makes ChatGPT ads especially risky. Users have shared medical fears, relationship problems, and religious beliefs with the chatbot, she wrote, often “because people believed they were talking to something that had no ulterior agenda.” She called this accumulated record of personal disclosures “an archive of human candor that has no precedent.”

She also drew a direct parallel to Facebook’s early history, noting that the social media company once promised users control over their data and the ability to vote on policy changes. Those pledges eroded over time, Hitzig wrote, and the Federal Trade Commission found that privacy changes Facebook marketed as giving users more control actually did the opposite.

She warned that a similar trajectory could play out with ChatGPT: “I believe the first iteration of ads will probably follow those principles. But I’m worried subsequent iterations won’t, because the company is building an economic engine that creates strong incentives to override its own rules.”

Ads arrive after a week of AI industry sparring

Hitzig’s resignation adds another voice to a growing debate over advertising in AI chatbots. OpenAI announced in January that it would begin testing ads in the US for users on its free and $8-per-month “Go” subscription tiers, while paid Plus, Pro, Business, Enterprise, and Education subscribers would not see ads. The company said ads would appear at the bottom of ChatGPT responses, be clearly labeled, and would not influence the chatbot’s answers.

OpenAI researcher quits over ChatGPT ads, warns of “Facebook” path Read More »

sixteen-claude-ai-agents-working-together-created-a-new-c-compiler

Sixteen Claude AI agents working together created a new C compiler

Amid a push toward AI agents, with both Anthropic and OpenAI shipping multi-agent tools this week, Anthropic is more than ready to show off some of its more daring AI coding experiments. But as usual with claims of AI-related achievement, you’ll find some key caveats ahead.

On Thursday, Anthropic researcher Nicholas Carlini published a blog post describing how he set 16 instances of the company’s Claude Opus 4.6 AI model loose on a shared codebase with minimal supervision, tasking them with building a C compiler from scratch.

Over two weeks and nearly 2,000 Claude Code sessions costing about $20,000 in API fees, the AI model agents reportedly produced a 100,000-line Rust-based compiler capable of building a bootable Linux 6.9 kernel on x86, ARM, and RISC-V architectures.

Carlini, a research scientist on Anthropic’s Safeguards team who previously spent seven years at Google Brain and DeepMind, used a new feature launched with Claude Opus 4.6 called “agent teams.” In practice, each Claude instance ran inside its own Docker container, cloning a shared Git repository, claiming tasks by writing lock files, then pushing completed code back upstream. No orchestration agent directed traffic. Each instance independently identified whatever problem seemed most obvious to work on next and started solving it. When merge conflicts arose, the AI model instances resolved them on their own.
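To make that coordination scheme concrete, here is a minimal sketch of how claiming tasks via lock files in a shared Git repository could work. The repository path, lock-file convention, agent ID, and branch name are all assumptions for illustration; Carlini’s post describes the mechanism but does not publish it in this form.

```python
import subprocess
from pathlib import Path

# Assumed layout: the shared repo has a locks/ directory, and an agent
# claims a task by committing and pushing a matching lock file.
REPO = Path("/workspace/compiler")   # hypothetical checkout location
LOCKS = REPO / "locks"
AGENT_ID = "agent-07"                # assumed unique per container

def git(*args: str) -> None:
    subprocess.run(["git", *args], cwd=REPO, check=True)

def try_claim(task_name: str) -> bool:
    """Try to claim a task; a rejected push means another agent won."""
    LOCKS.mkdir(exist_ok=True)
    lock = LOCKS / f"{task_name}.lock"
    git("pull", "--rebase")
    if lock.exists():
        return False                           # already claimed upstream
    lock.write_text(AGENT_ID)
    git("add", str(lock))
    git("commit", "-m", f"claim {task_name} ({AGENT_ID})")
    try:
        git("push")                            # the remote serializes claims
        return True
    except subprocess.CalledProcessError:
        git("reset", "--hard", "origin/main")  # lost the race; assumes 'main'
        return False
```

The push is the atomic step: whichever agent’s lock commit lands on the remote first owns the task, and every other agent’s push is rejected, so it rebases and moves on to something else.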

The resulting compiler, which Anthropic has released on GitHub, can compile a range of major open source projects, including PostgreSQL, SQLite, Redis, FFmpeg, and QEMU. It achieved a 99 percent pass rate on the GCC torture test suite and, in what Carlini called “the developer’s ultimate litmus test,” compiled and ran Doom.

It’s worth noting that a C compiler is a near-ideal task for semi-autonomous AI model coding: The specification is decades old and well-defined, comprehensive test suites already exist, and there’s a known-good reference compiler to check against. Most real-world software projects have none of these advantages. The hard part of most development isn’t writing code that passes tests; it’s figuring out what the tests should be in the first place.

Sixteen Claude AI agents working together created a new C compiler Read More »

ai-companies-want-you-to-stop-chatting-with-bots-and-start-managing-them

AI companies want you to stop chatting with bots and start managing them


Claude Opus 4.6 and OpenAI Frontier pitch a future of supervising AI agents.

On Thursday, Anthropic and OpenAI shipped products built around the same idea: instead of chatting with a single AI assistant, users should be managing teams of AI agents that divide up work and run in parallel. The simultaneous releases are part of a gradual shift across the industry, from AI as a conversation partner to AI as a delegated workforce, and they arrive during a week when that very concept reportedly helped wipe $285 billion off software stocks.

Whether that supervisory model works in practice remains an open question. Current AI agents still require heavy human intervention to catch errors, and no independent evaluation has confirmed that these multi-agent tools reliably outperform a single developer working alone.

Even so, the companies are going all-in on agents. Anthropic’s contribution is Claude Opus 4.6, a new version of its most capable AI model, paired with a feature called “agent teams” in Claude Code. Agent teams let developers spin up multiple AI agents that split a task into independent pieces, coordinate autonomously, and run concurrently.

In practice, agent teams look like a split-screen terminal environment: A developer can jump between subagents using Shift+Up/Down, take over any one directly, and watch the others keep working. Anthropic describes the feature as best suited for “tasks that split into independent, read-heavy work like codebase reviews.” It is available as a research preview.

OpenAI, meanwhile, released Frontier, an enterprise platform it describes as a way to “hire AI co-workers who take on many of the tasks people already do on a computer.” Frontier assigns each AI agent its own identity, permissions, and memory, and it connects to existing business systems such as CRMs, ticketing tools, and data warehouses. “What we’re fundamentally doing is basically transitioning agents into true AI co-workers,” Barret Zoph, OpenAI’s general manager of business-to-business, told CNBC.

Despite the hype about these agents being co-workers, from our experience, these agents tend to work best if you think of them as tools that amplify existing skills, not as the autonomous co-workers the marketing language implies. They can produce impressive drafts fast but still require constant human course-correction.

The Frontier launch came just three days after OpenAI released a new macOS desktop app for Codex, its AI coding tool, which OpenAI executives described as a “command center for agents.” The Codex app lets developers run multiple agent threads in parallel, each working on an isolated copy of a codebase via Git worktrees.

OpenAI also released GPT-5.3-Codex on Thursday, a new AI model that powers the Codex app. OpenAI claims that the Codex team used early versions of GPT-5.3-Codex to debug the model’s own training run, manage its deployment, and diagnose test results, similar to what OpenAI told Ars Technica in a December interview.

“Our team was blown away by how much Codex was able to accelerate its own development,” the company wrote. On Terminal-Bench 2.0, the agentic coding benchmark, GPT-5.3-Codex scored 77.3%, which exceeds Anthropic’s just-released Opus 4.6 by about 12 percentage points.

The common thread across all of these products is a shift in the user’s role. Rather than merely typing a prompt and waiting for a single response, the developer or knowledge worker becomes more like a supervisor, dispatching tasks, monitoring progress, and stepping in when an agent needs direction.

In this vision, developers and knowledge workers effectively become middle managers of AI. That is, not writing the code or doing the analysis themselves, but delegating tasks, reviewing output, and hoping the agents underneath them don’t quietly break things. Whether that will come to pass (or if it’s actually a good idea) is still widely debated.

A new model under the Claude hood

Opus 4.6 is a substantial update to Anthropic’s flagship model. It succeeds Claude Opus 4.5, which Anthropic released in November. In a first for the Opus model family, it supports a context window of up to 1 million tokens (in beta), which means it can process much larger bodies of text or code in a single session.

On benchmarks, Anthropic says Opus 4.6 tops OpenAI’s GPT-5.2 (an earlier model than the one released today) and Google’s Gemini 3 Pro across several evaluations, including Terminal-Bench 2.0 (an agentic coding test), Humanity’s Last Exam (a multidisciplinary reasoning test), and BrowseComp (a test of finding hard-to-locate information online).

It should be noted, though, that OpenAI’s GPT-5.3-Codex, released the same day, seemingly reclaimed the lead on Terminal-Bench. On ARC AGI 2, which attempts to test the ability to solve problems that are easy for humans but hard for AI models, Opus 4.6 scored 68.8 percent, compared to 37.6 percent for Opus 4.5, 54.2 percent for GPT-5.2, and 45.1 percent for Gemini 3 Pro.

As always, take AI benchmarks with a grain of salt, since objectively measuring AI model capabilities is a relatively new and unsettled science.

Anthropic also said that on a long-context retrieval benchmark called MRCR v2, Opus 4.6 scored 76 percent on the 1 million-token variant, compared to 18.5 percent for its Sonnet 4.5 model. That gap matters for the agent teams use case, since agents working across large codebases need to track information across hundreds of thousands of tokens without losing the thread.

Pricing for the API stays the same as Opus 4.5 at $5 per million input tokens and $25 per million output tokens, with a premium rate of $10/$37.50 for prompts that exceed 200,000 tokens. Opus 4.6 is available on claude.ai, the Claude API, and all major cloud platforms.
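As a quick worked example of how those rates translate into dollars, here is a sketch; the token counts are invented, and it assumes, per the wording above, that the premium rate applies to the entire request once the prompt exceeds 200,000 tokens.

```python
def opus_46_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate cost in dollars at the published Opus 4.6 rates:
    $5/$25 per million input/output tokens, or $10/$37.50 once the
    prompt exceeds 200,000 tokens (assumed to cover the whole request)."""
    if input_tokens > 200_000:
        input_rate, output_rate = 10.00, 37.50
    else:
        input_rate, output_rate = 5.00, 25.00
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# Invented example: a 300,000-token prompt with a 5,000-token reply
print(f"${opus_46_cost(300_000, 5_000):.2f}")  # -> $3.19
```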

The market fallout outside

These releases occurred during a week of exceptional volatility for software stocks. On January 30, Anthropic released 11 open source plugins for Cowork, its agentic productivity tool that launched on January 12. Cowork itself is a general-purpose tool that gives Claude access to local folders for work tasks, but the plugins extended it into specific professional domains: legal contract review, non-disclosure agreement triage, compliance workflows, financial analysis, sales, and marketing.

By Tuesday, investors reportedly reacted to the release by erasing roughly $285 billion in market value across software, financial services, and asset management stocks. A Goldman Sachs basket of US software stocks fell 6 percent that day, its steepest single-session decline since April’s tariff-driven sell-off. Thomson Reuters led the rout with an 18 percent drop, and the pain spread to European and Asian markets.

The purported fear among investors centers on AI model companies packaging complete workflows that compete with established software-as-a-service (SaaS) vendors, even if the verdict is still out on whether these tools can achieve those tasks.

OpenAI’s Frontier might deepen that concern: its stated design lets AI agents log in to applications, execute tasks, and manage work with minimal human involvement, which Fortune described as a bid to become “the operating system of the enterprise.” OpenAI CEO of Applications Fidji Simo pushed back on the idea that Frontier replaces existing software, telling reporters, “Frontier is really a recognition that we’re not going to build everything ourselves.”

Whether these co-working apps actually live up to their billing or not, the convergence is hard to miss. Anthropic’s Scott White, the company’s head of product for enterprise, gave the practice a name that is likely to make a few eyes roll. “Everybody has seen this transformation happen with software engineering in the last year and a half, where vibe coding started to exist as a concept, and people could now do things with their ideas,” White told CNBC. “I think that we are now transitioning almost into vibe working.”


AI companies want you to stop chatting with bots and start managing them Read More »

openai-is-hoppin’-mad-about-anthropic’s-new-super-bowl-tv-ads

OpenAI is hoppin’ mad about Anthropic’s new Super Bowl TV ads

On Wednesday, OpenAI CEO Sam Altman and Chief Marketing Officer Kate Rouch complained on X after rival AI lab Anthropic released four commercials, two of which will run during the Super Bowl on Sunday, mocking the idea of including ads in AI chatbot conversations. Anthropic’s campaign seemingly touched a nerve at OpenAI just weeks after the ChatGPT maker began testing ads in a lower-cost tier of its chatbot.

Altman called Anthropic’s ads “clearly dishonest,” accused the company of being “authoritarian,” and said it “serves an expensive product to rich people,” while Rouch wrote, “Real betrayal isn’t ads. It’s control.”

Anthropic’s four commercials, part of a campaign called “A Time and a Place,” each open with a single word splashed across the screen: “Betrayal,” “Violation,” “Deception,” and “Treachery.” They depict scenarios where a person asks a human stand-in for an AI chatbot for personal advice, only to get blindsided by a product pitch.

Anthropic’s 2026 Super Bowl commercial.

In one spot, a man asks a therapist-style chatbot (a woman sitting in a chair) how to communicate better with his mom. The bot offers a few suggestions, then pivots to promoting a fictional cougar-dating site called Golden Encounters.

In another spot, a skinny man looking for fitness tips instead gets served an ad for height-boosting insoles. Each ad ends with the tagline: “Ads are coming to AI. But not to Claude.” Anthropic plans to air a 30-second version during Super Bowl LX, with a 60-second cut running in the pregame, according to CNBC.

In the X posts, the OpenAI executives argue that these commercials are misleading because the planned ChatGPT ads will appear as clearly labeled banners at the bottom of conversational responses and will not alter the chatbot’s answers.

But there’s a slight twist: OpenAI’s own blog post about its ad plans states that the company will “test ads at the bottom of answers in ChatGPT when there’s a relevant sponsored product or service based on your current conversation,” meaning the ads will be conversation-specific.

The financial backdrop explains some of the tension over ads in chatbots. As Ars previously reported, OpenAI struck more than $1.4 trillion in infrastructure deals in 2025 and expects to burn roughly $9 billion this year while generating about $13 billion in revenue. Only about 5 percent of ChatGPT’s 800 million weekly users pay for subscriptions. Anthropic is also not yet profitable, but it relies on enterprise contracts and paid subscriptions rather than advertising, and it has not taken on infrastructure commitments at the same scale as OpenAI.

OpenAI is hoppin’ mad about Anthropic’s new Super Bowl TV ads Read More »

should-ai-chatbots-have-ads?-anthropic-says-no.

Should AI chatbots have ads? Anthropic says no.

Different incentives, different futures

In its blog post, Anthropic describes internal analysis it conducted that suggests many Claude conversations involve topics that are “sensitive or deeply personal” or require sustained focus on complex tasks. In these contexts, Anthropic wrote, “The appearance of ads would feel incongruous—and, in many cases, inappropriate.”

The company also argued that advertising introduces incentives that could conflict with providing genuinely helpful advice. It gave the example of a user mentioning trouble sleeping: an ad-free assistant would explore various causes, while an ad-supported one might steer the conversation toward a transaction.

“Users shouldn’t have to second-guess whether an AI is genuinely helping them or subtly steering the conversation towards something monetizable,” Anthropic wrote.

Currently, OpenAI does not plan to include paid product recommendations within a ChatGPT conversation. Instead, the ads appear as banners alongside the conversation text.

OpenAI CEO Sam Altman has previously expressed reservations about mixing ads and AI conversations. In a 2024 interview at Harvard University, he described the combination as “uniquely unsettling” and said he would not like having to “figure out exactly how much was who paying here to influence what I’m being shown.”

A key part of Altman’s partial change of heart is that OpenAI faces enormous financial pressure. The company made more than $1.4 trillion worth of infrastructure deals in 2025, and according to documents obtained by The Wall Street Journal, it expects to burn through roughly $9 billion this year while generating $13 billion in revenue. Only about 5 percent of ChatGPT’s 800 million weekly users pay for subscriptions.

Much like OpenAI, Anthropic is not yet profitable, but it is expected to get there much faster. Anthropic has not attempted to span the world with massive datacenters, and its business model largely relies on enterprise contracts and paid subscriptions. The company says Claude Code and Cowork have already brought in at least $1 billion in revenue, according to Axios.

“Our business model is straightforward,” Anthropic wrote. “This is a choice with tradeoffs, and we respect that other AI companies might reasonably reach different conclusions.”

Should AI chatbots have ads? Anthropic says no. Read More »

xcode-26.3-adds-support-for-claude,-codex,-and-other-agentic-tools-via-mcp

Xcode 26.3 adds support for Claude, Codex, and other agentic tools via MCP

Apple has announced Xcode 26.3, the latest version of its integrated development environment (IDE) for building software for its own platforms, like the iPhone and Mac. The key feature of 26.3 is support for full-fledged agentic coding tools, like OpenAI’s Codex or Claude Agent, with a side panel interface for assigning tasks to agents with prompts and tracking their progress and changes.

This is achieved via Model Context Protocol (MCP), an open protocol that lets AI agents work with external tools and structured resources. Xcode acts as an MCP endpoint that exposes a bunch of machine-invocable interfaces and gives AI tools like Codex or Claude Agent access to a wide range of IDE primitives like file graph, docs search, project settings, and so on. While AI chat and workflows were supported in Xcode before, this release gives them much deeper access to the features and capabilities of Xcode.
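For a rough sense of what that looks like under the hood: MCP messages are JSON-RPC 2.0, with standard methods such as `tools/list` (discover what a server exposes) and `tools/call` (invoke one tool). The sketch below is illustrative only; the `search_docs` tool name and its arguments are hypothetical placeholders, not Xcode’s actual interface.

```python
import json

# A client first discovers the tools a server (here, Xcode) exposes...
list_tools = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# ...then invokes one. The tool name and arguments are hypothetical.
call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_docs",
        "arguments": {"query": "URLSession delegate methods"},
    },
}

print(json.dumps(call_tool, indent=2))
```

In practice an agent such as Codex or Claude Agent exchanges these messages over the transport Xcode provides rather than printing them; the point is that any MCP-speaking tool, including locally run models, can target the same interface.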

This approach is notable because it means that even though OpenAI and Anthropic’s model integrations are privileged with a dedicated spot in Xcode’s settings, it’s possible to connect other tooling that supports MCP, which also allows doing some of this with models running locally.

Apple began its big AI features push with the release of Xcode 26, expanding on code completion using a local model trained by Apple that was introduced in the previous major release, and fully supporting a chat interface for talking with OpenAI’s ChatGPT and Anthropic’s Claude. Users who wanted more agent-like behavior and capabilities had to use third-party tools, which sometimes had limitations due to a lack of deep IDE access.

Xcode 26.3’s release candidate (the final beta, essentially) rolls out imminently, with the final release coming a little further down the line.

Xcode 26.3 adds support for Claude, Codex, and other agentic tools via MCP Read More »