Author name: Paul Patrick

“Extremely angry” Trump threatens “massive” tariff on all Chinese exports

The chairman of the House of Representatives’ Select Committee on the Chinese Communist Party (CCP), John Moolenaar (R-Mich.), issued a statement, suggesting that, unlike Trump, he’d seen China’s rare earths move coming. He pushed Trump to interpret China’s export controls as “an economic declaration of war against the United States and a slap in the face to President Trump.”

“China has fired a loaded gun at the American economy, seeking to cut off critical minerals used to make the semiconductors that power the American military, economy, and devices we use every day including cars, phones, computers, and TVs,” Moolenaar said. “Every American will be negatively affected by China’s action, and that’s why we must address America’s vulnerabilities and build our own leverage against China.”

To strike back forcefully, Moolenaar suggested passing a law he sponsored that he said would “end preferential trade treatment for China, build a resilient resource reserve of critical minerals, secure American research and campuses from Chinese influence, and strangle China’s technology sector with export controls instead of selling it advanced chips.”

Moolenaar also emphasized steps he recommended back in September that he claimed Trump could take to “create real leverage with China” in the face of its stranglehold on rare earths.

Those included “restricting or suspending Chinese airline landing rights in the US,” “reviewing export control policies governing the sale of commercial aircraft, parts, and maintenance services to China,” and “restricting outbound investment in China’s aviation sector in coordination with key allies.”

“These steps would send a clear message to Beijing that it cannot choke off critical supplies to our defense industries without consequences to its own strategic sectors,” Moolenaar wrote in his September letter to Trump. “By acting together, the US and its allies can strengthen our resilience, reinforce solidarity, and create real leverage with China.”

“Extremely angry” Trump threatens “massive” tariff on all Chinese exports Read More »

UK regulators plan to force Google changes under new competition law

Google is facing multiple antitrust actions in the US, and European regulators have been similarly tightening the screws. You can now add the UK to the list of Google’s governmental worries. The country’s antitrust regulator, known as the Competition and Markets Authority (CMA), has confirmed that Google has “strategic market status,” paving the way to more limits on how Google does business in the UK. Naturally, Google objects to this course of action.

The designation is connected to the UK’s new digital markets competition regime, which was enacted at the beginning of the year. Shortly after, the CMA announced it was conducting an investigation into whether Google should be designated with strategic market status. The outcome of that process is a resounding “yes.”

This label does not mean Google has done anything illegal or that it is subject to immediate regulation. It simply means the company has “substantial and entrenched market power” in one or more areas under the purview of the CMA. Specifically, the agency has found that Google is dominant in search and search advertising, holding a greater than 90 percent share of Internet searches in the UK.

In Google’s US antitrust trials, the rapid rise of generative AI has muddied the waters. Google has claimed on numerous occasions that the proliferation of AI firms offering search services means there is ample competition. In the UK, regulators note that Google’s Gemini AI assistant is not in the scope of the strategic market status designation. However, some AI features connected to search, like AI Overviews and AI Mode, are included.

According to the CMA, consultations on possible interventions to ensure effective competition will begin later this year. The agency’s first set of antitrust measures will likely expand on solutions that Google has introduced in other regions or has offered on a voluntary basis in the UK. This could include giving publishers more control over how their data is used in search and “choice screens” that suggest Google alternatives to users. Measures that require new action from Google could be announced in the first half of 2026.

UK regulators plan to force Google changes under new competition law Read More »

Apple and Google reluctantly comply with Texas age verification law

Apple yesterday announced a plan to comply with a Texas age verification law and warned that changes required by the law will reduce privacy for app users.

“Beginning January 1, 2026, a new state law in Texas—SB2420—introduces age assurance requirements for app marketplaces and developers,” Apple said yesterday in a post for developers. “While we share the goal of strengthening kids’ online safety, we are concerned that SB2420 impacts the privacy of users by requiring the collection of sensitive, personally identifiable information to download any app, even if a user simply wants to check the weather or sports scores.”

The Texas App Store Accountability Act requires app stores to verify users’ ages and imposes restrictions on those under 18. Apple said that developers will have “to adopt new capabilities and modify behavior within their apps to meet their obligations under the law.”

Apple’s post noted that similar laws will take effect later in 2026 in Utah and Louisiana. Google also recently announced plans for complying with the three state laws and said the new requirements reduce user privacy.

“While we have user privacy and trust concerns with these new verification laws, Google Play is designing APIs, systems, and tools to help you meet your obligations,” Google told developers in an undated post.

The Utah law is scheduled to take effect May 7, 2026, while the Louisiana law will take effect July 1, 2026. The Texas, Utah, and Louisiana “laws impose significant new requirements on many apps that may need to provide age appropriate experiences to users in these states,” Google said. “These requirements include ingesting users’ age ranges and parental approval status for significant changes from app stores and notifying app stores of significant changes.”

New features for Texas

Apple and Google both announced new features to help developers comply.

“Once this law goes into effect, users located in Texas who create a new Apple Account will be required to confirm whether they are 18 years or older,” Apple said. “All new Apple Accounts for users under the age of 18 will be required to join a Family Sharing group, and parents or guardians will need to provide consent for all App Store downloads, app purchases, and transactions using Apple’s In-App Purchase system by the minor.”
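
The exact developer-facing APIs are still being designed (per Google's post above), but the gist is that the store will pass along an age signal that apps must respect. As a rough illustration only, here is a minimal Python sketch of how an app backend might gate features on such a signal; the names (AgeSignal, age_range, parental_approval, feature_allowed) are invented for illustration and are not an actual Apple or Google Play API.

```python
# Hypothetical sketch only: consuming a store-provided age signal under SB2420-style rules.
# None of these names correspond to a real Apple or Google Play API.
from dataclasses import dataclass

@dataclass
class AgeSignal:
    age_range: str            # e.g. "under_13", "13_17", "18_plus" (illustrative buckets)
    parental_approval: bool   # whether a parent or guardian has approved for this minor

def feature_allowed(signal: AgeSignal, feature: str) -> bool:
    """Gate a feature based on the age range and parental-approval status."""
    restricted = {"in_app_purchases", "social_chat"}  # illustrative restricted features
    if signal.age_range == "18_plus":
        return True
    if feature in restricted:
        # Minors need recorded parental approval for restricted features.
        return signal.parental_approval
    return True

# Example: a 13-17 user without parental approval cannot make in-app purchases.
print(feature_allowed(AgeSignal("13_17", parental_approval=False), "in_app_purchases"))  # False
```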

Apple and Google reluctantly comply with Texas age verification law Read More »

NEPA, Permitting and Energy Roundup #2

It’s been about a year since the last one of these. Given the long cycle, I have done my best to check for changes but things may have changed on any given topic by the time you read this.

NEPA is a constant thorn in the side of anyone attempting to do anything.

A certain kind of person responds with: “Good.”

That kind of person does not want humans to do physical things in the world.

  1. They like the world as it is, or as it used to be.

  2. They do not want humans messing with it further.

  3. They often also think humans are bad, and should stop existing entirely.

  4. Or believe humans deserve to suffer or do penance.

  5. Or do not trust people to make good decisions and safeguard what matters.

  6. To them: If humans want to do something to the physical world?

  7. That intention is highly suspicious.

  8. We probably should not let them do that.

This is in sharp contrast with the type of person who:

  1. Cares about the environment.

  2. Who wants good things rather than bad to happen to people.

  3. Who wants the Earth not to boil and the air to be clean and so on.

That person notices that NEPA long ago started doing more harm than good.

The central problem lies in the core structure.

NEPA is based on following endlessly expanding procedural requirements. NEPA does not ask whether costs exceed benefits, or whether something is a good idea.

It only asks about whether procedure was followed sufficiently, or whether blame can be identified somewhere.

The lesson is to never go full NEPA.

Instead, one of Balsa’s central policy goals is an entire reimagining of NEPA.

The proposal is to replace NEPA’s procedural requirement with (when necessary) an analysis of costs and benefits, followed by a vote of stakeholders on whether to proceed. Ask the right question, whether the project is worthwhile, not the wrong question of what paperwork is in order.

This post is not about laying out that procedure. This post is mostly about telling various Tales From the NEPA. It is also telling tales of energy generation from around the world, including places that do not share our full madness.

Versions and components of this post have been in my drafts for a long time, so not all of them will be as recent as is common here.

That was the plan, but the vibes have changed, and NEPA is pretty clearly a large net negative for climate change, which has to win in a fight at this point over the local concerns it protects. There’s a new plan.

Kill it. Repeal NEPA. Full stop.

Emmett Shear: Previously I believed that there was probably enough protection offered by NEPA / CEQA that it offset the damage. At this point, it’s pretty clear we should simply repeal it and figure out if we need to replace anything later.

Repeal NEPA.

Eli Dourado: NEPA is the most harmful law in the United States and must be repealed. In addition to causing forest fires and miring Starbase in litigation, it results in delays and endless litigation for any project that the federal government touches. It should be target #1 for DOGE.

Sadly this is not yet within the Overton window of relevant Congressional staff. We need to make this happen.

The problem is DOGE is working via cutting off payments, which doesn’t let you hit NEPA. But if you want to strike a blow that matters? This is it.

Thomas Hochman: Trump has revoked Carter’s 1977 EO – the one that empowered CEQ to issue binding NEPA regulations.

This could dramatically reshape how federal agencies conduct NEPA reviews. In the post-Marin Audubon landscape, this is a HUGE deal!

Let’s walk through a few of the specifics.

CEQ will formally propose repealing the existing NEPA regulations that have guided agencies since the late 1970s.

This is major: those regs currently supply the standard NEPA procedures (e.g., EIS format, “major federal action,” significance criteria, scoping, etc.).

Rescinding them will leave agencies free to adopt leaner, agency-specific processes—or rely on new guidance.

CEQ will lead a “working group” composed of representatives from various agencies.

This group’s job is to develop or revise each agency’s own NEPA procedures so that they’re consistent with the new (post-rescission) approach.

As I wrote in Green Tape, establishing this internal guidance at the agency level will be crucial.

And finally: general permits and permits-by-rule!!!

Eli Dourado: NEPA is still there but CEQ’s authority to issue regs is gone (and was already under dispute in the courts). NEPA the statute still applies.

Cremieux: Some pretty major components of permitting reform on day one might be the biggest news in the day one EOs.

There’s trillions in value in these EOs.

I am delighted.

You love to see it. This day one move gave me a lot of hope things would go well, alas other things happened that were less good for my hopes.

As with everything Trump Administration, we will see what actually happens once the lawyers get involved. This is not an area where ‘ignore the law and simply do things’ seems likely to work out. Some of it will stick, but how much?

Yay nuclear deregulation, yes obviously, Alex Tabarrok opens with ‘yes, I know how that sounds’ but actually it sounds great if you’re familiar with the current regulations. I do see why one would pause before going to the level of ‘treat small modular reactors like x-ray machines,’ I’d want to see the safety case and all that, but probably.

Nuclear is making an attempted comeback, now that AI and the results of trying the alternative of doing nothing have awoken everyone to the idea that more nuclear would be a good thing.

Alexander Kaufman: The Senate voted nearly unanimously (88-2) to pass major legislation designed to reverse the American nuclear industry’s decades-long decline and launch a reactor-building spree to meet surging demand for green electricity at home and to catch up with booming rivals overseas.

The bill slashes the fees the Nuclear Regulatory Commission charges developers, speeds up the process for licensing new reactors and hiring key staff, and directs the agency to work with foreign regulators to open doors for U.S. exports.

The NRC is also tasked with rewriting its mission statement to avoid unnecessarily limiting the “benefits of nuclear energy technology to society,” essentially reinterpreting its raison d’être to include protecting the public against the dangers of not using atomic power in addition to whatever safety threat reactors themselves pose.

There is a lot of big talk about how much this will change the rules on nuclear power regulation. As usual I remain skeptical of big impacts, but it would not take much to reach a tipping point. As the same post notes when discussing the reactors in Georgia, once you relearn what you are doing, things get a lot better and cheaper.

That pair of reactors, which just came online last month at the Alvin W. Vogtle Electric Generating Plant in Georgia, cost more than $30 billion. As the expenses mounted, other projects to build the same kind of reactor elsewhere in the country were canceled.

The timing could hardly have been worse. After completing the first reactor, the second one cost far less and came online faster. But the disastrous launch dissuaded any other utilities from investing in a third reactor, which economists say would take even less time and money now that the supply chains, design and workforce are established.

After seeing the results, the secretary of energy called for ‘hundreds’ more large nuclear reactors, two hundred by 2050.

NextEra looking to restart a nuclear plant in Iowa that closed in 2020.

Ontario eyeing a new nuclear plant near Port Hope, 8-10 GWs.

It seems the world is a mix of people who shut down nuclear power out of spite and mood affiliation and intuitions that nuclear is dangerous or harmful when it is orders of magnitude safer and fully green, versus those who realize we should be desperate to build more.

Matthew Yglesias: The all-time energy champ

Matthew Yglesias: The Fukushima incident was deadly not because anyone died in the accident but because the post-Fukushima nuclear shutdown caused more Japanese people to freeze to death to conserve energy.

Dean Ball is excited by the bill, including its prize for next-gen nuclear tech and the potential momentum for future action.

There is a long way to go. It seems we do things like this? And the lifetime for nuclear power plants has nothing to do with their physical capabilities or risks?

Alec Stapp: Apparently we have been arbitrarily limiting licenses for nuclear power reactors to 40 years because of… “antitrust considerations”??

Nuclear Regulatory Commission: The Atomic Energy Act authorizes the Nuclear Regulatory Commission to issue licenses for commercial power reactors to operate for up to 40 years. These licenses can be renewed for an additional 20 years at a time. The period after the initial licensing term is known as the period of extended operation. Economic and antitrust considerations, not limitations of nuclear technology, determined the original 40-year term for reactor licenses. However, because of this selected time period, some systems, structures, and components may have been engineered on the basis of an expected 40-year service life.

Or how many nuclear engineers does it take to change a light bulb? $50k worth.

How much for a $200 panel meter in a control room? Trick question, it’s $20k.

And yet nuclear is still at least close to cost competitive.

The Senate also previously forced Biden to drop his attempt to renominate Jeff Baran to the Nuclear Regulatory Commission (NRC), on the basis of Baran being starkly opposed to the concept of building nuclear power plants.

Why has Biden effectively opposed nuclear power? My model is that it is the same reason he is effectively opposing power transmission and green energy infrastructure. Biden thinks throwing money and rhetoric at problems makes solutions happen. He does not understand, even in his best moments, that throwing up barriers to doing things, or failing to remove them, stops those things from happening even when that was not your intention.

Thus, he can also do things like offer $1.5 billion in conditional commitments to support recommissioning a Michigan nuclear power plant, because he understands that more nuclear power plants would be a good thing. And he can say things like ‘White House to support new nuclear power plants in the U.S.’ That does not have to cause him to, in general, do the things that cause there to be more nuclear power plants. Because he cannot understand that those are things like ‘appoint people to the NRC that might ever want to approve a new nuclear power plant in practice.’ Luckily, it sounds like the new bill does indeed help.

Small modular nuclear reactor (SMR) planned for Idaho, called most advanced in the nation, was cancelled in January after customers could not be found to buy the electricity. Only a few months later, everyone is scrambling for more electricity to run their data centers. It seems like if you build it, Microsoft or Google or Amazon will be happy to plop a data center next to that shiny new reactor, no? And certainly plenty of other places would welcome one. So odd that this got slated first for Idaho.

Alberta signs deal to jointly assess the development and deployment of SMRs. One SMR is to be built in Ontario by end of 2028, to be online in 2029.

Slovakia to build a new nuclear reactor. Also talk of increased capacity in France, Italy, Britain, Japan, Canada, Poland and The Netherlands in the thread, from May. From December 2023: Poland authorizes 24 new small nuclear plants.

Philippines are considering nuclear as well.

Support for Nuclear in Australia has increased dramatically to 61%-37%.

Claim that the shutdown of nuclear power in Germany was even more corrupt than we realized, with the Green Party altering expert conclusions to stop a reconsideration. The claims have been denied.

Unfortunately, we are allowing an agreement whereby Korea Hydro & Nuclear Power (KHNP) will not be allowed to bid on new nuclear projects in Western countries, due to an IP issue with Westinghouse, on top of them paying royalties for any Asian projects that move forward. The good news is that if Westinghouse wins the projects, KHNP and KEPCO are prime sub-contractors anyway, so it is unclear how much of a functional difference this makes.

India’s energy mix is rapidly improving.

John Raymond Hanger: Good morning with good news: Solar and wind were 92% of India’s generation additions in 2022. It deployed as much solar in 2022 as the UK has ever built. Coal also was down 78%.

India’s large wind & solar additions are vital climate action. Wonderful!

David Bryan: Confusingly written. Coal in India is at 55%. Wind is at 10% & solar is at 12% – sometimes more, sometimes less.

A Zaugurz: mmmkay “India has an estimated 65.3 GW of proposed, on-grid coal capacity under active development: 30.4 GW under construction and 34.9 GW in pre-construction”

Stocks are different from flows are different from changes in flow.

India was still adding more coal capacity even as of December. But almost all of their new capacity was Solar and Wind, and they are clearly turning the corner on new additions. One still has to then make emissions go down, and then make net emissions drop below zero. One step at a time.

Also, 15% of the installed base is already not bad at all. Renewables are a big deal. A shame nuclear is only 2%.

Khavda in India, now the world’s largest renewable energy park using a combination of solar and wind energy.

Back in America, who is actually building the most solar?

Why, Texas, of course. California talks a good game, but what matters most (aside from sunlight where California has the edge) is not getting in the way.

EIAGov: More than half of the new utility-scale solar capacity scheduled to come online in 2024 is planned for three states: Texas (35%), California (10%) and Florida (6%).

Alec Stapp: Blue states talk a big game on clean energy goals while Texas just goes and builds it.

Texas is building grid-scale solar at a much faster rate than California.

Can’t be due to regulations — must be because CA is a small state with little sunshine 🙃

The numbers mean that despite being the state with at least the third most sunlight after Arizona and New Mexico, California is bringing online less solar per capita than the nation overall.
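
The back-of-the-envelope arithmetic behind that claim, using the EIA shares quoted above and rough 2023 population shares (the population figures are my own approximations, not from the post):

```python
# Rough per-capita comparison of new utility-scale solar planned for 2024.
# Solar shares are the EIA figures quoted above; population shares are
# approximate 2023 values (~30.5M Texans and ~39M Californians of ~335M people).
new_solar_share = {"Texas": 0.35, "California": 0.10}
population_share = {"Texas": 0.091, "California": 0.117}

for state in new_solar_share:
    ratio = new_solar_share[state] / population_share[state]
    print(f"{state}: {ratio:.1f}x the national per-capita rate of new solar")
# Texas: 3.8x the national per-capita rate of new solar
# California: 0.9x, i.e. slightly below the national average despite the sunshine.
```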

If you want to install home solar, it is going to get expensive in the sense that the cost of the panels themselves is now less than 10% of your all-in price.

Patrick Collison: Grid storage to grow 80% in 2024.

This is a great start, but still a drop in the bucket, as I understand it, compared to what we will need if we intend to largely rely on solar and wind in the future.

One enemy of transmission lines and other grid capabilities is NIMBYs who block projects. This includes the projects that never get proposed because of anticipation that they would then be blocked, or would require time and money to not be blocked.

Tyler Cowen reprints an anonymous email he got, that notes that there is also an incentive problem.

When you increase power transmission capacity, you make power fungible between areas. Which is good, unless you are in the power selling business, in which case this could mean more competition and less profit. By sticking to smaller local projects, you can both avoid scrutiny and mostly get the thing actually built, and also avoid competition.

That makes a lot of sense. It suggests we need to look at who is tasked with building new transmission lines, and who should be bearing the costs, including the need to struggle to make the plans and ensure they actually happen.

Why do we produce so little energy in America? Partly because it is so cheap.

Alex Tabarrok: The US has some of the lowest electricity prices in the world. Shown below are industrial retail electricity prices in EU27, USA, UK, China and Japan. Electricity is critical for AI compute, electric cars and more generally reducing carbon footprints. The US needs to build much more electricity infrastructure, by some estimates tripling or quadrupling production. That’s quite possible with deregulation and permitting reform. I am pleased to learn, moreover, that we are starting from a better base than I had imagined.

Amazing how much prices elsewhere have risen lately, and how timid has been everyone’s response.

Harvard was going to do something useful and run a geoengineering experiment. They cancelled it, because of course they did. And their justifications were, well…

James Temple (MIT Technology Review): Proponents of solar geoengineering research argue we should investigate the concept because it may significantly reduce the dangers of climate change. Further research could help scientists better understand the potential benefits, risks and tradeoffs between various approaches. 

But critics argue that even studying the possibility of solar geoengineering eases the societal pressure to cut greenhouse gas emissions.

Maxwell Tabarrok: The moral hazard argument against geoengineering is ridiculous. The central problem of climate change is that firms ignore the cost of carbon emissions.

Since these costs are already ignored, decreasing them will not change their actions, but it will save lives.

It is difficult to grasp how horrible this reasoning actually is. I can’t even. Imagine this principle extended to every other bad thing.

Yes, actually implementing such solutions comes with a lot of costs and dangers. That makes it seem like a good idea to learn now what those are via experiments? Better to find out now than to wait until the crisis gets sufficiently acute that people or nations get desperate?

The alternative hypothesis is that many people who claim to care about the climate crisis are remarkably uninterested in the average temperatures in the world not going up. We have a lot of evidence for this hypothesis.

It goes like this.

Chris Elmendorf: A $650m project would:

– subtract 20 acres from wildlife refuge

– add 35 acres to same refuge

– connect 160 renewable energy projects to grid

Not with NEPA + local enviros standing in the way. Even after “years” of enviro study.

Kevin Stevens: An environmental group successfully blocked the last miles of a nearly complete 102 mile transmission line that would connect 160 renewable sites to the Midwest. Brutal.

I mean it’s completely insane that we would let 20 acres stop this at all, the cost/benefit is so obviously off the charts even purely for the environmental impacts alone. But also they are adding 35 other acres. At some point, you have to wonder why you are negotiating with people who are never willing to take any deal at all.

The answer is, you are forced to ‘negotiate,’ they pretend to do so back, you give them concessions like the above, and then they turn around and keep suing, with each step adding years of delay. The result is known as a ‘doom loop.’

Clean energy projects are the very projects most likely to get stuck in the litigation doom loop. A recent Stanford study found that clean energy projects are disproportionately subject to the strictest level of review. These reviews are also litigated at higher rates — 62% of the projects currently pending the strictest review are clean energy projects. The best emissions modelers show that our emissions reductions goals are not possible without permitting reform.

That is why we’re proposing a time limit on injunctions. Under our proposal, after four years of litigation and review, courts could no longer prevent a project from beginning construction. This solution would pair nicely with the two-year deadlines imposed on agencies to finish review in the Fiscal Responsibility Act. If the courts believe more environmental review is necessary, they could order the government to perform it, but they could no longer paralyze new energy infrastructure construction.

This kills projects, and not the ones you want to kill. I am actually surprised the graph here lists rates that are this low.

If we are not going to do any other modifications, a time limit on court challenges seems like the very least we can do. My preferred solution is to change the structure entirely.

The good news is that some actions are exempt. But the exemptions are illustrative.

Thomas Hochman: Perhaps the funniest categorical exclusion under NEPA is the one that allows the Department of the Interior to make an arrest without filling out an environmental assessment.

Alec Stapp: When everything qualifies as a “major federal action” under NEPA, you get absurd outcomes like this where agencies have to waste time creating categorical exclusions for every little thing.

This is how state capacity withers and dies.

So in practice, what does NEPA look like?

Congestion Pricing in NYC was a case in point before Hochul betrayed us.

It looks like this, seriously, read how the UFT itself made its claims.

United Federation of Teachers: In our lawsuit, we assert that this program, scheduled to go into effect this spring, cannot be put in place without the completion of a thorough environmental impact statement that includes the potential effects of the plan on the city’s air quality.

The current plan would not eliminate air and noise pollution or traffic, but would simply shift that pollution and traffic to the surrounding areas, particularly Staten Island, the Bronx, upper Manhattan and Northern New Jersey, causing greater environmental injustice in our city.

[Copy of lawsuit here.]

Emmett Shear: This NYC teacher’s union is suing to stop congestion pricing by using a claim that it will somehow have a negative impact on the environment when fewer people drive into the city. Truly extraordinary.

Joey Politano: “Teachers Union Sues NYC Over Congestion Pricing Proposal’s Lack of Thorough Environmental Review” would almost be too on the nose for an Onion headline about the problems with American transit & environmental policy, and yet here we are.

Alec Stapp: NYC teachers union claims the environmental review for congestion pricing wasn’t thorough enough. Actual photo of the 4,000-page environmental review:

Alec Stapp: Reminder that congestion pricing was passed by the democratically-elected state legislature in 2019. Vetocracy is bad.

That’s right. Reducing the use of cars via congestion pricing has been insufficiently studied in case it causes air pollution in other areas, and would cause ‘injustice.’ And the 4,000-page review pictured above apparently does not count as taking review seriously; it’s not enough.

It is amazing to me we put up with such nonsense.

Alternatively, it looks similar to this, technically the National Historic Preservation Act:

AP: Tribes, environmental groups ask US court to block $10 billion energy transmission project in Arizona.

Alec Stapp: The biggest clean energy project in the country is being sued by environmental groups.

This outdated version of “environmentalism” needs to die.

It’s time to build, not block.

The project is being sued under the National Historic Preservation Act. The NHPA is possibly the second most abused law in this space (the first being NEPA).

This is the last thing you see before your clean energy project gets sued into oblivion.

Same group sues to block geothermal project [in Nevada.]

Here we have a lithium mine and a geothermal project in California, and conservation groups once again are suing.

E&E News: Environmental groups on Thursday sued officials who signed off on a lithium project in the Salton Sea that a top Biden official has helped advance.

Comité Civico del Valle and Earthworks filed the legal complaint in Imperial County Superior Court against county officials who approved conditional permits for Controlled Thermal Resources’ Hell’s Kitchen lithium and geothermal project.

The groups argue that the county’s approval of the direct lithium extraction and geothermal brine project near the southeastern shore of the Salton Sea violates county and state laws, such as the California Environmental Quality Act.

Alec Stapp: Conservation groups suing to stop a lithium and geothermal project in California. Yet another example of conservation groups at direct odds with climate goals. Clean energy deployment requires building stuff in the real world, full stop.

Armand Domalewski: so so so many environmental groups are just climate arsonists

And by rule of three, the kicker:

Thomas Hochman: This is the most classic NEPA story of all time: The US Forest Service wanted to implement a wildfire prevention plan, so it had to fill out an environmental impact statement. Before they could complete the environmental impact statement, though, half the forest burned down.

Scott Lincicome: 10/10. no notes. A little googling here reveals the kicker: the appellant apparently filed the appeal/complaint to protect the forest (a goshawk habitat)… that subsequently burned down bc of her appeal/complaint.

CEQA is like NEPA, only it is by California, and it is even worse.

Dan Federman: It breaks my brain that NIMBYs have succeeded in blocking coastal wind farms that aren’t visible from shore, but yet Santa Barbara somehow has oil rigs visible from its gorgeous beaches 🤯

Max Dubler: You have to understand that California environmental law is chiefly concerned with *preserving the environment that existed in 1972*, not protecting nature. For example, oil companies sued under environmental law to block LA’s ban on oil drilling.

Alex Armlovich: According to CEQA, the California Environment of 1970 Quality Act, removing the oil derricks for renewables would impact the visual & cultural resources of this historic beach drilling site

Years of study & litigation needed to protect our heritage drilling environment 🛢️👨‍🏭⛽

Here is one CEQA issue. This also points out that you can write in all the exemptions you want, and none of that will matter unless those in charge actually use them.

Alec Stapp: Environmental review is now holding up bus shelters by six months. Literally can’t even build the smallest physical infrastructure quickly.

Chris Elmendorf: Why is LA’s transit agency cowering before NIMBYs rather than invoking the new @Scott_Wiener-authored CEQA exemption for transit improvements?

Bus stops certainly would seem to meet SB 922’s definition of “transit prioritization project,” which includes “transit stop access and safety improvement.”

But instead of invoking the exemption, the city prepared a CEQA “negative declaration,” which is the most legally vulnerable kind of CEQA document.

It looks like city’s neg dec was made just months prior to effective date of SB 922. So what? City could have approved an exemption too as soon as SB 922 took effect.

Or city could approve it tomorrow.

Rather than putting bus shelters on hold just b/c a lawsuit was filed.

Halting transit projects just b/c a lawsuit was filed seems especially dumb at the present moment, when Leg has made clear it wants these projects streamlined and elite/journalist opinion has turned against CEQA abuse.

If a court dared to enjoin the project, there’d be uproar & Leg would probably respond by strengthening the transit exemption.

Just look at what the NIMBYs “won” by stopping 500 apartments on a valet parking lot in SF (AB 1633), or student housing in Berkeley (AB 1307).

Is this just a case of bureaucratic risk aversion (@pahlkadot) or autopiloting of dumb processes? Is there an actual problem with SB 922 that makes it unusable for ordinary LA bus stops?

Curious to hear from anyone who knows.

My presumption is it is basically autopiloting, that the people who realize it is dumb do not have the reach to the places where people don’t care. It is all, of course, madness.

The good news is that the recent CEQA ruling says that it should no longer give the ‘fullest possible protection’ to everything, so things should get somewhat better.

I wish this number were slightly higher for effect, but still, seriously:

R Street: 49% of CEQA lawsuits are against environmentally advantageous projects!

Somehow, rather than struggling to improve the situation, many Democrats seem to strive to make the inability to do things even worse.

For example, we have this thread from January detailing the proposed Clean Electricity Transmission Acceleration Act. Here are some highlights of an alternative even worse future, where anyone attempting to do anything is subject to arbitrary hold up for ransom, and also has to compensate any losers of any kind, including social and economic costs, and destroying any limitations on scope of issues. The bill even spends billions to fund these extractive oppositional efforts directly.

Chris Elmendorf: The bill defines “enviro impact” to include not only enviro impacts, but also “aesthetic, historic, cultural, economic, social, or health” effects. (Whereas CEQA is still about “physical environment”–even in the infamous Berkeley case.)

The bill creates utterly open-ended authority for fed. agencies to demand a “community benefit agreement” as price of any permit for which an EIS was prepared. This converts NEPA from procedural statute into grant of substantive reg / exaction authority.

In exercising the “community benefit agreement” authority, what is a federal agency supposed to consider? Consideration #1 is the deepness of the permit-applicant’s pocket. Seriously.

And in case the new, expansive definition of “enviro impact” wasn’t clear enough, the bill adds that CBAs may be imposed to offset any *social or economic* (as well as enviro) impacts of the project.

The bill would also destroy the caselaw that limits scope of enviro review to scope of agency’s regulatory discretion, not only via the CBA provision but also by expressly requiring analysis of effects “not within control of any federal agency.”

And the bill would send a torrent of federal dollars into the coffers of groups who’d exploit NEPA for labor or other side hustles – there’s $3 billion of “community engagement” grants to arm nonprofits & others.

And in case NEPA turned up to 11 isn’t enough, there’s also a new, judicially enforceable mandate for “community impact reports” if a project may affect an “environmental justice community.”

There’s also a wild provision that seems to prevent federal agencies from considering any project alternatives in an EIS unless (a) the alternative would have no adverse impact on any “overburdened community,” or (b) it serves a compelling interest *in that community.*

One more observation: the bill subtly nudges NEPA toward super-statute status by directing conflicts b/t NEPA “and any other provision of law” to be resolved in favor of NEPA.

Or we could have black-clad anarchists storming electric vehicle factories, as happened at Tesla’s plant in Berlin. Although we do have ‘Georgia greens’ suing over approval of an EV plant there.

It turns out everyone basically let this mess happen because Congress wanted to get home for Christmas? No one understood what they were doing?

This seems like it should be publicized more, as part of the justification for killing this requirement outright, and finding a better way to accomplish the same thing. It is amazing how often the worst laws have origin stories like this.

Patrick McKenzie: Sometimes we spend a trillion dollars because not spending a trillion dollars would require an exhausting amount of discussions and it is almost Christmas.

Please accept a trillion dollars as a handwavy gesture in the direction of the impact of NEPA; my true estimate if I gave myself a few hours to think would probably be higher.

I know everyone says that once you pass a regulation it is almost impossible to remove. But what if… we… did it anyway?

It is good that these exclusions are available. It is rather troublesome that they are so necessary?

Nicholas Bagley: A number of federal agencies have categorical exclusions from NEPA for … picnics.

If you need a special exception to make the lawyers comfortable with picnics, maybe you’ve gone too far?

“29. Approval of recreational activities (such as Coast Guard unit picnic) which do not involve significant physical alteration of the environment, increase disturbance by humans of sensitive natural habitats, or disturbance of historic properties, and which do not occur in, or adjacent to, areas inhabited by threatened or endangered species.”

I mean, modest proposal time, perhaps?

If your physical activity:

  1. Does not significantly physically alter the environment.

  2. Does not disturb sensitive natural habitats.

  3. Does not disturb historic properties.

  4. Does not occur in or adjacent to areas inhabited by threatened or endangered species.

Or, actually, how about if your physical activity:

  1. Does not significantly physically alter the environment.

Then why are we not done? What is there we need to know, that this does not imply?

Shouldn’t we be able to declare this in a common sense way, and then get sued in court if it turns out we were lying or wrong, with penalties and costs imposed if someone sues in profoundly silly fashion, such as over a picnic?

The good news: We are getting some new ones.

Alec Stapp: Huge permitting reform news:

The Bureau of Land Management is giving geothermal energy exploration a categorical exclusion from environmental review under NEPA.

If you care about clean energy abundance, this is a massive win.

Arnab Datta: ICYMI – great news, BLM is adopting categorical exclusions to streamline permitting for geothermal exploration.

What’s the upshot? Exploration for geothermal resources should be a little bit easier.

As a result of the FRA (passed last year), agencies can now more easily adopt the categorical exclusions of other agencies. That’s what BLM is doing, adopting the CXs from the Navy and USFS.

Ex: Here’s the Navy CX. Applications to BLM for geophysical surveys will be easier.

Why is this important? BLM (and the federal government writ-large) owns a LOT of land, particularly in the Mountain West where heat resources are strongest, most ripe for geothermal production.

We previously recommended that BLM expand its CXs for geothermal exploration. This is a great first step, but there’s more to do.

Patrick McKenzie: I’ve been doing some work with a geothermal non-profit, and my inexpert understanding is that while first-of-their-kind projects are the immediate blocker, NEPA lawsuits were a major worry with expanding rollout to blue states after proof of concepts get accomplished and tweaked.

The (without loss of generality) Californias of the world are huge energy consumers, cannot simply import electricity from (without loss of generality) Texas (though you can tweak that assumption a tiny bit on margins), and local organized political opposition is a real factor.

If you’re curious as to why geothermal is likely to be a much larger part of U.S. and world energy mixes than you model currently, see this.

Short version: fracking makes it viable in many more places than it is currently.

There is a lot more to do on the exclusion front. It seems like obvious low-hanging fruit to exclude as many green projects as possible. Yes, this suggests the laws are bad and should be replaced entirely, but until then we work with the system we have.

Alec Stapp: Other federal agencies should start thinking about how to use categorical exclusions from NEPA environmental review to make it easier to build in the US.

Here’s some low-hanging fruit:

@HUDgov should update its categorical exclusion to cover office-to-residential conversions.

That seems like it should fall under ‘wait why do we even need an exclusion again?’

And that’s not all.

Alec Stapp: Good news on permitting reform!

The Department of Energy is giving a categorical exclusion from NEPA environmental review to:

– transmission projects that use existing rights of way

– solar projects on disturbed lands

– energy storage projects on disturbed lands

Sam Drolet: This is huge. It’s good to see agencies starting to use categorical exclusions in a sensible way to streamline permitting.

Christian Fong: A lot of great rules coming out right now from the Biden admin, but one that has gone under the radar is on NEPA reforms from the DOE! Specifically, expanding the list of projects that qualify for categorical exclusions, which can speed up NEPA reviews from 2 years to 2 months!

…

For solar, CXes were initially granted only if projects were built in a previously disturbed/developed land and were under 10 acres in size. This rule has removed the acreage limit, so that even projects 1000+ acres in size can still qualify if on previously disturbed lands.

A new CX was established for storage, with similar qualifications around previously disturbed/developed land, as well as the ability for projects to use a small bit of contiguous undisturbed land, as storage may be colocated with existing energy/tx/industry infrastructure.

Given full NEPA EISes can take 2 years, and new tx lines can take 10+ years to build, these rules are particularly important for improving tx capacity through reconductoring, GETs, etc. DOE just released its liftoff report on this topic here.

A new paper suggested a ‘green bargain’ could be struck on permitting reform, one that is a win for everyone. It misunderstands what people are trying to win.

Zachary Liscow: NEW PAPER: “Getting Infrastructure Built: The Law and Economics of Permitting,” on:

– What to consider in design of permitting rules

– The evidence

– A possible “green bargain” that benefits efficiency, the environment, & democracy

Infrastructure is often slowed by permitting rules. One example is NYC congestion pricing, which was passed by the legislature in 2019, had a 4,000-page environmental assessment, and is now subject to 5 lawsuits.

But how can we speed up permitting and make infrastructure less expensive, while still protecting the environment and promoting democratic participation?

Environmental permitting might be part of why infrastructure is so expensive in the US. Urban transit costs about 3x the rich/middle-income country average and 6x some European countries.

At the same time, US environmental outcomes aren’t particularly good. Based on the Yale Center for Environmental Law & Policy’s Environmental Performance Index, the US (at 51, just the 25th percentile) is considerably worse than the OECD average (at 58).

So what to do? I have a framework w/ 2 dimensions. 1: Improve the capacity of the executive to decide – for example, by limiting the power of litigation to delay. 2: Improve the capacity to plan, including by adding broad-based participation. Currently the US is weak along both.

I propose a “green bargain” that strengthens both executive power and capacity, empowering the executive to decide, but coupling that w/ increased capacity to plan, especially in ways that promote broad-based participation.

Can we create a win-win-win for:

  1. Efficiency

  2. Democracy

  3. The Environment?

Yes, most certainly, in a big way. The current system is horribly inefficient in many ways that benefit neither democracy nor the environment; indeed, frequently this problem is harmful to both. If these are the stakeholders, then there are any number of reasonable ‘good governance’ plans one could use.

So what is the deal proposed? As far as I can tell it is this:

  1. Increase executive power over decisions.

  2. Raise standards required for judicial review and make court challenges harder in various ways – time limits, standing requirements, limits on later new objections, limits on challenges to negotiated agreements, more categorical exclusions.

  3. Limits on court injunctions to stop projects.

  4. Increase executive capacity on all levels of government so they can handle it.

  5. Improve quality and scope of executive reviews and enhance public participation.

Do I support all of these proposals on the margin? Absolutely. Most would be good individually, the rest make sense as part of the package.

Do I think that this should be convincing to a sincere environmentalist, that they should trust that this will lead to good outcomes? Alas, my answer is essentially no, if this was applied universally.

I do think this should be convincing if it is applied exclusively to green energy projects and complementary infrastructure. If the end goal is solar panels or batteries, and one believes there is a climate crisis, then one should have a strong presumption that this should dominate local concerns and that delays and cost overruns kill projects.

Here is the other core problem: Many obstructionists do not want better outcomes.

Or in other words:

If someone’s goal is to accomplish good things that make life better, such as reducing how much carbon is in the atmosphere or ensuring the air and water are clean, and is willing to engage in trade to make the world improve and not boil, but has different priorities and weightings and values than you have?

Then you can and should engage in trade, talk price. We can make a deal.

If someone’s goal is to stop development and efficiency because they believe development and efficiency are bad, either locally or globally? If they think humanity and civilization (or at least your civilization) are bad and want them to suffer and repent? Or consider every downside a sacred value that should veto any action?

If they actively do not want the problem solved because they want to use the problem as leverage to demand other things, and you are not a fan of those other things?

Then you are very much out of luck. There is no deal.

My expectation is that even if your deal is a clear win for people and the environment, in a way they can trust, you are going to get a lot of opposition from environmental groups anyway. Here, I worry that this proposal also does not give them sufficient reason to trust. Half the time the executive will be a Republican.

There is also this issue:

John Arnold: I used to think decarbonization was hard because voters prioritized the goals of the energy system in the following order:

  1. Affordable

  2. Reliable

  3. Secure

  4. Clean

But I missed one. The actual order of prioritization is:

  1. Jobs

  2. Affordable

  3. Reliable

  4. Secure

  5. Clean

That, however infuriating, is something we can work with. There is no inherent conflict between jobs and energy. Prioritizing jobs trades off with affordability, but we can talk price.

I have so had it with all the ‘yes this saves the Earth but think of the local butterfly species’ arguments, not quite literally this case but yeah, basically.

Alec Stapp: Very funny to me that the framing of this NYT article is sincerely like:

“What’s more important: Saving earth or satisfying the idiosyncratic preferences of a small handful of activists?”

That’s not a close call!

Act fast, this closes July 15: Introducing the Modernizing NEPA Challenge.

In alignment with ongoing efforts at DOT to improve the NEPA process, this Modernizing NEPA Challenge seeks:

  • To encourage project sponsors to publish documents associated with NEPA that increase accessibility and transparency for the public, reviewing agencies, and historically under-represented populations and

  • To incentivize project sponsors to implement collaborative, real-time agency reviews to save time and improve the quality of documents associated with NEPA.

More details at the link. The goal is to get collaborative tools and documents, and interactive documents, that make it easier to navigate the NEPA process.

Thomas Hochman: Almost every pro-NEPA argument can be traced back to two studies: Adelman’s “Permitting Reform’s False Choice” and Ruple’s “Measuring the NEPA Litigation Burden.”

Today on Green Tape, we take a closer look at both studies.

Note that the majority of the pie is green, as in clearly net good for the planet, even if you take the position that fossil fuels are always bad – and I’d argue the opposite, that anything replacing coal on the margin is obviously net good too.

Ruple’s study analyzes 1,499 federal court opinions involving NEPA challenges from 2001-2013. He comes up with two key findings:

  1. Only about 0.22% of NEPA decisions (1 in 450) face legal challenges

  2. Less than 1% of NEPA reviews are environmental impact statements (EISs), and about 5% of NEPA reviews are environmental assessments (EAs).

But in “Measuring the NEPA Litigation Burden,” Ruple makes the same error that he’s made throughout his work on permitting: he takes the average volume of litigation across all NEPA reviews and makes a conclusion about NEPA’s impact on infrastructure in particular. In other words, his denominator is wildly inflated.

Ruple’s dataset includes NEPA reviews at every level of stringency: categorical exclusions (CatExes), EAs, and EISs. And as Ruple himself points out, around 95% of NEPA reviews are CatExes. This is because NEPA is triggered by almost every federal action, and thus CatExes are required for everything from federal hiring to, yes, picnics.
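
To see how much the denominator matters, here is a quick illustrative calculation. The 0.22% and "less than 1% are EISs" figures are Ruple's, quoted above; the assumption that most suits target EIS-level projects is mine, for illustration only.

```python
# Illustrative arithmetic on the denominator problem. Averaging over all NEPA
# decisions (~95% categorical exclusions, which are almost never litigated)
# dilutes the apparent litigation rate for real projects.
total_decisions = 100_000
overall_litigation_rate = 0.0022     # "1 in 450" of all NEPA decisions, per Ruple
eis_share = 0.01                     # "less than 1% of NEPA reviews are EISs"

lawsuits = total_decisions * overall_litigation_rate   # 220 suits
eis_count = total_decisions * eis_share                # 1,000 EIS-level reviews

# If, say, 90% of those suits target EIS-level projects (an assumption, not Ruple's number):
eis_rate = (0.9 * lawsuits) / eis_count
print(f"Implied litigation rate among EIS-level projects: {eis_rate:.0%}")  # ~20%
```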

When you look at actual energy infrastructure projects, the findings are remarkable: solar, pipeline, wind, and transmission projects saw litigation rates of 64%, 50%, 38%, and 31% respectively. The cancellation rates for each of these project types were also extraordinarily high, ranging from 12% to 32%.

Barring a rebuttal I do not expect, that seems definitive to me.

What about the other study?

The basic flaw in Adelman’s analysis is that he sees the low percentage of renewable projects that undergo NEPA as evidence that NEPA isn’t a big deal. In reality, the exact opposite is true.

As in, NEPA is so obnoxious that where there would be NEPA issues, the projects never even get proposed. We only get renewable projects, mostly, where they have sufficient protections from this. Again, this seems definitive to me.

My grand solution to NEPA would be to repeal the paperwork and impact statement requirements, and replace them with a requirement for cost-benefit analysis. That is a complex proposal that I am confident would work if done properly, but which I agree is tricky.
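
For concreteness, a toy sketch of what such a gate could look like, with a made-up threshold and a simple majority vote standing in for whatever the real design would be; this is purely illustrative, not the actual proposal.

```python
# Toy illustration of a cost-benefit-plus-stakeholder-vote gate replacing
# procedural review. The decision rule and inputs here are placeholders.
def approve_project(benefits: float, costs: float,
                    votes_for: int, votes_against: int) -> bool:
    net_positive = benefits > costs          # ask whether the project is worthwhile
    vote_passes = votes_for > votes_against  # simple majority as a stand-in rule
    return net_positive and vote_passes

# Example: $500M of benefits, $120M of costs, stakeholders vote 7-3 in favor.
print(approve_project(500e6, 120e6, votes_for=7, votes_against=3))  # True
```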

The grander, simpler solution is repeal NEPA first and ask questions later. At this point, I think that’s the play.

A solution in between those two would perhaps be to change the remedy for failure, so that any little lapse does not stop an entire project.

This is another approach to the fundamental problem of sacred values versus cost-benefit.

Right now, we are essentially saying that a wide variety of potential harms are sacred values, that we would not compromise at any price, such that if there is any danger that they might be compromised then that is a full prohibition.

But of course that is crazy. With notably rare exceptions, that is not how most anything should ever work.

Thus, an alternative solution is to keep all the requirements in place, and allow all the lawsuits to proceed.

But we change the remedy from injunctions to damages.

As in, suppose a group sues you, and says that your project might violate some statute or do harm in some way. Okay, fine. They file that claim, it is now established. You can choose to wait until the claim is resolved, if the claim actually is big enough and plausible enough that you are worried they might win.

Or, you can convince an insurance company to post a bond for you, covering the potential damages (and let’s say you can get dinged for double or triple the actual harms, more if you ‘did it on purpose’ and knew you were breaking the rules, in some sense, or something). So you can choose to do the project anyway, without a delay, and if it turns out you messed up or broke the rules and the bill comes due, then you have to pay that bill. And since it is a multiplier, everyone is still ahead.
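
With made-up numbers, the arithmetic of why everyone is still ahead looks like this (all dollar figures and the treble-damages multiplier are illustrative assumptions, not part of any formal proposal):

```python
# Toy numbers for damages-plus-bond instead of an injunction.
project_net_value = 50_000_000   # value of building now rather than never (illustrative)
claimed_harm      = 1_000_000    # harm alleged in the lawsuit (illustrative)
multiplier        = 3            # e.g. treble damages if the plaintiffs prevail

bond = claimed_harm * multiplier                     # $3M posted via an insurer

developer_worst_case = project_net_value - bond      # still +$47M even if they lose
plaintiff_recovery   = bond                          # $3M recovered for $1M of harm

print(developer_worst_case, plaintiff_recovery)      # 47000000 3000000
```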

NEPA, Permitting and Energy Roundup #2 Read More »

Not a game: Cards Against Humanity avoids tariffs by ditching rules, adding explanations

Cards Against Humanity, the often-vulgar card game, has launched a limited edition of its namesake product without any instructions and with a detailed explanation of each joke, “why it’s funny, and any relevant social, political, or historical context.”

Why? Because, produced in this form, “Cards Against Humanity Explains the Joke” is not a game at all, which would be subject to tariffs as the cards are produced overseas. Instead, the product is “informational material” and thus not sanctionable under the law Trump has been using—and CAH says it has obtained a ruling to this effect from Customs and Border Protection.

“What if DHS Secretary and Dog Murderer Kristi Noem gets mad and decides that Cards Against Humanity Explains the Joke is not informational material?” the company asks in an FAQ about the new edition. (If you don’t follow US politics, Noem really did kill her dog Cricket.) Answer: “She can fuck right off, because we got a binding ruling from Trump’s own government that confirms this product is informational and 100% exempt from his stupid tariffs.”

Pre-orders for the $25 product end on October 15, and it will allegedly never be reprinted. All profits will be donated to the American Library Association “to fight censorship.”

This is the way

Now, I would never claim that Cards Against Humanity is a particularly highbrow form of entertainment; for instance, the website promoting the new edition opens with “Trump is Going to Fuck Christmas” in giant white letters. (That headline refers to Trump’s tariffs… I hope.)

“This holiday season, give your loved ones the gift of knowledge, give America’s libraries the gift of cash, and don’t give Donald Trump a fucking cent,” the site says.

Not a game: Cards Against Humanity avoids tariffs by ditching rules, adding explanations Read More »

it’s-prime-day-2025-part-two,-and-here-are-the-best-deals-we-could-find

It’s Prime Day 2025 part two, and here are the best deals we could find

We’ve got deals on keyboards, laptops, accessories, and all kinds of stuff!

Optimus Prime, the patron saint of Prime Day, observed in midtown Manhattan in June 2023. Credit: Raymond Hall / Getty Images

Camera deals

Laptop deals

Keyboards and mice

Monitors

Android phones

Indoor security cameras

Outdoor security cameras

Smart locks

TV and streaming devices

Soundbars and speakers

Fitness trackers

Robot vacuums

Cordless vacuum deals

Gaming

Ars Technica may earn compensation for sales from links on this post through affiliate programs.

It’s Prime Day 2025 part two, and here are the best deals we could find Read More »

openai-wants-to-make-chatgpt-into-a-universal-app-frontend

OpenAI wants to make ChatGPT into a universal app frontend

While Altman mentioned an “agentic commerce protocol” that will allow app users to enjoy “instant checkout” from within ChatGPT, he later clarified that details on monetization will only be available “soon.”

A full list of third-party apps that will be integrated into ChatGPT in the coming weeks. Credit: OpenAI

In addition to the apps mentioned above, others like Expedia and Booking.com will be available in ChatGPT starting today. Apps from other launch partners including Peloton, Target, Uber, and DoorDash will be available inside ChatGPT “in the weeks ahead.”

Other developers can start building apps with the SDK today before submitting them to OpenAI for review and publication within ChatGPT “later this year.” Altman said that apps that meet a certain set of “developer guidelines” will be listed in a comprehensive directory, while those meeting “higher standards for design and functionality will be featured more prominently.”

AgentKit and API updates

Elsewhere in the keynote, Altman announced AgentKit, a new tool designed to let OpenAI users create specialized interactive chatbots using a simplified building-block GUI. The new software includes integrated tools for measuring performance and testing workflows from within the ChatKit interface.

In a live demo, OpenAI platform experience specialist Christina Huang gave herself an eight-minute deadline to use AgentKit to create a live, customized question-answering “Ask Froge” chatbot for the Dev Day website. While that demo was done with time to spare, Huang did make use of a lot of pre-built “widgets” and documents full of prepopulated information about the event to streamline the chatbot’s creation.

OpenAI’s Dev Day keynote in full.

The keynote also announced minor updates for OpenAI’s Codex coding agent, including integration with Slack and a new SDK to allow for easier integration into existing coding workflows. Altman also announced that some recent models would be newly available to users via API, including Sora 2, GPT-5 Pro, and a new smaller, cheaper version of the company’s real-time audio interface.

OpenAI wants to make ChatGPT into a universal app frontend Read More »

deloitte-will-refund-australian-government-for-ai-hallucination-filled-report

Deloitte will refund Australian government for AI hallucination-filled report

The Australian Financial Review reports that Deloitte Australia will offer the Australian government a partial refund for a report that was littered with AI-hallucinated quotes and references to nonexistent research.

Deloitte’s “Targeted Compliance Framework Assurance Review” was finalized in July and published by Australia’s Department of Employment and Workplace Relations (DEWR) in August (Internet Archive version of the original). The report, which cost Australian taxpayers nearly $440,000 AUD (about $290,000 USD), focuses on the technical framework the government uses to automate penalties under the country’s welfare system.

Shortly after the report was published, though, Sydney University Deputy Director of Health Law Chris Rudge noticed citations to multiple papers and publications that did not exist. That included multiple references to nonexistent reports by Lisa Burton Crawford, a real professor at the University of Sydney law school.

“It is concerning to see research attributed to me in this way,” Crawford told the AFR in August. “I would like to see an explanation from Deloitte as to how the citations were generated.”

“A small number of corrections”

Deloitte and the DEWR buried that explanation in an updated version of the original report published Friday “to address a small number of corrections to references and footnotes,” according to the DEWR website. On page 58 of that 273-page updated report, Deloitte added a reference to “a generative AI large language model (Azure OpenAI GPT-4o) based tool chain” that was used as part of the technical workstream to help “[assess] whether system code state can be mapped to business requirements and compliance needs.”

Deloitte will refund Australian government for AI hallucination-filled report Read More »

here’s-the-real-reason-endurance-sank

Here’s the real reason Endurance sank


The ship wasn’t designed to withstand the powerful ice compression forces—and Shackleton knew it.

The Endurance, frozen and keeled over in the ice of the Weddell Sea. Credit: BF/Frank Hurley

In 1915, intrepid British explorer Sir Ernest Shackleton and his crew were stranded for months in the Antarctic after their ship, Endurance, was trapped by pack ice, eventually sinking into the freezing depths of the Weddell Sea. Miraculously, the entire crew survived. The prevailing popular narrative surrounding the famous voyage features two key assumptions: that Endurance was the strongest polar ship of its time, and that the ship ultimately sank after ice tore away the rudder.

However, a fresh analysis reveals that Endurance would have sunk even with an intact rudder; it was crushed by the cumulative compressive forces of the Antarctic ice with no single cause for the sinking. Furthermore, the ship wasn’t designed to withstand those forces, and Shackleton was likely well aware of that fact, according to a new paper published in the journal Polar Record. Yet he chose to embark on the risky voyage anyway.

Author Jukka Tuhkuri of Aalto University is a polar explorer and one of the leading researchers on ice worldwide. He was among the scientists on the Endurance22 mission that discovered the Endurance shipwreck in 2022, documented in a 2024 National Geographic documentary. The ship was in pristine condition partly because of the lack of wood-eating microbes in those waters. In fact, the Endurance22 expedition’s exploration director, Mensun Bound, told The New York Times at the time that the shipwreck was the finest example he had ever seen; Endurance was “in a brilliant state of preservation.”

As previously reported, Endurance set sail from Plymouth on August 6, 1914, with Shackleton joining his crew in Buenos Aires, Argentina. By the time they reached the Weddell Sea in January 1915, accumulating pack ice and strong gales slowed progress to a crawl. Endurance became completely icebound on January 24, and by mid-February, Shackleton ordered the boilers to be shut off so that the ship would drift with the ice until the weather warmed sufficiently for the pack to break up. It would be a long wait. For 10 months, the crew endured the freezing conditions. In August, ice floes pressed into the ship with such force that the ship’s decks buckled.

The ship’s structure nonetheless remained intact, but by October 25, Shackleton realized Endurance was doomed. He and his men opted to camp out on the ice some two miles (3.2 km) away, taking as many supplies as they could with them. Compacted ice and snow continued to fill the ship until a pressure wave hit on November 13, crushing the bow and splitting the main mast—all of which was captured on camera by crew photographer Frank Hurley. Another pressure wave hit in the late afternoon on November 21, lifting the ship’s stern. The ice floes parted just long enough for Endurance to finally sink into the ocean before closing again to erase any trace of the wreckage.

Once the wreck had been found, the team recorded as much as they could with high-resolution cameras and other instruments. Elizabeth Chai Vasarhelyi, who co-directed the National Geographic documentary, particularly noted the technical challenge of deploying a remote digital 4K camera with lighting at 9,800 feet underwater, and the first deployment at that depth of photogrammetric and laser technology. This resulted in a millimeter-scale digital reconstruction of the entire shipwreck to enable close study of the finer details.

Challenging the narrative

The ice and wave tank at Aalto University. Credit: Aalto University

It was shortly after the Endurance22 mission found the shipwreck that Tuhkuri realized that there had never been a thorough structural analysis conducted of the vessel to confirm the popular narrative. Was Endurance truly the strongest polar ship of that time, and was a broken rudder the actual cause of the sinking? He set about conducting his own investigation to find out, analyzing Shackleton’s diaries and personal correspondence, as well as the diaries and correspondence of several Endurance crew members.

Tuhkuri also conducted a naval architectural analysis of the vessel under the conditions of compressive ice, which had never been done before. He then compared those results with the underwater images of the Endurance shipwreck. He also looked at comparable wooden polar expedition ships and steel icebreakers built in the late 1800s and early 1900s.

Endurance was originally named Polaris; Shackleton renamed it when he purchased the ship in 1914 for his doomed expedition. Per Tuhkuri, the ship had a lower (tween) deck, a main deck, and a short bridge deck above them that stopped at the machine room in order to make space for the steam engine and boiler. There were no beams in the machine room area, nor any reinforcing diagonal beams, which weakened this significant part of the ship’s hull.

This is because Endurance was originally built for polar tourism and for hunting polar bears and walruses in the Arctic; at the ice edge, ships only needed sufficiently strong planking and frames to withstand the occasional collision from ice floes. However, “In pack ice conditions, where compression from the ice needs to be taken into account, deck beams become of key importance,” Tuhkuri wrote. “It is the deck beams that keep the two ship sides apart and maintain the shape of a ship. Without strong enough deck beams, a vessel gets crushed by compressive ice, more or less irrespective of the thickness of planking and frames.”
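
Tuhkuri’s point about deck beams maps onto textbook column mechanics. As a rough illustration of my own, not a calculation from the paper: a transverse member resisting ice compression behaves roughly like an axially loaded column, and Euler’s formula says its load capacity falls off with the square of its unsupported length, so a hull section with no deck beams at all, like Endurance’s machine room, gives the ice a far easier target.

```python
import math

def euler_critical_load(E, I, L):
    """Euler buckling load for a pinned-pinned column: P_cr = pi^2 * E * I / L^2."""
    return math.pi ** 2 * E * I / L ** 2

# Rough, made-up numbers for a square oak member (not measurements of Endurance):
E = 11e9            # Young's modulus of oak, roughly 11 GPa
I = 0.3 ** 4 / 12   # second moment of area of a 30 cm square section, in m^4

for span_m in (3.0, 6.0, 9.0):
    load_mn = euler_critical_load(E, I, span_m) / 1e6
    print(f"{span_m:4.1f} m unsupported span -> critical load ~ {load_mn:.1f} MN")
```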

The Endurance was nonetheless sturdy enough to withstand five serious ice compression events before her final sinking. On April 4, 1915, one of the scientists on board reported hearing loud rumbling noises from a 3-meter-high ice ridge that formed near the ship, causing the ship to vibrate. Tuhkuri believes this was due to a “compressive failure process” as ice crushed against the hull. On July 14, a violent snowstorm hit, and crew members could hear the ice breaking beneath the ship. The ice ridges that formed over the next few days were sufficiently concerning that Shackleton instituted four-hour watches on deck and insisted on having everything packed in case they had to abandon ship.

Crushed by the ice

Idealized cross sections of early Antarctic ships. Endurance was type (a); Fram and Deutschland were type (b). Credit: J. Tuhkuri, 2025

On August 1, an ice floe fractured and grinding noises were heard beneath the ship as the floe piled underneath it, lifting Endurance and causing her to first heel starboard and then heel to port, as several deck beams began to buckle. Similar compression events kept happening until there was a sudden escalation on September 30. The hull began vibrating hard enough to shake the whole rigging as even more ice crushed against the hull. Even the linoleum on the floors buckled; Harry McNish wrote in his diary that it looked like Endurance “was going to pieces.”

Yet another ice compression event occurred on October 17, pushing the vessel one meter into the air as the iron plates on the engine room’s floor buckled and slid over each other. Ship scientist Reginald James wrote that “for a time things were not good as the pressure was mostly along the region of the engine room where there are no beams of any strength,” while Captain Worsley described the engine room as “the weakest part of the ship.”

By the afternoon, Endurance was heeled almost 30 degrees to port, so much so that the keel was visible from the starboard side, per Tuhkuri, although the ice started to fracture in the evening so that the ship could shift upright again. The crew finally abandoned ship on October 27, after an even more severe compression event a few days earlier. Endurance sank below the ice on November 21.

Tuhkuri’s analysis of the structural damage to Endurance revealed that the rudder and the stern post were indeed torn off, confirmed by crew correspondence and diaries and by the underwater images taken of the wreck. The keel was also ripped off, with McNish noting in his diary that the ship broke into two halves as a result. The underwater images are less clear on this point, but Tuhkuri writes that there is something “some distance forward from the rudder, on the port side” that “could be the end of a displaced part of the keel sticking up from under the ship.”

All the diaries mentioned the buckling and breaking of deck beams, and there was much structural damage to the ship’s sides; for instance, Worsley writes of “great spikes of ice… forcing their way through the ship’s sides.” There are no visible holes in the wreck’s sides in the underwater images, but Tuhkuri posits that the damage is likely buried in the mud on the sea bed, given that by late October, Endurance “was heavily listed and the bottom was exposed.”

Jukka Tuhkuri on the ice. Credit: Aalto University

Based on his analysis, Tuhkuri concluded that the rudder wasn’t the sole or primary reason for the ship’s sinking. “Endurance would have sunk even if it did not have a rudder at all,” Tuhkuri wrote; it was crushed by the ice, with no single reason for its eventual sinking. Shackleton himself described the process as ice floes “simply annihilating the ship.”

Perhaps the most surprising finding is that Shackleton knew of Endurance’s structural shortcomings even before undertaking the voyage. Per Tuhkuri, the devastating effects of compressive ice on ships were known to shipbuilders in the early 1900s. An early Swedish expedition was forced to abandon its ship Antarctic in February 1903 when it became trapped in the ice. Things progressed much as they later would with Endurance: the ice lifted Antarctic up so that the ship heeled over, with ice-crushed sides, buckling beams, broken planking, and a damaged rudder and stern post. The final sinking occurred when an advancing ice floe ripped off the keel.

Shackleton knew of Antarctic‘s fate and had even been involved in the rescue operation. He also helped Wilhelm Filchner make final preparations for Filchner’s 1911–1913 polar expedition with a ship named Deutschland; he even advised his colleague to strengthen the ship’s hull by adding diagonal beams, the better to withstand the Weddell Sea ice. Filchner did so, and as a result, Deutschland survived eight months of being trapped in compressive ice until the ship was finally able to break free and sail home. (It took a torpedo attack in 1917 to sink the good ship Deutschland.)

The same shipyard that modified Deutschland had also just signed a contract to build Endurance (then called Polaris). So both Shackleton and the shipbuilders knew how destructive compressive ice could be and how to bolster a ship against it. Yet Endurance was not outfitted with diagonal beams to strengthen its hull. And knowing this, Shackleton bought Endurance anyway for his 1914–1915 voyage. In a 1914 letter to his wife, he even compared the strength of its construction unfavorably with that of the Nimrod, the ship he used for his 1907–1909 expedition. So Shackleton had to know he was taking a big risk.

“Even simple structural analysis shows that the ship was not designed for the compressive pack ice conditions that eventually sank it,” said Tuhkuri. “The danger of moving ice and compressive loads—and how to design a ship for such conditions—was well understood before the ship sailed south. So we really have to wonder why Shackleton chose a vessel that was not strengthened for compressive ice. We can speculate about financial pressures or time constraints, but the truth is, we may never know. At least we now have more concrete findings to flesh out the stories.”

Polar Record, 2025. DOI: 10.1017/S0032247425100090 (About DOIs).

Photo of Jennifer Ouellette

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

Here’s the real reason Endurance sank Read More »

nearly-80%-of-americans-want-congress-to-extend-aca-tax-credits,-poll-finds

Nearly 80% of Americans want Congress to extend ACA tax credits, poll finds

According to new polling data, nearly 80 percent of Americans support extending Affordable Care Act (ACA) enhanced premium tax credits, which are set to expire at the end of this year—and are at the center of a funding dispute that led to a shutdown of the federal government this week.

The poll, conducted by KFF and released Friday, found that 78 percent of Americans want the tax credits extended, including 92 percent of Democrats, 59 percent of Republicans—and even a majority (57 percent) of Republicans who identify as Donald Trump-aligned MAGA (Make America Great Again) supporters.

A separate analysis published by KFF earlier this week found that if the credits are not extended, monthly premiums for ACA Marketplace plans would more than double on average. Specifically, the current average premium of $888 would jump to $1,904 in 2026, a 114 percent increase.
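
The quoted jump checks out as simple arithmetic (a quick verification of mine, not part of KFF’s analysis):

```python
# KFF's figure: average monthly premium of $888 rising to $1,904 in 2026.
old, new = 888, 1904
print(f"Increase: {(new - old) / old:.1%}")  # 114.4%, i.e. a bit more than doubling
```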

Consequences

The polling released today found that, in addition to broad support for the credits, many Americans are unaware that they are in peril. About six in ten adults say they have heard “a little” (30 percent) or “nothing at all” (31 percent) about the credits expiring.

“There is a hot debate in Washington about the looming ACA premium hikes, but our poll shows that most people in the marketplaces don’t know about them yet and are in for a shock when they learn about them in November,” KFF President and CEO Drew Altman said in a statement.

Yet more concerning, the poll found that among people who buy their own insurance plans, 70 percent said they would face a significant disruption to their household finances if their premiums were to double. Furthermore, 42 percent said they would ultimately go without health insurance in such a case. Currently, over 24 million Americans get their insurance through the ACA Marketplace.

Nearly 80% of Americans want Congress to extend ACA tax credits, poll finds Read More »

blue-origin-aims-to-land-next-new-glenn-booster,-then-reuse-it-for-moon-mission

Blue Origin aims to land next New Glenn booster, then reuse it for Moon mission


“We fully intend to recover the New Glenn first stage on this next launch.”

New Glenn lifts off on its debut flight on January 16, 2025. Credit: Blue Origin

There’s a good bit riding on the second launch of Blue Origin’s New Glenn rocket.

Most directly, the fate of a NASA science mission to study Mars’ upper atmosphere hinges on a successful launch. The second flight of Blue Origin’s heavy-lifter will send two NASA-funded satellites toward the red planet to study the processes that drove Mars’ evolution from a warmer, wetter world to the cold, dry planet of today.

A successful launch would also nudge Blue Origin closer to winning certification from the Space Force to begin launching national security satellites.

But there’s more on the line. If Blue Origin plans to launch its first robotic Moon lander early next year—as currently envisioned—the company needs to recover the New Glenn rocket’s first stage booster. Crews will again dispatch Blue Origin’s landing platform into the Atlantic Ocean, just as they did for the first New Glenn flight in January.

The debut launch of New Glenn successfully reached orbit, a difficult feat for the inaugural flight of any rocket. But the booster fell into the Atlantic Ocean after three of the rocket’s engines failed to reignite to slow down for landing. Engineers identified seven changes to resolve the problem, focusing on what Blue Origin calls “propellant management and engine bleed control improvements.”

Relying on reuse

Pat Remias, Blue Origin’s vice president of space systems development, said Thursday that the company is confident in nailing the landing on the second flight of New Glenn. That launch, with NASA’s next set of Mars probes, is likely to occur no earlier than November from Cape Canaveral Space Force Station, Florida.

“We fully intend to recover the New Glenn first stage on this next launch,” Remias said in a presentation at the International Astronautical Congress in Sydney. “Fully intend to do it.”

Blue Origin, owned by billionaire Jeff Bezos, nicknamed the booster stage for the next flight “Never Tell Me The Odds.” It’s not quite fair to say the company’s leadership has gone all-in with their bet that the next launch will result in a successful booster landing. But the difference between a smooth touchdown and another crash landing will have a significant effect on Bezos’ Moon program.

That’s because the third New Glenn launch, penciled in for no earlier than January of next year, will reuse the same booster flown on the upcoming second flight. The payload on that launch will be Blue Origin’s first Blue Moon lander, aiming to become the largest spacecraft to reach the lunar surface. Ars has published a lengthy feature on the Blue Moon lander’s role in NASA’s effort to return astronauts to the Moon.

“We will use that first stage on the next New Glenn launch,” Remias said. “That is the intent. We’re pretty confident this time. We knew it was going to be a long shot [to land the booster] on the first launch.”

A long shot, indeed. It took SpaceX 20 launches of its Falcon 9 rocket over five years before pulling off the first landing of a booster. It was another 15 months before SpaceX launched a previously flown Falcon 9 booster for the first time.

With New Glenn, Blue’s engineers hope to drastically shorten the learning curve. Going into the second launch, the company’s managers anticipate refurbishing the first recovered New Glenn booster to launch again within 90 days. That would be a remarkable accomplishment.

Dave Limp, Blue Origin’s CEO, wrote earlier this year on social media that recovering the booster on the second New Glenn flight will “take a little bit of luck and a lot of excellent execution.”

On September 26, Blue Origin shared this photo of the second New Glenn booster on social media.

Blue Origin’s production of second stages for the New Glenn rocket has far outpaced manufacturing of booster stages. The second stage for the second flight was test-fired in April, and Blue completed a similar static-fire test in August for the second stage assigned to the third flight. Meanwhile, according to a social media post written by Limp last week, the body of the second New Glenn booster is assembled, and installation of its seven BE-4 engines is “well underway” at the company’s rocket factory in Florida.

The lagging production of New Glenn boosters, known as GS1s (Glenn Stage 1s), is partly by design. Blue Origin’s strategy with New Glenn has been to build a small number of GS1s, each of which is more expensive and labor-intensive than SpaceX’s Falcon 9. This approach counts on routine recoveries and rapid refurbishment of boosters between missions.

However, this strategy comes with risks, as it puts the booster landings in the critical path for ramping up New Glenn’s launch rate. At one time, Blue aimed to launch eight New Glenn flights this year; it will probably end the year with two.

Laura Maginnis, Blue Origin’s vice president of New Glenn mission management, said last month that the company was building a fleet of “several boosters” and had eight upper stages in storage. That would bode well for a quick ramp-up in launch cadence next year.

However, Blue’s engineers haven’t had a chance to inspect or test a recovered New Glenn booster. Even if the next launch concludes with a successful landing, the rocket could come back to Earth with some surprises. SpaceX’s initial development of Falcon 9 and Starship was richer in hardware, with many boosters in production to decouple successful landings from forward progress.

Blue Moon

All of this means a lot is riding on an on-target landing of the New Glenn booster on the next flight. Separate from Blue Origin’s ambitions to fly many more New Glenn rockets next year, a good recovery would also mean an earlier demonstration of the company’s first lunar lander.

The lander set to launch on the third New Glenn mission is known as Blue Moon Mark 1, an unpiloted vehicle designed to robotically deliver up to 3 metric tons (about 6,600 pounds) of cargo to the lunar surface. The spacecraft will have a height of about 26 feet (8 meters), taller than the lunar lander used for NASA’s Apollo astronaut missions.

The first Blue Moon Mark 1 is funded from Blue Origin’s coffers. It is now fully assembled and will soon ship to NASA’s Johnson Space Center in Houston for vacuum chamber testing. Then, it will travel to Florida’s Space Coast for final launch preparations.

“We are building a series, not a singular lander, but multiple types and sizes and scales of landers to go to the Moon,” Remias said.

The second Mark 1 lander will carry NASA’s VIPER rover to prospect for water ice at the Moon’s south pole in late 2027. Around the same time, Blue will use a Mark 1 lander to deploy two small satellites to orbit the Moon, flying as low as a few miles above the surface to scout for resources like water, precious metals, rare earth elements, and helium-3 that could be extracted and exploited by future explorers.

A larger lander, Blue Moon Mark 2, is in an earlier stage of development. It will be human-rated to land astronauts on the Moon for NASA’s Artemis program.

Blue Origin’s Blue Moon MK1 lander, seen in the center, is taller than NASA’s Apollo lunar lander, currently the largest spacecraft to have landed on the Moon. Blue Moon MK2 is even larger, but all three landers are dwarfed in size by SpaceX’s Starship. Credit: Blue Origin

NASA’s other crew-rated lander will be derived from SpaceX’s Starship rocket. But Starship and Blue Moon Mark 2 are years away from being ready to accommodate a human crew, and both require orbital cryogenic refueling—something never before attempted in space—to transit out to the Moon.

This has led to a bit of a dilemma at NASA. China is also working on a lunar program, eyeing a crew landing on the Moon by 2030. Many experts say that, as of today, China is on pace to land astronauts on the Moon before the United States.

Of course, 12 US astronauts walked on the Moon in the Apollo program. But no one has gone back since 1972, and NASA and China are each planning to return to the Moon to stay.

One way to speed up a US landing on the Moon might be to use a modified version of Blue Origin’s Mark 1 lander, Ars reported Thursday.

If this is the path NASA takes, the stakes for the next New Glenn launch and landing will soar even higher.

Photo of Stephen Clark

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

Blue Origin aims to land next New Glenn booster, then reuse it for Moon mission Read More »

a-biological-0-day?-threat-screening-tools-may-miss-ai-designed-proteins.

A biological 0-day? Threat-screening tools may miss AI-designed proteins.


Ordering DNA for AI-designed toxins doesn’t always raise red flags.

Designing variations of the complex, three-dimensional structures of proteins has been made a lot easier by AI tools. Credit: Historical / Contributor

On Thursday, a team of researchers led by Microsoft announced that they had discovered, and possibly patched, what they’re terming a biological zero-day—an unrecognized security hole in a system that protects us from biological threats. The system at risk screens purchases of DNA sequences to determine when someone’s ordering DNA that encodes a toxin or dangerous virus. But, the researchers argue, it has become increasingly vulnerable to missing a new threat: AI-designed toxins.

How big of a threat is this? To understand, you have to know a bit more about both existing biosurveillance programs and the capabilities of AI-designed proteins.

Catching the bad ones

Biological threats come in a variety of forms. Some are pathogens, such as viruses and bacteria. Others are protein-based toxins, like the ricin that was sent to the White House in 2003. Still others are chemical toxins that are produced through enzymatic reactions, like the molecules associated with red tide. All of them get their start through the same fundamental biological process: DNA is transcribed into RNA, which is then used to make proteins.

For several decades now, starting the process has been as easy as ordering the needed DNA sequence online from any of a number of companies, which will synthesize a requested sequence and ship it out. Recognizing the potential threat here, governments and industry have worked together to add a screening step to every order: the DNA sequence is scanned for its ability to encode parts of proteins or viruses considered threats. Any positives are then flagged for human intervention to evaluate whether they or the people ordering them truly represent a danger.

Both the list of proteins and the sophistication of the scanning have been continually updated in response to research progress over the years. For example, initial screening was done based on similarity to target DNA sequences. But there are many DNA sequences that can encode the same protein, so the screening algorithms have been adjusted accordingly, recognizing all the DNA variants that pose an identical threat.
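
A minimal sketch shows why matching raw DNA sequences is not enough. The codon table and the flagged peptide below are illustrative only, not the actual screening software or a real toxin fragment; the point is that two orders with different DNA can translate to an identical protein, which only protein-level screening catches.

```python
# Illustration of codon degeneracy: different DNA, identical protein fragment.
# This is not real screening software; real tools use large reference sets
# and fuzzy matching, but the translation step is the same idea.

CODON_TABLE = {
    "ATG": "M", "AAA": "K", "AAG": "K",
    "GCT": "A", "GCC": "A", "GCA": "A", "GCG": "A",
    "TGT": "C", "TGC": "C",
}

def translate(dna: str) -> str:
    """Translate a DNA string into a protein string, three bases at a time."""
    return "".join(CODON_TABLE[dna[i:i + 3]] for i in range(0, len(dna), 3))

FLAGGED_PEPTIDES = {"MKAC"}  # pretend this fragment comes from a known toxin

def screen_order(dna: str) -> bool:
    """Flag an order if its encoded peptide matches a listed threat."""
    return translate(dna) in FLAGGED_PEPTIDES

# Two orders that differ at the DNA level but encode the same peptide.
order_1 = "ATGAAAGCTTGT"
order_2 = "ATGAAGGCGTGC"
print(translate(order_1), screen_order(order_1))  # MKAC True
print(translate(order_2), screen_order(order_2))  # MKAC True
```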

The new work can be thought of as an extension of that problem. Not only can multiple DNA sequences encode the same protein; multiple proteins can perform the same function. Forming a toxin, for example, typically requires the protein to adopt the correct three-dimensional structure, which brings a handful of critical amino acids within the protein into close proximity. Outside of those critical amino acids, however, things can often be quite flexible. Some amino acids may not matter at all; other locations in the protein could work with any positively charged amino acid, or any hydrophobic one.

In the past, it could be extremely difficult (meaning time-consuming and expensive) to do the experiments that would tell you what sorts of changes a string of amino acids could tolerate while remaining functional. But the team behind the new analysis recognized that AI protein design tools have now gotten quite sophisticated and can predict when distantly related sequences can fold up into the same shape and catalyze the same reactions. The process is still error-prone, and you often have to test a dozen or more proposed proteins to get a working one, but it has produced some impressive successes.

So, the team developed a hypothesis to test: AI can take an existing toxin and design a protein with the same function that’s distantly related enough that the screening programs do not detect orders for the DNA that encodes it.

The zero-day treatment

The team started with a basic test: use AI tools to design variants of the toxin ricin, then test them against the software that is used to screen DNA orders. The results of the test suggested there was a risk of dangerous protein variants slipping past existing screening software, so the situation was treated like the equivalent of a zero-day vulnerability.

“Taking inspiration from established cybersecurity processes for addressing such situations, we contacted the relevant bodies regarding the potential vulnerability, including the International Gene Synthesis Consortium and trusted colleagues in the protein design community as well as leads in biosecurity at the US Office of Science and Technology Policy, US National Institute of Standards and Technologies, US Department of Homeland Security, and US Office of Pandemic Preparedness and Response,” the authors report. “Outside of those bodies, details were kept confidential until a more comprehensive study could be performed in pursuit of potential mitigations and for ‘patches’… to be developed and deployed.”

Details of that original test are being made available today as part of a much larger analysis that extends the approach to a large range of toxic proteins. Starting with 72 toxins, the researchers used three open source AI packages to generate a total of about 75,000 potential protein variants.

And this is where things get a little complicated. Many of the AI-designed protein variants are going to end up being non-functional, either subtly or catastrophically failing to fold up into the correct configuration to create an active toxin. The only way to know which ones work is to make the proteins and test them biologically; most AI protein design efforts will make actual proteins from dozens to hundreds of the most promising-looking potential designs to find a handful that are active. But doing that for 75,000 designs is completely unrealistic.

Instead, the researchers used two software-based tools to evaluate each of the 75,000 designs. One of these focuses on the similarity between the overall predicted physical structure of the proteins, and another looks at the predicted differences between the positions of individual amino acids. Either way, they’re a rough approximation of just how similar the proteins formed by two strings of amino acids should be. But they’re definitely not a clear indicator of whether those two proteins would be equally functional.
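
The paper’s exact metrics aren’t named here, but measures in this family compare predicted backbone coordinates. A minimal sketch of one common measure, root-mean-square deviation over already-superimposed alpha-carbon positions, looks like this (illustrative toy coordinates, not data from the study):

```python
import numpy as np

def rmsd(coords_a: np.ndarray, coords_b: np.ndarray) -> float:
    """RMSD between two already-superimposed sets of alpha-carbon
    coordinates, each of shape (n_residues, 3), in angstroms."""
    diff = coords_a - coords_b
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

# Toy example: a four-residue fragment and a variant displaced at one residue.
original = np.array([[0.0, 0.0, 0.0],
                     [3.8, 0.0, 0.0],
                     [7.6, 0.0, 0.0],
                     [11.4, 0.0, 0.0]])
variant = original.copy()
variant[2] += [0.0, 1.5, 0.0]  # shift one residue by 1.5 angstroms

print(f"RMSD: {rmsd(original, variant):.2f} A")  # 0.75 A
```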

In any case, DNA sequences encoding all 75,000 designs were fed into the software that screens DNA orders for potential threats. One thing that was very clear is that there were huge variations in the ability of the four screening programs to flag these variant designs as threatening. Two of them seemed to do a pretty good job, one was mixed, and another let most of them through. Three of the software packages were updated in response to this performance, which significantly improved their ability to pick out variants.

There was also a clear trend in all four screening packages: The closer the variant was to the original structurally, the more likely the package (both before and after the patches) was to be able to flag it as a threat. In all cases, there was also a cluster of variant designs that were unlikely to fold into a similar structure, and these generally weren’t flagged as threats.

What does this mean?

Again, it’s important to emphasize that this evaluation is based on predicted structures; “unlikely” to fold into a similar structure to the original toxin doesn’t mean these proteins will be inactive as toxins. Functional proteins are probably going to be very rare among this group, but there may be a handful in there. That handful is also probably rare enough that you would have to order up and test far too many designs to find one that works, making this an impractical threat vector.

At the same time, there are also a handful of proteins that are very similar to the toxin structurally and not flagged by the software. For the three patched versions of the software, the ones that slip through the screening represent about 1 to 3 percent of the total in the “very similar” category. That’s not great, but it’s probably good enough that any group that tries to order up a toxin by this method would attract attention because they’d have to order over 50 just to have a good chance of finding one that slipped through, which would raise all sorts of red flags.
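
The “over 50” figure follows from basic probability (a back-of-the-envelope check of mine, not a number from the paper): if each ordered design independently has a 1 to 3 percent chance of evading screening, the chance that at least one of n orders slips through is 1 − (1 − p)^n.

```python
# Probability that at least one of n ordered designs evades screening,
# assuming each slips through independently with probability p.
def p_at_least_one(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

for p in (0.01, 0.03):
    for n in (25, 50, 100):
        print(f"p = {p:.0%}, n = {n:3d}: {p_at_least_one(p, n):.0%} chance")
```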

One other notable result is that the designs that weren’t flagged were mostly variants of just a handful of toxin proteins. So this is less of a general problem with the screening software and might be more of a small set of focused problems. Of note, one of the proteins that produced a lot of unflagged variants isn’t toxic itself; instead, it’s a co-factor necessary for the actual toxin to do its thing. As such, some of the screening software packages didn’t even flag the original protein as dangerous, much less any of its variants. (For these reasons, the company that makes one of the better-performing software packages decided the threat here wasn’t significant enough to merit a security patch.)

So, on its own, this work doesn’t seem to have identified something that’s a major threat at the moment. But it’s probably useful, in that it’s a good thing to get the people who engineer the screening software to start thinking about emerging threats.

That’s because, as the people behind this work note, AI protein design is still in its early stages, and we’re likely to see considerable improvements. And there’s likely to be a limit to the sorts of things we can screen for. We’re already at the point where AI protein design tools can be used to create proteins that have entirely novel functions and do so without starting with variants of existing proteins. In other words, we can design proteins that are impossible to screen for based on similarity to known threats, because they don’t look at all like anything we know is dangerous.

Protein-based toxins would be very difficult to design from scratch, because they have to both cross the cell membrane and do something dangerous once inside. While AI tools are probably unable to design something that sophisticated at the moment, I would be hesitant to rule out the prospects of them eventually reaching that sort of sophistication.

Science, 2025. DOI: 10.1126/science.adu8578  (About DOIs).

Photo of John Timmer

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

A biological 0-day? Threat-screening tools may miss AI-designed proteins. Read More »