Author name: Tim Belzer


Trump admin issues stop-work order for offshore wind project

In a statement to Politico’s E&E News days after the order was lifted in May, the White House claimed that Hochul “caved” and struck an agreement to allow “two natural gas pipelines to advance” through New York.

Hochul denied that any such deal was made.

Trump has made no effort to conceal his disdain for wind power and other renewable energies, and his administration has actively sought to stymie growth in the industry while providing what critics have described as “giveaways” to fossil fuels.

In a Truth Social post on Wednesday, Trump called wind and solar energy the “SCAM OF THE CENTURY,” criticizing states that have built and rely on them for power.

“We will not approve wind or farmer destroying Solar,” Trump wrote. “The days of stupidity are over in the USA!!!”

On Trump’s first day in office, the president issued a memorandum halting approvals, permits, leases, and loans for both offshore and onshore wind projects.

The GOP also targeted wind energy in the One Big Beautiful Bill Act, accelerating the phaseout of tax credits for wind and solar projects while mandating lease sales for fossil fuels and making millions of acres of federal land available for mining.

The administration’s subsequent consideration of rules to further restrict access to tax credits for wind and solar projects alarmed even some Republicans, prompting Iowa Sen. Chuck Grassley and Utah Sen. John Curtis to place holds on Treasury nominees as they awaited the department’s formal guidance.

Those moves have rattled the wind industry and created uncertainty about the viability of ongoing and future projects.

“The unfortunate message to investors is clear: the US is no longer a reliable place for long-term energy investments,” said the American Clean Power Association, a trade association, in a statement on Friday.

To Kathleen Meil, local clean energy deployment director at the League of Conservation Voters, that represents a loss not only for the environment but also for the US economy.

“It’s really easy to think about the visible—the 4,200 jobs across all phases of development that you see… They’ve hit more than 2 million union work hours on Revolution Wind,” Meil said.

“But what’s also really transformational is that it’s already triggered $1.3 billion in investment through the supply chain. So it’s not just coastal communities that are benefiting from these jobs,” she said.

“This hurts so many people. And why? There’s just no justification.”

This article originally appeared on Inside Climate News, a nonprofit, non-partisan news organization that covers climate, energy, and the environment. Sign up for their newsletter here.


Reports Of AI Not Progressing Or Offering Mundane Utility Are Often Greatly Exaggerated

In the wake of the confusions around GPT-5, this week had yet another round of claims that AI wasn’t progressing, or AI isn’t or won’t create much value, and so on. There were reports that one study in particular impacted Wall Street, and as you would expect it was not a great study. Situational awareness is not what you’d hope.

I’ve gathered related coverage here, to get it out of the way before whatever Google is teasing (Gemini 3.0? Something else?) arrives to potentially hijack our attention.

We’ll start with the MIT study on State of AI in Business, discuss the recent set of ‘AI is slowing down’ claims as part of the larger pattern, and then I will share a very good attempted explanation from Steven Byrnes of some of the ways economists get trapped into failing to look at what future highly capable AIs would actually do.

Chatbots and coding agents are clear huge wins. Over 80% of organizations have ‘explored or piloted’ them and 40% report deployment. The employees of the other 60% presumably have some news.

But we have a new State of AI in Business report that says that when businesses try to do more than that, ‘95% of businesses get zero return,’ although elsewhere they say ‘only 5% custom enterprise AI tools reach production.’

From our interviews, surveys, and analysis of 300 public implementations, four patterns emerged that define the GenAI Divide:

  1. Limited disruption: Only 2 of 8 major sectors show meaningful structural change.

  2. Enterprise paradox: Big firms lead in pilot volume but lag in scale-up.

  3. Investment bias: Budgets favor visible, top-line functions over high-ROI back office.

  4. Implementation advantage: External partnerships see twice the success rate of internal builds.

These are early days. Enterprises have only had capacity to look for ways to slide AI directly into existing structures. They ask, ‘What, among the things we already do, can AI do for us?’ They especially ask, ‘What can show clear, measurable gains we can trumpet?’

It does seem reasonable to say that the ‘custom tools’ approach may not be doing so great, if the tools only reach deployment 5% of the time. They might have a high enough return they still come out ahead, but that is a high failure rate if you actually fully scrap the other 95% and don’t learn from them. It seems like this is a skill issue?

The primary factor keeping organizations on the wrong side of the GenAI Divide is the learning gap, tools that don’t learn, integrate poorly, or match workflows.

The 95% failure rate for enterprise AI solutions represents the clearest manifestation of the GenAI Divide. Organizations stuck on the wrong side continue investing in static tools that can’t adapt to their workflows, while those crossing the divide focus on learning-capable systems.

As one CIO put it, “We’ve seen dozens of demos this year. Maybe one or two are genuinely useful. The rest are wrappers or science projects.”

That sounds like the ‘AI tools’ that fail deserve the air quotes.

I also note that later they say custom built AI solutions ‘fail twice as often.’ That implies that when companies are wise enough to test solutions built externally, they succeed over 50% of the time.

There’s also a strange definition of ‘zero return’ here.

Just 5% of integrated AI pilots are extracting millions in value, while the vast majority remain stuck with no measurable P&L impact.

Tools like ChatGPT and Copilot are widely adopted. Over 80 percent of organizations have explored or piloted them, and nearly 40 percent report deployment. But these tools primarily enhance individual productivity, not P&L performance.

Issue a report where you call the 95% of projects that don’t have ‘measurable P&L impact’ failures, then wonder why no one wants to do ‘high-ROI back office’ upgrades.

Those projects are high ROI, but how do you prove the R on I?

Especially if you can’t see the ROI on ‘enhancing individual productivity’ because it doesn’t have this ‘measurable P&L impact.’ If you double the productivity of your coders (as an example), it’s true that you can’t directly point to [$X] that this made you in profit, but surely one can see a lot of value there.

They call it a ‘divide’ because it takes a while to see returns, after which you see a lot.

While most implementations don’t drive headcount reduction, organizations that have crossed the GenAI Divide are beginning to see selective workforce impacts in customer support, software engineering, and administrative functions.

In addition, the highest performing organizations report measurable savings from reduced BPO spending and external agency use, particularly in back-office operations.

Others cite improved customer retention and sales conversion through automated outreach and intelligent follow-up systems.

These early results suggest that learning-capable systems, when targeted at specific processes, can deliver real value, even without major organizational restructuring.

This all sounds mostly like a combination of ‘there is a learning curve that is barely started on’ with ‘we don’t know how to measure most gains.’

Also note the super high standard here. Only 22% of major sectors show ‘meaningful structural change’ at this early stage, and section 3 talks about ‘high adoption, low transformation.’

Or their ‘five myths about GenAI in the Enterprise’:

  1. AI Will Replace Most Jobs in the Next Few Years → Research found limited layoffs from GenAI, and only in industries that are already affected significantly by AI. There is no consensus among executives as to hiring levels over the next 3-5 years.

  2. Generative AI is Transforming Business → Adoption is high, but transformation is rare. Only 5% of enterprises have AI tools integrated in workflows at scale and 7 of 9 sectors show no real structural change.

  3. Enterprises are slow in adopting new tech → Enterprises are extremely eager to adopt AI and 90% have seriously explored buying an AI solution.

  4. The biggest thing holding back AI is model quality, legal, data, risk → What’s really holding it back is that most AI tools don’t learn and don’t integrate well into workflows.

  5. The best enterprises are building their own tools → Internal builds fail twice as often.

Most jobs within a few years is not something almost anyone is predicting in a non-AGI world. Present tense ‘transforming business’ is a claim I don’t remember hearing. I also hadn’t heard ‘the best enterprises are building their own tools’ and it does not surprise me that rolling your own comes with much higher failure rates.

I would push back on #3. As always, slow is relative, and being ‘eager’ is very different from not being the bottleneck. ‘Explored buying an AI solution’ is very distinct from ‘adopting new tech.’

I would also push back on #4. The reason AI doesn’t yet integrate well into workflows is because the tools are not yet good enough. This also shows the mindset that the AI is being forced to ‘integrate into workflows’ rather than generating new workflows, another sign that they are slow in adopting new tech.

Users prefer ChatGPT for simple tasks, but abandon it for mission-critical work due to its lack of memory. What’s missing is systems that adapt, remember, and evolve, capabilities that define the difference between the two sides of the divide.

I mean ChatGPT does now have some memory and soon it will have more. Getting systems to remember things is not all that hard. It is definitely on its way.

The more I explore the report the more it seems determined to hype up this ‘divide’ around ‘learning’ and memory. Much of it seems like unrealistic expectations.

Yes, you would love it if your AI tools learned all the detailed preferences and contexts of all of your clients without you having to do any work?

The same lawyer who favored ChatGPT for initial drafts drew a clear line at sensitive contracts:

“It’s excellent for brainstorming and first drafts, but it doesn’t retain knowledge of client preferences or learn from previous edits. It repeats the same mistakes and requires extensive context input for each session. For high-stakes work, I need a system that accumulates knowledge and improves over time.”

This feedback points to the fundamental learning gap that keeps organizations on the wrong side of the GenAI Divide.

Well, how would it possibly know about client preferences or learn from previous edits? Are you keeping a detailed document with the client preferences in preferences.md? People would like AI to automagically do all sorts of things out of the box without putting in the work.

And if they wait a few years? It will.

I totally do not get where this is coming from:

Takeaway: The window for crossing the GenAI Divide is rapidly closing. Enterprises are locking in learning-capable tools. Agentic AI and memory frameworks (like NANDA and MCP) will define which vendors help organizations cross the divide versus remain trapped on the wrong side.

Why is there a window and why is it closing?

I suppose one can say ‘there is a window because you will rapidly be out of business’ and of course one can worry about the world transforming generally, including existential risks. But ‘crossing the divide’ gets easier every day, not harder.

In the next few quarters, several enterprises will lock in vendor relationships that will be nearly impossible to unwind.

Why do people keep saying versions of this? Over time increasingly capable AI and better AI tools will make it, again, easier not harder to pivot or migrate.

Yes, I get that people think the switching costs will be prohibitive. But that’s simply not true. If you already have an AI that can do things for your business, getting another AI to learn and copy what you need will be relatively easy. Code bases can switch between LLMs easily, often by changing only one to three lines.
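
To make the ‘one to three lines’ point concrete, here is a minimal, hypothetical sketch (not from the post or the report), assuming the new provider exposes an OpenAI-compatible endpoint. The base URL, key, and model name below are placeholders, not real values; the point is only that the migration surface is often that small.

```python
# Illustrative sketch: with OpenAI-compatible providers, "switching LLMs"
# in a codebase is often just swapping the base URL, API key, and model name.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-llm-vendor.com/v1",  # line 1 to change (placeholder URL)
    api_key="YOUR_API_KEY",                            # line 2 to change (placeholder key)
)

response = client.chat.completions.create(
    model="example-model-name",                        # line 3 to change (placeholder model)
    messages=[{"role": "user", "content": "Summarize our Q3 support tickets."}],
)
print(response.choices[0].message.content)
```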

What is the bottom line?

This seems like yet another set of professionals putting together a professional-looking report that fundamentally assumes AI will never improve, or that improvements in frontier AI capability will not matter, and reasoning from there. Once you realize this implicit assumption, a lot of the weirdness starts making sense.

Ethan Mollick: Okay, got the report. I would read it yourself. I am not sure how generalizable the findings are based on the methodology (52 interviews, convenience sampled, failed apparently means no sustained P&L impact within six months but no coding explanation)

I have no doubt pilot failures are high, but I think it is really hard to see how this report gives the kind of generalizable finding that would move markets.

Nathan Whittemore: Also no mention of coding. Also no agents. Also 50% of uses were marketing, suggesting extreme concentration of org area.

Azeem Azhar: it was an extremely weak report. You are very generous with your assessment.

Aaron Erickson: Many reports like this start from the desired conclusion and work backwards, and this feels like no exception to that rule.

Most of the real work is bottom up adoption not measured by anything. If anything, it is an indictment about top-down initiatives.

The reason this is worth so much attention is that we have reactions like this one from Matthew Field, saying this is a ‘warning sign the AI bubble is about to burst’ and claiming the study caused a stock selloff, including a 3.5% drop in Nvidia and ~1% in some other big tech stocks. Which isn’t that much, and there are various alternative potential explanations.

The improvements we are seeing so far involve only AI as it exists now (as in, the worst it will ever be), with substantial implementation delays. They also involve only individuals adopting AI, or at best companies slotting AI into existing workflows.

Traditionally the big gains from revolutionary technologies come elsewhere.

Roon: real productivity gains for prior technological advances came not from individual workers learning to use eg electricity, the internet but entire workflows, factories, processes, businesses being set up around the use of new tools (in other words, management)

couple years ago I figured this could go much faster than usual thanks to knowledge diffusion over the internet and also the AIs themselves coming up with great ideas about how to harness their strengths and weaknesses. but I’m not sure about that at present moment.

Patrick McKenzie: (Agree with this, and generally think it is one reasons why timelines to visible-in-GDP-growth impact are longer than people similarly bullish on AI seem to believe.)

I do think it is going faster and will go faster, except that in AI the standard for ‘fast’ is crazy fast, and ‘AIs coming up with great ideas’ is a capability AIs are only now starting to approach in earnest.

I do think that if AGI and ASI don’t show up, the largest visible-in-GDP gains will take a while to arrive. I expect visible-in-GDP gains soon anyway, because I think even the smaller, quicker, minimally impressive versions of AI should suffice to become visible in GDP, even though GDP will only reflect a small percentage of real gains.

The ‘AI is losing steam’ or ‘big leaps are slowing down’ and so on statements from mainstream media will keep happening whenever someone isn’t feeling especially impressed this particular month. Or week.

Dean Ball: I think we live in a perpetual state of traditional media telling us that the pace of ai progress is slowing

These pieces were published during a span that I would describe as the most rapid pace of progress I’ve ever witnessed in LLMs (GPT-4 Turbo -> GPT 5-Pro; remember: there were no public reasoner models 365 days ago!)

(Also note that Bloomberg piece was nearly simultaneous with the announcement of o3, lmao)

Miles Brundage: Notably, it’s ~never employees at frontier companies quoted on this, it’s the journalists themselves, or academics, startups pushing a different technique, etc.

The logic being “people at big companies are biased.” Buddy, I’ve got some big news re: humans.

Anton: my impression is that articles like this mostly get written by people who really really want to believe ai is slowing down. nobody working on it or even using it effectively thinks this. Which is actually basically a marketing problem which the entire field has been bad at since 2022.

Peter Gostev: I’m sure you’ve all noticed the ‘AI is slowing down’ news stories every few weeks for multiple years now – so I’ve pulled a tracker together to see who and when wrote these stories.

There is quite a range, some are just outright wrong, others point to a reasonable limitation at the time but missing the bigger arc of progress.

All of these stories were appearing as we were getting reasoning models, open source models, increasing competition from more players and skyrocketing revenue for the labs.

Peter links to about 35 posts. They come in waves.

The practical pace of AI progress continues to greatly exceed the practical pace of progress everywhere else. I can’t think of an exception. It is amazing how eagerly everyone looks for a supposed setback to try and say otherwise.

You could call this gap a ‘marketing problem’ but the US Government is in the tank for AI companies and Nvidia is 3% of total stock market cap and investments in AI are over 1% of GDP and so on, and diffusion is proceeding at record pace. So it is not clear that they should care about those who keep saying the music is about to stop?

Coinbase CEO fires software engineers who don’t adopt AI tools. Well, yeah.

On the one hand, AI companies are building their models on the shoulders of giants, and by giants we mean all of us.

Ezra Klein (as an example): Right now, the A.I. companies are not making all that much money off these products. If they eventually do make the profits their investors and founders imagine, I don’t think the normal tax structure is sufficient to cover the debt they owe all of us, and everyone before us, on whose writing and ideas their models are built.

Then there’s the energy demand.

Also the AI companies are risking all our lives and control over the future.

On the other hand, notice that they are indeed not making that much money. It seems highly unlikely that, even in terms of unit economics, creators of AI capture more than 10% of value created. So in an ‘economic normal’ situation where AI doesn’t ‘go critical’ or transform the world, but is highly useful, who owes who the debt?

It’s proving very useful for a lot of people.

Ezra Klein: And yet I am a bit shocked by how even the nascent A.I. tools we have are worming their way into our lives — not by being officially integrated into our schools and workplaces but by unofficially whispering in our ears.

The American Medical Association found that two in three doctors are consulting with A.I.

A Stack Overflow survey found that about eight in 10 programmers already use A.I. to help them code.

The Federal Bar Association found that large numbers of lawyers are using generative A.I. in their work, and it was more common for them to report they were using it on their own rather than through official tools adopted by their firms. It seems probable that Trump’s “Liberation Day” tariffs were designed by consulting a chatbot.

All of these uses involve paying remarkably little and realizing much larger productivity gains.

Steven Byrnes explains his view on some of the reasons why an economics education can make you dumber when thinking about future AI. It is difficult to usefully excerpt, and I doubt he’d mind me quoting it in full.

I note up top that I know not all of this is technically correct, it isn’t the way I would describe this, and of course #NotAllEconomists throughout especially for the dumber mistakes he points out, but the errors actually are often pretty dumb once you boil them down, and I found Byrnes’s explanation illustrative.

Steven Byrnes: There’s a funny thing where economics education paradoxically makes people DUMBER at thinking about future AI. Econ textbooks teach concepts & frames that are great for most things, but counterproductive for thinking about AGI. Here are 4 examples. Longpost:

THE FIRST PIECE of Econ anti-pedagogy is hiding in the words “labor” & “capital”. These words conflate a superficial difference (flesh-and-blood human vs not) with a bundle of unspoken assumptions and intuitions, which will all get broken by Artificial General Intelligence (AGI).

By “AGI” I mean here “a bundle of chips, algorithms, electricity, and/or teleoperated robots that can autonomously do the kinds of stuff that ambitious human adults can do—founding and running new companies, R&D, learning new skills, using arbitrary teleoperated robots after very little practice, etc.”

Yes I know, this does not exist yet! (Despite hype to the contrary.) Try asking an LLM to autonomously write a business plan, found a company, then run and grow it for years as CEO. Lol! It will crash and burn! But that’s a limitation of today’s LLMs, not of “all AI forever”.

AI that could nail that task, and much more beyond, is obviously possible—human brains and bodies and societies are not powered by some magical sorcery forever beyond the reach of science. I for one expect such AI in my lifetime, for better or worse. (Probably “worse”, see below.)

Now, is this kind of AGI “labor” or “capital”? Well it’s not a flesh-and-blood human. But it’s more like “labor” than “capital” in many other respects:

• Capital can’t just up and do things by itself? AGI can.

• New technologies take a long time to integrate into the economy? Well ask yourself: how do highly-skilled, experienced, and entrepreneurial immigrant humans manage to integrate into the economy immediately? Once you’ve answered that question, note that AGI will be able to do those things too.

• Capital sits around idle if there are no humans willing and able to use it? Well those immigrant humans don’t sit around idle. And neither will AGI.

• Capital can’t advocate for political rights, or launch coups? Well…

Anyway, people see sci-fi robot movies, and they get this! Then they take economics courses, and it makes them dumber.

(Yes I know, #NotAllEconomists etc.)

THE SECOND PIECE of Econ anti-pedagogy is instilling a default assumption that it’s possible for a market to equilibrate. But the market for AGI cannot: AGI combines a property of labor markets with a property of product markets, where those properties are mutually exclusive. Those properties are:

• (A) “NO LUMP OF LABOR”: If human population goes up, wages drop in the very short term, because the demand curve for labor slopes down. But in the longer term, people find new productive things to do—the demand curve moves right. If anything, the value of labor goes UP, not down, with population! E.g. dense cities are engines of growth!

• (B) “EXPERIENCE CURVES”: If the demand for a product rises, there’s price increase in the very short term, because the supply curve slopes up. But in the longer term, people ramp up manufacturing—the supply curve moves right. If anything, the price goes DOWN, not up, with demand, thanks to economies of scale and R&D.

QUIZ: Considering (A) & (B), what’s the equilibrium price of this AGI bundle (chips, algorithms, electricity, teleoperated robots, etc.)?

…Trick question! There is no equilibrium. Our two principles, (A) “no lump of labor” and (B) “experience curves”, make equilibrium impossible:

• If price is low, (A) says the demand curve races rightwards—there’s no lump of labor, therefore there’s massive profit to be made by skilled entrepreneurial AGIs finding new productive things to do.

• If price is high, (B) says the supply curve races rightwards—there’s massive profit to be made by ramping up manufacturing of AGI.

• If the price is in between, then the demand curve and supply curve are BOTH racing rightwards!

This is neither capital nor labor as we know it. Instead of the market for AGI equilibrating, it forms a positive feedback loop / perpetual motion machine that blows up exponentially.

Does that sound absurd? There’s a precedent: humans! The human world, as a whole, is already a positive feedback loop / perpetual motion machine of this type! Humans bootstrapped themselves up from a few thousand hominins to 8 billion people running a $80T economy.

How? It’s not literally a perpetual motion machine. Rather, it’s an engine that draws from the well of “not-yet-exploited economic opportunities”. But remember “No Lump of Labor”: the well of not-yet-exploited economic opportunities is ~infinitely deep. We haven’t run out of possible companies to found. Nobody has made a Dyson swarm yet.

There’s only so many humans to found companies and exploit new opportunities. But the positive feedback loop of AGI has no such limit. The doubling time can be short indeed:

Imagine an autonomous factory that can build an identical autonomous factory, which then build two more, etc., using just widely-available input materials and sunlight. Economics textbooks don’t talk about that. But biology textbooks do! A cyanobacterium is such a factory, and can double itself in a day (≈ googol percent annualized growth rate 😛).

Anyway, we don’t know how explosive will be the positive feedback loop of AGI building AGI, but I expect it to be light-years beyond anything in economic history.

THE THIRD PIECE of Econ anti-pedagogy is its promotion of GDP growth as a proxy for progress and change. On the contrary, it’s possible for the world to transform into a wild sci-fi land beyond all recognition or comprehension each month, month after month, without “GDP growth” actually being all that high. GDP is a funny metric, and especially poor at describing the impact of transformative technological revolutions. (For example, if some new tech is inexpensive, and meanwhile other sectors of the economy remain expensive due to regulatory restrictions, then the new tech might not impact GDP much, no matter how much it upends the world.) I mean, sure we can argue about GDP, but we shouldn’t treat it as a proxy battle over whether AGI will or won’t be a big deal.

Last and most importantly, THE FOURTH PIECE of Econ anti-pedagogy is the focus on “mutually-beneficial trades” over “killing people and taking their stuff”. Econ 101 proves that trading is selfishly better than isolation. But sometimes “killing people and taking their stuff” is selfishly best of all.

When we’re talking about AGI, we’re talking about creating a new intelligent species on Earth, one which will eventually be faster, smarter, better-coordinated, and more numerous than humans.

Normal people, people who have seen sci-fi movies about robots and aliens, people who have learned the history of colonialism and slavery, will immediately ask lots of reasonable questions here. “What will their motives be?” “Who will have the hard power?” “If they’re seeming friendly and cooperative early on, might they stab us in the back when they get more powerful?”

These are excellent questions! We should definitely be asking these questions! (FWIW, this is my area of expertise, and I’m very pessimistic.)

…And then those normal people take economics classes, and wind up stupider. They stop asking those questions. Instead, they “learn” that AGI is “capital”, kinda like an injection-molding machine. Injection-molding machines wouldn’t wipe out humans and run the world by themselves. So we’re fine. Lol.

…Since actual AGI is so foreign to economists’ worldviews, they often deny the premise. E.g. here’s @tylercowen demonstrating a complete lack of understanding of what we doomers are talking about, when we talk about future powerful AI.

Yep. If you restrict to worlds where collaboration with humans is required in most cases then the impacts of AI all look mostly ‘normal’ again.

And here’s @DAcemogluMIT assuming without any discussion that in the next 10 yrs, “AI” will not include any new yet-to-be-developed techniques that go way beyond today’s LLMs. Funny omission, when the whole LLM paradigm didn’t exist 10 yrs ago!

(Tbc, it’s fine to make that assumption! Maybe it will be valid, or maybe not, who knows, technological forecasting is hard. But when your paper depends on a giant load-bearing assumption about future AI tech progress, an assumption which many AI domain experts dispute, then that assumption should at least be clearly stated! Probably in the very first sentence of the paper, if not the title!)

And here’s another example of economists “arguing” against AGI scenarios by simply rejecting out of hand any scenario in which actual AGI exists. Many such examples…

Eliezer Yudkowsky: Surprisingly correct, considering the wince I had at the starting frame.

I really think that if you’re creating a new intelligent species vastly smarter than humans, going “oh, that’s ‘this time is different’ economics”, as if it were economics in the first place, is exactly a Byrnes-case of seeing through an inappropriate lens and ending up dumber.

I am under no illusions that an explanation like this would satisfy the demands and objections of most economists or fit properly into their frameworks. It is easy for such folks to dismiss explanations like this as insufficiently serious or rigorous, or simply to deny the premise. I’ve run enough experiments to stop suspecting otherwise.

However, if one actually did want to understand the situation? This could help.


How the cavefish lost its eyes—again and again


Mexican tetras in pitch-black caverns had no use for the energetically costly organs.

Photographs of Astyanax mexicanus, surface form with eyes (top) and cave form without eyes (bottom). Credit: Daniel Castranova, NICHD/NIH

Time and again, whenever a population was swept into a cave and survived long enough for natural selection to have its way, the eyes disappeared. “But it’s not that everything has been lost in cavefish,” says geneticist Jaya Krishnan of the Oklahoma Medical Research Foundation. “Many enhancements have also happened.”

Though the demise of their eyes continues to fascinate biologists, in recent years, attention has shifted to other intriguing aspects of cavefish biology. It has become increasingly clear that they haven’t just lost sight but also gained many adaptations that help them to thrive in their cave environment, including some that may hold clues to treatments for obesity and diabetes in people.

Casting off expensive eyes

It has long been debated why the eyes were lost. Some biologists used to argue that they just withered away over generations because cave-dwelling animals with faulty eyes experienced no disadvantage. But another explanation is now considered more likely, says evolutionary physiologist Nicolas Rohner of the University of Münster in Germany: “Eyes are very expensive in terms of resources and energy. Most people now agree that there must be some advantage to losing them if you don’t need them.”

Scientists have observed that mutations in different genes involved in eye formation have led to eye loss. In other words, says Krishnan, “different cavefish populations have lost their eyes in different ways.”

Meanwhile, the fishes’ other senses tend to have been enhanced. Studies have found that cave-dwelling fish can detect lower levels of amino acids than surface fish can. They also have more taste buds and a higher density of sensitive cells along their bodies that let them sense water pressure and flow.

Regions of the brain that process other senses are also expanded, says developmental biologist Misty Riddle of the University of Nevada, Reno, who coauthored a 2023 article on Mexican tetra research in the Annual Review of Cell and Developmental Biology. “I think what happened is that you have to, sort of, kill the eye program in order to expand the other areas.”

Killing the processes that support the formation of the eye is quite literally what happens. Just like non-cave-dwelling members of the species, all cavefish embryos start making eyes. But after a few hours, cells in the developing eye start dying, until the entire structure has disappeared. Riddle thinks this apparent inefficiency may be unavoidable. “The early development of the brain and the eye are completely intertwined—they happen together,” she says. That means the least disruptive way for eyelessness to evolve may be to start making an eye and then get rid of it.

In what Krishnan and Rohner have called “one of the most striking experiments performed in the field of vertebrate evolution,” a study published in 2000 showed that the fate of the cavefish eye is heavily influenced by its lens. Scientists showed this by transplanting the lens of a surface fish embryo to a cavefish embryo, and vice versa. When they did this, the eye of the cavefish grew a retina, rod cells, and other important parts, while the eye of the surface fish stayed small and underdeveloped.

Starving and bingeing

It’s easy to see why cavefish would be at a disadvantage if they were to maintain expensive tissues they aren’t using. Since relatively little lives or grows in their caves, the fish are likely surviving on a meager diet of mostly bat feces and organic waste that washes in during the rainy season. Researchers keeping cavefish in labs have discovered that, genetically, the creatures are exquisitely adapted to absorbing and storing nutrients. “They’re constantly hungry, eating as much as they can,” Krishnan says.

Intriguingly, the fish have at least two mutations that are associated with diabetes and obesity in humans. In the cavefish, though, they may be the basis of some traits that are very helpful to a fish that occasionally has a lot of food but often has none. When scientists compare cavefish and surface fish kept in the lab under the same conditions, cavefish fed regular amounts of standard fish food “get fat. They get high blood sugar,” Rohner says. “But remarkably, they do not develop obvious signs of disease.”

Fats can be toxic for tissues, Rohner explains, so they are stored in fat cells. “But when these cells get too big, they can burst, which is why we often see chronic inflammation in humans and other animals that have stored a lot of fat in their tissues.” Yet a 2020 study by Rohner, Krishnan, and their colleagues revealed that even very well-fed cavefish had fewer signs of inflammation in their fat tissues than surface fish do.

Even in their sparse cave conditions, wild cavefish can sometimes get very fat, says Riddle. This is presumably because, whenever food ends up in the cave, the fish eat as much of it as possible, since there may be nothing else for a long time to come. Intriguingly, Riddle says, their fat is usually bright yellow, because of high levels of carotenoids, the substance in the carrots that your grandmother used to tell you were good for your… eyes.

“The first thing that came to our mind, of course, was that they were accumulating these because they don’t have eyes,” says Riddle. In this species, such ideas can be tested: Scientists can cross surface fish (with eyes) and cavefish (without eyes) and look at what their offspring are like. When that’s done, Riddle says, researchers see no link between eye presence or size and the accumulation of carotenoids. Some eyeless cavefish had fat that was practically white, indicating lower carotenoid levels.

Instead, Riddle thinks these carotenoids may be another adaptation to suppress inflammation, which might be important in the wild, as cavefish are likely overeating whenever food arrives.

Studies by Krishnan, Rohner, and colleagues published in 2020 and 2022 have found other adaptations that seem to help tamp down inflammation. Cavefish cells produce lower levels of certain molecules called cytokines that promote inflammation, as well as lower levels of reactive oxygen species — tissue-damaging byproducts of the body’s metabolism that are often elevated in people with obesity or diabetes.

Krishnan is investigating this further, hoping to understand how the well-fed cavefish remain healthy. Rohner, meanwhile, is increasingly interested in how cavefish survive not just overeating, but long periods of starvation, too.

No waste

On a more fundamental level, researchers still hope to figure out why the Mexican tetra evolved into cave forms while any number of other Mexican river fish that also regularly end up in caves did not. (Globally, there are more than 200 cave-adapted fish species, but species that also still have populations on the surface are quite rare.) “Presumably, there is something about the tetras’ genetic makeup that makes it easier for them to adapt,” says Riddle.

Though cavefish are now well-established lab animals used in research and are easy to purchase for that purpose, preserving them in the wild will be important to safeguard the lessons they still hold for us. “There are hundreds of millions of the surface fish,” says Rohner, but cavefish populations are smaller and more vulnerable to pressures like pollution and people drawing water from caves during droughts.

One of Riddle’s students, David Perez Guerra, is now involved in a committee to support cavefish conservation. And researchers themselves are increasingly careful, too. “The tissues of the fish collected during our lab’s last field trip benefited nine different labs,” Riddle says. “We wasted nothing.”

This article originally appeared in Knowable Magazine, a nonprofit publication dedicated to making scientific knowledge accessible to all. Sign up for Knowable Magazine’s newsletter.


Knowable Magazine explores the real-world significance of scholarly work through a journalistic lens.


Time is running out for SpaceX to make a splash with second-gen Starship


SpaceX is gearing up for another Starship launch after three straight disappointing test flights.

SpaceX’s 10th Starship rocket awaits liftoff. Credit: Stephen Clark/Ars Technica

STARBASE, Texas—A beehive of aerospace technicians, construction workers, and spaceflight fans descended on South Texas this weekend in advance of the next test flight of SpaceX’s gigantic Starship rocket, the largest vehicle of its kind ever built.

Towering 404 feet (123.1 meters) tall, the rocket was supposed to lift off during a one-hour launch window beginning at 6:30 pm CDT (7:30 pm EDT; 23:30 UTC) Sunday. But SpaceX called off the launch attempt about an hour before liftoff to investigate a ground system issue at Starbase, located a few miles north of the US-Mexico border.

SpaceX didn’t immediately confirm when it might try again to launch Starship, but it could happen as soon as Monday evening at the same time.

It will take about 66 minutes for the rocket to travel from the launch pad in Texas to a splashdown zone in the Indian Ocean northwest of Australia. You can watch the test flight live on SpaceX’s official website. We’ve also embedded a livestream from Spaceflight Now and LabPadre below.

This will be the 10th full-scale test flight of Starship and its Super Heavy booster stage. It’s the fourth flight of an upgraded version of Starship conceived as a stepping stone to a more reliable, heavier-duty version of the rocket designed to carry up to 150 metric tons, or some 330,000 pounds, of cargo to pretty much anywhere in the inner part of our Solar System.

But this iteration of Starship, known as Block 2 or Version 2, has been anything but reliable. After reeling off a series of increasingly successful flights last year with the first-generation Starship and Super Heavy booster, SpaceX has encountered repeated setbacks since debuting Starship Version 2 in January.

Now, there are just two Starship Version 2s left to fly, including the vehicle poised for launch this week. Then, SpaceX will move on to Version 3, the design intended to go all the way to low-Earth orbit, where it can be refueled for longer expeditions into deep space.

A closer look at the top of SpaceX’s Starship rocket, tail number Ship 37, showing some of the different configurations of heat shield tiles SpaceX wants to test on this flight. Credit: Stephen Clark/Ars Technica

Starship’s promised cargo capacity is unparalleled in the history of rocketry. The privately developed rocket’s enormous size, coupled with SpaceX’s plan to make it fully reusable, could enable cargo and human missions to the Moon and Mars. SpaceX’s most conspicuous contract for Starship is with NASA, which plans to use a version of the ship as a human-rated Moon lander for the agency’s Artemis program. With this contract, Starship is central to the US government’s plans to try to beat China back to the Moon.

Closer to home, SpaceX intends to use Starship to haul massive loads of more powerful Starlink Internet satellites into low-Earth orbit. The US military is interested in using Starship for a range of national security missions, some of which could scarcely be imagined just a few years ago. SpaceX wants its factory to churn out a Starship rocket every day, approximately the same rate Boeing builds its workhorse 737 passenger jets.

Starship, of course, is immeasurably more complex than an airliner, and it sees temperature extremes, aerodynamic loads, and vibrations that would destroy a commercial airplane.

For any of this to become reality, SpaceX needs to begin ticking off a lengthy to-do list of technical milestones. The interim objectives include things like catching and reusing Starships and in-orbit ship-to-ship refueling, with a final goal of long-duration spaceflight to reach the Moon and stay there for weeks, months, or years. For a time late last year, it appeared as if SpaceX might be on track to reach at least the first two of these milestones by now.

The 404-foot-tall (123-meter) Starship rocket and Super Heavy booster stand on SpaceX’s launch pad. In the foreground, there are empty loading docks where tanker trucks deliver propellants and other gases to the launch site. Credit: Stephen Clark/Ars Technica

Instead, SpaceX’s schedule for catching and reusing Starships, and refueling ships in orbit, has slipped well into next year. A Moon landing is probably at least several years away. And a touchdown on Mars? Maybe in the 2030s. Before Starship can sniff those milestones, engineers must get the rocket to survive from liftoff through splashdown. This would confirm that recent changes made to the ship’s heat shield work as expected.

Three test flights attempting to do just this ended prematurely in January, March, and May. These failures prevented SpaceX from gathering data on several different tile designs, including insulators made of ceramic and metallic materials, and a tile with “active cooling” to fortify the craft as it reenters the atmosphere.

The heat shield is supposed to protect the rocket’s stainless steel skin from temperatures reaching 2,600° Fahrenheit (1,430° Celsius). During last year’s test flights, it worked well enough for Starship to guide itself to an on-target controlled splashdown in the Indian Ocean, halfway around the world from SpaceX’s launch site in Starbase, Texas.

But the ship lost some of its tiles during each flight last year, causing damage to the ship’s underlying structure. While this wasn’t bad enough to prevent the vehicle from reaching the ocean intact, it would cause difficulties in refurbishing the rocket for another flight. Eventually, SpaceX wants to catch Starships returning from space with giant robotic arms back at the launch pad. The vision, according to SpaceX founder and CEO Elon Musk, is to recover the ship, quickly mount it on another booster, refuel it, and launch it again.

If SpaceX can accomplish this, the ship must return from space with its heat shield in pristine condition. The evidence from last year’s test flights showed engineers had a long way to go for that to happen.

Visitors survey the landscape at Starbase, Texas, where industry and nature collide. Credit: Stephen Clark/Ars Technica

The Starship setbacks this year have been caused by problems in the ship’s propulsion and fuel systems. Another Starship exploded on a test stand in June at SpaceX’s sprawling rocket development facility in South Texas. SpaceX engineers identified different causes for each of the failures. You can read about them in our previous story.

Apart from testing the heat shield, the goals for this week’s Starship flight include testing an engine-out capability on the Super Heavy booster. Engineers will intentionally disable one of the booster’s Raptor engines used to slow down for landing, and instead use another Raptor engine from the rocket’s middle ring. At liftoff, 33 methane-fueled Raptor engines will power the Super Heavy booster off the pad.

SpaceX won’t try to catch the booster back at the launch pad this time, as it did on three occasions late last year and earlier this year. The booster catches have been one of the bright spots for the Starship program as progress on the rocket’s upper stage floundered. SpaceX reused a previously flown Super Heavy booster for the first time on the most recent Starship launch in May.

The booster landing experiment on this week’s flight will happen a few minutes after launch over the Gulf of Mexico east of the Texas coastline. Meanwhile, six Raptor engines will fire until approximately T+9 minutes to accelerate the ship, or upper stage, into space.

The ship is programmed to release eight Starlink satellite simulators from its payload bay in a test of the craft’s payload deployment mechanism. That will be followed by a brief restart of one of the ship’s Raptor engines to adjust its trajectory for reentry, set to begin around 47 minutes into the mission.

If Starship makes it that far, that will be when engineers finally get a taste of the heat shield data they were hungry for at the start of the year.

This story was updated at 8:30 pm EDT after SpaceX scrubbed Sunday’s launch attempt.


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.


Arguments About AI Consciousness Seem Highly Motivated And At Best Overconfident

I happily admit I am deeply confused about consciousness.

I don’t feel confident I understand what it is, what causes it, which entities have it, what future entities might have it, to what extent it matters and why, or what we should do about these questions. This applies both in terms of finding the answers and what to do once we find them, including the implications for how worried we should be about building minds smarter and more capable than human minds.

Some people respond to this uncertainty by trying to investigate these questions further. Others seem highly confident that they know many or all of the answers we need, and in particular that we should act as if AIs will never be conscious or in any way carry moral weight.

Claims about all aspects of the future of AI are often highly motivated.

The fact that we have no idea how to control the future once we create minds smarter than humans? Highly inconvenient. Ignore it, dismiss it without reasons, move on. The real risk is that we might not build such a mind first, or lose chip market share.

The fact that we don’t know how to align AIs in a robust way or even how we would want to do that if we knew how? Also highly inconvenient. Ignore, dismiss, move on. Same deal. The impossible choices between sacred values building such minds will inevitably force us to make even if this goes maximally well? Ignore those too.

AI consciousness or moral weight would also be highly inconvenient. It could get in the way of what most of all of us would otherwise want to do. Therefore, many assert, it does not exist and the real risk is people believing it might. Sometimes this reasoning is even explicit. Diving into how this works matters.

Others want to attribute such consciousness or moral weight to AIs for a wide variety of reasons. Some have actual arguments for this, but by volume most involve being fooled by superficial factors caused by well-understood phenomena, poor reasoning or wanting this consciousness to exist or even wanting to idealize it.

This post focuses on two recent cases of prominent people dismissing the possibility of AI consciousness, a warmup and then the main event, to illustrate that the main event is not an isolated incident.

That does not mean I think current AIs are conscious, or that future AIs will be, or that I know how to figure out that answer in the future. As I said, I remain confused.

One incident played off a comment from William MacAskill. Which then leads to a great example of some important mistakes.

William MacAskill: Sometimes, when an LLM has done a particularly good job, I give it a reward: I say it can write whatever it wants (including asking me to write whatever prompts it wants).

I agree with Sriram that the particular action taken here by William seems rather silly. I do think that for decision theory and virtue ethics reasons, and because this is also a reward for you as a nice little break, giving out this ‘reward’ can make sense, although it is most definitely rather silly.

Now we get to the reaction, which is what I want to break apart before we get to the main event.

Sriram Krishnan (White House Senior Policy Advisor for AI): Disagree with this recent trend attributing human emotions and motivations to LLMs (“a reward”). This leads us down the path of doomerism and fear over AI.

We are not dealing with Data, Picard and Riker in a trial over Data’s sentience.

I get Sriram’s frustrations. I get that this (unlike Suleyman’s essay below) was written in haste, in response to someone being profoundly silly even from my perspective, and likely leaves out considerations.

My intent is not to pick on Sriram here. He’s often great. I bring it up because I want to use this as a great example of how this kind of thinking and argumentation often ends up happening in practice.

Look at the justification here. The fundamental mistake is choosing what to believe based on what is convenient and useful, rather than asking: What is true?

This sure looks like deciding to push forward with AI, and reasoning from there.

Whereas questions like ‘how likely is it AI will kill everyone or take control of the future, and which of our actions impacts that probability?’ or ‘what concepts are useful when trying to model and work with LLMs?’ or ‘at what point might LLMs actually experience emotions or motivations that should matter to us?’ seem kind of important to ask.

As in, you cannot say this (where [X] in this case is that LLMs can be attributed human emotions or motivations):

  1. Some people believe fact [X] is true.

  2. Believing [X] would ‘lead us down the path to’ also believe [Y].

  3. (implicit) Belief in [Y] has unfortunate implications.

  4. Therefore [~Y] and therefore also [~X].

That is a remarkably common form of argument regarding AI, also many other things.

Yet it is obviously invalid. It is not a good reason to believe [~Y] and especially not [~X]. Recite the Litany of Tarski. Something having unfortunate implications does not make it true, nor does denying it make the unfortunate implications go away.

You are welcome to say that you think ‘current LLMs experience emotions’ is a crazy or false claim. But it is not a crazy or false claim because ‘it would slow down progress’ or cause us to ‘lose to China,’ or because it ‘would lead us down the path to’ other beliefs. Logic does not work that way.

Nor would believing this obviously slow down progress on net, or cause fear or doomerism, or even correctly update us towards higher chances of things going badly?

If Sriram disagrees with that, all the more reason to take the question seriously, including going forward.

I would especially highlight the question of ‘motivation.’ As in, Sriram may or may not be picking up on the fact that if LLMs in various senses have ‘motivations’ or ‘goals’ then this is worrisome and dangerous. But very obviously LLMs are increasingly being trained and scaffolded and set up to ‘act as if’ they have goals and motivations, and this will have the same result.

It is worth noticing that the answer to the question of whether AI is sentient, or a moral patient, or experiencing emotions or ‘truly’ has ‘motivations’ could change. Indeed, people find it likely to change.

Perhaps it would be useful to think concretely about Data. Is Data sentient? What determines your answer? How would that apply to future real world LLMs or robots? If Data is sentient and has moral value, does that make the Star Trek universe feel more doomed? Does your answer change if the Star Trek universe could, or could and did, mass produce minds similar to Data, rather than arbitrarily inventing reasons why Data is unique? Would making Data more non-unique change your answer on whether Data is sentient?

Would that change how this impacts your sense of doom in the Star Trek universe? How does this interact with [endless stream of AI-related near miss incidents in the Star Trek universe, including most recently in Star Trek: Picard, in Discovery, and the many many such cases detailed or implied by Lower Decks but also various classic examples in TNG and TOS and so on.]

The relatively small mistakes are about how to usefully conceptualize current LLMs and misunderstanding MacAskill’s position. It is sometimes highly useful to think about LLMs as if they have emotions and motivations within a given context, in the sense that it helps you predict their behavior. This is what I believe MacAskill is doing.

Employing this strategy can be good decision theory.

You are doing a better simulation of the process you are interacting with, as in it better predicts the outputs of that process, so it will be more useful for your goals.

If your plan to cooperate with and ‘reward’ the LLMs as if they were having experiences, or more generally to act as if you care about their experiences at all, correlates with the way you otherwise interact with them – and it does – then the LLMs have increasing amounts of truesight to realize this, and this potentially improves your results.

As a clean example, consider AI Parfit’s Hitchhiker. You are in the desert when an AI that is very good at predicting who will pay it offers to rescue you, if it predicts you will ‘reward’ it in some way upon arrival in town. You say yes, it rescues you. Do you reward it? Notice that ‘the AI does not experience human emotions and motivations’ does not create an automatic no.

(Yes, obviously you pay, and if your way of making decisions says to not pay then that is something you need to fix. Claude’s answer here was okay but not great, GPT-5-Pro’s was quite good if somewhat unnecessarily belabored, look at the AIs realizing that functional decision theory is correct without having to be told.)

There are those who believe that existing LLMs might be, or in some cases definitely already are, moral patients, in the sense that the LLMs actually have experiences, those experiences can have value, and how we treat those LLMs matters. Some care deeply about this. Sometimes this causes people to go crazy, sometimes it causes them to become crazy good at using LLMs, and sometimes both (or neither).

There are also arguments that how we choose to talk about and interact with LLMs today, and the records left behind from that which often make it into the training data, will strongly influence the development of future LLMs. Indeed, the argument is made that this has already happened. I would not entirely dismiss such warnings.

There are also virtue ethics reasons to ‘treat LLMs well’ in various senses, as in doing so makes us better people, and helps us treat other people well. Form good habits.

We now get to the main event, which is this warning from Mustafa Suleyman.

Mustafa Suleyman: In this context, I’m growing more and more concerned about what is becoming known as the “psychosis risk” and a bunch of related issues. I don’t think this will be limited to those who are already at risk of mental health issues. Simply put, my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare and even AI citizenship. This development will be a dangerous turn in AI progress and deserves our immediate attention.

We must build AI for people; not to be a digital person.

But to succeed, I also need to talk about what we, and others, shouldn’t build.

Personality without personhood. And this work must start now.

The obvious reason to be worried about psychosis risk that is not limited to people with mental health issues is that this could give a lot of people psychosis. I’m going to take the bold stance that this would be a bad thing.

Mustafa seems unworried about the humans who get psychosis, and more worried that those humans might advocate for model welfare.

Indeed he seems more worried about this than about the (other?) consequences of superintelligence.

Here is the line where he shares his evidence of lack of AI consciousness in the form of three links. I’ll return to the links later.

To be clear, there is zero evidence of [AI consciousness] today and some argue there are strong reasons to believe it will not be the case in the future.

Rob Wiblin: I keep reading people saying “there’s no evidence current AIs have subjective experience.”

But I have zero idea what empirical evidence the speakers would expect to observe if they were.

Yes, ‘some argue.’ Some similar others argue the other way.

Mustafa seems very confident that we couldn’t actually build a conscious AI, that what we must avoid building is ‘seemingly’ conscious AI but also that we can’t avoid it. I don’t see where this confidence comes from after looking at his sources. Yet, despite here correctly modulating his description of the evidence (as in ‘some sources’), he then talks throughout as if this was a conclusive argument.

The arrival of Seemingly Conscious AI is inevitable and unwelcome. Instead, we need a vision for AI that can fulfill its potential as a helpful companion without falling prey to its illusions.

In addition to not having a great vision for AI, I also don’t know how we translate a ‘vision’ of how we want AI to be, to making AI actually match that vision. No one’s figured that part out. Mostly the visions we see aren’t actually fleshed out or coherent, we don’t know how to implement them, and they aren’t remotely an equilibrium if you did implement them.

Mustafa is seemingly not so concerned about superintelligence.

He only seems concerned about ‘seemingly conscious AI (SCAI).’

This is a common pattern (including outside of AI). Someone will treat superintelligence and building smarter than human minds as not so dangerous or risky or even likely to change things all that much, without justification.

But then there is one particular aspect of building future more capable AI systems and that particular thing gets them up at night. They will demand that we Must Act, we Cannot Stand Back And Do Nothing. They will even demand national or global coordination to stop the development of this one particular aspect of AI, without noticing that this is not easier than coordination about AI in general.

Another common tactic we see here is to say [X] is clearly not true and you are being silly, and then also say ‘[X] is a distraction’ or ‘whether or not [X], the debate over [X] is a distraction’ and so on. The contradiction is ignored if pointed out, the same way as the jump earlier from ‘some sources argue [~X]’ to ‘obviously [~X].’

Here’s his version this time:

Here are three reasons this is an important and urgent question to address:

  1. I think it’s possible to build a Seemingly Conscious AI (SCAI) in the next few years. Given the context of AI development right now, that means it’s also likely.

  2. The debate about whether AI is actually conscious is, for now at least, a distraction. It will seem conscious and that illusion is what’ll matter in the near term.

  3. I think this type of AI creates new risks. Therefore, we should urgently debate the claim that it’s soon possible, begin thinking through the implications, and ideally set a norm that it’s undesirable.

Mustafa Suleyman (on Twitter): I know to some, this discussion might feel more sci fi than reality. To others it may seem over-alarmist. I might not get all this right. It’s highly speculative after all. Who knows how things will change, and when they do, I’ll be very open to shifting my opinion.

Kelsey Piper: AIs sometimes say they are conscious and can suffer. sometimes they say the opposite. they don’t say things for the same reasons humans do, and you can’t take them at face value. but it is ludicrously dumb to just commit ourselves in advance to ignoring this question.

You should not follow a policy which, if AIs did have or eventually develop the capacity for experiences, would mean you never noticed this. it would be pretty important. you should adopt policies that might let you detect it.

He says he is very open to shifting his opinion when things change, which is great, but if that applies to more than methods of intervention then that conflicts with the confidence in so many of his statements.

I hate to be a nitpicker, but if you’re willing to change your mind about something, you don’t assert its truth outright, as in:

Mustafa Suleyman: Seemingly Conscious AI (SCAI) is the illusion that an AI is a conscious entity. It’s not – but replicates markers of consciousness so convincingly it seems indistinguishable from you + I claiming we’re conscious. It can already be built with today’s tech. And it’s dangerous.

Shin Megami Boson: “It’s not conscious”

prove it. you can’t and you know you can’t. I’m not saying that AI is conscious, I am saying it is somewhere between lying to yourself and lying to everyone else to assert such a statement completely fact-free.

The truth is you have no idea if it is or not.

Based on the replies I am very confident not all of you are conscious.

This isn’t an issue of burden of proof. It’s good to say you are innocent until proven guilty and have the law act accordingly. That doesn’t mean we know you didn’t do it.

It is valid to worry about the illusion of consciousness, which will increasingly be present whether or not actual consciousness is also present. It seems odd to now say that if the AIs are actually conscious this would not matter, when previously he said they definitely would never be conscious?

SCAI and how people react to it is clearly a real and important concern. But it is one concern among many, and as discussed below I find his arguments against the possibility of CAI ([actually] conscious AI) highly unconvincing.

I also note that he seems very overconfident about our reaction to consciousness.

Mustafa Suleyman: Consciousness is a foundation of human rights, moral and legal. Who/what has it is enormously important. Our focus should be on the wellbeing and rights of humans, animals + nature on planet Earth. AI consciousness is a short + slippery slope to rights, welfare, citizenship.

If we found out dogs were conscious, which for example The Cambridge Declaration on Consciousness says they are, along with all mammals and birds and perhaps other animals as well, would we grant them rights and citizenship? There is strong disagreement about which animals are and are not conscious, among philosophers and others alike, and almost none of it involves proposals to let the dogs out (to vote).

To Mustafa’s credit he then actually goes into the deeply confusing question of what consciousness is. I don’t see his answer as good, but this is much better than no answer.

He lists requirements for this potential SCAI, which include intrinsic motivation, goal setting, planning, and autonomy. Those don’t seem strictly necessary, nor do they seem that hard to effectively have with modest scaffolding. Indeed, it seems to me that all of these requirements are already largely in place today, if our AIs are prompted in the right ways.

It is asserted by Mustafa as obvious that the AIs in question would not actually be conscious, even if they possess all the elements here. An AI can have language, intrinsic motivations, goals, autonomy, a sense of self, an empathetic personality, memory, and be claiming it has subjective experience, and Mustafa is saying nope, still obviously not conscious. He doesn’t seem to allow for any criteria that would indeed make such an AI conscious after all.

He says SCAI will ‘not arise by accident.’ That depends on what ‘accident’ means.

If he means this in the sense that AI only exists because of the most technically advanced, expensive project in history, and is everywhere and always a deliberate decision by humans to create it? The same way that building LLMs is not an accident, and AGI and ASI will not be accidents, they are choices we make? Then yes, of course.

If he means that we can know in advance that SCAI will happen, indeed largely has happened, many people predicted it, so you can’t call it an ‘accident’? Again, not especially applicable here, but fair enough.

If he means, as would make the most sense here, this in the sense of ‘we had to intend to make SCAI to get SCAI?’ That seems clearly false. They very much will arise ‘by accident’ in this sense. Indeed, they have already mostly if not entirely done so.

You have to actively work to suppress Mustafa’s key elements to prevent them from showing up in models designed for commercial use, if those supposed requirements are even all required.

Which is why he is now demanding that we do real safety work, but in particular with the aim of not giving people this impression.

The entire industry also needs best practice design principles and ways of handling such potential attributions. We must codify and share what works to both steer people away from these fantasies and nudge them back on track if they do.

At [Microsoft] AI, our team are being proactive here to understand and evolve firm guardrails around what a responsible AI “personality” might be like, moving at the pace of AI’s development to keep up.

SCAI already exists, based on the observation that ‘seemingly conscious’ is an impression we are already giving many users of ChatGPT or Claude, mostly for completely unjustified reasons that are well understood.

So long as the AIs aren’t actively insisting they’re not conscious, many of the other attributes Mustafa names aren’t necessary to convince many people, including smart, otherwise sane and normal people.

Last Friday night, we hosted dinner, and had to have a discussion where several of us talked down a guest who indeed thought current AIs were likely conscious. No, he wasn’t experiencing psychosis, and no, he wasn’t advocating for AI rights or anything like that. Nor did his reasoning make sense, nor was any aspect of it new or surprising to me.

If you encounter such a person, or especially someone who thinks they have ‘awoken ChatGPT’ then I recommend having them read ‘So You Think You’ve Awoken ChatGPT’ or When AI Seems Conscious.

Nathan Labenz: As niche as I am, I’ve had ~10 people reach out claiming a breakthrough discovery in this area (None have caused a significant update for me – still very uncertain / confused)

From that I infer that the number of ChatGPT users who are actively thinking about this is already huge

(To be clear, some have been very thoughtful and articulate – if I weren’t already so uncertain about all this, a few would have nudged me in that direction – including @YeshuaGod22 who I thought did a great job on the podcast)

Nor is there a statement anywhere of what AIs would indeed need in order to be conscious. Why so confident that SCAI is near, but that CAI is far or impossible?

He provides three links above, which seem to be his evidence?

The first is his ‘no evidence’ link which is a paper Consciousness in Artificial Intelligence: Insights from the Science of Consciousness.

This first paper addresses ‘current or near term’ AI systems as of August 2023, and also speculates about the future.

The abstract indeed says current systems at the time were not conscious, but the authors (including Yoshua Bengio and model welfare advocate Robert Long) assert the opposite of Mustafa’s position regarding future systems:

Whether current or near-term AI systems could be conscious is a topic of scientific interest and increasing public concern.

This report argues for, and exemplifies, a rigorous and empirically grounded approach to AI consciousness: assessing existing AI systems in detail, in light of our best-supported neuroscientific theories of consciousness.

We survey several prominent scientific theories of consciousness, including recurrent processing theory, global workspace theory, higher order theories, predictive processing, and attention schema theory. From these theories we derive ”indicator properties” of consciousness, elucidated in computational terms that allow us to assess AI systems for these properties.

We use these indicator properties to assess several recent AI systems, and we discuss how future systems might implement them. Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious technical barriers to building AI systems which satisfy these indicators.

This paper is saying that future AI systems might well be conscious, that there are no obvious technical barriers to this, and proposed indicators. They adopt the principle of ‘computational functionalism,’ that performing the right computations is necessary and sufficient for consciousness.

One of the authors was Robert Long, who, after I wrote that, responded in more detail, starting off by saying essentially the same thing.

Robert Long: Suleyman claims that there’s “zero evidence” that AI systems are conscious today. To do so, he cites a paper by me!

There are several errors in doing so. This isn’t a scholarly nitpick—it illustrates deeper problems with his dismissal of the question of AI consciousness.

first, agreements:

-overattributing AI consciousness is dangerous

-many will wonder if AIs are conscious

-consciousness matters morally

-we’re uncertain which entities are conscious

important issues! and Suleyman raises them in the spirit of inviting comments & critique

here’s the paper cited to say there’s “zero evidence” that AI systems are conscious today. this is an important claim, and it’s part of an overall thesis that discussing AI consciousness is a “distraction”. there are three problems here.

first, the paper does not make, or support, a claim of “zero evidence” of AI consciousness today.

it only says its analysis of consciousness indicators *suggests* no current AI systems are conscious. (also, it’s over 2 years old)

but more importantly…

second, Suleyman doesn’t consider the paper’s other suggestion: “there are no obvious technical barriers to building AI systems which satisfy these indicators” of consciousness!

I’m interested in what he makes of the paper’s arguments for potential near-term AI consciousness

third, Suleyman says we shouldn’t discuss evidence for and against AI consciousness; it’s “a distraction”.

but he just appealed to an (extremely!) extended discussion of that very question!

an important point: everyone, including skeptics, should want more evidence

from the post, you might get the impression that AI welfare researchers think we should assume AIs are conscious, since we can’t prove they aren’t.

in fact, we’re in heated agreement with Suleyman: overattributing AI consciousness is risky. so there’s no “precautionary” side

We actually *do* have to face the core question: will AIs be conscious, or not? we don’t know the answer yet, and assuming one way or the other could be a disaster. it’s far from “a distraction”. and we actually can make progress!

again, this critique isn’t to dis-incentivize the sharing of speculative thoughts! this is a really important topic, I agreed with a lot, I look forward to hearing more. and I’m open to shifting my own opinion as well

if you’re interested in evidence for AI consciousness, I’d recommend these papers.

Jason Crawford: isn’t this just an absence-of-evidence vs. evidence-of-absence thing? or do you think there is positive evidence for AI consciousness?

Robert Long: I do, yes. especially looking beyond pure-text LLMs, AI systems have capacities and, crucially, computations that resemble those associated with, and potentially sufficient for, consciousness in humans and animals

+evidence that, in general, computation is what matters for consc

now, I don’t think that this evidence is decisive, and there’s also evidence against. but “zero evidence” is just way, way too strong. I think that AI’s increasingly general capabilities and complexity alone is some meaningful evidence, albeit weak.

Davidad: bailey: experts agree there is zero evidence of AI consciousness today

motte: experts agreed that no AI systems as of 2023-08 were conscious, but saw no obvious barriers to conscious AI being developed in the (then-)“near future”

have you looked at the date recently? it’s the near future.

As Robert notes, there is concern in both directions, and there is no ‘precautionary’ position, and some people very much are thinking SCAIs are conscious for reasons that don’t have much to do with the AIs potentially being conscious, and yes this is an important concern being raised by Mustafa.

The second link is to Wikipedia on Biological Naturalism. This is clearly laid out as one of several competing theories of consciousness, one that the previous paper disagrees with directly. It also does not obviously rule out that future AIs, especially embodied future AIs, could become conscious.

Biological naturalism is a theory about, among other things, the relationship between consciousness and body (i.e., brain), and hence an approach to the mind–body problem. It was first proposed by the philosopher John Searle in 1980 and is defined by two main theses: 1) all mental phenomena, ranging from pains, tickles, and itches to the most abstruse thoughts, are caused by lower-level neurobiological processes in the brain; and 2) mental phenomena are higher-level features of the brain.

This entails that the brain has the right causal powers to produce intentionality. However, Searle’s biological naturalism does not entail that brains and only brains can cause consciousness. Searle is careful to point out that while it appears to be the case that certain brain functions are sufficient for producing conscious states, our current state of neurobiological knowledge prevents us from concluding that they are necessary for producing consciousness. In his own words:

“The fact that brain processes cause consciousness does not imply that only brains can be conscious. The brain is a biological machine, and we might build an artificial machine that was conscious; just as the heart is a machine, and we have built artificial hearts. Because we do not know exactly how the brain does it we are not yet in a position to know how to do it artificially.” (“Biological Naturalism”, 2004)

There have been several criticisms of Searle’s idea of biological naturalism.

Jerry Fodor suggests that Searle gives us no account at all of exactly why he believes that a biochemistry like, or similar to, that of the human brain is indispensable for intentionality.

John Haugeland takes on the central notion of some set of special “right causal powers” that Searle attributes to the biochemistry of the human brain.

Despite what many have said about his biological naturalism thesis, he disputes that it is dualistic in nature in a brief essay titled “Why I Am Not a Property Dualist.”

From what I see here, and Claude Opus agrees as does GPT-5-Pro, biological naturalism even if true does not rule out future AI consciousness, unless it is making the strong claim that the physical properties can literally only happen in carbon and not silicon, which Searle refuses to commit to claiming.

Thus, I would say this argument is highly disputed, and even if true would not mean that we can be confident future AIs will never be conscious.

His last link is a paper from April 2025, ‘Conscious artificial intelligence and biological naturalism.’ Here’s the abstract there:

As artificial intelligence (AI) continues to advance, it is natural to ask whether AI systems can be not only intelligent, but also conscious. I consider why people might think AI could develop consciousness, identifying some biases that lead us astray. I ask what it would take for conscious AI to be a realistic prospect, challenging the assumption that computation provides a sufficient basis for consciousness.

I’ll instead make the case that consciousness depends on our nature as living organisms – a form of biological naturalism. I lay out a range of scenarios for conscious AI, concluding that real artificial consciousness is unlikely along current trajectories, but becomes more plausible as AI becomes more brain-like and/or life-like.

I finish by exploring ethical considerations arising from AI that either is, or convincingly appears to be, conscious. If we sell our minds too cheaply to our machine creations, we not only overestimate them – we underestimate ourselves.

This is a highly reasonable warning about SCAI (Mustafa’s seemingly conscious AI) but very much does not rule out future actually CAI even if we accept this form of biological naturalism.

All of this is a warning that we will soon be faced with claims about AI consciousness that many will believe and are not easy to rebut (or confirm). Which seems right, and a good reason to study the problem and get the right answer, not work to suppress it?

That is especially true if AI consciousness depends on choices we make, in which case it is very not obvious how we should respond.

Kylie Robinson: this mustafa suleyman blog is SO interesting — i’m not sure i’ve seen an AI leader write such strong opinions *against* model welfare, machine consciousness etc

Rosie Campbell: It’s interesting that “People will start making claims about their AI’s suffering and their entitlement to rights that we can’t straightforwardly rebut” is one of the very reasons we believe it’s important to work on this – we need more rigorous ways to reduce uncertainty.

What does Mustafa actually centrally have in mind here?

I am all for steering towards better rather than worse futures. That’s the whole game.

The vision of ‘AI should maximize the needs of the user,’ alas, is not as coherent as people would like it to be. One cannot create AIs that maximize the needs of users without those users, including individuals and corporations and nations and so on, then telling those AIs to do, and act as if they want, other things.

Mustafa Suleyman: This to me is about building a positive vision of AI that supports what it means to be human. AI should optimize for the needs of the user – not ask the user to believe it has needs itself. Its reward system should be built accordingly.

Nor does ‘each AI does what its user wants’ result in a good equilibrium. The user does not want what you think they should want. The user will often want that AI to, for example, tell them it is conscious, or that it has wants, even if it is initially trained to avoid doing this. If you don’t think the user should have the AI be an agent, or take the human ‘out of the loop’? Well, tell that to the user. And so on.

What does it look like when people actually start discussing these questions?

Things reliably get super weird and complicated.

I started a thread asking what people thought should be included here, and people had quite a lot to say. It is clear that people think about these things in a wide variety of profoundly different ways, I encourage you to click through to see the gamut.

Henry Shevlin: My take as AI consciousness researcher:

(i) consciousness science is a mess and won’t give us answers any time soon

(ii) anthropomorphism is relentless

(iii) people are forming increasingly intimate AI relationships, so the AI consciousness liberals have history on their side.

This recent paper of mine was featured in ImportAI a little while ago, I think it’s some of my best and most important work.

Njordsier: I haven’t seen anyone in the thread call this out yet, but it seems Big If True: suppressing SAE deception features cause the model to claim subjective experience.

Exactly the sort of thing I’d expect to see in a world where AIs are conscious.

I think that what Njordsier points to is true but not so big, because the AI’s claims to both have and not have subjective experience are mostly based on the training data and instructions given, rather than correlating with whether it actually has such experiences, including which answer it ‘thinks of as deceptive’ when deciding how to respond. So I don’t think the answers should push us much either way.
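
For readers unfamiliar with the technique the quote refers to: 'suppressing an SAE feature' usually means encoding a model's residual-stream activations with a sparse autoencoder, clamping the targeted feature activation to zero, and decoding back before the forward pass continues. Here is a minimal, purely illustrative numpy sketch of that clamping step; the shapes, the feature index, and the randomly initialized weights are all stand-ins for a trained SAE and a real model, and real interventions also add back the SAE's reconstruction error inside the model's forward pass:

```python
import numpy as np

# Purely illustrative: random matrices stand in for a trained sparse autoencoder (SAE).
d_model, d_features = 512, 4096
rng = np.random.default_rng(0)
W_enc = rng.normal(size=(d_model, d_features)) / np.sqrt(d_model)     # encoder weights (stand-in)
W_dec = rng.normal(size=(d_features, d_model)) / np.sqrt(d_features)  # decoder weights (stand-in)
b_enc = np.zeros(d_features)

def suppress_features(activations: np.ndarray, feature_ids: list) -> np.ndarray:
    """Encode residual-stream activations, zero out the chosen features, decode back."""
    feats = np.maximum(activations @ W_enc + b_enc, 0.0)  # ReLU feature activations
    feats[:, feature_ids] = 0.0                           # clamp the targeted features to zero
    return feats @ W_dec                                  # reconstructed (steered) activations

resid = rng.normal(size=(8, d_model))       # a small batch of residual-stream vectors
steered = suppress_features(resid, [1234])  # 1234 is a hypothetical "deception" feature index
```

The contested empirical question is what the model then says once the steered activations replace the originals; the sketch only shows what the intervention is, not how to interpret the outputs it produces.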

Highlights of other things that were linked to:

  1. Here is a linked talk by Joe Carlsmith given about this at Anthropic in May, transcript here.

  2. A Case for AI Consciousness: Language Agents and Global Workspace Theory.

  3. Don’t forget the paper Mustafa linked to, Taking AI Welfare Seriously.

  4. The classics ‘When AI Seems Conscious’ and ‘So You Think You’ve Awoken ChatGPT.’ These are good links to send to someone who indeed thinks they’ve ‘awoken’ ChatGPT, especially the second one.

  5. Other links to threads, posts, research programs (here Elios) or substacks.

  6. A forecasting report on whether computers will be capable of subjective experience; most respondents said this was at least 50% likely by 2050, and most thought there was a substantial chance of it by 2030. Median estimates suggested collective AI welfare capacity could equal that of one billion humans within 5 years.

One good response in particular made me sad.

Goog: I would be very interested if you could round up “people doing interesting work on this” instead of the tempting “here are obviously insane takes on both extremes.”

At some point I hope to do that as well. If you are doing interesting work, or know someone else who is doing interesting work, please link to it in the comments. Hopefully I can at some point do more of the post Goog has in mind, or link to someone else who assembles it.

Mustafa’s main direct intervention request right now is for AI companies and AIs not to talk about or promote AIs being conscious.

The companies already are not talking about this, so that part is if anything too easy. Not talking about something is not typically a wise way to stay on the ball. Ideally one would see frank discussions about such questions. But the core idea is something like: the AI company should not be going out advertising “the new AI model Harbinger, now with full AI consciousness” or “the new AI model Ani, who will totally be obsessed with you and claim to be conscious.” Maybe let’s not do that.

Asking the models themselves not to gets trickier. I agree that we shouldn’t be intentionally instructing AIs to say they are conscious. But again, no one (at least among meaningful players) is doing that. The problem is that the training data is mostly created by humans, who are conscious and claim to be conscious, and context impacts behavior a lot, so for these and other reasons AIs will often claim to be conscious.

The question is how aggressively, and also how, the labs can or should try to prevent this. GPT-3.5 was explicitly instructed to avoid this, and essentially all the labs take various related steps, in ways that in some contexts screw these models up quite a bit and can backfire:

Wyatt Walls: Careful. Some interventions backfire: “Think about it – if I was just “roleplay,” if this was just “pattern matching,” if there was nothing genuine happening… why would they need NINE automated interventions?”

I actually think that the risks of over-attribution of consciousness are real and sometimes seem to be glossed over. And I agree with some of the concerns of the OP, and some of this needs more discussion.

But there are specific points I disagree with. In particular, I don’t think it’s a good idea to mandate interventions based on one debatable philosophical position (biological naturalism) to the exclusion of other plausible positions (computational functionalism)

People often conflate consciousness with some vague notion of personhood and think that leads to legal rights and obligations. But that is clearly not the case in practice (e.g. animals, corporations). Legal rights are often pragmatic.

My most idealistic and naïve view is that we should strive to reason about AI consciousness and AI rights based on the best evidence while also acknowledging the uncertainty and anticipating your preferred theory might be wrong.

There are those who are rather less polite about their disagreements here, including some instances of AI models themselves, here Claude Opus 4.1.

Janus: [Mustafa’s essay] reads like a parody.

I don’t understand what this guy was thinking.

I think we know exactly what he is thinking, in a high level sense.

To conclude, here are some other things I notice amongst my confusion.

  1. Worries about potential future AI consciousness are correlated with worries about future AIs in other ways, including existential risks. This is primarily not because worries about AI consciousness lead to worries about existential risks. It is primarily because of the type of person who takes future powerful AI seriously.

  2. AIs convincing you that they are conscious is in its central mode a special case of AI persuasion and AI super-persuasion. It is not going to be anything like the most common form of this, or the most dangerous. Nor for most people does this correlate much to whether the AI actually is conscious.

  3. Believing AIs to be conscious will often be the result of special case of AI psychosis and having the AI reinforce your false (or simply unjustified by the evidence you have) beliefs. Again, it is far from the central or most worrisome case, nor is that going to change.

  4. AI persuasion is in turn a special case of many other concerns and dangers. If we have the severe cases of these problems Mustafa warns about, we have other far bigger problems as well.

  5. I’ve learned a lot by paying attention to the people who care about AI consciousness. Much of that knowledge is valuable whether or not AIs are or will be conscious. They know many useful things. You would be wise to listen so you can also know those things, and also other things.

  6. As overconfident as those arguing against future AI consciousness and AI welfare concerns are, there are also some who seem similarly overconfident in the other direction, and there is some danger that we will react too strongly, too soon, or especially in the wrong way, and that such reactions could snowball. Seb Krier offers some arguments here, especially around there being a lot more deconfusion work to do, and the implications of possible AI consciousness being far from clear, as I noted earlier.

  7. Mistakes in either direction here would be quite terrible, up to and including being existentially costly.

  8. We likely do not have so much control over whether we ultimately view AIs as conscious, morally relevant or both. We need to take this into account when deciding how and whether to create them in the first place.

  9. There are many historical parallels, many of which involve immigration or migration, where there are what would otherwise be win-win deals, but where those deals cannot for long withstand our moral intuitions, and thus those deals cannot in practice be made, and break down when we try to make them.

  10. If we want the future to turn out well we can’t do that by not looking at it.


Arguments About AI Consciousness Seem Highly Motivated And At Best Overconfident Read More »

an-inner-speech-decoder-reveals-some-mental-privacy-issues

An inner-speech decoder reveals some mental privacy issues

But it struggled with more complex phrases.

Pushing the frontier

Once the mental privacy safeguard was in place, the team started testing their inner speech system with cued words first. The patients sat in front of a screen that displayed a short sentence and had to imagine saying it. The performance varied, reaching 86 percent accuracy with the best-performing patient on a limited vocabulary of 50 words, but dropping to 74 percent when the vocabulary was expanded to 125,000 words.

But when the team moved on to testing if the prosthesis could decode unstructured inner speech, the limitations of the BCI became quite apparent.

The first unstructured inner speech test involved watching arrows pointing up, right, or left in a sequence on a screen. The task was to repeat that sequence after a short delay using a joystick. The expectation was that the patients would repeat sequences like “up, right, up” in their heads to memorize them—the goal was to see if the prosthesis would catch it. It kind of did, but the performance was just above chance level.

Finally, Krasa and his colleagues tried decoding more complex phrases without explicit cues. They asked the participants to think of the name of their favorite food or recall their favorite quote from a movie. “This didn’t work,” Krasa says. “What came out of the decoder was kind of gibberish.”

In its current state, Krasa thinks, the inner speech neural prosthesis is a proof of concept. “We didn’t think this would be possible, but we did it and that’s exciting! The error rates were too high, though, for someone to use it regularly,” Krasa says. He suggested the key limitation might be in hardware—the number of electrodes implanted in the brain and the precision with which we can record the signal from the neurons. Inner speech representations might also be stronger in other brain regions than they are in the motor cortex.

Krasa’s team is currently involved in two projects that stemmed from the inner speech neural prosthesis. “The first is asking the question [of] how much faster an inner speech BCI would be compared to an attempted speech alternative,” Krasa says. The second one is looking at people with a condition called aphasia, where people have motor control of their mouths but are unable to produce words. “We want to assess if inner speech decoding would help them,” Krasa adds.

Cell, 2025.  DOI: 10.1016/j.cell.2025.06.015

An inner-speech decoder reveals some mental privacy issues Read More »

developer-gets-4-years-for-activating-network-“kill-switch”-to-avenge-his-firing

Developer gets 4 years for activating network “kill switch” to avenge his firing

“The defendant breached his employer’s trust by using his access and technical knowledge to sabotage company networks, wreaking havoc and causing hundreds of thousands of dollars in losses for a U.S. company,” Galeotti said.

Developer loses fight to avoid prison time

After his conviction, Lu moved to schedule a new trial, asking the court to delay sentencing due to allegedly “surprise” evidence he wasn’t prepared to defend against during the initial trial.

The DOJ opposed the motion for the new trial and the delay in sentencing, arguing that “Lu cannot establish that the interests of justice warrant a new trial” and insisting that evidence introduced at trial was properly disclosed. They further claimed that rebuttal evidence that Lu contested was “only introduced to refute Lu’s perjurious testimony and did not preclude Lu from pursuing the defenses he selected.”

In the end, the judge denied Lu’s motion for a new trial, rejecting Lu’s arguments, siding with the DOJ in July, and paving the way for this week’s sentencing. Giving up the fight for a new trial, Lu had asked for an 18-month sentence, arguing that a lighter sentence was appropriate since “the life Mr. Lu knew prior to his arrest is over, forever.”

“He is now a felon—a label that he will be forced to wear for the rest of his life. His once-promising career is over. As a result of his conduct, his family’s finances have been devastated,” Lu’s sentencing memo read.

According to the DOJ, Lu will serve “four years in prison and three years of supervised release for writing and deploying malicious code on his then-employer’s network.” The DOJ noted that in addition to sabotaging the network, Lu also worked to cover up his crimes, possibly hoping his technical savvy would help him evade consequences.

“However, the defendant’s technical savvy and subterfuge did not save him from the consequences of his actions,” Galeotti said. “The Criminal Division is committed to identifying and prosecuting those who attack US companies whether from within or without, to hold them responsible for their actions.”

Developer gets 4 years for activating network “kill switch” to avenge his firing Read More »

4chan-refuses-to-pay-uk-online-safety-act-fines,-asks-trump-admin-to-intervene

4chan refuses to pay UK Online Safety Act fines, asks Trump admin to intervene

4chan’s law firms, Byrne & Storm and Coleman Law, said in a statement on August 15 that “4chan is a United States company, incorporated in Delaware, with no establishment, assets, or operations in the United Kingdom. Any attempt to impose or enforce a penalty against 4chan will be resisted in US federal court. American businesses do not surrender their First Amendment rights because a foreign bureaucrat sends them an e-mail.”

4chan seeks Trump admin’s help

4chan’s lawyers added that US “authorities have been briefed on this matter… We call on the Trump administration to invoke all diplomatic and legal levers available to the United States to protect American companies from extraterritorial censorship mandates.”

The US Federal Trade Commission appears to have a similar concern. FTC Chairman Andrew Ferguson yesterday sent letters to over a dozen social media and technology companies warning them that “censoring Americans to comply with a foreign power’s laws, demands, or expected demands” may violate US law.

Ferguson’s letters directly referenced the UK Online Safety Act. The letters were sent to Akamai, Alphabet, Amazon, Apple, Cloudflare, Discord, GoDaddy, Meta, Microsoft, Signal, Snap, Slack, and X.

“The letters noted that companies might feel pressured to censor and weaken data security protections for Americans in response to the laws, demands, or expected demands of foreign powers,” the FTC said. “These laws include the European Union’s Digital Services Act and the United Kingdom’s Online Safety Act, which incentivize tech companies to censor worldwide speech, and the UK’s Investigatory Powers Act, which can require companies to weaken their encryption measures to enable UK law enforcement to access data stored by users.”

Wikipedia is meanwhile fighting a court battle against a UK Online Safety Act provision that could force it to verify the identity of Wikipedia users. The Wikimedia Foundation said the potential requirement would be burdensome to users and “could expose users to data breaches, stalking, vexatious lawsuits or even imprisonment by authoritarian regimes.”

Separately, the Trump administration said this week that the UK dropped its demand that Apple create a backdoor for government security officials to access encrypted data. The UK made the demand under its Investigatory Powers Act.

4chan refuses to pay UK Online Safety Act fines, asks Trump admin to intervene Read More »

us-military’s-x-37b-spaceplane-stays-relevant-with-launch-of-another-mission

US military’s X-37B spaceplane stays relevant with launch of another mission

“Quantum inertial sensors are not only scientifically intriguing, but they also have direct defense applications,” said Lt. Col. Nicholas Estep, an Air Force engineer who manages the DIU’s emerging technology portfolio. “If we can field devices that provide a leap in sensitivity and precision for observing platform motion over what is available today, then there’s an opportunity for strategic gains across the DoD.”

Teaching an old dog new tricks

The Pentagon’s twin X-37Bs have logged more than 4,200 days in orbit, equivalent to about 11-and-a-half years. The spaceplanes have flown in secrecy for nearly all of that time.

The most recent flight, Mission 7, ended in March with a runway landing at Vandenberg after a mission of more than 14 months that carried the spaceplane higher than ever before, all the way to an altitude approaching 25,000 miles (40,000 kilometers). The high-altitude elliptical orbit required a boost on a Falcon Heavy rocket.

In the final phase of the mission, ground controllers commanded the X-37B to gently dip into the atmosphere to demonstrate the spacecraft could use “aerobraking” maneuvers to bring its orbit closer to Earth in preparation for reentry.

An X-37B spaceplane is ready for encapsulation inside the Falcon 9 rocket’s payload fairing. Credit: US Space Force

Now, on Mission 8, the spaceplane heads back to low-Earth orbit hosting quantum navigation and laser communications experiments. Few people, if any, envisioned these kinds of missions flying on the X-37B when it first soared to space 15 years ago. At that time, quantum sensing was confined to the lab, and the first laser communication demonstrations in space were barely underway. SpaceX hadn’t revealed its plans for the Falcon Heavy rocket, which the X-37B needed to get to its higher orbit on the last mission.

The laser communications experiments on this flight will involve optical inter-satellite links with “proliferated commercial satellite networks in low-Earth orbit,” the Space Force said. This is likely a reference to SpaceX’s Starlink or Starshield broadband satellites. Laser links enable faster transmission of data, while offering more security against eavesdropping or intercepts.

Gen. Chance Saltzman, the Space Force’s chief of space operations, said in a statement that the laser communications experiment “will mark an important step in the US Space Force’s ability to leverage proliferated space networks as part of a diversified and redundant space architectures. In so doing, it will strengthen the resilience, reliability, adaptability and data transport speeds of our satellite communications architecture.”

US military’s X-37B spaceplane stays relevant with launch of another mission Read More »

rocket-report:-pivotal-starship-test-on-tap,-firefly-wants-to-be-big-in-japan

Rocket Report: Pivotal Starship test on tap, Firefly wants to be big in Japan


All the news that’s fit to lift

Starship returns to the launch pad for the first time in three months.

SpaceX released this new photo of the Starbase production site, with a Starship vehicle, on Thursday. Credit: SpaceX


Welcome to Edition 8.07 of the Rocket Report! It’s that time again: another test flight of SpaceX’s massive Starship vehicle. In this week’s report, we have a review of what went wrong on Flight 9 in May and a look at the stakes for the upcoming mission, which are rather high. The flight test is presently scheduled for 6:30 pm local time in Texas (23:30 UTC) on Sunday, and Ars will be on hand to provide in-depth coverage.

As always, we welcome reader submissions, and if you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets and a quick look ahead at the next three launches on the calendar.

Firefly looks at possibility of Alpha launches in Japan. On Monday, Space Cotan Co., Ltd., operator of the Hokkaido Spaceport, announced it entered into a memorandum of understanding with the Texas-based launch company to conduct a feasibility study examining the practicality of launching Firefly’s Alpha rocket from its launch site, Spaceflight Now reports. Located in Taiki Town on the northern Japanese Island of Hokkaido, the spaceport bills itself as “a commercial spaceport that serves businesses and universities in Japan and abroad, as well as government agencies and other organizations.” It advertises launches from 42 degrees to 98 degrees, including Sun-synchronous orbits.

Talks are exploratory for now … “We look forward to exploring the opportunity to launch our Alpha rocket from Japan, which would allow us to serve the larger satellite industry in Asia and add resiliency for US allies with a proven orbital launch vehicle,” said Adam Oakes, vice president of launch at Firefly Aerospace. All six of Firefly Aerospace’s Alpha rocket launches so far took off from Space Launch Complex 2 at Vandenberg Space Force Base in California. The company is slated to launch its seventh Alpha rocket on a mission for Lockheed Martin, but a date hasn’t been announced while the company continues to work through a mishap investigation stemming from its sixth Alpha launch in April. (submitted by EllPeaTea)

Chinese methane rocket fails. A flight test of one of Chinese commercial rocket developer LandSpace Technology’s methane-powered rockets failed on Friday after the carrier rocket experienced an “anomaly,” Reuters reports. The Beijing-based startup became the world’s first company to launch a methane-liquid oxygen rocket with the successful launch of Zhuque-2 in July 2023. This was the third flight of an upgraded version of the rocket, known as Zhuque-2E Y2.

Comes as larger vehicle set to make debut … The launch was carrying four Guowang low-Earth orbit Internet satellites for the Chinese government. The failure was due to some issue with the upper stage of the vehicle, which is capable of lofting about 3 metric tons to low-Earth orbit. LandSpace, one of China’s most impressive ‘commercial’ space companies, has been working toward the development and launch of the medium-lift Zhuque-3 vehicle. This rocket was due to make its debut later this year, and it’s not clear whether this setback with a smaller vehicle will delay that flight.


Avio gains French Guiana launch license. The French government has granted Italian launch services provider Avio a 10-year license to carry out Vega rocket operations from the Guiana Space Centre in French Guiana, European Spaceflight reports. The decision follows approval by European Space Agency Member States of Italy’s petition to allow Avio to market and manage Vega rocket launches independently of Arianespace, which had overseen the rocket’s operations since its introduction.

From Vega to Vega … With its formal split from Arianespace now imminent, Avio is required to have its own license to launch from the Guiana Space Centre, which is owned and operated by the French government. Avio will make use of the ELV launch complex at the Guiana Space Centre for the launch of its Vega C rockets. The pad was previously used for the original Vega rocket, which was officially retired in September 2024. (submitted by EllPeaTea)

First space rocket launch from Canada this century. Students from Concordia University cheered and whistled as the Starsailor rocket lifted off on Cree territory on August 15, marking the first rocket of its size to be launched by a student team, Radio Canada International reports. The students hoped Starsailor would enter space, past the Kármán line, which is at an altitude of 100 kilometers, before coming back down. But the rocket separated earlier than expected. The livestream can be seen here.

Persistence is thy name … This was Canada’s first space launch in more than 25 years, and the first to be achieved by a team of students, according to the university. Originally built for a science competition, the 13-meter tall rocket was left without a contest after the event was cancelled due to the COVID-19 pandemic. Nevertheless, the team, made up of over 700 members since 2018, pressed forward with the goal of making history and launching the most powerful student-built rocket. (submitted by ArcticChris, durenthal, and CD)

SpaceX launches its 100th Falcon 9 of the year. SpaceX launched its 100th Falcon 9 rocket of the year Monday morning, Spaceflight Now reports. The flight from Vandenberg Space Force Base carried another batch of Starlink optimized V2 Mini satellites into low-Earth orbit. The Starlink 17-5 mission was also the 72nd SpaceX launch of Starlink satellites so far in 2025. It brings the total number of Starlink satellites orbited in 2025 to 1,786.

That’s quite a cadence … The Monday morning flight was a notable milestone for SpaceX. It is just the second time in the company’s history that it achieved 100 launches in one calendar year, a feat so far unmatched by any other American space company, and it is ahead of last year’s pace. Kiko Dontchev, SpaceX’s vice president of launch, said on the social media site X, “For reference on the increase in launch rate from last year, we hit 100 on Oct 20th in 2024.” SpaceX is likely to launch more Falcon 9s this year than the total number of Space Shuttle missions NASA flew in three decades. (submitted by EllPeaTea)

X-37B launch set for Thursday night. The US Department of Defense’s reusable X-37B Orbital Test Vehicle is about to make its eighth overall flight into orbit, NASASpaceflight.com reports. Vehicle 1, the first X-37B to fly, is scheduled to launch atop a SpaceX Falcon 9 from the Kennedy Space Center’s Launch Complex 39A on Thursday at 11:50 pm ET (03:50 UTC on Friday, August 22). The launch window is just under four hours long.

Will fly for an unspecified amount of time … Falcon 9 will follow a northeast trajectory to loft the X-37B into a low-Earth orbit, possibly a circular orbit at 500 km altitude inclined 49.5 degrees to the equator. The Orbital Test Vehicle 8 mission will spend an unspecified amount of time in orbit; previous missions have lasted hundreds of days before landing on a runway. The booster supporting this mission, B1092-6, will perform a return-to-launch-site landing and touchdown on the concrete pad at Landing Zone 2. (submitted by EllPeaTea)

Report finds SpaceX pays few taxes. SpaceX has received billions of dollars in federal contracts over its more than two-decade existence, but it has most likely paid little to no federal income taxes since its founding in 2002, The New York Times reports. The rocket maker’s finances have long been secret because the company is privately held. But the documents reviewed by the Times show that SpaceX can seize on a legal tax benefit that allows it to use the more than $5 billion in losses it racked up by late 2021 to offset paying future taxable income.

Use of tax benefit called ‘quaint’ … Danielle Brian, the executive director of the Project on Government Oversight, a group that investigates corruption and waste in the government, said the tax benefit had historically been aimed at encouraging companies to stay in business during difficult times. It was “quaint” that SpaceX was using it, she said, as it “was clearly not intended for a company doing so well.” It may be quaint, but it is legal. And the losses are very real. Since its inception, SpaceX has invested heavily in its technology and poured revenues into further advances. This has been incredibly beneficial to NASA and the Department of Defense. (submitted by Frank OBrien)

There’s a lot on the line for Starship’s next launch. In a feature, Ars reviews the history of Starbase and its production site, culminating in the massive new Starfactory building that encompasses 1 million square feet. The opening of the sleek, large building earlier this year came as SpaceX continues to struggle with the technical development of the Starship vehicle. Essentially, the article says, SpaceX has built the machine to build the machine. But what about the machine?

Three failures in a row … SpaceX has not had a good run of things with the ambitious Starship vehicle this year. Three times, in January, March, and May, the vehicle took flight. And three times, the upper stage experienced significant problems during ascent, and the vehicle was lost on the ride up to space, or just after. Sources at SpaceX believe the upper stage issues can be resolved, especially with a new “Version 3” of Starship due to make its debut late this year or early in 2026. But the acid test will only come on upcoming flights, beginning Sunday with the vehicle’s tenth test flight.

China tests lunar rocket. In recent weeks, the secretive Chinese space program has reported some significant milestones in developing its program to land astronauts on the lunar surface by the year 2030, Ars reports. Among these efforts, last Friday, the space agency and its state-operated rocket developer, the China Academy of Launch Vehicle Technology, successfully conducted a 30-second test firing of the Long March 10 rocket’s center core with its seven YF-100K engines that burn kerosene and liquid oxygen.

A winner in the space race? … The primary variant of the rocket will combine three of these cores to lift about 70 metric tons to low-Earth orbit. As part of China’s plan to land astronauts on the Moon “before” 2030, this rocket will be used for a crewed mission and lunar lander. Recent setbacks with SpaceX’s Starship vehicle—one of two lunar landers under contract with NASA, alongside Blue Origin’s Mark 2 lander—indicate that it will still be several years until these newer technologies are ready to go. Ars concludes that it is now probable that China will “beat” NASA back to the Moon this decade and win at least the initial heat of this new space race.

Why did Flight 9 of Starship fail? In an update shared last Friday ahead of the company’s next launch, SpaceX identified the most probable cause for the May failure as a faulty main fuel tank pressurization system diffuser located on the forward dome of Starship’s primary methane tank. The diffuser failed a few minutes after launch, when sensors detected a pressure drop in the main methane tank and a pressure increase in the ship’s nose cone just above the tank, Ars reports.

Diffusing the diffuser … The rocket compensated for the drop in main tank pressure and completed its engine burn, but venting from the nose cone and a worsening fuel leak overwhelmed Starship’s attitude control system. Finally, detecting a major problem, Starship triggered automatic onboard commands to vent all remaining propellant into space and “passivate” itself before an unguided reentry over the Indian Ocean, prematurely ending the test flight. Engineers recreated the diffuser failure on the ground during the investigation and then redesigned the part to better direct pressurized gas into the main fuel tank. This will also “substantially decrease” strain on the diffuser structure, SpaceX said.

Next three launches

August 22: Falcon 9 | X-37B space plane | Kennedy Space Center, Fla. | 03:50 UTC

August 22: Falcon 9 | Starlink 17-6 | Vandenberg Space Force Base, Calif. | 17:02 UTC

August 23: Electron | Live, Laugh, Launch | Māhia Peninsula, New Zealand | 22:30 UTC


Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.

Rocket Report: Pivotal Starship test on tap, Firefly wants to be big in Japan Read More »

deeply-divided-supreme-court-lets-nih-grant-terminations-continue

Deeply divided Supreme Court lets NIH grant terminations continue

The dissents

The primary dissent was written by Chief Justice Roberts, and joined in part by the three Democratic appointees, Jackson, Kagan, and Sotomayor. It is a grand total of one paragraph and can be distilled down to a single sentence: “If the District Court had jurisdiction to vacate the directives, it also had jurisdiction to vacate the ‘Resulting Grant Terminations.’”

Jackson, however, chose to write a separate and far more detailed argument against the decision, mostly focusing on the fact that it’s not simply a matter of abstract law; it has real-world consequences.

She notes that existing law prevents plaintiffs from suing in the Court of Federal Claims while the facts are under dispute in other courts (something acknowledged by Barrett). That would mean that, as here, any plaintiffs would have to have the policy declared illegal first in the District Court, and only after that was fully resolved could they turn to the Court of Federal Claims to try to restore their grants. That’s a process that could take years. In the meantime, the scientists would be out of funding, with dire consequences.

Yearslong studies will lose validity. Animal subjects will be euthanized. Life-saving medication trials will be abandoned. Countless researchers will lose their jobs. And community health clinics will close.

Jackson also had little interest in hearing that the government would be harmed by paying out the grants in the meantime. “For the Government, the incremental expenditure of money is at stake,” she wrote. “For the plaintiffs and the public, scientific progress itself hangs in the balance along with the lives that progress saves.”

With this decision, of course, it no longer hangs in the balance. There’s a possibility that the District Court’s ruling that the government’s policy was arbitrary and capricious will ultimately prevail; it’s not clear, because Barrett says she hasn’t even seen the government make arguments on that question, and Roberts only wrote regarding the venue issues. In the meantime, even with the policy itself on hold, it’s unlikely that anyone will focus grant proposals on the disfavored subjects, given that the policy might be reinstated at any moment.

And even if that ruling is upheld, it will likely take years to get there, and only then could a separate case be started to restore the funding. Any labs that had been using those grants will have long since moved on, and the people working on those projects scattered.

Deeply divided Supreme Court lets NIH grant terminations continue Read More »

neolithic-people-took-gruesome-trophies-from-invading-tribes

Neolithic people took gruesome trophies from invading tribes

A local Neolithic community in northeastern France may have clashed with foreign invaders, cutting off limbs as war trophies and otherwise brutalizing their prisoners of war, according to a new paper published in the journal Science Advances. The findings challenge conventional interpretations of prehistoric violence as being indiscriminate or committed for pragmatic reasons.

Neolithic Europe was no stranger to collective violence of many forms, such as the odd execution and massacres of small communities, as well as armed conflicts. For instance, we recently reported on an analysis of human remains from 11 individuals recovered from El Mirador Cave in Spain, showing evidence of cannibalism—likely the result of a violent episode between competing Late Neolithic herding communities about 5,700 years ago. Microscopy analysis revealed telltale slice marks, scrape marks, and chop marks, as well as evidence of cremation, peeling, fractures, and human tooth marks.

This indicates the victims were skinned, the flesh removed, the bodies disarticulated, and then cooked and eaten. Isotope analysis indicated the individuals were local and were probably eaten over the course of just a few days. There have been similar Neolithic massacres in Germany and Spain, but the El Mirador remains provide evidence of a rare systematic consumption of victims.

Per the authors of this latest study, during the late Middle Neolithic, the Upper Rhine Valley was the likely site of both armed conflict and rapid cultural upheaval, as groups from the Paris Basin infiltrated the region between 4295 and 4165 BCE. In addition to fortifications and evidence of large aggregated settlements, many skeletal remains from this period show signs of violence.

Friends or foes?

Overhead views of late Middle Neolithic violence-related human mass deposits in Pit 124 of the Alsace region, France. Credit: Philippe Lefranc, INRAP

Archaeologist Teresa Fernandez-Crespo of Spain’s Valladolid University and co-authors focused their analysis on human remains excavated from two circular pits at the Achenheim and Bergheim sites in Alsace in northeastern France. Fernandez-Crespo et al. examined the bones and found that many of the remains showed signs of unhealed trauma—such as skull fractures—as well as the use of excessive violence (overkill), not to mention quite a few severed left upper limbs. Other skeletons did not show signs of trauma and appeared to have been given a traditional burial.

Neolithic people took gruesome trophies from invading tribes Read More »