Author name: Kris Guyer


CS2 item market loses nearly $2B in value overnight due to “trade up” update

Valve benefits from any panicked trading in the short term, with every Steam Marketplace sale carrying a 5 percent “Steam Transaction Fee” on top of a 10 percent “Counter-Strike 2 fee… that is determined and collected by the game publisher” (read: Valve). In the long term, though, making some of the rarest items in the game easier to obtain will likely depress overall spending among the whales that dominate the market.
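The combined fee take can be sketched in a few lines. This is a simplified model assuming both fees are flat percentages of the sale price; Steam's actual rounding and minimum-fee rules differ slightly.

```python
def marketplace_fees(sale_price: float) -> dict:
    """Rough split of a Steam Community Market sale between Valve and the
    seller, assuming flat 5% + 10% fees with simple rounding."""
    steam_fee = sale_price * 0.05   # 5% "Steam Transaction Fee"
    game_fee = sale_price * 0.10    # 10% CS2 fee, also collected by Valve
    return {
        "steam_fee": round(steam_fee, 2),
        "game_fee": round(game_fee, 2),
        "valve_total": round(steam_fee + game_fee, 2),
        "seller_receives": round(sale_price - steam_fee - game_fee, 2),
    }

# A $100 sale: $15.00 to Valve, $85.00 to the seller.
print(marketplace_fees(100.00))
```

Under this model, roughly 15 percent of every panicked sale flows to Valve, which is why a wave of churn is profitable in the short term even if long-run spending falls.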

Wild CS2 update tonight. I’ve spent the last few hours digging through market data and built this projection chart to show how I think things play out.

Knives and gloves drop fast (40–50%) as the new trade-up path floods supply, while Covert skins surge short-term as everyone… pic.twitter.com/8NOMIBPZ1F

— SAC (@SAC_IG) October 23, 2025

Using marketplace data, Irish Guys esports team owner SAC ran some projections estimating that, over the next few months, “the market settles about 5–10% lower overall, not a crash, just a correction.” But there are also more bullish and bearish possibilities, depending on how overall item demand and market liquidity develops in the near future.

Market tracker CSFloat also crunched some numbers to determine that the overall supply of knives and gloves could roughly double if every common item were traded up under the new update. In practice, though, the supply increase will likely be “far less.”

Massive monetary shifts aside, this latest update seems set to make it easier for new CS2 players to access some once-rare in-game items without breaking the bank. “I got burned a little [by the update]… but honestly, this is the way to go for the long term health of the game,” Redditor chbotong wrote. “[It’s] given me faith that Valve is actually steering in a direction that favors the average player than a market whale.”



Elon Musk just declared war on NASA’s acting administrator, apparently


“Sean said that NASA might benefit from being part of the Cabinet.”

NASA astronauts Reid Wiseman, left, Victor Glover, Christina Koch, and CSA (Canadian Space Agency) astronaut Jeremy Hansen watch as Jared Isaacman testifies before a Senate Committee in 2025. Credit: NASA/Bill Ingalls

The clock just ticked past noon here in Houston, so it’s acceptable to have a drink, right?

Because after another turbulent morning of closely following the rough-and-tumble contest to become the next NASA administrator, I sure could use one.

What has happened now? Why, it was only SpaceX founder Elon Musk, who is NASA’s most important contractor, referring to the interim head of the space agency, Sean Duffy, as “Sean Dummy” and suggesting Duffy was trying to kill NASA. Musk later added, “The person responsible for America’s space program can’t have a 2 digit IQ.”

This is all pretty bonkers, so I want to try to contextualize what I believe is going on behind the scenes. This should help us make sense of what is happening in public.

It all boils down to this

The most important through line here is the contest to become the next NASA administrator. It has, as the British like to say, hotted up of late, and people are starting to take sides.

In one corner stands the private astronaut and billionaire, Jared Isaacman. He was nominated by Donald Trump to become NASA administrator last year, and after a lengthy process, he was on the cusp of confirmation when the president pulled his nomination for political reasons in late May. In the other corner is Sean Duffy, a former congressman with minimal space experience, whom Trump appointed as interim administrator after yanking Isaacman. Duffy was already secretary of transportation.

Since then, a lot has happened, but it boils down to this. Duffy was, nominally, supposed to be running the space agency while searching for a permanent replacement. The biggest move he has made is naming Amit Kshatriya, a long-time employee, as NASA’s associate administrator. Kshatriya now has a lot of power within the agency and comes with the mindset of a former flight director. He is not enamored with using SpaceX’s Starship as a lunar lander.

After Isaacman’s dismissal, key figures within Trump’s orbit continued to vouch for the former astronaut. They liked his flight experience, his financial background, and his drive to modernize NASA and lean into the country’s dynamic commercial space industry in the effort to stay ahead of China in spaceflight. Trump listened. He has met with Isaacman multiple times since, all of them positive experiences. A re-nomination seemed possible, even likely.

Duffy likes running NASA

However, Duffy was finding that he liked running NASA. There were lots of opportunities to go on television and burnish his credentials. Spaceflight often receives more positive coverage than air traffic controller strikes. His chief of staff at the Department of Transportation, Pete Meachum, has also enjoyed exercising power at NASA. Neither appears ready to relinquish their influence.

To be clear, Duffy is not saying this publicly. Asked whether Duffy wishes to remain NASA administrator, a spokesperson for the agency gave Ars the following statement on Tuesday morning:

Sean is grateful that the President gave him the chance to lead NASA. At the President’s direction, Sean has focused the agency on one clear goal — making sure America gets back to the Moon before China. Sean said that NASA might benefit from being part of the Cabinet, maybe even within the Department of Transportation, but he’s never said he wants to keep the job himself. The President asked him to talk with potential candidates for Administrator, and he’s been happy to help by vetting people and giving his honest feedback. The bottom line is that Secretary Duffy is here to serve the President, and he will support whomever the President nominates.

But based on discussions with numerous sources, it seems clear that Duffy wants to keep the job. He has not taken significant steps toward identifying a replacement.

His appearances on Fox News and CNBC on Monday morning buttress this conclusion. It is not typical for a NASA administrator to go on television and criticize one of the space agency’s most important contractors. In this case, Duffy said he was reworking the agency’s lunar lander contracts because SpaceX had fallen behind.

It is true that SpaceX is behind in developing a lunar lander version of Starship. Nevertheless, this was a pretty remarkable thing for Duffy to do, at least in the context of the US space community. NASA projects run late all the time, every time. There was no mention of spacesuits needed for the lunar landing, which also almost certainly will not be ready by 2027.

There seem to be two clear reasons why Duffy did this. One, he wanted to show President Trump he was committed to reaching the Moon again before China gets there. And secondly, with his public remarks, Duffy sought to demonstrate to the rest of the space community that he was willing to stand up to SpaceX.

How do we know this? Because Duffy and Meachum had just spent the weekend calling around to SpaceX’s competitors in the industry, asking for their support in his quest to remain at NASA. For example, he called Blue Origin’s leadership and expressed support for their plans to accelerate a lunar landing program. Then he went on TV to demonstrate in public what he was saying in private.

Musk unloads

By Tuesday morning, Musk had apparently had enough.

The acting administrator had gone on TV and publicly shamed Musk’s company, which has self-invested billions of dollars into Starship. (By contrast, Lockheed has invested little or nothing in the Orion spacecraft, and Boeing also has little skin in the game with the Space Launch System rocket. Similarly, a ‘government option’ lunar lander would likely need to be cost-plus in order to attract Lockheed as a bidder.) Then Duffy praised Blue Origin, which, for all of its promise, has yet to make meaningful achievements in orbit. All the while, it is only thanks to SpaceX and its Dragon spacecraft that NASA does not have to go hat-in-hand to Russia for astronaut transportation.

So Musk channeled his inner Trump and called out “Sean Dummy.” It’s crass language, but will it be effective?

We really don’t know the extent to which Musk and Trump are on speaking terms at this point, but certainly Musk is a huge Republican donor, and there will be plenty of people in Congress who do not want to see another food fight between the world’s most powerful person and its richest person.

The widespread assumption is that Musk is advocating for Isaacman to become NASA administrator, since he originally put the astronaut forward for the position. However, the reality is that they don’t speak regularly, and although Isaacman is deeply appreciative of what SpaceX has achieved, he seems to genuinely want Blue Origin and other private space companies to succeed as well. Most likely, then, Musk was lashing out in frustration on Tuesday morning, feeling spurned by a space agency he has done a lot for.

Isaacman, for his part, has been keeping a relatively low profile. Trump, who will ultimately make a decision on NASA’s leadership, has also largely been silent about all of this.

Not a super augury

The war of words may be an entertaining spectacle, but it is pretty dreadful for NASA. The space agency is already down 20 percent of its workforce due to cuts and voluntary retirements. Morale remains low, and the uncertainty over long-term leadership is unhelpful. The first year of the Trump presidency, to many in space, feels like a lost year.

There is also the possibility of a significant restructuring. NASA is an independent federal agency, but my sources (The Wall Street Journal also reported this last night) have indicated that Duffy has sought to move NASA within the Department of Transportation. In his new statement today, Duffy confirmed this. Folding NASA into the Department of Transportation would allow him to maintain oversight of the agency, and Duffy could recommend a leader who is loyal to him.

So this is where we are. A fierce, behind-the-scenes battle rages on among camps supporting Duffy and Isaacman to decide the leadership of NASA. The longer this process drags on, the messier it seems to get. In the meantime, NASA is twisting in the wind, trying to run in molasses while wearing lead shoes as China marches onward and upward.


Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.



OpenAI looks for its “Google Chrome” moment with new Atlas web browser

That means you can use ChatGPT to search through your bookmarks or browsing history using human-parsable language prompts. It also means you can bring up a “side chat” next to your current page and ask questions that rely on the context of that specific page. And if you want to edit a Gmail draft using ChatGPT, you can now do that directly in the draft window, without the need to copy and paste between a ChatGPT window and an editor.

When typing in a short search prompt, Atlas will, by default, reply as an LLM, with written answers with embedded links to sourcing where appropriate (à la OpenAI’s existing search function). But the browser will also provide tabs with more traditional lists of links, images, videos, or news like those you would get from a search engine without LLM features.

Let us do the browsing

To wrap up the livestreamed demonstration, the OpenAI team showed off Atlas’ Agent Mode. While the “preview mode” feature is only available to ChatGPT Plus and Pro subscribers, research lead Will Ellsworth said he hoped it would eventually become “an amazing tool for vibe life-ing,” in the same way that LLM coding tools have become tools for “vibe coding.”

To that end, the team showed the browser taking planning tasks written in a Google Docs table and moving them over to the task management software Linear over the course of a few minutes. Agent Mode was also shown taking the ingredients list from a recipe webpage and adding them directly to the user’s Instacart in a different tab (though the demo Agent stopped before checkout to get approval from the user).



Cards Against Humanity lawsuit forced SpaceX to vacate land on US/Mexico border

A year after suing SpaceX for “invading” a plot of land on the US/Mexico border, Cards Against Humanity says it has obtained a settlement and will provide supporters with a new pack of cards about Elon Musk.

The party-game company bought the land in 2017 in an attempt to stymie President Trump’s wall-building project, but alleged that SpaceX illegally took over the land and filled it with construction equipment and materials. A September 2024 lawsuit filed against SpaceX in Cameron County District Court in Texas sought up to $15 million to cover the cost of restoring the property and other damages.

Cards Against Humanity, which bought the property with donations from supporters, told Ars today that “we’ve been in negotiations with SpaceX for much of the last year. We held out for the best settlement we could get—almost until the trial was supposed to start—and unfortunately part of that negotiation was that we’re not allowed to discuss specific settlement terms. They did admit to trespassing during the discovery phase, which was very validating.”

A court document shows that SpaceX admitted it did not ask for or receive permission to use the property. SpaceX admitted that its “contractors cleared the lot and put down gravel,” parked vehicles on the property, and stored construction materials. An Associated Press article yesterday said that “Texas court records show a settlement was reached in the case last month, just weeks before a jury trial was scheduled to begin on Nov. 3.”

The game company said a victory at trial wouldn’t have resulted in a better outcome. “A trial would have cost more than what we were likely to win from SpaceX,” the company’s statement to Ars said. “Under Texas law, even if we had won at trial (and we would have, given their admission to trespassing), we likely wouldn’t have been able to recoup our legal fees. And SpaceX certainly seemed ready to dramatically outspend us on lawyers.”

“They packed up the space garbage”

The company also provided this update to donors:

Dear Horrible Friends,

Remember last year, when we sued Elon Musk for dumping space garbage all over your land, and then you signed up to collect your share of the proceeds? Also, remember how we warned you that we’d “probably only be able to get you like two dollars or most likely nothing”?

Well, Elon Musk’s team admitted on the record that they illegally trespassed on your land, and then they packed up the space garbage and fucked off. But when it comes to paying you all, he did the legal equivalent of throwing dust in our eyes and kicking us in the balls.

Instead of money, Cards Against Humanity said it will provide its “best, sexiest customers” with a comedic “mini-pack of exclusive cards all about Elon Musk” that can be obtained via this sign-up link. “P.S. Soon, the land will be returned to its natural state: no space garbage, and still completely free of pointless fucking border walls,” the company said.



NASA’s acting leader seeks to keep his job with new lunar lander announcement

NASA could not easily rip up its existing HLS contracts with SpaceX and Blue Origin; with the former especially, much of the funding has already been paid out for completed milestones. Rather, Duffy would likely have to find new funding from Congress. And it would not be cheap. A NASA analysis from 2017 estimates that a cost-plus, sole-source lunar lander would cost $20 billion to $30 billion, nearly 10 times what NASA awarded to SpaceX in 2021.

SpaceX founder Elon Musk, responding to Duffy’s comments, seemed to relish the challenge posed by industry competitors.

“SpaceX is moving like lightning compared to the rest of the space industry,” Musk said on the social media site he owns, X. “Moreover, Starship will end up doing the whole Moon mission. Mark my words.”

The timing

Duffy’s remarks on television on Monday morning, although significant for the broader space community, also seemed intended for an audience of one—President Trump.

The president appointed Duffy, already leading the Department of Transportation, to lead NASA on an interim basis in July. This came six weeks after the president rescinded his nomination of billionaire and private astronaut Jared Isaacman, for political reasons, to lead the space agency.

Trump was under the impression that Duffy would use this time to shore up NASA’s leadership while also looking for a permanent chief of the space agency. However, Duffy appears to have paid little more than lip service to finding a successor.

Since late summer there has been a groundswell of support for Isaacman in the White House, and among some members of Congress. The billionaire has met with Trump several times, both at the White House and Mar-a-Lago, and sources report that the two have a good rapport. There has been some momentum toward the president re-nominating Isaacman, with Trump potentially making a decision soon. Duffy’s TV appearances on Monday morning appear to be part of an effort to forestall this momentum by showing Trump he is actively working toward a lunar landing during his second term, which ends in January 2029.



Claude Code gets a web version—but it’s the new sandboxing that really matters

Now, it can instead be given permissions for specific file system folders and network servers. That means fewer approval steps, but it’s also more secure overall against prompt injection and other risks.

Anthropic’s demo video for Claude Code on the web.

According to Anthropic’s engineering blog, the new network isolation approach only allows Internet access “through a unix domain socket connected to a proxy server running outside the sandbox. … This proxy server enforces restrictions on the domains that a process can connect to, and handles user confirmation for newly requested domains.” Additionally, users can customize the proxy to set their own rules for outgoing traffic.

This way, the coding agent can do things like fetch npm packages from approved sources, but without carte blanche for communicating with the outside world, and without badgering the user with constant approvals.
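The allowlist logic such a proxy enforces can be sketched simply. This is a hypothetical illustration of the general technique, not Anthropic's code; the function names and matching rules are assumptions.

```python
# Hypothetical sketch of domain-allowlist enforcement in an egress proxy:
# approved domains pass, unknown ones trigger a one-time user confirmation.
ALLOWED_DOMAINS = {"registry.npmjs.org", "github.com"}

def check_domain(host: str, confirm) -> bool:
    """Allow the connection if the host or any parent domain is approved;
    otherwise ask the user via `confirm` and remember an approval."""
    parts = host.split(".")
    # Match e.g. "api.github.com" against an approved "github.com".
    for i in range(len(parts) - 1):
        if ".".join(parts[i:]) in ALLOWED_DOMAINS:
            return True
    if confirm(host):  # e.g. prompt the user in the terminal
        ALLOWED_DOMAINS.add(host)
        return True
    return False

print(check_domain("api.github.com", lambda h: False))  # prints True
print(check_domain("evil.example", lambda h: False))    # prints False
```

The key design point is that approvals are per-domain rather than per-request, which is what cuts down the constant confirmation prompts.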

For many developers, these additions are more significant than the availability of web or mobile interfaces. They allow Claude Code agents to operate more independently without as many detailed, line-by-line approvals.

That’s more convenient, but it’s a double-edged sword, as it will also make code review even more important. One of the strengths of the too-many-approvals approach was that it made sure developers were still looking closely at every little change. Now it might be a little bit easier to miss Claude Code making a bad call.

The new features are available in beta now as a research preview, and they are available to Claude users with Pro or Max subscriptions.



Do animals fall for optical illusions? It’s complicated.

A tale of two species


View from above of the apparatuses used for ring doves (A) and guppies (B). Credit: M. Santaca et al., 2025

The authors tested 38 ring doves and 19 guppies (of the “snakeskin cobra green” ornamental strain) for their experiments. The doves were placed in a testing cage with the bottom covered with an anti-slip wooden panel and a branch serving as a perch and starting point; the feeding station was at the opposite end of the cage. The guppies were tested in single tanks with a gravel bottom.

In both cases, the test subjects were presented with visual stimuli in the form of two white plastic cards. Sizes differed for the doves and the guppies, but each card showed an array of six black circles with a bit of food serving as the center “circle”: red millet seeds for the doves and commercial flake food for the guppies. The circles were smaller on one of the cards and larger on the other. The subjects were free to choose food from one of the cards, and the card with the unchosen food was removed promptly. If no choice was made after 15 minutes, the trial was null and the team tried again after a 15-minute interval.

The authors found that the guppies were indeed highly susceptible to the Ebbinghaus illusion, choosing food surrounded by smaller circles much more frequently, suggesting they perceived it as larger and hence more desirable. The results for ring doves were more mixed, however: some of the doves seemed to be susceptible while others were not, suggesting that their perceptual strategies are more local, detail-oriented, and less influenced by their surrounding context.

“The doves’ mixed responses suggest that individual experience or innate bias can strongly shape how an animal interprets illusions,” the authors concluded. “Just like in humans, where some people are strongly fooled by illusions and others hardly at all, animal perception is not uniform.”

The authors acknowledge that their study has limitations. For instance, guppies and ring doves diverged hundreds of millions of years ago and hence are phylogenetically distant, so the perceptual differences between them could be due not just to ecological pressures, but also to evolutionary traits gained or lost via natural selection. Future experiments involving more closely related species with different sensory environments would better isolate the role of ecological factors in animal perception.

Frontiers in Psychology, 2025. DOI: 10.3389/fpsyg.2025.1653695 (About DOIs).



Bubble, Bubble, Toil and Trouble

We have the classic phenomenon where suddenly everyone decided it is good for your social status to say we are in an ‘AI bubble.’

Are these people short the market? Do not be silly. The conventional wisdom response to that question these days is that, as was said in 2007, ‘if the music is playing you have to keep dancing.’

So even with lots of people newly thinking there is a bubble, the market has not moved down, other than (modestly) on actual news items, usually related to another potential round of tariffs, or that one time we had a false alarm during the DeepSeek Moment.

So, what’s the case we’re in a bubble? What’s the case we’re not?

People get confused about bubbles, often applying that label any time prices fall. So you have to be clear on what question is being asked.

If ‘that was a bubble’ simply means ‘number go down’ then it is entirely uninteresting to say things are bubbles.

So if we operationalize ‘bubble’ to mean simply that at some point there is a substantial drawdown in market values (e.g. a 20% drop in the Nasdaq sustained for 6 months), then I would be surprised if that happened, but the market would need to be dramatically, crazily underpriced for it not to be a plausible thing to happen.
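That operationalized definition is mechanical enough to write down. A minimal sketch, assuming monthly index closes; the function name, thresholds, and data are illustrative:

```python
def in_sustained_drawdown(closes, drop=0.20, months=6):
    """True if the price series ever stays at least `drop` below its
    prior peak for `months` consecutive observations (monthly closes)."""
    peak = closes[0]
    run = 0
    for price in closes[1:]:
        if price <= peak * (1 - drop):
            run += 1               # another month spent below the threshold
            if run >= months:
                return True
        else:
            run = 0                # recovery above the threshold resets the clock
            peak = max(peak, price)
    return False

# Six straight months 25% below the peak qualifies; a quick dip does not.
print(in_sustained_drawdown([100, 75, 75, 75, 75, 75, 75]))
print(in_sustained_drawdown([100, 90, 95, 100, 110]))
```

Note how much of the work is in the operationalization itself: change the threshold or the window and the same price history flips between ‘bubble’ and ‘not a bubble.’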

If a bubble means something similar to the 2000 dot com bubble, as in valuations that are not plausible expectations for the net present values of future cash flows? No.

[Standard disclaimer: Nothing on this blog is ever investment advice.]

Before I dive into the details, a time sensitive point of order, that you can skip if you would not consider political donations:

When trying to pass laws, it is vital to have a champion. You need someone in each chamber of Congress who is willing to help craft, introduce and actively fight for good bills. Many worthwhile bills do not get advanced because no one will champion them.

Alex Bores did this with New York’s RAISE Act, an AI safety bill along similar lines to SB 53 that is currently on the governor’s desk. I did a full RTFB (read the bill) on it, and found it to be a very good bill that I strongly supported. It would not have happened without him championing the bill and spending political capital on it.

By far the strongest argument against the bill is that it would be better if such bills were done on the Federal level.

He’s trying to address this by running for Congress in my own district, NY-12, to succeed Jerry Nadler. The district is deeply Democratic, so this will have no impact on the partisan balance. What it would do is give real AI safety a knowledgeable champion in the House of Representatives, capable of championing good bills.

Eric Nayman makes an extensive case for considering donating to Alex Bores today, in his first 24 hours, as donations in the first 24 hours are extremely valuable. Sonnet 4.5 estimates that in this case, a donation on day one is worth about double what it would be worth later. If you do decide to donate, they prefer that you use this link to ensure the donation gets fully registered today.

As always, remember while considering this that political donations are public.

(Note: I intend to remove this announcement from this post after the 24 hour window closes, and move it to AI #139.)

Sagarika Jaisinghani (Bloomberg): A record share of global fund managers said artificial intelligence stocks are in a bubble following a torrid rally this year, according to a survey by Bank of America Corp.

About 54% of participants in the October poll indicated tech stocks were looking too expensive, an about-turn from last month when nearly half had dismissed those concerns. Fears that global stocks were overvalued also hit a peak in the latest survey.

So a month ago most people thought things were fine, and now it’s a bubble?

This is a very light bubble definition, as these things go.

Nothing importantly bearish happened in that month other than bullish deals, so presumably this is a ‘circular deals freak us out’ shift in mood? Or it could be a cascade effect.

There is definitely reason for concern. If you remove the label ‘bubble’ and simply say ‘AI’ then the quote from Deutsche Bank below is correct, as AI is responsible for essentially all economic growth. Also you can mostly replace ‘US’ with ‘world.’

Unusual Whales: “The AI bubble is the only thing keeping the US economy together,” Deutsche Bank has said per TechSpot.

Not quite, at this size you need some doubt involved. But the basic answer is yes.

Roon: one reason to disbelief in a sizeable bubble is when the largest financial institutions in the world are openly calling it that.

Jon Stokes: I disagree. I was in my early 20’s during the dotcom bubble and was in tech, and everyone everywhere knew it was a bubble — from the banks to the VCs down to the individual programmers in SF. Everyone talked about it openly in the last ~1yr of it, but the numbers kept going up.

I don’t think this is Dotcom 2.0 but I think it’s possible the market is getting ahead of itself. That said, I also lived through the cloud “bubble” which turned out to not be a bubble at all — I even wrote my own contributes to “are we in a bubble?” literature in like 2012. Anyone who actually traded on the idea that the cloud buildout was a bubble lost out bigtime.

Every time there’s a big new infra buildout there’s bubble talk.

My point is that is very possible to have a bubble that everyone everywhere knows is a bubble, yet it keeps on bubbling because nobody wants to miss the action & everyone thinks they can time an exit. “Enjoy the party, but dance close to the exits” was the slogan back then.

It is definitely possible to get into an Everybody Knows situation with a bubble, for various reasons, both when it is and when it isn’t actually a bubble. For example, there’s Bitcoin, and Bitcoin, and Bitcoin, and Bitcoin, but there’s also Bitcoin.

Is it evidence for or against a bubble when everyone says it’s a bubble?

My gut answer is it depends on who is everyone.

If everyone is everyone working in the industry? Then yeah, evidence for a bubble.

If everyone is everyone at the major economic institutions? Not so much.

So I decided to check.

There was essentially no correlation, with 42.5% of AI workers and 41.7% of others saying there is a bubble, and that’s a large percentage, so things are certainly somewhat concerning. It certainly seems likely that certain subtypes of AI investment are ‘in a bubble’ in the sense that investors in those subtypes will lose money, which you would expect in anything like an efficient market.

In particular, consensus seems to be, and I agree with it (reminder: not investment advice), that investment in ‘companies with products in position to get steamrolled by OpenAI and other frontier labs’ are as a group not going to do well. If you want to call that an ‘AI bubble’ you can, but that seems net misleading. I also wouldn’t be excited to short that basket, since you’re exposed if even one of them hits it big. Remember that if you bought a tech stock portfolio at the dot com peak, you still got Amazon.

Whereas if you had a portfolio of ‘picks and shovels’ or of the frontier labs themselves, that still seems to me like a fine place to bet, although it is no longer a ‘I can’t believe they’re letting me buy at these prices, this is free money’ level of fine. You now have to actually have beliefs about the future and an investment thesis.

Noah Smith speculates on bubble causes and types when it comes to AI.

Noah Smith: An AI crash isn’t certain, but I think it’s more likely than people think.

Looking at the historical examples of railroads, electricity, dotcoms, and housing can help us understand what an AI crash would look like.

A burning question that’s on a lot of people’s minds right now is: Why is the U.S. economy still holding up? The manufacturing industry is hurting badly from Trump’s tariffs, the payroll numbers are looking weak, and consumer sentiment is at Great Recession levels.

… Another possibility is that tariffs are bad, but are being canceled out by an even more powerful force — the AI boom.

You could have a speculative bubble, or an extrapolative bubble, or simply a big mistake about the value of the tech. He thinks if it is a bubble it would be of the latter type, proposing we use Bezos’ term ‘industrial bubble.’

Noah Smith: … When we look at the history of industrial bubbles, and of new technologies in general, it becomes clear that in order to cause a crash, AI doesn’t have to fail. It just has to mildly disappoint the most ardent optimists.

I don’t think that’s quite right. The market reflects a variety of perspectives, and it will almost always be way below where the ardent optimists would place it. The ardent optimists are the rock with the word ‘BUY!’ written on it.

What is right is that if AI over some time frame disappoints relative to expectations, sufficiently to shift forward expectations downward from their previous level, that would cause a substantial drop in prices, which could then break momentum and worsen various positions, causing a larger drop in prices.

Thus, we could have AI ultimately having a huge economic impact and ultimately being fully transformative (maybe killing everyone, maybe being amazingly great), and have nothing go that wrong along the way, but still have what people at the time would call ‘the bubble bursting.’

Indeed, if the market is at all efficient, there is a lot of ‘upside risk’ of AI being way more impactful than suggested by the market price, which means there has to be a corresponding downside risk too. Part of that risk is geopolitical, an anti-AI movement could rise, or the supply chain could be disrupted by tariff battles or a war over Taiwan. By traditional definitions of ‘bubble,’ that means a potential bubble.

Ethan Mollick: I don’t have much to add to the bubble discussion, but the “this time is different” argument is, in part, based on the sincere belief of many at the AI labs that there is a race to superintelligence & the winner gets… everything.

It is a key dynamic that is not discussed much.

You don’t have to believe it (or think this is a good idea), but many of the AI insiders really do. Their public statements are not much different than their private ones. Without considering that zero sum dimension, a lot of what is happening in the space makes less sense.

Even a small chance of a big upside should mean a big boost to valuation. Indeed that is the reason tech startups are funded and venture capital firms exist. If you don’t get the fully transformational level of impact, then at some point value will drop.

Consider the parallel to Bitcoin, and the idea that there is some small percentage chance of it becoming ‘digital gold’ or even the new money. If you felt there was no way it could fall by a lot from any given point in time, or even if you were simply confident that it was probably not going to crash, it would be a fantastic screaming buy.

AI also has to retain expectations that providers will be profitable. If AI is useful but it is expected to not provide enough profits, that too can burst the bubble.

Matthew Yglesias: Key point in here from @Noahpinion — even if the AI tech turns out to be exactly as promising as the bulls think, it’s not totally clear whether this would mean high margin businesses.

A slightly random example but passenger jetliners have definitely worked out as a technology, tons of people use them and they are integral to the whole world economy. But the combined market cap of Boeing + Airbus is unimpressive.

Jetliners seem a lot more important and impressive than the idea of a big box building supply store, but in terms of market cap Home Depot > Boeing + Airbus. Technology is hard but then business is also hard.

Sam D’Amico: AI is going to rock but we may have a near-term capex bubble like the fiber buildout during the dotcom boom.

Matthew Yglesias writes more thoughts here, noting that AI is propping up the whole economy, and offering reasons some people believe there is a bubble as well as reasons it likely isn’t one, especially not in the pure bubble sense of cryptocurrency or Beanie Babies, since there is clearly a there there.

To say confidently that there is no bubble in AI is to claim, among other things, that the market is horribly inefficient, and that AI assets are and will remain dramatically underpriced but reliably gain value as people gain situational awareness and are mugged by reality. This includes the requirement that the currently trading AI assets will be poised to capture a lot of value.

Alternatively, how about the possibility that there could be a crash for no reason?

Simeon: Agreed with Noah here. People underestimate the odds of a crash, and things like the OA x AMD deal make such things more likely. It just takes a sufficient number of people to be scared at the same time.

Remember the market reaction to DeepSeek? It can be irrational.

The DeepSeek moment is sobering, since the AI market was down quite a lot on news that should have been priced in and if anything should have made prices go up. What is to stop a similar incorrect information cascade from happening again? Other than a potential ‘Trump put’ or Fed put, very little.

Derek Thompson provides his best counterargument, saying AI probably isn’t a bubble. He also did an episode on this for the Plain English podcast with Azeem Azhar of Exponential View to balance his previous episode from September 23 on ‘how the AI bubble could burst.’

Derek Thompson: And yet, look around: Is anybody actually acting as if AI is a bubble?

… Everyone claims that they know the music is ending soon, and yet everybody is still dancing along to the music.

Well, yeah, when there’s a bubble everyone goes around saying ‘there’s a bubble’ but no one does anything about it, until they do and then there’s no bubble?

As Tyler Cowen sometimes asks, are you short the market? Me neither.

Derek breaks down the top arguments for a bubble.

  1. Lofty valuations for companies with no clear path to profit.

  2. Unprecedented spending on an unproven business.

  3. A historic chasm between capex spending and revenue.

  4. An eerie level of financial opacity.

  5. A byzantine level of corporate entanglement.

A lot of overlap here. We have a lot of money being invested in and spent on AI, without much revenue. True that. The AI companies all invest in and buy from each other, at a level that yeah is somewhat suspicious. Yeah, fair. The chips are only being depreciated over five-year horizons and that seems a bit long? Eh, that seems fine to me, older chips are still useful as long as demand exceeds supply.

So why not a bubble? That part is gated, but his thread lays out the core responses. One, the AI companies look nothing like the joke companies in the dot com bubble.

Two, the AI companies need AI revenues to grow 100%-200% a year, and that sounds like a lot, but so far you’re seeing even more than that.

Epoch AI: One way bubbles pop: a technology doesn’t deliver value as quickly as investors bet it will. In light of that, it’s notable that OpenAI is projecting historically unprecedented revenue growth — from $10B to $100B — over the next three years.

OpenAI’s revenue growth has been extremely impressive: from <$1B to >$10B in only three years. Still, a few other companies have pulled off similar growth.

We found four such US companies in the past fifty years. Of these, only Google went on to top $100B in revenue.

Peter Wildeford: OpenAI is claiming they will double revenue three years in a row… this is historically unprecedented, but may be possible (so far they are 3x/year and NVIDIA has done 2x/yr lately)

But OpenAI has also projected revenue of $100B in 2028. We found seven companies which achieved revenue growth from $10B to $100B in under a decade.

None of them did it in six years, let alone three.
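The growth arithmetic behind these projections is worth making explicit. A minimal sketch (using the round public figures of $10B to $100B over three years, not OpenAI’s actual financials) shows what the required compounding looks like, and why “doubling three years in a row” actually falls short of the projection:

```python
# Sanity check on the growth arithmetic discussed above.
# Round public figures only ($10B -> $100B in 3 years), not actual financials.

def required_annual_multiplier(start: float, end: float, years: int) -> float:
    """Constant per-year growth factor needed to go from start to end."""
    return (end / start) ** (1 / years)

start, end, years = 10e9, 100e9, 3
m = required_annual_multiplier(start, end, years)
print(f"Required growth: {m:.3f}x per year")  # ~2.154x, i.e. more than doubling

# Merely doubling each year for three years gives 8x, not 10x:
print(f"Doubling for 3 years: ${start * 2**3 / 1e9:.0f}B")  # $80B, short of $100B
```

So hitting $100B in 2028 requires sustaining roughly 2.15x per year, slightly above a straight doubling, which is why the comparison set of companies that merely went from $10B to $100B in under a decade is already so small.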

As Matt Levine says, OpenAI has a business model now, because when you need debt investors you need to have a business plan. This one strikes a balance between ‘good enough to raise money’ and ‘not so good no one will believe it.’ Which makes it well under where I expect their revenue to land.

Frankly, OpenAI is downplaying their expectations because if they used their actual projections then no one would believe them, and they might get sued if things didn’t work out. The baseline scenario is that OpenAI (and Anthropic) blow the projections out of the water.

Martha Gimbel: Ok not the main point of [Thompson’s] article but I love this line: “The whole thing is vaguely Augustinian: O Lord, make me sell my Nvidia position and rebalance toward consumer staples, but not yet.”

Timothy Lee thinks it’s probably ‘not a bubble yet,’ partly citing Thompson and partly because we are seeing differentiation on which models do tasks best. He also links to the worry that there might be no moat, as fast follows offer the same service much cheaper and kill your margin, since your product might be only slightly better.

The thing about AI is that it might in total cost a lot, but in exchange you get a ton. It doesn’t have to be ten times better to have the difference be a big deal. For most use cases of AI, you would be wise to pay twice the price for something 10% better, and often wise to pay 10 or 100 times as much. Always think absolute cost, not relative.

Vinay Sridhar: What finally caught my attention was this stat from an NYT article by Natasha Sarin (via Marginal Revolution): “To provide some sense of scale, that means the equivalent of about $1,800 per person in America will be invested this year on AI”. That is a bucketload of spending.

It is, but is it? I spend more than that on one of my AI subscriptions, and get many times that much value in return. Thinking purely in terms of present day concrete benefits, when I ask ‘how many different use cases of AI are providing me $1,800 in value?’ I can definitely include taxes and accounting, product evaluation and search, medical help, analysis of research papers, coding and general information and search. So that’s at least six.

Similarly, does this sound like a problem, given the profit margins of these companies?

Vinay Sridhar: Hyperscalers (Microsoft, Amazon, Alphabet and Meta) historically ran capex at 11-16% of revenue. Today they’re at 22%, with revenue YoY growth in the 15-25% range (ex Nvidia) – capex spending is dramatically outpacing revenue growth. The four major hyperscalers are spending approximately $320 billion combined on AI infrastructure in 2025 – with public statements on how this will likely continue in the coming years.

Similarly, Vinay notes that the valuations are only somewhat high.

Current valuations, while elevated, remain “much lower” than prior tech bubbles. The Nasdaq’s forward P/E ratio is ~28X today. At the 2000 peak, it exceeded 70X. Even in 2007 before the financial crisis, tech valuations were higher relative to earnings than they are now.

Looking more closely, the MAG7 — who are spending the majority of this capex — has a blended P/E of ~32X — expensive, as Coatue’s Laffont brothers said in June, but not extreme relative to past tech bubbles.

That’s a P/E ratio, and all this extra capex spending if anything reduces short term earnings. Does a 28x forward P/E ratio sound scary in context, with YoY growth in the 20% range? It doesn’t to me. Sure, there’s some downside, but it would be a dramatic inefficiency if there wasn’t.

Vinay offers several other notes as well.

One thing I find confusing is all the ‘look how fast the chips will lose value’ arguments. Here are Vinay’s supremely confident claims, as another example of this:

Vinay Sridhar: 7. GPU Depreciation Schedules Don’t Match Reality

Nvidia now unveils a new AI chip every year instead of every two years. Jensen Huang said in March that “when Blackwell starts shipping in volume you couldn’t give Hoppers away”.

Meanwhile, companies keep extending depreciation schedules. Microsoft: 4 to 6 years (2022). Alphabet: 4 to 6 years (2023). Amazon and Oracle: 5 to 6 years (2024). Meta: 5 to 5.5 years (January 2025). Amazon partially reversed course in January 2025, moving some assets back to 5 years, noting this would cut operating profit by $700m.

The Economist analyzed the impact: if servers depreciate over 3 years instead of current schedules, the AI big five’s combined annual pre-tax profit falls by $26bn (8% of last year’s total). Over 2 years: $1.6trn market cap hit. If you take Huang literally at 1 year, this implies $4trn, one-third of their collective worth. Barclays estimated higher depreciation costs would shave 5-10% from earnings per share.
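The mechanics driving those estimates are straightforward straight-line accounting: the shorter the assumed useful life, the larger the annual expense, with the difference coming directly out of reported pre-tax profit. A minimal sketch with a hypothetical $100B fleet (illustrative round numbers, not any company’s actual books):

```python
# Illustrative only: hypothetical $100B of GPU/server capex, showing how
# the assumed useful life drives annual straight-line depreciation expense.

def annual_depreciation(capex: float, useful_life_years: float) -> float:
    """Straight-line depreciation: equal expense in each year of useful life."""
    return capex / useful_life_years

fleet_capex = 100e9  # hypothetical

for life in (6, 5, 3, 2, 1):
    expense = annual_depreciation(fleet_capex, life)
    print(f"{life}-year schedule: ${expense / 1e9:.1f}B/year expense")

# Cutting the schedule from 6 years to 3 doubles the annual expense;
# taking a 1-year life literally makes the entire capex an expense at once.
```

This is why the dispute over whether chips are economically useful for one year or six translates into such large swings in the headline profit and market-cap figures quoted above.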

Hedgie takes a similar angle, calling the economics unsustainable because the lifespan of data center components is only 3-10 years due to rapid technological advances.

Hedgie: Kupperman originally assumed data center components would depreciate over 10 years, but learned from two dozen senior professionals that the actual lifespan is just 3-10 years due to rapid technology advances. His revised calculations show the industry needs $320-480 billion in revenue just to break even on 2025 data center spending alone. Current AI revenue sits around $20 billion annually.

What strikes me most is that none of the senior data center professionals Kupperman spoke with understand how the financial math works either.

No one I have seen is saying that chip capability improvements are accelerating dramatically. If that were the case, we would need to update our timelines.

When Nvidia releases a new chip every year, that doesn’t mean they do the 2027 chip in 2026 and then do the 2029 chip in 2027. It means they do the 2027 chip in 2027, and before that do the best chip you can do in 2026, and it also means Nvidia is good at marketing and life is coming at them fast.

Huang’s statement about free Hoppers is obviously deeply silly, and everyone knows not to take such Nvidia statements seriously or literally. The existence of new better chips does not invalidate older worse chips unless supply exceeds demand by enough that the old chips cost more to run than the value they bring.

That’s very obviously not going to happen over three years let alone one or two. You can do math on the production capacity available.

If the marginal cost of hoppers in 2028 was going to be approximately zero, what does that imply?

By default? Stop thinking about capex depreciation and start thinking about whether this means we get a singularity in 2028, since you can now scale compute as long as you have power. Also, get long China, since they have unlimited power generation.

If that’s not why, then it means AI use cases turned out to be severely limited, and the world has a large surplus of compute and not much to do with it.

It kind of has to be one or the other. Neither seems plausible.

Not only do I see no sign of overcapacity, I see signs of undercapacity, including a scramble for every chip people can get and compute being a limiting factor on many labs in practice right now, including OpenAI and Anthropic. The price of compute has recently been rising, not falling, including the price for renting older chips.

Dave Friedman looked into the accounting here, ultimately not seeing this as a solvency or liquidity issue, but he thinks there could be an accounting optics issue.

Could recent trends reverse, and faster than expected depreciations and ability to charge for older chips cause problems for the accounting in data centers? I mean, sure, that’s obviously possible, if we actually produce enough better chips, or demand sufficiently lags expectations, or some combination thereof.

This whole question seems like a strange thing for those investing hundreds of billions and everyone trading the market to not have priced into their plans and projections? Yes, current OpenAI revenue is on the order of $20 billion, but if you project that out over 3-10 years, that number is going to be vastly higher, and there are other companies.

I mostly agree with Charles that the pro-bubble arguments are remarkably weak, given the amount of bubble talk we are seeing, and that when you combine these two facts it should move you towards there not being a bubble.

Unlike Charles, I am not about to use leverage. I consider leverage in personal investing to be reserved for extreme situations, and a substantial drop in prices is very possible. But I definitely understand.

Charles: There have been so many terrible arguments for why we’re in an AI bubble lately, and so few good ones, that I’ve been convinced the appropriate update is in the “not a bubble” direction and increased my already quite long position.

Fwiw that looks like now being 1.4x leveraged long a mix of about 50% index funds and 50% specific bets (GOOG, TSMC, AMZN the biggest of those).

Most of what changed, I think, is that a bunch of circular deals were done in close succession. Combine that with the exponential growth expectations for AI, people’s limited understanding of the technology and what it will be able to do, and valuations approaching the point where one can question whether there is any room left to grow, and it reasonably triggered various heuristics and freaked people out.

If we define a bubble narrowly as ‘we see a Nasdaq price decline of 20% sustained for 6 months’ I would give that on the order of 25% to happen within the next few years, including as part of a marketwide decline in prices. It has happened to the wider market as recently as 2022, and about 5 times in the last 50 years.

If a decline does happen, I predict I will probably use that opportunity to buy more.

That does not have to be true. Perhaps there will have been large shifts in anticipated future capabilities, or in the competitive landscape and ability to capture profits, or the general economic conditions, and the drop will be fully justified and reflect AI slowing down.

But most of the time this will not be what happened, and the drop will not ultimately have much effect, although it would presumably slow down progress slightly.

Connor Leahy: I want to preregister the following opinion:

I think it’s plausible, but by no means guaranteed, that we could see a massive financial crisis or bubble pop affecting AI in the next year.

I expect if this happens, it will be mostly for mundane economic reasons (overleveraged markets, financial policy of major nations, mistiming of bets even by small amounts and good ol’ fraud), not because the technology isn’t making rapid progress.

I expect such a crisis to have at most modest effects on timelines to existentially dangerous ASI being developed, but will be used by partisans to try and dismiss the risk.

Sadly, a bunch of people making poorly thought through leveraged bets on the market tells you little about underlying object reality of how powerful AI is or soon will be.

Do not be fooled by narratives.


Bubble, Bubble, Toil and Trouble Read More »


Roberta Williams’ The Colonel’s Bequest was a different type of adventure game

However, my mom was another story. I remember her playing Dr. Mario a lot, and we played Donkey Kong Country together when I was young—standard millennial childhood family gaming stuff. But the games I most associate with her from my childhood are adventure games. She liked King’s Quest, of course—but I also remember her being particularly into the Hugo trilogy of games.

As I mentioned above, I struggled to get hooked on those. Fortunately, we were able to meet in the middle on The Colonel’s Bequest.

I remember swapping chairs with my mom as we attempted additional playthroughs of the game; I enjoyed seeing the secrets she found that I hadn’t because I was perhaps too young to think things through the way she did.

Games you played with family stick with you more, so I think I mostly remember The Colonel’s Bequest so well because, as I recall, it was my mom’s favorite game.

The legacy of The Colonel’s Bequest

The Colonel’s Bequest may have been a pivotal game for me personally, but it hasn’t really resonated through gaming history the way that King’s Quest, The Secret of Monkey Island, or other adventure titles did.

I think that’s partly because many people might understandably find the game a bit boring. There’s not much to challenge you here, and your character is kind of just along for the ride. She’s not the center of the story, and she’s not really taking action. She’s just walking around, listening and looking, until the clock runs out.

That formula has more niche appeal than traditional point-and-click adventure games.

Still, the game has its fans. You can buy and download it from GOG to play it today, of course, but it also recently inspired a not-at-all-subtle spiritual successor by developer Julia Minamata called The Crimson Diamond, which we covered here at Ars. That game is worth checking out, too, though it goes a more traditional route with its gameplay.

The Crimson Diamond‘s influence from The Colonel’s Bequest wasn’t subtle, but that’s OK. Credit: GOG

And of course, The Colonel’s Bequest creators Roberta and Ken Williams are still active; they somewhat recently released a 3D reboot of Colossal Cave, a title many credit as the foremost ancestor of the point-and-click adventure genre.

Ars Technica may earn compensation for sales from links on this post through affiliate programs.



Yes, everything online sucks now—but it doesn’t have to


from good to bad to nothing

Ars chats with Cory Doctorow about his new book Enshittification.

We all feel it: Our once-happy digital spaces have become increasingly less user-friendly and more toxic, cluttered with extras nobody asked for and hardly anybody wants. There’s even a word for it: “enshittification,” named 2023 Word of the Year by the American Dialect Society. The term was coined by tech journalist/science fiction author Cory Doctorow, a longtime advocate of digital rights. Doctorow has spun his analysis of what’s been ailing the tech industry into an eminently readable new book, Enshittification: Why Everything Suddenly Got Worse and What To Do About It.

As Doctorow tells it, he was on vacation in Puerto Rico, staying in a remote cabin nestled in a cloud forest with microwave Internet service—i.e., very bad Internet service, since microwave signals struggle to penetrate through clouds. It was a 90-minute drive to town, but when they tried to consult TripAdvisor for good local places to have dinner one night, they couldn’t get the site to load. “All you would get is the little TripAdvisor logo as an SVG filling your whole tab and nothing else,” Doctorow told Ars. “So I tweeted, ‘Has anyone at TripAdvisor ever been on a trip? This is the most enshittified website I’ve ever used.’”

Initially, he just got a few “haha, that’s a funny word” responses. “It was when I married that to this technical critique, at a moment when things were quite visibly bad to a much larger group of people, that made it take off,” Doctorow said. “I didn’t deliberately set out to do it. I bought a million lottery tickets and one of them won the lottery. It only took two decades.”

Yes, people sometimes express regret to him that the term includes a swear word. To which he responds, “You’re welcome to come up with another word. I’ve tried. ‘Platform decay’ just isn’t as good.” (“Encrapification” and “enpoopification” also lack a certain je ne sais quoi.)

In fact, it’s the sweariness that people love about the word. While that also means his book title inevitably gets bleeped on broadcast radio, “The hosts, in my experience, love getting their engineers to creatively bleep it,” said Doctorow. “They find it funny. It’s good radio, it stands out when every fifth word is ‘enbeepification.’”

People generally use “enshittification” colloquially to mean “the degradation in the quality and experience of online platforms over time.” Doctorow’s definition is more specific, encompassing “why an online service gets worse, how that worsening unfolds,” and how this process spreads to other online services, such that everything is getting worse all at once.

For Doctorow, enshittification is a disease with symptoms, a mechanism, and an epidemiology. It has infected everything from Facebook, Twitter, Amazon, and Google, to Airbnb, dating apps, iPhones, and everything in between. “For me, the fact that there were a lot of platforms that were going through this at the same time is one of the most interesting and important factors in the critique,” he said. “It makes this a structural issue and not a series of individual issues.”

It starts with the creation of a new two-sided online product of high quality, initially offered at a loss to attract users—say, Facebook, to pick an obvious example. Once the users are hooked on the product, the vendor moves to the second stage: degrading the product in some way for the benefit of their business customers. This might include selling advertisements, scraping and/or selling user data, or tweaking algorithms to prioritize content the vendor wishes users to see rather than what those users actually want.

This locks in the business customers, who, in turn, invest heavily in that product, such as media companies that started Facebook pages to promote their published content. Once business customers are locked in, the vendor can degrade those services too—i.e., by de-emphasizing news and links away from Facebook—to maximize profits to shareholders. Voila! The product is now enshittified.

The four horsemen of the shitocalypse

Doctorow identifies four key factors that have played a role in ushering in an era that he has dubbed the “Enshittocene.” The first is competition (markets), in which companies are motivated to make good products at affordable prices, with good working conditions, because otherwise customers and workers will go to their competitors.  The second is government regulation, such as antitrust laws that serve to keep corporate consolidation in check, or levying fines for dishonest practices, which makes it unprofitable to cheat.

The third is interoperability: the inherent flexibility of digital tools, which can play a useful adversarial role. “The fact that enshittification can always be reversed with a dis-enshittifying counter-technology always acted as a brake on the worst impulses of tech companies,” Doctorow writes. Finally, there is labor power; in the case of the tech industry, highly skilled workers were scarce and thus had considerable leverage over employers.

All four factors, when functioning correctly, should serve as constraints to enshittification. However, “One by one each enshittification restraint was eroded until it dissolved, leaving the enshittification impulse unchecked,” Doctorow writes. Any “cure” will require reversing those well-established trends.

But isn’t all this just the nature of capitalism? Doctorow thinks it’s not, arguing that the aforementioned weakening of traditional constraints has resulted in the usual profit-seeking behavior producing very different, enshittified outcomes. “Adam Smith has this famous passage in Wealth of Nations about how it’s not due to the generosity of the baker that we get our bread but to his own self-regard,” said Doctorow. “It’s the fear that you’ll get your bread somewhere else that makes him keep prices low and keep quality high. It’s the fear of his employees leaving that makes him pay them a fair wage. It is the constraints that causes firms to behave better. You don’t have to believe that everything should be a capitalist or a for-profit enterprise to acknowledge that that’s true.”

Our wide-ranging conversation below has been edited for length to highlight the main points of discussion.

Ars Technica: I was intrigued by your choice of framing device, discussing enshittification as a form of contagion. 

Cory Doctorow: I’m on a constant search for different framing devices for these complex arguments. I have talked about enshittification in lots of different ways. That frame was one that resonated with people. I’ve been a blogger for a quarter of a century, and instead of keeping notes to myself, I make notes in public, and I write up what I think is important about something that has entered my mind, for better or for worse. The downside is that you’re constantly getting feedback that can be a little overwhelming. The upside is that you’re constantly getting feedback, and if you pay attention, it tells you where to go next, what to double down on.

Another way of organizing this is the Galaxy Brain meme, where the tiny brain is “Oh, this is because consumers shopped wrong.” The medium brain is “This is because VCs are greedy.” The larger brain is “This is because tech bosses are assholes.” But the biggest brain of all is “This is because policymakers created the policy environment where greed can ruin our lives.” There’s probably never going to be just one way to talk about this stuff that lands with everyone. So I like using a variety of approaches. I suck at being on message. I’m not going to do Enshittification for the Soul and Mornings with Enshittifying Maury. I am restless, and my Myers-Briggs type is ADHD, and I want to have a lot of different ways of talking about this stuff.

Ars Technica: One site that hasn’t (yet) succumbed is Wikipedia. What has protected Wikipedia thus far? 

Cory Doctorow: Wikipedia is an amazing example of what we at the Electronic Frontier Foundation (EFF) call the public interest Internet. Internet Archive is another one. Most of these public interest Internet services start off as one person’s labor of love, and that person ends up being what we affectionately call the benevolent dictator for life. Very few of these projects have seen the benevolent dictator for life say, “Actually, this is too important for one person to run. I cannot be the keeper of the soul of this project. I am prone to self-deception and folly just like every other person. This needs to belong to its community.” Wikipedia is one of them. The founder, my friend Jimmy Wales, woke up one day and said, “No individual should run Wikipedia. It should be a communal effort.”

There’s a much more durable and thick constraint on the decisions of anyone at Wikipedia to do something bad. For example, Jimmy had this idea that you could use AI in Wikipedia to help people make entries and navigate Wikipedia’s policies, which are daunting. The community evaluated his arguments and decided—not in a reactionary way, but in a really thoughtful way—that this was wrong. Jimmy didn’t get his way. It didn’t rule out something in the future, but that’s not happening now. That’s pretty cool.

Wikipedia is not just governed by a board; it’s also structured as a nonprofit. That doesn’t mean that there’s no way it could go bad. But it’s a source of friction against enshittification. Wikipedia has its entire corpus irrevocably licensed as the most open it can be without actually being in the public domain. Even if someone were to capture Wikipedia, there’s limits on what they could do to it.

There’s also a labor constraint in Wikipedia in that there’s very little that the leadership can do without bringing along a critical mass of a large and diffuse body of volunteers. That cuts against the volunteers working in unison—they’re not represented by a union; it’s hard for them to push back with one voice. But because they’re so diffuse and because there’s no paychecks involved, it’s really hard for management to do bad things. So if there are two people vying for the job of running the Wikimedia Foundation and one of them has got nefarious plans and the other doesn’t, the nefarious plan person, if they’re smart, is going to give it up—because if they try to squeeze Wikipedia, the harder they squeeze, the more it will slip through their grasp.

So these are structural defenses against enshittification of Wikipedia. I don’t know that it was in the mechanism design—I think they just got lucky—but it is a template for how to run such a project. It does raise this question: How do you build the community? But if you have a community of volunteers around a project, it’s a model of how to turn that project over to that community.

Ars Technica: Your case studies naturally include the decay of social media, notably Facebook and the social media site formerly known as Twitter. How might newer social media platforms resist the spiral into “platform decay”?

Cory Doctorow: What you want is a foundation in which people on social media face few switching costs. If the social media is interoperable, if it’s federatable, then it’s much harder for management to make decisions that are antithetical to the interests of users. If they do, users can escape. And it sets up an internal dynamic within the firm, where the people who have good ideas don’t get shouted down by the people who have bad but more profitable ideas, because it makes those bad ideas unprofitable. It creates both short and long-term risks to the bottom line.

There has to be a structure that stops their investors from pressurizing them into doing bad things, that stops them from rationalizing their way into complying. I think there’s this pathology where you start a company, you convince 150 of your friends to risk their kids’ college fund and their mortgage working for you. You make millions of users really happy, and your investors come along and say, “You have to destroy the life of 5 percent of your users with some change.” And you’re like, “Well, I guess the right thing to do here is to sacrifice those 5 percent, keep the other 95 percent happy, and live to fight another day, because I’m a good guy. If I quit over this, they’ll just put a bad guy in who’ll wreck things. I keep those 150 people working. Not only that, I’m kind of a martyr because everyone thinks I’m a dick for doing this. No one understands that I have taken the tough decision.”

I think that’s a common pattern among people who, in fact, are quite ethical but are also capable of rationalizing their way into bad things. I am very capable of rationalizing my way into bad things. This is not an indictment of someone’s character. But it’s why, before you go on a diet, you throw away the Oreos. It’s why you bind yourself to what behavioral economists call “Ulysses pacts”: You tie yourself to the mast before you go into the sea of sirens, not because you’re weak but because you’re strong enough now to know that you’ll be weak in the future.

I have what I would call the epistemic humility to say that I don’t know what makes a good social media network, but I do know what makes it so that when they go bad, you’re not stuck there. You and I might want totally different things out of our social media experience, but I think that you should 100 percent have the right to go somewhere else without losing anything. The easier it is for you to go without losing something, the better it is for all of us.

My dream is a social media universe where knowing what network someone is using is just a weird curiosity. It’d be like knowing which cell phone carrier your friend is using when you give them a call. It should just not matter. There might be regional or technical reasons to use one network or another, but it shouldn’t matter to anyone other than the user what network they’re using. A social media platform where it’s always easier for users to leave is much more future-proof and much more effective than trying to design characteristics of good social media.

Ars Technica: How might this work in practice?

Cory Doctorow: I think you just need a protocol. This is [Mike] Masnick’s point: protocols, not products. We don’t need a universal app to make email work. We don’t need a universal app to make the web work. I always think about this in the context of administrable regulation. Making a rule that says your social media network must be good for people to use and must not harm their mental health is impossible. The fact-intensity of determining whether a platform satisfies that rule makes it a non-starter.

Whereas if you were to say, “OK, you have to support an existing federation protocol, like AT Protocol and Mastodon ActivityPub,” both have ways to port identity from one place to another and have messages auto-forward. This is also in RSS. There’s a permanent redirect directive. You do that, you’re in compliance with the regulation.
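The permanent-redirect mechanism Doctorow mentions (HTTP 301, mirrored in feed readers) can be sketched in a few lines: when the old address answers with a permanent redirect, a well-behaved client rewrites its stored subscription so future requests go straight to the new home. A minimal client-side sketch, with hypothetical URLs and a hypothetical helper name:

```python
# Sketch of client-side permanent-redirect handling: on HTTP 301,
# the reader replaces the stored address with the new location so
# the old host is never consulted again. URLs are hypothetical.

def follow_permanent_redirect(subscriptions, old_url, status, location):
    """Replace old_url with location in subscriptions if status is 301."""
    if status == 301 and old_url in subscriptions:
        subscriptions.discard(old_url)
        subscriptions.add(location)
    return subscriptions

subs = {"https://old.example/feed.xml"}
subs = follow_permanent_redirect(
    subs, "https://old.example/feed.xml", 301, "https://new.example/feed.xml"
)
```

The same pattern underlies identity portability in federation protocols: after a move, the old server’s only remaining job is to point at the new one.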

Or you have to do something that satisfies the functional requirements of the spec. So it’s not “did you make someone sad in a way that was reckless?” That is a very hard question to adjudicate. Did you satisfy these functional requirements? It’s not easy to answer that, but it’s not impossible. If you want to have our users be able to move to your platform, then you just have to support the spec that we’ve come up with, which satisfies these functional requirements.

We don’t have to have just one protocol. We can have multiple ones. Not everything has to connect to everything else, but everyone who wants to connect should be able to connect to everyone else who wants to connect. That’s end-to-end. End-to-end is not “you are required to listen to everything someone wants to tell you.” It’s that willing parties should be connected when they want to be.

Ars Technica: What about security and privacy protocols like GPG and PGP?

Cory Doctorow: There’s this argument that the reason GPG is so hard to use is that it’s intrinsic; you need a closed system to make it work. But also, until pretty recently, GPG was supported by one part-time guy in Germany who got 30,000 euros a year in donations to work on it, and he was supporting 20 million users. He was primarily interested in making sure the system was secure rather than making it usable. If you were to put Big Tech quantities of money behind improving ease of use for GPG, maybe you decide it’s a dead end because it is a 30-year-old attempt to stick a security layer on top of SMTP. Maybe there’s better ways of doing it. But I doubt that we have reached the apex of GPG usability with one part-time volunteer.

I just think there’s plenty of room there. If you have a pretty good project that is run by a large firm and has had billions of dollars put into it, the most advanced technologists and UI experts working on it, and you’ve got another project that has never been funded and has only had one volunteer on it—I would assume that dedicating resources to that second one would produce pretty substantial dividends, whereas the first one is only going to produce these minor tweaks. How much more usable does iOS get with every iteration?

I don’t know if PGP is the right place to start for privacy, but I do think that if we can create independence of the security layer from the transport layer, which is what PGP is trying to do, then it wouldn’t matter so much whether there’s end-to-end encryption in Mastodon DMs or in Bluesky DMs. And again, it doesn’t matter whose SIM is in your phone, so it just shouldn’t matter which platform you’re using so long as it’s secure and reliably delivered end-to-end.

Ars Technica: These days, I’m almost contractually required to ask about AI. There’s no escaping it. But it’s certainly part of the ongoing enshittification.

Cory Doctorow: I agree. Again, the companies are too big to care. They know you’re locked in, and the things that make enshittification possible—like remote software updating, ongoing analytics of use of devices—they allow for the most annoying AI dysfunction. I call it the fat-finger economy, where you have someone who works in a company on a product team, and their KPI, and therefore their bonus and compensation, is tied to getting you to use AI a certain number of times. So they just look at the analytics for the app and they ask, “What button gets pushed the most often? Let’s move that button somewhere else and make an AI summoning button.”

They’re just gaming a metric. It’s causing significant across-the-board regressions in the quality of the product, and I don’t think it’s justified by people who then discover a new use for the AI. That’s a paternalistic justification. The user doesn’t know what they want until you show it to them: “Oh, if I trick you into using it and you keep using it, then I have actually done you a favor.” I don’t think that’s happening. I don’t think people are like, “Oh, rather than press reply to a message and then type a message, I can instead have this interaction with an AI about how to send someone a message about takeout for dinner tonight.” I think people are like, “That was terrible. I regret having tapped it.” 

The speech-to-text is unusable now. I flatter myself that my spoken and written communication is not statistically average. The things that make it me and that make it worth having, as opposed to just a series of multiple-choice answers, is all the ways in which it diverges from statistical averages. Back when the model was stupider, when it gave up sooner if it didn’t recognize what word it might be and just transcribed what it thought you’d said rather than trying to substitute a more probable word, it was more accurate.  Now, what I’m getting are statistically average words that are meaningless.

That elision of nuance and detail is characteristic of what makes AI products bad. There is a bunch of stuff that AI is good at that I’m excited about, and I think a lot of it is going to survive the bubble popping. But I fear that we’re not planning for that. I fear what we’re doing is taking workers whose jobs are meaningful, replacing them with AIs that can’t do their jobs, and then those AIs are going to go away and we’ll have nothing. That’s my concern.

Ars Technica: You prescribe a “cure” for enshittification, but in such a polarized political environment, do we even have the collective will to implement the necessary policies?

Cory Doctorow: The good news is also the bad news, which is that this doesn’t just affect tech. Take labor power. There are a lot of tech workers who are looking at the way their bosses treat the workers they’re not afraid of—Amazon warehouse workers and drivers, Chinese assembly line manufacturers for iPhones—and realizing, “Oh, wait, when my boss stops being afraid of me, this is how he’s going to treat me.” Mark Zuckerberg stopped going to those all-hands town hall meetings with the engineering staff. He’s not pretending that you are his peers anymore. He doesn’t need to; he’s got a critical mass of unemployed workers he can tap into. I think a lot of Googlers figured this out after the 12,000-person layoffs. Tech workers are realizing they missed an opportunity, that they’re going to have to play catch-up, and that the only way to get there is by solidarity with other kinds of workers.

The same goes for competition. There’s a bunch of people who care about media, who are watching Warner about to swallow Paramount and who are saying, “Oh, this is bad. We need antitrust enforcement here.” When we had a functional antitrust system for the last four years, we saw a bunch of telecoms mergers stopped because once you start enforcing antitrust, it’s like eating Pringles. You just can’t stop. You embolden a lot of people to start thinking about market structure as a source of either good or bad policy. The real thing that happened with [former FTC chair] Lina Khan doing all that merger scrutiny was that people just stopped planning mergers.

There are a lot of people who benefit from this. It’s not just tech workers or tech users; it’s not just media users. Hospital consolidation, pharmaceutical consolidation, has a lot of people who are very concerned about it. Mark Cuban is freaking out about pharmacy benefit manager consolidation and vertical integration with HMOs, as he should be. I don’t think that we’re just asking the anti-enshittification world to carry this weight.

Same with the other factors. The best progress we’ve seen on interoperability has been through right-to-repair. It hasn’t been through people who care about social media interoperability. One of the first really good state-level right-to-repair bills was the one that [Governor] Jared Polis signed in Colorado for powered wheelchairs. Those people have a story that is much more salient to normies.

“What do you mean you spent six months in bed because there’s only two powered wheelchair manufacturers and your chair broke and you weren’t allowed to get it fixed by a third party?” And they’ve slashed their repair department, so it takes six months for someone to show up and fix your chair. So you had bed sores and pneumonia because you couldn’t get your chair fixed. This is bullshit.

So the coalitions are quite large. The thing that all of those forces share—interoperability, labor power, regulation, and competition—is that they’re all downstream of corporate consolidation and wealth inequality. Figuring out how to bring all of those different voices together, that’s how we resolve this. In many ways, the enshittification analysis and remedy are a human factors and security approach to designing an enshittification-resistant Internet. It’s about understanding this as a red team, blue team exercise. How do we challenge the status quo that we have now, and how do we defend the status quo that we want?

Anything that can’t go on forever eventually stops. That is the first law of finance, Stein’s law. We are reaching multiple breaking points, and the question is whether we reach things like breaking points for the climate and for our political system before we reach breaking points for the forces that would rescue those from permanent destruction.


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

Yes, everything online sucks now—but it doesn’t have to Read More »

nasa’s-next-moonship-reaches-last-stop-before-launch-pad

NASA’s next Moonship reaches last stop before launch pad

The Orion spacecraft, which will fly four people around the Moon, arrived inside the cavernous Vehicle Assembly Building at NASA’s Kennedy Space Center in Florida late Thursday night, ready to be stacked on top of its rocket for launch early next year.

The late-night transfer covered about 6 miles (10 kilometers) from one facility to another at the Florida spaceport. NASA and its contractors are continuing preparations for the Artemis II mission after the White House approved the program as an exception to work through the ongoing government shutdown, which began on October 1.

The sustained work could set up Artemis II for a launch opportunity as soon as February 5 of next year. Astronauts Reid Wiseman, Victor Glover, Christina Koch, and Jeremy Hansen will be the first humans to fly on the Orion spacecraft, a vehicle that has been in development for nearly two decades. The Artemis II crew will make history on their 10-day flight by becoming the first people to travel to the vicinity of the Moon since 1972.

Where things stand

The Orion spacecraft, developed by Lockheed Martin, has made several stops at Kennedy over the last few months since leaving its factory in May.

First, the capsule moved to a fueling facility, where technicians filled it with hydrazine and nitrogen tetroxide propellants, which will feed Orion’s main engine and maneuvering thrusters on the flight to the Moon and back. In the same facility, teams loaded high-pressure helium and ammonia coolant into Orion’s propulsion and thermal control systems.

The next stop was a nearby building where the Launch Abort System was installed on the Orion spacecraft. The tower-like abort system would pull the capsule away from its rocket in the event of a launch failure. Orion stands roughly 67 feet (20 meters) tall with its service module, crew module, and abort tower integrated together.

Teams at Kennedy also installed four ogive panels to serve as an aerodynamic shield over the Orion crew capsule during the first few minutes of launch.

The Orion spacecraft, with its Launch Abort System and ogive panels installed, is seen last month inside the Launch Abort System Facility at Kennedy Space Center, Florida. Credit: NASA/Frank Michaux

It was then time to move Orion to the Vehicle Assembly Building (VAB), where a separate team has worked all year to stack the elements of NASA’s Space Launch System rocket. In the coming days, cranes will lift the spacecraft, weighing 78,000 pounds (35 metric tons), dozens of stories above the VAB’s center aisle, then up and over the transom into the building’s northeast high bay to be lowered atop the SLS heavy-lift rocket.

NASA’s next Moonship reaches last stop before launch pad Read More »

12-years-of-hdd-analysis-brings-insight-to-the-bathtub-curve’s-reliability

12 years of HDD analysis brings insight to the bathtub curve’s reliability

But as seen in Backblaze’s graph above, the company’s HDDs aren’t adhering to that principle. The blog’s authors noted that in 2021 and 2025, Backblaze’s drives had a “pretty even failure rate through the significant majority of the drives’ lives, then a fairly steep spike once we get into drive failure territory.”

The blog continues:

What does that mean? Well, drives are getting better, and lasting longer. And, given that our trendlines are about the same shape from 2021 to 2025, we should likely check back in when 2029 rolls around to see if our failure peak has pushed out even further.

Speaking with Ars Technica, Doyle said that Backblaze’s analysis is good news for individuals shopping for larger hard drives because the devices are “going to last longer.”

She added:

In many ways, you can think of a datacenter’s use of hard drives as the ultimate test for a hard drive—you’re keeping a hard drive on and spinning for the max amount of hours, and often the amount of times you read/write files is well over what you’d ever see as a consumer. Industry trend-wise, drives are getting bigger, which means that oftentimes, folks are buying fewer of them. Reporting on how these drives perform in a data center environment, then, can give you more confidence that whatever drive you’re buying is a good investment.

The longevity of HDDs is another reason for shoppers to consider them over faster, more expensive SSDs.

“It’s a good idea to decide how justified the improvement in latency is,” Doyle said.

Questioning the bathtub curve

Doyle and Patterson aren’t looking to toss the bathtub curve out with the bathwater. They’re not suggesting that the bathtub curve doesn’t apply to HDDs, but rather that it overlooks additional factors affecting HDD failure rates, including “workload, manufacturing variation, firmware updates, and operational churn.” The principle also makes the assumptions that, per the authors:

  • Devices are identical and operate under the same conditions
  • Failures happen independently, driven mostly by time
  • The environment stays constant across a product’s life

While these conditions can largely be met in datacenter environments, “conditions can’t ever be perfect,” Doyle and Patterson noted. When considering an HDD’s failure rates over time, it’s wise to consider both the bathtub curve and how you use the component.
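For comparison, the textbook bathtub curve models the hazard rate as the sum of a decreasing infant-mortality term, a constant random-failure term, and an increasing wear-out term. A toy sketch of that shape (the coefficients are invented for illustration, not fitted to Backblaze’s data):

```python
import math

def bathtub_hazard(t, infant=0.05, decay=1.5, constant=0.01,
                   wear=0.002, onset=5.0):
    """Toy hazard rate (failures per drive-year) at age t years.

    The infant-mortality term decays quickly, the constant term
    models random failures, and the wear-out term grows once the
    drive passes `onset`. All coefficients are illustrative only.
    """
    early = infant * math.exp(-decay * t)   # manufacturing defects shake out
    late = wear * max(0.0, t - onset) ** 2  # mechanical wear-out
    return early + constant + late

# High at t=0, flat through mid-life, rising again near end of life
rates = [bathtub_hazard(t) for t in range(10)]
```

Backblaze’s observation amounts to saying the mid-life plateau is longer and flatter than this idealized model predicts, with the wear-out onset pushed further out.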

12 years of HDD analysis brings insight to the bathtub curve’s reliability Read More »