Author name: Kris Guyer


NASA’s acting leader seeks to keep his job with new lunar lander announcement

NASA would not easily be able to rip up its existing HLS contracts with SpaceX and Blue Origin, especially with the former, since much of the funding has already been awarded through milestone payments. Rather, Duffy would likely have to find new funding from Congress. And it would not be cheap. This NASA analysis, from 2017, estimates that a cost-plus, sole-source lunar lander would cost $20 billion to $30 billion, or nearly 10 times what NASA awarded to SpaceX in 2021.

SpaceX founder Elon Musk, responding to Duffy’s comments, seemed to relish the challenge posed by industry competitors.

“SpaceX is moving like lightning compared to the rest of the space industry,” Musk said on the social media site he owns, X. “Moreover, Starship will end up doing the whole Moon mission. Mark my words.”

The timing

Duffy’s remarks on television on Monday morning, although significant for the broader space community, also seemed intended for an audience of one—President Trump.

The president appointed Duffy, already leading the Department of Transportation, to lead NASA on an interim basis in July. This came six weeks after the president rescinded, for political reasons, his nomination of billionaire and private astronaut Jared Isaacman to lead the space agency.

Trump was under the impression that Duffy would use this time to shore up NASA’s leadership while also looking for a permanent chief of the space agency. However, Duffy appears to have paid little more than lip service to finding a successor.

Since late summer there has been a groundswell of support for Isaacman in the White House, and among some members of Congress. The billionaire has met with Trump several times, both at the White House and Mar-a-Lago, and sources report that the two have a good rapport. There has been some momentum toward the president re-nominating Isaacman, with Trump potentially making a decision soon. Duffy’s TV appearances on Monday morning appear to be part of an effort to forestall this momentum by showing Trump he is actively working toward a lunar landing during his second term, which ends in January 2029.



Claude Code gets a web version—but it’s the new sandboxing that really matters

Now, it can instead be given permissions for specific file system folders and network servers. That means fewer approval steps, but it’s also more secure overall against prompt injection and other risks.

Anthropic’s demo video for Claude Code on the web.

According to Anthropic’s engineering blog, the new network isolation approach only allows Internet access “through a unix domain socket connected to a proxy server running outside the sandbox. … This proxy server enforces restrictions on the domains that a process can connect to, and handles user confirmation for newly requested domains.” Additionally, users can customize the proxy to set their own rules for outgoing traffic.

This way, the coding agent can do things like fetch npm packages from approved sources, but without carte blanche for communicating with the outside world, and without badgering the user with constant approvals.
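Anthropic has not published the proxy’s internals beyond that description, but the shape of the idea is straightforward to sketch. Below is a minimal, hypothetical Python sketch of a domain-allowlist proxy listening on a unix domain socket; the socket path, rule format, and deny behavior are illustrative assumptions, not Anthropic’s actual implementation.

```python
# Hypothetical sketch of a domain-allowlist proxy on a unix domain socket,
# in the spirit of Anthropic's description. Names, paths, and the rule
# format are illustrative assumptions, not Anthropic's implementation.
import os
import socket

SOCKET_PATH = "/tmp/sandbox-proxy.sock"          # hypothetical path
ALLOWED_DOMAINS = {"registry.npmjs.org"}         # hypothetical rules

def domain_allowed(host: str) -> bool:
    # Allow exact matches or subdomains of an approved domain.
    return any(host == d or host.endswith("." + d) for d in ALLOWED_DOMAINS)

def serve() -> None:
    if os.path.exists(SOCKET_PATH):
        os.unlink(SOCKET_PATH)
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(SOCKET_PATH)
    server.listen()
    while True:
        conn, _ = server.accept()
        with conn:
            # Expect an HTTP-style "CONNECT host:port HTTP/1.1" request.
            first_line = conn.recv(4096).decode(errors="replace").split("\r\n")[0]
            parts = first_line.split()
            if len(parts) == 3 and parts[0] == "CONNECT":
                host = parts[1].rsplit(":", 1)[0]
                if domain_allowed(host):
                    # A real proxy would open a TCP tunnel to the host and
                    # relay bytes both ways; this sketch just acknowledges.
                    conn.sendall(b"HTTP/1.1 200 Connection Established\r\n\r\n")
                    continue
            # Anthropic's proxy prompts the user for newly requested
            # domains; this sketch simply refuses them.
            conn.sendall(b"HTTP/1.1 403 Forbidden\r\n\r\n")

if __name__ == "__main__":
    serve()
```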

For many developers, these additions are more significant than the availability of web or mobile interfaces. They allow Claude Code agents to operate more independently without as many detailed, line-by-line approvals.

That’s more convenient, but it’s a double-edged sword, as it will also make code review even more important. One of the strengths of the too-many-approvals approach was that it made sure developers were still looking closely at every little change. Now it might be a little bit easier to miss Claude Code making a bad call.

The new features are available now in beta as a research preview, for Claude users with Pro or Max subscriptions.



Do animals fall for optical illusions? It’s complicated.

A tale of two species


View from above of the apparatuses used for ring doves (A) and guppies (B). Credit: M. Santaca et al., 2025

The authors tested 38 ring doves and 19 guppies (of the “snakeskin cobra green” ornamental strain) for their experiments. The doves were placed in a testing cage with the bottom covered with an anti-slip wooden panel and a branch serving as a perch and starting point; the feeding station was at the opposite end of the cage. The guppies were tested in single tanks with a gravel bottom.

In both cases, the test subjects were presented with visual stimuli in the form of two white plastic cards. Sizes differed for the doves and the guppies, but each card showed an array of six black circles with a bit of food serving as the center “circle”: red millet seeds for the doves and commercial flake food for the guppies. The circles were smaller on one of the cards and larger on the other. The subjects were free to choose food from one of the cards, and the card with the unchosen food was removed promptly. If no choice was made after 15 minutes, the trial was null and the team tried again after a 15-minute interval.

The authors found that the guppies were indeed highly susceptible to the Ebbinghaus illusion, choosing food surrounded by smaller circles much more frequently, suggesting they perceived it as larger and hence more desirable. The results for ring doves were more mixed, however: some of the doves seemed to be susceptible while others were not, suggesting that their perceptual strategies are more local and detail-oriented, and less influenced by surrounding context.

“The doves’ mixed responses suggest that individual experience or innate bias can strongly shape how an animal interprets illusions,” the authors concluded. “Just like in humans, where some people are strongly fooled by illusions and others hardly at all, animal perception is not uniform.”

The authors acknowledge that their study has limitations. For instance, guppies and ring doves diverged hundreds of millions of years ago and hence are phylogenetically distant, so the perceptual differences between them could be due not just to ecological pressures, but also to evolutionary traits gained or lost via natural selection. Future experiments involving more closely related species with different sensory environments would better isolate the role of ecological factors in animal perception.

Frontiers in Psychology, 2025. DOI: 10.3389/fpsyg.2025.1653695 (About DOIs).



Bubble, Bubble, Toil and Trouble

We have the classic phenomenon where suddenly everyone decided it is good for your social status to say we are in an ‘AI bubble.’

Are these people short the market? Do not be silly. The conventional wisdom response to that question these days is that, as was said in 2007, ‘if the music is playing you have to keep dancing.’

So even with lots of people newly thinking there is a bubble, the market has not moved down, other than (modestly) on actual news items, usually related to another potential round of tariffs, or that one time we had a false alarm during the DeepSeek Moment.

So, what’s the case we’re in a bubble? What’s the case we’re not?

People get confused about bubbles, often applying that label any time prices fall. So you have to be clear on what question is being asked.

If ‘that was a bubble’ simply means ‘number go down’ then it is entirely uninteresting to say things are bubbles.

So if we operationalize ‘bubble’ to simply mean that at some point there is a substantial drawdown in market values (e.g. a 20% drop in the Nasdaq sustained for 6 months), then I would be surprised by this, but the market would need to be dramatically, crazily underpriced for that not to be a plausible thing to happen.

If a bubble means something similar to the 2000 dot com bubble, as in valuations that are not plausible expectations for the net present values of future cash flows? No.

[Standard disclaimer: Nothing on this blog is ever investment advice.]

Before I dive into the details, a time sensitive point of order, that you can skip if you would not consider political donations:

When trying to pass laws, it is vital to have a champion. You need someone in each chamber of Congress who is willing to help craft, introduce and actively fight for good bills. Many worthwhile bills do not get advanced because no one will champion them.

Alex Bores did this with New York’s RAISE Act, an AI safety bill along similar lines to SB 53 that is currently on the governor’s desk. I did a full RTFB (read the bill) on it, and found it to be a very good bill that I strongly supported. It would not have happened without him championing the bill and spending political capital on it.

By far the strongest argument against the bill is that it would be better if such bills were done on the Federal level.

He’s trying to address this by running for Congress in my own district, NY-12, to succeed Jerry Nadler. The district is deeply Democratic, so this will have no impact on the partisan balance. What it would do is give real AI safety a knowledgeable champion in the House of Representatives, capable of championing good bills.

Eric Nayman makes an extensive case for considering donating to Alex Bores today, in his first 24 hours, as donations in the first 24 hours are extremely valuable. Sonnet 4.5 estimates that in this case, a donation on day one is worth about double what it would be worth later. If you do decide to donate, they prefer that you use this link to ensure the donation gets fully registered today.

As always, remember while considering this that political donations are public.

(Note: I intend to remove this announcement from this post after the 24 hour window closes, and move it to AI #139.)

Sagarika Jaisinghani (Bloomberg): A record share of global fund managers said artificial intelligence stocks are in a bubble following a torrid rally this year, according to a survey by Bank of America Corp.

About 54% of participants in the October poll indicated tech stocks were looking too expensive, an about-turn from last month when nearly half had dismissed those concerns. Fears that global stocks were overvalued also hit a peak in the latest survey.

So a month ago most people thought things were fine, and now it’s a bubble?

This is a very light bubble definition, as these things go.

Nothing importantly bearish happened in that month other than bullish deals, so presumably this is a ‘circular deals freak us out’ shift in mood? Or it could be a cascade effect.

There is definitely reason for concern. If you remove the label ‘bubble’ and simply say ‘AI’ then the quote from Deutsche Bank below is correct, as AI is responsible for essentially all economic growth. Also you can mostly replace ‘US’ with ‘world.’

Unusual Whales: “The AI bubble is the only thing keeping the US economy together,” Deutsche Bank has said per TechSpot.

Not quite, at this size you need some doubt involved. But the basic answer is yes.

Roon: one reason to disbelieve in a sizeable bubble is when the largest financial institutions in the world are openly calling it that.

Jon Stokes: I disagree. I was in my early 20’s during the dotcom bubble and was in tech, and everyone everywhere knew it was a bubble — from the banks to the VCs down to the individual programmers in SF. Everyone talked about it openly in the last ~1yr of it, but the numbers kept going up.

I don’t think this is Dotcom 2.0 but I think it’s possible the market is getting ahead of itself. That said, I also lived through the cloud “bubble” which turned out to not be a bubble at all — I even wrote my own contribution to the “are we in a bubble?” literature in like 2012. Anyone who actually traded on the idea that the cloud buildout was a bubble lost out bigtime.

Every time there’s a big new infra buildout there’s bubble talk.

My point is that is very possible to have a bubble that everyone everywhere knows is a bubble, yet it keeps on bubbling because nobody wants to miss the action & everyone thinks they can time an exit. “Enjoy the party, but dance close to the exits” was the slogan back then.

It is definitely possible to get into an Everybody Knows situation with a bubble, for various reasons, both when it is and when it isn’t actually a bubble. For example, there’s Bitcoin, and Bitcoin, and Bitcoin, and Bitcoin, but there’s also Bitcoin.

Is it evidence for or against a bubble when everyone says it’s a bubble?

My gut answer is it depends on who is everyone.

If everyone is everyone working in the industry? Then yeah, evidence for a bubble.

If everyone is everyone at the major economic institutions? Not so much.

So I decided to check.

There was essentially no correlation: 42.5% of AI workers and 41.7% of others said there is a bubble. That is a large percentage, so things are certainly somewhat concerning. It certainly seems likely that certain subtypes of AI investment are ‘in a bubble’ in the sense that investors in those subtypes will lose money, which you would expect in anything like an efficient market.

In particular, consensus seems to be, and I agree with it (reminder: not investment advice), that investments in ‘companies with products in position to get steamrolled by OpenAI and other frontier labs’ are as a group not going to do well. If you want to call that an ‘AI bubble’ you can, but that seems net misleading. I also wouldn’t be excited to short that basket, since you’re exposed if even one of them hits it big. Remember that if you bought a tech stock portfolio at the dot com peak, you still got Amazon.

Whereas if you had a portfolio of ‘picks and shovels’ or of the frontier labs themselves, that still seems to me like a fine place to bet, although it is no longer a ‘I can’t believe they’re letting me buy at these prices, this is free money’ level of fine. You now have to actually have beliefs about the future and an investment thesis.

Noah Smith speculates on bubble causes and types when it comes to AI.

Noah Smith: An AI crash isn’t certain, but I think it’s more likely than people think.

Looking at the historical examples of railroads, electricity, dotcoms, and housing can help us understand what an AI crash would look like.

A burning question that’s on a lot of people’s minds right now is: Why is the U.S. economy still holding up? The manufacturing industry is hurting badly from Trump’s tariffs, the payroll numbers are looking weak, and consumer sentiment is at Great Recession levels.

… Another possibility is that tariffs are bad, but are being canceled out by an even more powerful force — the AI boom.

You could have a speculative bubble, or an extrapolative bubble, or simply a big mistake about the value of the tech. He thinks if it is a bubble it would be of the latter type, proposing we use Bezos’ term ‘industrial bubble.’

Noah Smith: … When we look at the history of industrial bubbles, and of new technologies in general, it becomes clear that in order to cause a crash, AI doesn’t have to fail. It just has to mildly disappoint the most ardent optimists.

I don’t think that’s quite right. The market reflects a variety of perspectives, and it will almost always be way below where the ardent optimists would place it. The ardent optimists are the rock with the word ‘BUY!’ written on it.

What is right is that if AI over some time frame disappoints relative to expectations, sufficiently to shift forward expectations downward from their previous level, that would cause a substantial drop in prices, which could then break momentum and worsen various positions, causing a larger drop in prices.

Thus, we could have AI ultimately having a huge economic impact and ultimately being fully transformative (maybe killing everyone, maybe being amazingly great), and have nothing go that wrong along the way, but still have what people at the time would call ‘the bubble bursting.’

Indeed, if the market is at all efficient, there is a lot of ‘upside risk’ of AI being way more impactful than suggested by the market price, which means there has to be a corresponding downside risk too. Part of that risk is geopolitical, an anti-AI movement could rise, or the supply chain could be disrupted by tariff battles or a war over Taiwan. By traditional definitions of ‘bubble,’ that means a potential bubble.

Ethan Mollick: I don’t have much to add to the bubble discussion, but the “this time is different” argument is, in part, based on the sincere belief of many at the AI labs that there is a race to superintelligence & the winner gets… everything.

It is a key dynamic that is not discussed much.

You don’t have to believe it (or think this is a good idea), but many of the AI insiders really do. Their public statements are not much different than their private ones. Without considering that zero sum dimension, a lot of what is happening in the space makes less sense.

Even a small chance of a big upside should mean a big boost to valuation. Indeed that is the reason tech startups are funded and venture capital firms exist. If you don’t get the fully transformational level of impact, then at some point value will drop.

Consider the parallel to Bitcoin, and the thinking that there is some small percentage chance of it becoming ‘digital gold’ or even the new money. If you felt there was no way it could fall by a lot from any given point in time, or even if you were simply confident that it was probably not going to crash, it would be a fantastic screaming buy.

AI also has to retain expectations that providers will be profitable. If AI is useful but it is expected to not provide enough profits, that too can burst the bubble.

Matthew Yglesias: Key point in here from @Noahpinion — even if the AI tech turns out to be exactly as promising as the bulls think, it’s not totally clear whether this would mean high margin businesses.

A slightly random example but passenger jetliners have definitely worked out as a technology, tons of people use them and they are integral to the whole world economy. But the combined market cap of Boeing + Airbus is unimpressive.

Jetliners seem a lot more important and impressive than the idea of a big box building supply store, but in terms of market cap Home Depot > Boeing + Airbus. Technology is hard but then business is also hard.

Sam D’Amico: AI is going to rock but we may have a near-term capex bubble like the fiber buildout during the dotcom boom.

Matthew Yglesias writes more thoughts here, noting that AI is propping up the whole economy and offering reasons some people believe there’s a bubble, as well as reasons it likely isn’t one, and especially isn’t one in the pure bubble sense of cryptocurrency or Beanie Babies; there’s clearly a there there.

To say confidently that there is no bubble in AI is to claim, among other things, that the market is horribly inefficient, and that AI assets are and will remain dramatically underpriced but reliably gain value as people gain situational awareness and are mugged by reality. This includes the requirement that the currently trading AI assets will be poised to capture a lot of value.

Alternatively, how about the possibility that there could be a crash for no reason?

Simeon: Agreed with Noah here. People underestimate the odds of a crash, and things like the OA x AMD deal make such things more likely. It just takes a sufficient number of people to be scared at the same time.

Remember the market reaction to DeepSeek? It can be irrational.

The DeepSeek moment is sobering, since the AI market was down quite a lot on news that should have been priced in and if anything should have made prices go up. What is to stop a similar incorrect information cascade from happening again? Other than a potential ‘Trump put’ or Fed put, very little.

Derek Thompson provides his best counterargument, saying AI probably isn’t a bubble. He also did an episode on this for the Plain English podcast with Azeem Azhar of Exponential View, to balance his previous episode from September 23 on ‘how the AI bubble could burst.’

Derek Thompson: And yet, look around: Is anybody actually acting as if AI is a bubble?

… Everyone claims that they know the music is ending soon, and yet everybody is still dancing along to the music.

Well, yeah, when there’s a bubble everyone goes around saying ‘there’s a bubble’ but no one does anything about it, until they do and then there’s no bubble?

As Tyler Cowen sometimes asks, are you short the market? Me neither.

Derek breaks down the top arguments for a bubble.

  1. Lofty valuations for companies with no clear path to profit.

  2. Unprecedented spending on an unproven business.

  3. A historic chasm between capex spending and revenue.

  4. An eerie level of financial opacity.

  5. A byzantine level of corporate entanglement.

A lot of overlap here. We have a lot of money being invested in and spent on AI, without much revenue. True that. The AI companies all invest in and buy from each other, at a level that, yeah, is somewhat suspicious. Yeah, fair. The chips are only being depreciated on five-year horizons and that seems a bit long? Eh, that seems fine to me; older chips are still useful as long as demand exceeds supply.

So why not a bubble? That part is gated, but his thread lays out the core responses. One, the AI companies look nothing like the joke companies in the dot com bubble.

Two, the AI companies need AI revenues to grow 100%-200% a year, and that sounds like a lot, but so far you’re seeing even more than that.

Epoch AI: One way bubbles pop: a technology doesn’t deliver value as quickly as investors bet it will. In light of that, it’s notable that OpenAI is projecting historically unprecedented revenue growth — from $10B to $100B — over the next three years.

OpenAI’s revenue growth has been extremely impressive: from <$1B to >$10B in only three years. Still, a few other companies have pulled off similar growth.

We found four such US companies in the past fifty years. Of these, only Google went on to top $100B in revenue.

Peter Wildeford: OpenAI is claiming they will double revenue three years in a row… this is historically unprecedented, but may be possible (so far they are 3x/year and NVIDIA has done 2x/yr lately)

But OpenAI has also projected revenue of $100B in 2028. We found seven companies which achieved revenue growth from $10B to $100B in under a decade.

None of them did it in six years, let alone three.
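To put rough numbers on what these claims imply, using only the round figures quoted above (a sanity check, not a projection): doubling every year for three years is an 8x, so a true $10B-to-$100B path actually requires slightly better than doubling.

```python
# Arithmetic behind the quoted growth claims; round figures only.
start, target, years = 10e9, 100e9, 3          # $10B -> $100B in 3 years

print(start * 2**years / 1e9)                  # 80.0 -- doubling gets $80B

implied = (target / start) ** (1 / years)      # compound rate for a true 10x
print(round(implied, 2))                       # ~2.15x, i.e. ~115% per year
```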

As Matt Levine says, OpenAI has a business model now, because when you need debt investors you need to have a business plan. This one strikes a balance between ‘good enough to raise money’ and ‘not so good no one will believe it.’ Which makes it well under where I expect their revenue to land.

Frankly, OpenAI is downplaying their expectations because if they used their actual projections then no one would believe them, and they might get sued if things didn’t work out. The baseline scenario is that OpenAI (and Anthropic) blow the projections out of the water.

Martha Gimbel: Ok not the main point of [Thompson’s] article but I love this line: “The whole thing is vaguely Augustinian: O Lord, make me sell my Nvidia position and rebalance toward consumer staples, but not yet.”

Timothy Lee thinks it’s probably ‘not a bubble yet,’ partly citing Thompson and partly because we are seeing differentiation on which models do tasks best. He also links to the worry that there might be no moat, as fast follows offer the same service much cheaper and kill your margin, since your product might be only slightly better.

The thing about AI is that it might in total cost a lot, but in exchange you get a ton. It doesn’t have to be ten times better to have the difference be a big deal. For most use cases of AI, you would be wise to pay twice the price for something 10% better, and often wise to pay 10 or 100 times as much. Always think absolute cost, not relative.

Vinay Sridhar: What finally caught my attention was this stat from an NYT article by Natasha Sarin (via Marginal Revolution): “To provide some sense of scale, that means the equivalent of about $1,800 per person in America will be invested this year on AI”. That is a bucketload of spending.

It is, but is it? I spend more than that on one of my AI subscriptions, and get many times that much value in return. Thinking purely in terms of present day concrete benefits, when I ask ‘how many different use cases of AI are providing me $1,800 in value?’ I can definitely include taxes and accounting, product evaluation and search, medical help, analysis of research papers, coding and general information and search. So that’s at least six.

Similarly, does this sound like a problem, given the profit margins of these companies?

Vinay Sridhar: Hyperscalers (Microsoft, Amazon, Alphabet and Meta) historically ran capex at 11-16% of revenue. Today they’re at 22%, with revenue YoY growth in the 15-25% range (ex Nvidia) – capex spending is dramatically outpacing revenue growth. The four major hyperscalers are spending approximately $320 billion combined on AI infrastructure in 2025 – with public statements on how this will likely continue in the coming years.

Similarly, Vinay notes that the valuations are only somewhat high.

Current valuations, while elevated, remain “much lower” than prior tech bubbles. The Nasdaq’s forward P/E ratio is ~28X today. At the 2000 peak, it exceeded 70X. Even in 2007 before the financial crisis, tech valuations were higher relative to earnings than they are now.

Looking more closely, the MAG7 — who are spending the majority of this capex — has a blended P/E of ~32X — expensive, as Coatue’s Laffont brothers said in June, but not extreme relative to past tech bubbles.

That’s a P/E ratio, and all this extra capex spending if anything reduces short term earnings. Does a 28x forward P/E ratio sound scary in context, with YoY growth in the 20% range? It doesn’t to me. Sure, there’s some downside, but it would be a dramatic inefficiency if there wasn’t.

Vinay offers several other notes as well.

One thing I find confusing is all the ‘look how fast the chips will lose value’ arguments. Here’s Vinay’s supremely confident claims, as another example of this:

Vinay Sridhar: 7. GPU Depreciation Schedules Don’t Match Reality

Nvidia now unveils a new AI chip every year instead of every two years. Jensen Huang said in March that “when Blackwell starts shipping in volume you couldn’t give Hoppers away”.

Meanwhile, companies keep extending depreciation schedules. Microsoft: 4 to 6 years (2022). Alphabet: 4 to 6 years (2023). Amazon and Oracle: 5 to 6 years (2024). Meta: 5 to 5.5 years (January 2025). Amazon partially reversed course in January 2025, moving some assets back to 5 years, noting this would cut operating profit by $700m.

The Economist analyzed the impact: if servers depreciate over 3 years instead of current schedules, the AI big five’s combined annual pre-tax profit falls by $26bn (8% of last year’s total). Over 2 years: $1.6trn market cap hit. If you take Huang literally at 1 year, this implies $4trn, one-third of their collective worth. Barclays estimated higher depreciation costs would shave 5-10% from earnings per share.
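A rough illustration of why the schedule matters so much, assuming simple straight-line depreciation and using only the ~$320 billion combined capex figure quoted above (everything else here is invented for illustration):

```python
# Straight-line depreciation at different assumed schedules, applied to
# the ~$320B combined hyperscaler capex figure quoted above. Illustrative
# arithmetic only, not the Economist's or Barclays' actual model.
capex = 320e9

for years in (6, 5, 3, 2, 1):
    print(f"{years}-year schedule: ${capex / years / 1e9:.0f}B/year expense")

# Cutting the schedule from 6 years to 3 roughly doubles the annual charge
# on the same assets, and that increase comes straight out of reported
# pre-tax profit -- the mechanism behind the estimates quoted above.
```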

Hedgie takes a similar angle, calling the economics unsustainable because the lifespan of data center components is only 3-10 years due to rapid technological advances.

Hedgie: Kupperman originally assumed data center components would depreciate over 10 years, but learned from two dozen senior professionals that the actual lifespan is just 3-10 years due to rapid technology advances. His revised calculations show the industry needs $320-480 billion in revenue just to break even on 2025 data center spending alone. Current AI revenue sits around $20 billion annually.

What strikes me most is that none of the senior data center professionals Kupperman spoke with understand how the financial math works either.

No one I have seen is saying that chip capability improvements are accelerating dramatically. If that is the case we need to update our timelines.

When Nvidia releases a new chip every year, that doesn’t mean they do the 2027 chip in 2026 and then do the 2029 chip in 2027. It means they do the 2027 chip in 2027, and before that do the best chip you can do in 2026, and it also means Nvidia is good at marketing and life is coming at them fast.

Huang’s statement about free hoppers is obviously deeply silly, and everyone knows not to take such Nvidia statements seriously or literally. The existence of new better chips does not invalidate older worse chips unless supply exceeds demand by enough that the old chips cost more to run than the value they bring.

That’s very obviously not going to happen over three years let alone one or two. You can do math on the production capacity available.

If the marginal cost of hoppers in 2028 was going to be approximately zero, what does that imply?

By default? Stop thinking about capex depreciation and start thinking about whether this means we get a singularity in 2028, since you can now scale compute as long as you have power. Also, get long China, since they have unlimited power generation.

If that’s not why, then it means AI use cases turned out to be severely limited, and the world has a large surplus of compute and not much to do with it.

It kind of has to be one or the other. Neither seems plausible.

I see not only no sign of overcapacity, I see signs of undercapacity, including a scramble for every chip people can get and compute being a limiting factor on many labs in practice right now, including OpenAI and Anthropic. The price of compute has recently been rising, not falling, including the price for renting older chips.

Dave Friedman looked into the accounting here, ultimately not seeing this as a solvency or liquidity issue, but he thinks there could be an accounting optics issue.

Could recent trends reverse, and faster than expected depreciations and ability to charge for older chips cause problems for the accounting in data centers? I mean, sure, that’s obviously possible, if we actually produce enough better chips, or demand sufficiently lags expectations, or some combination thereof.

This whole question seems like a strange thing for those investing hundreds of billions and everyone trading the market to not have priced into their plans and projections? Yes, current OpenAI revenue is on the order of $20 billion, but if you project that out over 3-10 years, that number is going to be vastly higher, and there are other companies.

I mostly agree with Charles that the pro-bubble arguments are remarkably weak, given the amount of bubble talk we are seeing, and that when you combine these two facts it should move you towards there not being a bubble.

Unlike Charles, I am not about to use leverage. I consider leverage in personal investing to be reserved for extreme situations, and a substantial drop in prices is very possible. But I definitely understand.

Charles: There have been so many terrible arguments for why we’re in an AI bubble lately, and so few good ones, that I’ve been convinced the appropriate update is in the “not a bubble” direction and increased my already quite long position.

Fwiw that looks like now being 1.4x leveraged long a mix of about 50% index funds and 50% specific bets (GOOG, TSMC, AMZN the biggest of those).

Most of what changed, I think, is that a bunch of circular deals were done in close succession. Combine that with the exponential growth expectations for AI, with people’s lack of understanding of the technology and what it will be able to do, and with valuations approaching the point where one can question whether there is any room left to grow, and it reasonably triggered various heuristics and freaked people out.

If we define a bubble narrowly as ‘we see a Nasdaq price decline of 20% sustained for 6 months’ I would give that on the order of 25% to happen within the next few years, including as part of a marketwide decline in prices. It has happened to the wider market as recently as 2022, and about 5 times in the last 50 years.

If a decline does happen, I predict I will probably use that opportunity to buy more.

That does not have to be true. Perhaps there will have been large shifts in anticipated future capabilities, or in the competitive landscape and ability to capture profits, or the general economic conditions, and the drop will be fully justified and reflect AI slowing down.

But most of the time this will not be what happened, and the drop will not ultimately have much effect, although it would presumably slow down progress slightly.

Connor Leahy: I want to preregister the following opinion:

I think it’s plausible, but by no means guaranteed, that we could see a massive financial crisis or bubble pop affecting AI in the next year.

I expect if this happens, it will be mostly for mundane economic reasons (overleveraged markets, financial policy of major nations, mistiming of bets even by small amounts and good ol’ fraud), not because the technology isn’t making rapid progress.

I expect such a crisis to have at most modest effects on timelines to existentially dangerous ASI being developed, but will be used by partisans to try and dismiss the risk.

Sadly, a bunch of people making poorly thought through leveraged bets on the market tells you little about underlying object reality of how powerful AI is or soon will be.

Do not be fooled by narratives.




Roberta Williams’ The Colonel’s Bequest was a different type of adventure game

However, my mom was another story. I remember her playing Dr. Mario a lot, and we played Donkey Kong Country together when I was young—standard millennial childhood family gaming stuff. But the games I most associate with her from my childhood are adventure games. She liked King’s Quest, of course—but I also remember her being particularly into the Hugo trilogy of games.

As I mentioned above, I struggled to get hooked on those. Fortunately, we were able to meet in the middle on The Colonel’s Bequest.

I remember swapping chairs with my mom as we attempted additional playthroughs of the game; I enjoyed seeing the secrets she found that I hadn’t because I was perhaps too young to think things through the way she did.

Games you played with family stick with you more, so I think I mostly remember The Colonel’s Bequest so well because, as I recall, it was my mom’s favorite game.

The legacy of The Colonel’s Bequest

The Colonel’s Bequest may have been a pivotal game for me personally, but it hasn’t really resonated through gaming history the way that King’s Quest, The Secret of Monkey Island, or other adventure titles did.

I think that’s partly because many people might understandably find the game a bit boring. There’s not much to challenge you here, and your character is kind of just along for the ride. She’s not the center of the story, and she’s not really taking action. She’s just walking around, listening and looking, until the clock runs out.

That formula has more niche appeal than traditional point-and-click adventure games.

Still, the game has its fans. You can buy and download it from GOG to play it today, of course, but it also recently inspired a not-at-all-subtle spiritual successor by developer Julia Minamata called The Crimson Diamond, which we covered here at Ars. That game is worth checking out, too, though it goes a more traditional route with its gameplay.

The Crimson Diamond‘s influence from The Colonel’s Bequest wasn’t subtle, but that’s OK. Credit: GOG

And of course, The Colonel’s Bequest creators Roberta and Ken Williams are still active; they somewhat recently released a 3D reboot of Colossal Cave, a title many credit as the foremost ancestor of the point-and-click adventure genre.




Yes, everything online sucks now—but it doesn’t have to


from good to bad to nothing

Ars chats with Cory Doctorow about his new book Enshittification.

We all feel it: Our once-happy digital spaces have become increasingly less user-friendly and more toxic, cluttered with extras nobody asked for and hardly anybody wants. There’s even a word for it: “enshittification,” named 2023 Word of the Year by the American Dialect Society. The term was coined by tech journalist/science fiction author Cory Doctorow, a longtime advocate of digital rights. Doctorow has spun his analysis of what’s been ailing the tech industry into an eminently readable new book, Enshittification: Why Everything Suddenly Got Worse and What To Do About It.

As Doctorow tells it, he was on vacation in Puerto Rico, staying in a remote cabin nestled in a cloud forest with microwave Internet service—i.e., very bad Internet service, since microwave signals struggle to penetrate through clouds. It was a 90-minute drive to town, but when they tried to consult TripAdvisor for good local places to have dinner one night, they couldn’t get the site to load. “All you would get is the little TripAdvisor logo as an SVG filling your whole tab and nothing else,” Doctorow told Ars. “So I tweeted, ‘Has anyone at TripAdvisor ever been on a trip? This is the most enshittified website I’ve ever used.’”

Initially, he just got a few “haha, that’s a funny word” responses. “It was when I married that to this technical critique, at a moment when things were quite visibly bad to a much larger group of people, that made it take off,” Doctorow said. “I didn’t deliberately set out to do it. I bought a million lottery tickets and one of them won the lottery. It only took two decades.”

Yes, people sometimes express regret to him that the term includes a swear word. To which he responds, “You’re welcome to come up with another word. I’ve tried. ‘Platform decay’ just isn’t as good.” (“Encrapification” and “enpoopification” also lack a certain je ne sais quoi.)

In fact, it’s the sweariness that people love about the word. While that also means his book title inevitably gets bleeped on broadcast radio, “The hosts, in my experience, love getting their engineers to creatively bleep it,” said Doctorow. “They find it funny. It’s good radio, it stands out when every fifth word is ‘enbeepification.’”

People generally use “enshittification” colloquially to mean “the degradation in the quality and experience of online platforms over time.” Doctorow’s definition is more specific, encompassing “why an online service gets worse, how that worsening unfolds,” and how this process spreads to other online services, such that everything is getting worse all at once.

For Doctorow, enshittification is a disease with symptoms, a mechanism, and an epidemiology. It has infected everything from Facebook, Twitter, Amazon, and Google, to Airbnb, dating apps, iPhones, and everything in between. “For me, the fact that there were a lot of platforms that were going through this at the same time is one of the most interesting and important factors in the critique,” he said. “It makes this a structural issue and not a series of individual issues.”

It starts with the creation of a new two-sided online product of high quality, initially offered at a loss to attract users—say, Facebook, to pick an obvious example. Once the users are hooked on the product, the vendor moves to the second stage: degrading the product in some way for the benefit of their business customers. This might include selling advertisements, scraping and/or selling user data, or tweaking algorithms to prioritize content the vendor wishes users to see rather than what those users actually want.

This locks in the business customers, who, in turn, invest heavily in that product, such as media companies that started Facebook pages to promote their published content. Once business customers are locked in, the vendor can degrade those services too—i.e., by de-emphasizing news and links away from Facebook—to maximize profits to shareholders. Voila! The product is now enshittified.

The four horsemen of the shitocalypse

Doctorow identifies four key factors that have played a role in ushering in an era that he has dubbed the “Enshittocene.” The first is competition (markets), in which companies are motivated to make good products at affordable prices, with good working conditions, because otherwise customers and workers will go to their competitors.  The second is government regulation, such as antitrust laws that serve to keep corporate consolidation in check, or levying fines for dishonest practices, which makes it unprofitable to cheat.

The third is interoperability: the inherent flexibility of digital tools, which can play a useful adversarial role. “The fact that enshittification can always be reversed with a dis-enshittifying counter-technology always acted as a brake on the worst impulses of tech companies,” Doctorow writes. Finally, there is labor power; in the case of the tech industry, highly skilled workers were scarce and thus had considerable leverage over employers.

All four factors, when functioning correctly, should serve as constraints to enshittification. However, “One by one each enshittification restraint was eroded until it dissolved, leaving the enshittification impulse unchecked,” Doctorow writes. Any “cure” will require reversing those well-established trends.

But isn’t all this just the nature of capitalism? Doctorow thinks it’s not, arguing that the aforementioned weakening of traditional constraints has resulted in the usual profit-seeking behavior producing very different, enshittified outcomes. “Adam Smith has this famous passage in Wealth of Nations about how it’s not due to the generosity of the baker that we get our bread but to his own self-regard,” said Doctorow. “It’s the fear that you’ll get your bread somewhere else that makes him keep prices low and keep quality high. It’s the fear of his employees leaving that makes him pay them a fair wage. It is the constraints that causes firms to behave better. You don’t have to believe that everything should be a capitalist or a for-profit enterprise to acknowledge that that’s true.”

Our wide-ranging conversation below has been edited for length to highlight the main points of discussion.

Ars Technica: I was intrigued by your choice of framing device, discussing enshittification as a form of contagion. 

Cory Doctorow: I’m on a constant search for different framing devices for these complex arguments. I have talked about enshittification in lots of different ways. That frame was one that resonated with people. I’ve been a blogger for a quarter of a century, and instead of keeping notes to myself, I make notes in public, and I write up what I think is important about something that has entered my mind, for better or for worse. The downside is that you’re constantly getting feedback that can be a little overwhelming. The upside is that you’re constantly getting feedback, and if you pay attention, it tells you where to go next, what to double down on.

Another way of organizing this is the Galaxy Brain meme, where the tiny brain is “Oh, this is because consumers shopped wrong.” The medium brain is “This is because VCs are greedy.” The larger brain is “This is because tech bosses are assholes.” But the biggest brain of all is “This is because policymakers created the policy environment where greed can ruin our lives.” There’s probably never going to be just one way to talk about this stuff that lands with everyone. So I like using a variety of approaches. I suck at being on message. I’m not going to do Enshittification for the Soul and Mornings with Enshittifying Maury. I am restless, and my Myers-Briggs type is ADHD, and I want to have a lot of different ways of talking about this stuff.

Ars Technica: One site that hasn’t (yet) succumbed is Wikipedia. What has protected Wikipedia thus far? 

Cory Doctorow: Wikipedia is an amazing example of what we at the Electronic Frontier Foundation (EFF) call the public interest Internet. Internet Archive is another one. Most of these public interest Internet services start off as one person’s labor of love, and that person ends up being what we affectionately call the benevolent dictator for life. Very few of these projects have seen the benevolent dictator for life say, “Actually, this is too important for one person to run. I cannot be the keeper of the soul of this project. I am prone to self-deception and folly just like every other person. This needs to belong to its community.” Wikipedia is one of them. The founder, my friend Jimmy Wales, woke up one day and said, “No individual should run Wikipedia. It should be a communal effort.”

There’s a much more durable and thick constraint on the decisions of anyone at Wikipedia to do something bad. For example, Jimmy had this idea that you could use AI in Wikipedia to help people make entries and navigate Wikipedia’s policies, which are daunting. The community evaluated his arguments and decided—not in a reactionary way, but in a really thoughtful way—that this was wrong. Jimmy didn’t get his way. It didn’t rule out something in the future, but that’s not happening now. That’s pretty cool.

Wikipedia is not just governed by a board; it’s also structured as a nonprofit. That doesn’t mean that there’s no way it could go bad. But it’s a source of friction against enshittification. Wikipedia has its entire corpus irrevocably licensed as the most open it can be without actually being in the public domain. Even if someone were to capture Wikipedia, there’s limits on what they could do to it.

There’s also a labor constraint in Wikipedia in that there’s very little that the leadership can do without bringing along a critical mass of a large and diffuse body of volunteers. That cuts against the volunteers working in unison—they’re not represented by a union; it’s hard for them to push back with one voice. But because they’re so diffuse and because there’s no paychecks involved, it’s really hard for management to do bad things. So if there are two people vying for the job of running the Wikimedia Foundation and one of them has got nefarious plans and the other doesn’t, the nefarious plan person, if they’re smart, is going to give it up—because if they try to squeeze Wikipedia, the harder they squeeze, the more it will slip through their grasp.

So these are structural defenses against enshittification of Wikipedia. I don’t know that it was in the mechanism design—I think they just got lucky—but it is a template for how to run such a project. It does raise this question: How do you build the community? But if you have a community of volunteers around a project, it’s a model of how to turn that project over to that community.

Ars Technica: Your case studies naturally include the decay of social media, notably Facebook and the social media site formerly known as Twitter. How might newer social media platforms resist the spiral into “platform decay”?

Cory Doctorow: What you want is a foundation in which people on social media face few switching costs. If the social media is interoperable, if it’s federatable, then it’s much harder for management to make decisions that are antithetical to the interests of users. If they do, users can escape. And it sets up an internal dynamic within the firm, where the people who have good ideas don’t get shouted down by the people who have bad but more profitable ideas, because it makes those bad ideas unprofitable. It creates both short and long-term risks to the bottom line.

There has to be a structure that stops their investors from pressurizing them into doing bad things, that stops them from rationalizing their way into complying. I think there’s this pathology where you start a company, you convince 150 of your friends to risk their kids’ college fund and their mortgage working for you. You make millions of users really happy, and your investors come along and say, “You have to destroy the life of 5 percent of your users with some change.” And you’re like, “Well, I guess the right thing to do here is to sacrifice those 5 percent, keep the other 95 percent happy, and live to fight another day, because I’m a good guy. If I quit over this, they’ll just put a bad guy in who’ll wreck things. I keep those 150 people working. Not only that, I’m kind of a martyr because everyone thinks I’m a dick for doing this. No one understands that I have taken the tough decision.”

I think that’s a common pattern among people who, in fact, are quite ethical but are also capable of rationalizing their way into bad things. I am very capable of rationalizing my way into bad things. This is not an indictment of someone’s character. But it’s why, before you go on a diet, you throw away the Oreos. It’s why you bind yourself to what behavioral economists call “Ulysses pacts”: You tie yourself to the mast before you go into the sea of sirens, not because you’re weak but because you’re strong enough now to know that you’ll be weak in the future.

I have what I would call the epistemic humility to say that I don’t know what makes a good social media network, but I do know what makes it so that when they go bad, you’re not stuck there. You and I might want totally different things out of our social media experience, but I think that you should 100 percent have the right to go somewhere else without losing anything. The easier it is for you to go without losing something, the better it is for all of us.

My dream is a social media universe where knowing what network someone is using is just a weird curiosity. It’d be like knowing which cell phone carrier your friend is using when you give them a call. It should just not matter. There might be regional or technical reasons to use one network or another, but it shouldn’t matter to anyone other than the user what network they’re using. A social media platform where it’s always easier for users to leave is much more future-proof and much more effective than trying to design characteristics of good social media.

Ars Technica: How might this work in practice?

Cory Doctorow: I think you just need a protocol. This is [Mike] Masnick’s point: protocols, not products. We don’t need a universal app to make email work. We don’t need a universal app to make the web work. I always think about this in the context of administrable regulation. Making a rule that says your social media network must be good for people to use and must not harm their mental health is impossible. The fact intensivity of determining whether a platform satisfies that rule makes it a non-starter.

Whereas if you were to say, “OK, you have to support an existing federation protocol, like AT Protocol and Mastodon ActivityPub,” both have ways to port identity from one place to another and have messages auto-forward. This is also in RSS. There’s a permanent redirect directive. You do that, you’re in compliance with the regulation.

Or you have to do something that satisfies the functional requirements of the spec. So it’s not “did you make someone sad in a way that was reckless?” That is a very hard question to adjudicate. Did you satisfy these functional requirements? It’s not easy to answer that, but it’s not impossible. If you want to have our users be able to move to your platform, then you just have to support the spec that we’ve come up with, which satisfies these functional requirements.

We don’t have to have just one protocol. We can have multiple ones. Not everything has to connect to everything else, but everyone who wants to connect should be able to connect to everyone else who wants to connect. That’s end-to-end. End-to-end is not “you are required to listen to everything someone wants to tell you.” It’s that willing parties should be connected when they want to be.
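For the “permanent redirect directive” Doctorow mentions, the web-native primitive is an HTTP 301 response: the old host answers every request for a moved feed or identity by naming the new home, and well-behaved clients update their stored address. A minimal sketch, with the port and destination URL as hypothetical placeholders:

```python
# Minimal sketch of a permanent redirect for a moved feed: the old server
# answers every GET with HTTP 301 pointing at the new home. The port and
# destination URL are hypothetical placeholders.
from http.server import BaseHTTPRequestHandler, HTTPServer

NEW_HOME = "https://new.example.com/feed.xml"

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(301)                  # 301 = Moved Permanently
        self.send_header("Location", NEW_HOME)   # where clients should go
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), RedirectHandler).serve_forever()
```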

Ars Technica: What about security and privacy protocols like GPG and PGP?

Cory Doctorow: There’s this argument that the reason GPG is so hard to use is that it’s intrinsic; you need a closed system to make it work. But also, until pretty recently, GPG was supported by one part-time guy in Germany who got 30,000 euros a year in donations to work on it, and he was supporting 20 million users. He was primarily interested in making sure the system was secure rather than making it usable. If you were to put Big Tech quantities of money behind improving ease of use for GPG, maybe you decide it’s a dead end because it is a 30-year-old attempt to stick a security layer on top of SMTP. Maybe there’s better ways of doing it. But I doubt that we have reached the apex of GPG usability with one part-time volunteer.

I just think there’s plenty of room there. If you have a pretty good project that is run by a large firm and has had billions of dollars put into it, the most advanced technologists and UI experts working on it, and you’ve got another project that has never been funded and has only had one volunteer on it—I would assume that dedicating resources to that second one would produce pretty substantial dividends, whereas the first one is only going to produce these minor tweaks. How much more usable does iOS get with every iteration?

I don’t know if PGP is the right place to start to make privacy, but I do think that if we can create independence of the security layer from the transport layer, which is what PGP is trying to do, then it wouldn’t matter so much whether there is end-to-end encryption in Mastodon DMs or in Bluesky DMs. And again, it doesn’t matter whose SIM is in your phone, so it just shouldn’t matter which platform you’re using so long as it’s secure and reliably delivered end-to-end.

Ars Technica: These days, I’m almost contractually required to ask about AI. There’s no escaping it. But it’s certainly part of the ongoing enshittification.

Cory Doctorow: I agree. Again, the companies are too big to care. They know you’re locked in, and the things that make enshittification possible—like remote software updating, ongoing analytics of use of devices—they allow for the most annoying AI dysfunction. I call it the fat-finger economy, where you have someone who works in a company on a product team, and their KPI, and therefore their bonus and compensation, is tied to getting you to use AI a certain number of times. So they just look at the analytics for the app and they ask, “What button gets pushed the most often? Let’s move that button somewhere else and make an AI summoning button.”

They’re just gaming a metric. It’s causing significant across-the-board regressions in the quality of the product, and I don’t think it’s justified by people who then discover a new use for the AI. That’s a paternalistic justification. The user doesn’t know what they want until you show it to them: “Oh, if I trick you into using it and you keep using it, then I have actually done you a favor.” I don’t think that’s happening. I don’t think people are like, “Oh, rather than press reply to a message and then type a message, I can instead have this interaction with an AI about how to send someone a message about takeout for dinner tonight.” I think people are like, “That was terrible. I regret having tapped it.” 

The speech-to-text is unusable now. I flatter myself that my spoken and written communication is not statistically average. The things that make it me and that make it worth having, as opposed to just a series of multiple-choice answers, is all the ways in which it diverges from statistical averages. Back when the model was stupider, when it gave up sooner if it didn’t recognize what word it might be and just transcribed what it thought you’d said rather than trying to substitute a more probable word, it was more accurate.  Now, what I’m getting are statistically average words that are meaningless.

That elision of nuance and detail is characteristic of what makes AI products bad. There is a bunch of stuff that AI is good at that I’m excited about, and I think a lot of it is going to survive the bubble popping. But I fear that we’re not planning for that. I fear what we’re doing is taking workers whose jobs are meaningful, replacing them with AIs that can’t do their jobs, and then those AIs are going to go away and we’ll have nothing. That’s my concern.

Ars Technica: You prescribe a “cure” for enshittification, but in such a polarized political environment, do we even have the collective will to implement the necessary policies?

Cory Doctorow: The good news is also the bad news, which is that this doesn’t just affect tech. Take labor power. There are a lot of tech workers who are looking at the way their bosses treat the workers they’re not afraid of—Amazon warehouse workers and drivers, Chinese assembly line manufacturers for iPhones—and realizing, “Oh, wait, when my boss stops being afraid of me, this is how he’s going to treat me.” Mark Zuckerberg stopped going to those all-hands town hall meetings with the engineering staff. He’s not pretending that you are his peers anymore. He doesn’t need to; he’s got a critical mass of unemployed workers he can tap into. I think a lot of Googlers figured this out after the 12,000-person layoffs. Tech workers are realizing they missed an opportunity, that they’re going to have to play catch-up, and that the only way to get there is by solidarity with other kinds of workers.

The same goes for competition. There’s a bunch of people who care about media, who are watching Warner about to swallow Paramount and who are saying, “Oh, this is bad. We need antitrust enforcement here.” When we had a functional antitrust system for the last four years, we saw a bunch of telecoms mergers stopped because once you start enforcing antitrust, it’s like eating Pringles. You just can’t stop. You embolden a lot of people to start thinking about market structure as a source of either good or bad policy. The real thing that happened with [former FTC chair] Lina Khan doing all that merger scrutiny was that people just stopped planning mergers.

There are a lot of people who benefit from this. It’s not just tech workers or tech users; it’s not just media users. Hospital consolidation, pharmaceutical consolidation, has a lot of people who are very concerned about it. Mark Cuban is freaking out about pharmacy benefit manager consolidation and vertical integration with HMOs, as he should be. I don’t think that we’re just asking the anti-enshittification world to carry this weight.

Same with the other factors. The best progress we’ve seen on interoperability has been through right-to-repair. It hasn’t been through people who care about social media interoperability. One of the first really good state-level right-to-repair bills was the one that [Governor] Jared Polis signed in Colorado for powered wheelchairs. Those people have a story that is much more salient to normies.

“What do you mean you spent six months in bed because there’s only two powered wheelchair manufacturers and your chair broke and you weren’t allowed to get it fixed by a third party?” And they’ve slashed their repair department, so it takes six months for someone to show up and fix your chair. So you had bed sores and pneumonia because you couldn’t get your chair fixed. This is bullshit.

So the coalitions are quite large. The thing that all of those forces share—interoperability, labor power, regulation, and competition—is that they’re all downstream of corporate consolidation and wealth inequality. Figuring out how to bring all of those different voices together, that’s how we resolve this. In many ways, the enshittification analysis and remedy are a human factors and security approach to designing an enshittification-resistant Internet. It’s about understanding this as a red team, blue team exercise. How do we challenge the status quo that we have now, and how do we defend the status quo that we want?

Anything that can’t go on forever eventually stops. That is the first law of finance, Stein’s law. We are reaching multiple breaking points, and the question is whether we reach things like breaking points for the climate and for our political system before we reach breaking points for the forces that would rescue those from permanent destruction.


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

Yes, everything online sucks now—but it doesn’t have to Read More »

nasa’s-next-moonship-reaches-last-stop-before-launch-pad

NASA’s next Moonship reaches last stop before launch pad

The Orion spacecraft, which will fly four people around the Moon, arrived inside the cavernous Vehicle Assembly Building at NASA’s Kennedy Space Center in Florida late Thursday night, ready to be stacked on top of its rocket for launch early next year.

The late-night transfer covered about 6 miles (10 kilometers) from one facility to another at the Florida spaceport. NASA and its contractors are continuing preparations for the Artemis II mission after the White House approved the program as an exception to work through the ongoing government shutdown, which began on October 1.

The sustained work could set up Artemis II for a launch opportunity as soon as February 5 of next year. Astronauts Reid Wiseman, Victor Glover, Christina Koch, and Jeremy Hansen will be the first humans to fly on the Orion spacecraft, a vehicle that has been in development for nearly two decades. The Artemis II crew will make history on their 10-day flight by becoming the first people to travel to the vicinity of the Moon since 1972.

Where things stand

The Orion spacecraft, developed by Lockheed Martin, has made several stops at Kennedy over the last few months since leaving its factory in May.

First, the capsule moved to a fueling facility, where technicians filled it with hydrazine and nitrogen tetroxide propellants, which will feed Orion’s main engine and maneuvering thrusters on the flight to the Moon and back. In the same facility, teams loaded high-pressure helium and ammonia coolant into Orion’s propulsion and thermal control systems.

The next stop was a nearby building where the Launch Abort System was installed on the Orion spacecraft. The tower-like abort system would pull the capsule away from its rocket in the event of a launch failure. Orion stands roughly 67 feet (20 meters) tall with its service module, crew module, and abort tower integrated together.

Teams at Kennedy also installed four ogive panels to serve as an aerodynamic shield over the Orion crew capsule during the first few minutes of launch.

The Orion spacecraft, with its Launch Abort System and ogive panels installed, is seen last month inside the Launch Abort System Facility at Kennedy Space Center, Florida. Credit: NASA/Frank Michaux

It was then time to move Orion to the Vehicle Assembly Building (VAB), where a separate team has worked all year to stack the elements of NASA’s Space Launch System rocket. In the coming days, cranes will lift the spacecraft, weighing 78,000 pounds (35 metric tons), dozens of stories above the VAB’s center aisle, then up and over the transom into the building’s northeast high bay to be lowered atop the SLS heavy-lift rocket.

NASA’s next Moonship reaches last stop before launch pad Read More »

12-years-of-hdd-analysis-brings-insight-to-the-bathtub-curve’s-reliability

12 years of HDD analysis brings insight to the bathtub curve’s reliability

But as seen in Backblaze’s graph above, the company’s HDDs aren’t adhering to that principle. The blog’s authors noted that in 2021 and 2025, Backblaze’s drives had a “pretty even failure rate through the significant majority of the drives’ lives, then a fairly steep spike once we get into drive failure territory.”

The blog continues:

What does that mean? Well, drives are getting better, and lasting longer. And, given that our trendlines are about the same shape from 2021 to 2025, we should likely check back in when 2029 rolls around to see if our failure peak has pushed out even further.

Speaking with Ars Technica, Doyle said that Backblaze’s analysis is good news for individuals shopping for larger hard drives because the devices are “going to last longer.”

She added:

In many ways, you can think of a datacenter’s use of hard drives as the ultimate test for a hard drive—you’re keeping a hard drive on and spinning for the max amount of hours, and often the amount of times you read/write files is well over what you’d ever see as a consumer. Industry trend-wise, drives are getting bigger, which means that oftentimes, folks are buying fewer of them. Reporting on how these drives perform in a data center environment, then, can give you more confidence that whatever drive you’re buying is a good investment.
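For context, fleet statistics like these are usually expressed as an annualized failure rate computed from drive-days, the formula Backblaze has long published in its drive-stats reports. A minimal sketch of that arithmetic, with made-up sample numbers:

```python
# Annualized failure rate (AFR) from fleet data. The formula matches
# Backblaze's published drive-stats method; the sample numbers are made up.
def annualized_failure_rate(failures: int, drive_days: int) -> float:
    """AFR (%) = failures / (drive_days / 365) * 100"""
    return failures / (drive_days / 365) * 100

# Example: 60 failures across 10,000 drives each observed for a full year.
print(f"{annualized_failure_rate(60, 10_000 * 365):.2f}%")  # -> 0.60%
```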

The longevity of HDDs is another reason for shoppers to still consider them over faster, more expensive SSDs.

“It’s a good idea to decide how justified the improvement in latency is,” Doyle said.

Questioning the bathtub curve

Doyle and Patterson aren’t looking to toss the bathtub curve out with the bathwater. They’re not suggesting that the bathtub curve doesn’t apply to HDDs, but rather that it overlooks additional factors affecting HDD failure rates, including “workload, manufacturing variation, firmware updates, and operational churn.” The principle also makes the following assumptions, per the authors:

  • Devices are identical and operate under the same conditions
  • Failures happen independently, driven mostly by time
  • The environment stays constant across a product’s life

While these conditions can largely be met in datacenter environments, “conditions can’t ever be perfect,” Doyle and Patterson noted. When considering an HDD’s failure rates over time, it’s wise to consider both the bathtub curve and how you use the component.
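For readers who want to see the idealized curve itself: reliability texts commonly model the bathtub as a hazard (instantaneous failure) rate that sums an early-life term, a constant random-failure floor, and a wear-out term, often built from Weibull hazards. Here is a minimal sketch with purely illustrative parameters; in these terms, Backblaze’s finding amounts to the infant-mortality term shrinking, leaving a flat middle followed by a late wear-out spike.

```python
# The textbook bathtub curve as a sum of Weibull hazard rates:
# shape k < 1 falls over time (infant mortality), k = 1 is a constant
# random-failure floor, k > 1 rises with age (wear-out).
# All parameters below are illustrative, not fitted to any dataset.
def weibull_hazard(t: float, k: float, lam: float) -> float:
    """h(t) = (k / lam) * (t / lam) ** (k - 1)"""
    return (k / lam) * (t / lam) ** (k - 1)

def bathtub_hazard(t_years: float) -> float:
    infant = weibull_hazard(t_years, k=0.5, lam=10.0)  # early-life failures
    floor = 0.01                                       # random failures
    wearout = weibull_hazard(t_years, k=5.0, lam=8.0)  # end-of-life failures
    return infant + floor + wearout

for t in (0.25, 1, 3, 5, 7, 9):
    print(f"year {t:>4}: hazard ~ {bathtub_hazard(t):.3f}")
```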

12 years of HDD analysis brings insight to the bathtub curve’s reliability Read More »

lead-poisoning-has-been-a-feature-of-our-evolution

Lead poisoning has been a feature of our evolution


A recent study found lead in teeth from 2 million-year-old hominin fossils.

Our hominid ancestors faced a Pleistocene world full of dangers—and apparently one of those dangers was lead poisoning.

Lead exposure sounds like a modern problem, at least if you define “modern” the way a paleoanthropologist might: a time that started a few thousand years ago with ancient Roman silver smelting and lead pipes. According to a recent study, however, lead is a much more ancient nemesis, one that predates not just the Romans but the existence of our genus Homo. Paleoanthropologist Renaud Joannes-Boyau of Australia’s Southern Cross University and his colleagues found evidence of exposure to dangerous amounts of lead in the teeth of fossil apes and hominins dating back almost 2 million years. And somewhat controversially, they suggest that the toxic element’s pervasiveness may have helped shape our evolutionary history.


The skull of an early hominid. Credit: Einsamer Schütze / Wikimedia

The Romans didn’t invent lead poisoning

Joannes-Boyau and his colleagues took tiny samples of preserved enamel and dentin from the teeth of 51 fossils. In most of those teeth, the paleoanthropologists found evidence that these apes and hominins had been exposed to lead—sometimes in dangerous quantities—fairly often during their early years.

Tooth enamel forms in thin layers, a little like tree rings, during the first six or so years of a person’s life. The teeth in your mouth right now (and of which you are now uncomfortably aware; you’re welcome) are a chemical and physical record of your childhood health—including, perhaps, whether you liked to snack on lead paint chips. Bands of lead-tainted tooth enamel suggest that a person had a lot of lead in their bloodstream during the year that layer of enamel was forming (in this case, “a lot” means an amount measurable in parts per million).

In 71 percent of the hominin teeth that Joannes-Boyau and his colleagues sampled, dark bands of lead in the tooth enamel showed “clear signs of episodic lead exposure” during the crucial early childhood years. Those included teeth from 100,000-year-old members of our own species found in China and 250,000-year-old French Neanderthals. They also included much earlier hominins who lived between 1 and 2 million years ago in South Africa: early members of our genus Homo, along with our relatives Australopithecus africanus and Paranthropus robustus. Lead exposure, it turns out, is a very ancient problem.

Living in a dangerous world

This study isn’t the first evidence that ancient hominins dealt with lead in their environments. Two Neanderthals living 250,000 years ago in France experienced lead exposure as young children, according to a 2018 study. At the time, they were the oldest known examples of lead exposure (and they’re included in Joannes-Boyau and his colleagues’ recent study).

Until a few thousand years ago, no one was smelting silver, plumbing bathhouses, or releasing lead fumes in car exhaust. So how were our hominin ancestors exposed to the toxic element? Another study, published in 2015, showed that the Spanish caves occupied by other groups of Neanderthals contained enough heavy metals, including lead, to “meet the present-day standards of ‘contaminated soil.’”

Today, we mostly think of lead in terms of human-made pollution, so it’s easy to forget that it’s also found naturally in bedrock and soil. If that weren’t the case, archaeologists couldn’t use lead isotope ratios to tell where certain artifacts were made. And some places—and some types of rock—have higher lead concentrations than others. Several common minerals contain lead compounds, including galena or lead sulfide. And the kind of lead exposure documented in Joannes-Boyau and his colleagues’ study would have happened at an age when little hominins were very prone to putting rocks, cave dirt, and other random objects in their mouths.

Some of the fossils from the Queque cave system in China, which included a 1.8 million-year-old extinct gorilla-like ape called Gigantopithecus blacki, had lead levels higher than 50 parts per million, which Joannes-Boyau and his colleagues describe as “a substantial level of lead that could have triggered some developmental, health, and perhaps social impairments.”

Even for ancient hominins who weren’t living in caves full of lead-rich minerals, wildfires or volcanic eruptions can also release lead particles into the air, and erosion or flooding can sweep buried lead-rich rock or sediment into water sources. If you’re an Australopithecine living upstream of a lead-rich mica outcropping, for example, erosion might sprinkle poison into your drinking water—or the drinking water of the gazelle you eat or the root system of the bush you get those tasty berries from…

Our world is full of poisons. Modern humans may have made a habit of digging them up and pumping them into the air, but they’ve always been lying in wait for the unwary.

Cubic crystals of the lead-sulfide mineral galena.

Digging into the details

Joannes-Boyau and his colleagues sampled the teeth of several hominin species from South Africa, all unearthed from cave systems just a few kilometers apart. All of them walked the area known as the Cradle of Humankind within a few hundred thousand years of each other (at most), and they would have shared a very similar environment. But they also would have had very different diets and ways of life, and that’s reflected in their wildly different exposures to lead.

A. africanus had the highest exposure levels, while P. robustus had signs of infrequent, very slight exposures (with Homo somewhere in between the two). Joannes-Boyau and his colleagues chalk the difference up to the species’ different diets and ecological niches.

“The different patterns of lead exposure could suggest that P. robustus lead bands were the result of acute exposure (e.g., wild forest fire),” Joannes-Boyau and his colleagues wrote, “while for the other two species, known to have a more varied diet, lead bands may be due to more frequent, seasonal, and higher lead concentration through bioaccumulation processes in the food chain.”

Did lead exposure affect our evolution?

Given their evidence that humans and their ancestors have regularly been exposed to lead, the team looked into whether this might have influenced human evolution. In doing so, they focused on a gene called NOVA1, which has been linked to both brain development and the response to lead exposure. The results were quite a bit short of decisive; you can think of things as remaining within the realm of a provocative hypothesis.

The NOVA1 gene encodes a protein that influences the processing of messenger RNAs, allowing it to control the production of closely related protein variants from a single gene. It’s notable for a number of reasons. One is its role in brain development; mice without a working copy of NOVA1 die shortly after birth due to defects in muscle control. Its activity is also altered following exposure to lead.

But perhaps its most interesting feature is that modern humans have a version of the gene that differs by a single amino acid from the version found in all other primates, including our closest relatives, the Denisovans and Neanderthals. This raises the prospect that the difference is significant from an evolutionary perspective. Altering the mouse version so that it is identical to the one found in modern humans does alter the vocal behavior of these mice.

But work with human stem cells has produced mixed results. One group, led by one of the researchers involved in this work, suggested that stem cells carrying the ancestral form of the protein behaved differently from those carrying the modern human version. But others have been unable to replicate those results.

Regardless of that bit of confusion, the researchers used the same system, culturing stem cells with the modern human and ancestral versions of the protein. These clusters of cells (called organoids) were grown in media containing two different concentrations of lead, and changes in gene activity and protein production were examined. The researchers found changes, but the significance isn’t entirely clear. There were differences between the cells with the two versions of the gene, even without any lead present. Adding lead could produce additional changes, but some of those were partially reversed if more lead was added. And none of those changes were clearly related either to a response to lead or the developmental defects it can produce.

The relevance of these changes isn’t obvious, either, as stem cell cultures tend to reflect early neural development while the lead exposure found in the fossilized remains is due to exposure during the first few years of life.

So there isn’t any clear evidence that the variant found in modern humans protects individuals who are exposed to lead, much less that it was selected by evolution for that function. And given the widespread exposure seen in this work, it seems like all of our relatives—including some we know modern humans interbred with—would also have benefited from this variant if it was protective.

Science Advances, 2025. DOI: 10.1126/sciadv.adr1524  (About DOIs).


Kiona is a freelance science journalist and resident archaeology nerd at Ars Technica.

Lead poisoning has been a feature of our evolution Read More »

3-years,-4-championships,-but-0-le-mans-wins:-assessing-the-porsche-963

3 years, 4 championships, but 0 Le Mans wins: Assessing the Porsche 963


Riding high in IMSA but pulling out of WEC paints a complicated picture for the factory team.

Porsche didn’t win this year’s Petit Le Mans, but the #6 Porsche Penske 963 won championships for the team, the manufacturer, and the drivers. Credit: Hoch Zwei/Porsche

The car world has long had a thing about numbers. Engine outputs. Top speeds. Zero-to-60 times. Displacement. But the numbers go beyond bench racing specs. Some cars have numbers for names, and few wear them more memorably than Porsche. Its most famous model shares its appellation with the emergency services number here in North America; you dial “nine-one-one,” but the car is a “nine-eleven.”

Some numbers are less well-known, but perhaps more special to Porsche’s fans, especially those who like racing. 908. 917. 956. 962. 919. But how about 963?

That’s Porsche’s current sports prototype, a 670-hp (500 kW) hybrid that for the last three years has battled against rivals in what is starting to look like, if not a golden era for endurance racing, then at least a very purple patch. And the 963 has done well, racing here in IMSA’s WeatherTech Sportscar Championship and around the globe in the FIA World Endurance Championship.

In just three years since its competition debut at the Rolex 24 at Daytona in 2023, it has won 15 of the 49 races it has entered—most recently the WEC Lone Star Le Mans in Texas last month—and earned series championships in WEC (2023, 2024) and IMSA (2024, 2025), sealing the last of those this past weekend at the Petit Le Mans at Road Atlanta, a 10-hour race that caps IMSA’s season.

49 races, 15 wins. But not Le Mans… Credit: Hoch Zwei/Porsche

But the IMSA championships—for the drivers, the teams, and the Michelin Endurance Cup, as well as the manufacturers’ title in GTP—came just days after Porsche announced that its factory team would not enter WEC’s Hypercar category next year, halving the OEM’s prototype race program. And despite all those race wins, victory has eluded the 963 at Le Mans, where Ferrari’s 499P has shut out the competition for three straight years.

Missing the big win?

Porsche pulling out of WEC doesn’t rule out a 963 win at Le Mans next year, as the championship-winning 963 has gotten an invite to the race, and there is still a privateer 963 in the series. But the failure to win the big race has had me wondering whether that keeps the 963 from joining the pantheon of Porsche’s greatest racing cars and whether it needs a Le Mans win to cement its reputation. So I asked Urs Kuratle, director of factory motorsport LMDh at Porsche.

“Le Mans is one of the biggest car races in the world, independent from Porsche and the brands and the names and everything. So not winning this one is a—“bitter pill” is the wrong term, but obviously we would have loved to win this race. But we did not with the 963. We did with previous projects in LMP1h, but not with the 963,” Kuratle told me.

“But still, the 963 program is… a highly successful program because you named it—in the last year, we did not win one win in the championship, we won all of them. Because there’s several—the drivers’, manufacturers’, endurance, all these things—there’s many, many, many championships that the car won and also races. So the answer, basically, is it is a successful program. Not winning Le Mans with Porsche and Penske as well… I’m looking for the right term… it’s a pity,” Kuratle told me.

The #7 Porsche Penske won the Michelin Endurance Cup this year. Credit: Hoch Zwei/Porsche

Was LMDh the right move?

During the heady days of LMP1h, a complicated rulebook sought to create an equivalence of technology between wildly disparate approaches to hybrid race cars that included diesels, mechanical flywheels, and supercapacitors, as well as the more usual gasoline engines and lithium-ion batteries. The cars were technological marvels; unfettered, Porsche’s 919 was almost as fast as an F1 car—and almost as expensive.

These days, costs are more firmly under control, and equivalence of technology has given way to balance of performance to level the playing field. It’s a controversial topic. IMSA and the ACO, which writes the WEC and Le Mans rules, have different approaches to BoP, and the latter has had a perhaps more complicated—or more political—job as it combines cars built to two different rulebooks.

Some, like Ferrari, Peugeot, Toyota, and Aston Martin, build their entire car themselves to the Le Mans Hypercar (LMH) rules, which were written by the organizers of Le Mans and WEC. Others, like Porsche, Acura, Alpine, BMW, Cadillac, and Lamborghini, chose the Le Mans Daytona h (LMDh) rules, written in the US by IMSA. LMDh cars have to start off with one of four approved chassis or spines and must also use the same Bosch hybrid motor and electronics, the same Xtrac transmission, and the same WAE battery, with the rest being provided by the OEM.

Even before the introduction of LMH and LMDh, I wondered whether the LMDh cars would really be given a fair shake at the most important endurance race of the year, considering the organizers of that race wrote an alternative set of technical regulations. In 2025, a Porsche nearly did win, so I’m not sure there is any inherent bias or “not invented here” syndrome, but I asked Kuratle if, in hindsight, Porsche might have gone the “do it all yourself” route of LMH, as Ferrari did.

“If you would have the chance starting on a white piece of paper again, knowing what you know now, you obviously would do many things different. That, I believe, is the nature of a competitive environment we are in,” he told me.

“We have many things not under our control, which is not a criticism on Bosch or all the standard components, manufacturer, suppliers,” Kuratle said. “It’s not a criticism at all, but it’s just the fact that, if there are certain things we would like to change for the 963, for example, the suppliers, they cannot do it because they have to do the same thing for the others as well, and they may not agree to this.”

“They are complicated cars, yes, this is true. But it’s not by the performance numbers; the LMP1 hybrid systems were way more efficient but also [more] performant than the system here. But the [spec components are] the way [they are] for good reasons, and that makes it more complicated,” he said.

North America is a very important market for Porsche, so we may see the 963 race here for the next few years. Credit: Hoch Zwei/Porsche

What’s next?

While the factory 963s will race in WEC no more after contesting the final round of the series in Bahrain in a few weeks, a continued IMSA effort for 2026 is assured, and there are several 963s in the hands of privateer teams. Meanwhile, discussions are ongoing between IMSA, the ACO, and manufacturers on a unified technical rulebook, probably for 2030.

Porsche is known to be a part of those discussions—the head of Porsche Motorsport spoke to The Race in September about them—but Kuratle wasn’t prepared to discuss the next Porsche racing prototype.

“A brand like Porsche is always thinking about the next project they may do. Obviously, we cannot talk about whatever we don’t know yet,” Kuratle said. But it should probably have something that can feed back into the cars that Porsche sells.

“If you look at the new Porsche turbo models, the concept is slightly different, but that comes very, very close to what the LMP1 hybrid system and concept was. So there’s all these things to go back into the road car side, so the experience is crucial,” he said.


Jonathan is the Automotive Editor at Ars Technica. He has a BSc and PhD in Pharmacology. In 2014 he decided to indulge his lifelong passion for the car by leaving the National Human Genome Research Institute and launching Ars Technica’s automotive coverage. He lives in Washington, DC.

3 years, 4 championships, but 0 Le Mans wins: Assessing the Porsche 963 Read More »

ars-live-recap:-is-the-ai-bubble-about-to-pop?-ed-zitron-weighs-in.

Ars Live recap: Is the AI bubble about to pop? Ed Zitron weighs in.


Despite connection hiccups, we covered OpenAI’s finances, nuclear power, and Sam Altman.

On Tuesday of last week, Ars Technica hosted a live conversation with Ed Zitron, host of the Better Offline podcast and one of tech’s most vocal AI critics, to discuss whether the generative AI industry is experiencing a bubble and when it might burst. My Internet connection had other plans, though, dropping out multiple times and forcing Ars Technica’s Lee Hutchinson to jump in as an excellent emergency backup host.

During the times my connection cooperated, Zitron and I covered OpenAI’s financial issues, lofty infrastructure promises, and why the AI hype machine keeps rolling despite some arguably shaky economics underneath. Lee’s probing questions about per-user costs revealed a potential flaw in AI subscription models: Companies can’t predict whether a user will cost them $2 or $10,000 per month.
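The arithmetic behind that unpredictability is simple to sketch: under a flat monthly subscription, margin per user depends entirely on token consumption, which varies by orders of magnitude. All prices and usage figures below are hypothetical:

```python
# Why flat-rate AI subscriptions are hard to price: inference cost scales
# with token usage, which varies wildly per user. All numbers hypothetical.
SUBSCRIPTION = 20.00       # flat fee, $/month
COST_PER_M_TOKENS = 10.00  # provider's inference cost, $ per million tokens

monthly_tokens = {"light user": 0.2e6, "typical user": 2e6,
                  "agentic power user": 1_000e6}

for name, tokens in monthly_tokens.items():
    cost = tokens / 1e6 * COST_PER_M_TOKENS
    print(f"{name:>18}: cost ${cost:>9,.2f}, margin ${SUBSCRIPTION - cost:>10,.2f}")
# light user costs ~$2/month; the power user costs ~$10,000/month
```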

You can watch a recording of the event on YouTube or in the window below.

Our discussion with Ed Zitron. Click here for transcript.

“A 50 billion-dollar industry pretending to be a trillion-dollar one”

I started by asking Zitron the most direct question I could: “Why are you so mad about AI?” His answer got right to the heart of his critique: the disconnect between AI’s actual capabilities and how it’s being sold. “Because everybody’s acting like it’s something it isn’t,” Zitron said. “They’re acting like it’s this panacea that will be the future of software growth, the future of hardware growth, the future of compute.”

In one of his newsletters, Zitron describes the generative AI market as “a 50 billion dollar revenue industry masquerading as a one trillion-dollar one.” He pointed to OpenAI’s financial burn rate (losing an estimated $9.7 billion in the first half of 2025 alone) as evidence that the economics don’t work, coupled with a heavy dose of pessimism about AI in general.

Donald Trump listens as Nvidia CEO Jensen Huang speaks at the White House during an event on “Investing in America” on April 30, 2025, in Washington, DC. Credit: Andrew Harnik / Staff | Getty Images News

“The models just do not have the efficacy,” Zitron said during our conversation. “AI agents is one of the most egregious lies the tech industry has ever told. Autonomous agents don’t exist.”

He contrasted the relatively small revenue generated by AI companies with the massive capital expenditures flowing into the sector. Even major cloud providers and chip makers are showing strain. Oracle reportedly lost $100 million in three months after installing Nvidia’s new Blackwell GPUs, which Zitron noted are “extremely power-hungry and expensive to run.”

Finding utility despite the hype

I pushed back against some of Zitron’s broader dismissals of AI by sharing my own experience. I use AI chatbots frequently for brainstorming useful ideas and helping me see them from different angles. “I find I use AI models as sort of knowledge translators and framework translators,” I explained.

After experiencing brain fog from repeated bouts of COVID over the years, I’ve also found tools like ChatGPT and Claude especially helpful for memory augmentation that pierces through brain fog: describing something in a roundabout, fuzzy way and quickly getting an answer I can then verify. Along these lines, I’ve previously written about how people in a UK study found AI assistants useful accessibility tools.

Zitron acknowledged this could be useful for me personally but declined to draw any larger conclusions from my one data point. “I understand how that might be helpful; that’s cool,” he said. “I’m glad that that helps you in that way; it’s not a trillion-dollar use case.”

He also shared his own attempts at using AI tools, including experimenting with Claude Code despite not being a coder himself.

“If I liked [AI] somehow, it would be actually a more interesting story because I’d be talking about something I liked that was also onerously expensive,” Zitron explained. “But it doesn’t even do that, and it’s actually one of my core frustrations, it’s like this massive over-promise thing. I’m an early adopter guy. I will buy early crap all the time. I bought an Apple Vision Pro, like, what more do you say there? I’m ready to accept issues, but AI is all issues, it’s all filler, no killer; it’s very strange.”

Zitron and I agree that current AI assistants are being marketed beyond their actual capabilities. As I often say, AI models are not people, and they are not good factual references. As such, they cannot replace human decision-making and cannot wholesale replace human intellectual labor (at the moment). Instead, I see AI models as augmentations of human capability: as tools rather than autonomous entities.

Computing costs: History versus reality

Even though Zitron and I found some common ground about AI hype, I expressed a belief that the cost and power requirements of operating AI models will eventually cease to be an issue.

I attempted to make that case by noting that computing costs historically trend downward over time, referencing the Air Force’s SAGE computer system from the 1950s: a four-story building that performed 75,000 operations per second while consuming two megawatts of power. Today, pocket-sized phones deliver millions of times more computing power in a way that would be impossible, power consumption-wise, in the 1950s.

The blockhouse for the Semi-Automatic Ground Environment at Stewart Air Force Base, Newburgh, New York. Credit: Denver Post via Getty Images

“I think it will eventually work that way,” I said, suggesting that AI inference costs might follow similar patterns of improvement over years and that AI tools will eventually become commodity components of computer operating systems. Basically, even if AI models stay inefficient, AI models of a certain baseline usefulness and capability will still be cheaper to train and run in the future because the computing systems they run on will be faster, cheaper, and less power-hungry as well.
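To put rough numbers on that trend (the SAGE figures come from above; the phone figures are ballpark assumptions, not measurements):

```python
# Rough compute-per-watt comparison. SAGE figures are from the article;
# the phone figures are ballpark assumptions, not measurements.
sage_ops_per_sec = 75_000
sage_watts = 2_000_000        # two megawatts

phone_ops_per_sec = 1e12      # assume ~1 TFLOPS of mobile GPU compute
phone_watts = 5               # assume ~5 W sustained draw

sage_eff = sage_ops_per_sec / sage_watts     # ~0.04 ops/sec per watt
phone_eff = phone_ops_per_sec / phone_watts  # ~2e11 ops/sec per watt

print(f"ops/sec per watt improvement: ~{phone_eff / sage_eff:.1e}x")
# -> roughly 5e+12x over about 70 years
```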

Zitron pushed back on this optimism, saying that AI costs are currently moving in the wrong direction. “The costs are going up, unilaterally across the board,” he said. Even newer systems like Cerebras and Groq can generate results faster but not cheaper. He also questioned whether integrating AI into operating systems would prove useful even if the technology became profitable, since AI models struggle with deterministic commands and consistent behavior.

The power problem and circular investments

One of Zitron’s most pointed criticisms during the discussion centered on OpenAI’s infrastructure promises. The company has pledged to build data centers requiring 10 gigawatts of power capacity (equivalent to 10 nuclear power plants, I once pointed out) for its Stargate project in Abilene, Texas. According to Zitron’s research, the town currently has only 350 megawatts of generating capacity and a 200-megawatt substation.

“A gigawatt of power is a lot, and it’s not like Red Alert 2,” Zitron said, referencing the real-time strategy game. “You don’t just build a power station and it happens. There are months of actual physics to make sure that it doesn’t kill everyone.”

He believes many announced data centers will never be completed, calling the infrastructure promises “castles on sand” that nobody in the financial press seems willing to question directly.


After another technical blackout on my end, I came back online and asked Zitron to define the scope of the AI bubble. He says it has evolved from one bubble (foundation models) into two or three, now including AI compute companies like CoreWeave and the market’s obsession with Nvidia.

Zitron highlighted what he sees as essentially circular investment schemes propping up the industry. He pointed to OpenAI’s $300 billion deal with Oracle and Nvidia’s relationship with CoreWeave as examples. “CoreWeave, they literally… They funded CoreWeave, became their biggest customer, then CoreWeave took that contract and those GPUs and used them as collateral to raise debt to buy more GPUs,” Zitron explained.

When will the bubble pop?

Zitron predicted the bubble would burst within the next year and a half, though he acknowledged it could happen sooner. He expects a cascade of events rather than a single dramatic collapse: An AI startup will run out of money, triggering panic among other startups and their venture capital backers, creating a fire-sale environment that makes future fundraising impossible.

“It’s not gonna be one Bear Stearns moment,” Zitron explained. “It’s gonna be a succession of events until the markets freak out.”

The crux of the problem, according to Zitron, is Nvidia. The chip maker’s stock represents 7 to 8 percent of the S&P 500’s value, and the broader market has become dependent on Nvidia’s continued hyper growth. When Nvidia posted “only” 55 percent year-over-year growth in January, the market wobbled.

“Nvidia’s growth is why the bubble is inflated,” Zitron said. “If their growth goes down, the bubble will burst.”

He also warned of broader consequences: “I think there’s a depression coming. I think once the markets work out that tech doesn’t grow forever, they’re gonna flush the toilet aggressively on Silicon Valley.” This connects to his larger thesis: that the tech industry has run out of genuine hyper-growth opportunities and is trying to manufacture one with AI.

“Is there anything that would falsify your premise of this bubble and crash happening?” I asked. “What if you’re wrong?”

“I’ve been answering ‘What if you’re wrong?’ for a year-and-a-half to two years, so I’m not bothered by that question, so the thing that would have to prove me right would’ve already needed to happen,” he said. Amid a longer exposition about Sam Altman, Zitron said, “The thing that would’ve had to happen with inference would’ve had to be… it would have to be hundredths of a cent per million tokens, they would have to be printing money, and then, it would have to be way more useful. It would have to have efficacy that it does not have, the hallucination problems… would have to be fixable, and on top of this, someone would have to fix agents.”

A positivity challenge

Near the end of our conversation, I wondered if I could flip the script, so to speak, and see if he could say something positive or optimistic, although I chose the most challenging subject possible for him. “What’s the best thing about Sam Altman?” I asked. “Can you say anything nice about him at all?”

“I understand why you’re asking this,” Zitron started, “but I wanna be clear: Sam Altman is going to be the reason the markets take a crap. Sam Altman has lied to everyone. Sam Altman has been lying forever.” He continued, “Like the Pied Piper, he’s led the markets into an abyss, and yes, people should have known better, but I hope at the end of this, Sam Altman is seen for what he is, which is a con artist and a very successful one.”

Then he added, “You know what? I’ll say something nice about him, he’s really good at making people say, ‘Yes.’”


Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

Ars Live recap: Is the AI bubble about to pop? Ed Zitron weighs in. Read More »

oneplus-unveils-oxygenos-16-update-with-deep-gemini-integration

OnePlus unveils OxygenOS 16 update with deep Gemini integration

The updated Android software expands what you can add to Mind Space and leans on Gemini to process it. For starters, you can add scrolling screenshots and voice memos up to 60 seconds in length. This provides more data for the AI to generate content. For example, if you take screenshots of hotel listings and airline flights, you can tell Gemini to use your Mind Space content to create a trip itinerary. This will be fully integrated with the phone and won’t require a separate subscription to Google’s AI tools.


Credit: OnePlus

Mind Space isn’t a totally new idea—it’s quite similar to AI features like Nothing’s Essential Space and Google’s Pixel Screenshots and Journal. The idea is that if you give an AI model enough data on your thoughts and plans, it can provide useful insights. That’s still hypothetical based on what we’ve seen from other smartphone OEMs, but that’s not stopping OnePlus from fully embracing AI in Android 16.

In addition to beefing up Mind Space, OxygenOS 16 will also add system-wide AI writing tools, another common AI add-on. As with similar systems from Apple, Google, and Samsung, you’ll be able to use the OnePlus writing tools to adjust text, proofread, and generate summaries.

OnePlus will make OxygenOS 16 available starting October 17 as an open beta. You’ll need a OnePlus device from the past three years to run the software, both in the beta phase and when it’s finally released. As for the final release, OnePlus hasn’t offered a specific date. The initial OxygenOS 16 release will come with the OnePlus 15 devices, with releases for other supported phones and tablets following later.

OnePlus unveils OxygenOS 16 update with deep Gemini integration Read More »