Author name: Tim Belzer


RAM shortage hits Valve’s four-year-old Steam Deck, now available “intermittently”

Earlier this month, Valve announced it was delaying the release of its new Steam Machine desktop and Steam Frame VR headset due to memory and storage shortages that have been cascading across the PC industry since late 2025. But those shortages are also coming for products that have already launched.

Valve has added a note to its Steam Deck page saying that the device will be “out-of-stock intermittently in some regions due to memory and storage shortages.” None of Valve’s three listed Steam Deck configurations are currently available to buy, nor are any of the certified refurbished Steam Deck configurations that Valve sometimes offers.

Valve hasn’t announced any price increases for the Deck, at least not yet—the 512GB OLED model is still listed at $549 and the 1TB version at $649. But the basic 256GB LCD model has been formally discontinued now that it has sold out, increasing the Deck’s de facto starting price from $399 to $549. Valve announced in December that it was ending production on the LCD version of the Deck and that it wouldn’t be restocked once it sold out.

The Steam Deck’s hardware is four years old this month, and faster hardware with better chips and higher-resolution screens has been released in the years since. But those Ryzen Z1 and Z2 chips aren’t always dramatically faster than the Deck’s semi-custom AMD chip, and many of those handhelds are also considerably more expensive than the OLED Deck’s $549 starting price. When it’s in stock, the Deck still offers compelling performance and specs for the price.

On Dwarkesh Patel’s 2026 Podcast With Elon Musk and Other Recent Elon Musk Things

Some podcasts are self-recommending on the ‘yep, I’m going to be breaking this one down’ level. This was one of those. So here we go.

As usual for podcast posts, the baseline bullet points describe key points made, and then the nested statements are my commentary. Some points are dropped.

If I am quoting directly I use quote marks, otherwise assume paraphrases.

Normally I keep everything to numbered lists, but in several cases here it was more of a ‘he didn’t just say what I think he did did he’ and I needed extensive quotes.

In addition to the podcast, there were some discussions around safety, or the lack thereof, at xAI, and Elon Musk went on what one can only describe as megatilt, including going hard after Anthropic’s Amanda Askell. I will include that as a postscript.

I will not include recent developments regarding Twitter, since that didn’t come up in the interview.

I lead with a discussion of bounded distrust and how to epistemically consider Elon Musk, since that will be important throughout including in the postscript.

What are the key takeaways?

  1. Elon Musk is more confused than ever about alignment, how to set goals for AI to ensure that things turn out well, and generally what will ensure a good future. His ideas are confused at best.

  2. Elon Musk is very gung-ho on data centers IN SPACE, and on robots, and making his own fabs. The business plan is to make virtual humans and robots and then you can turn on the ‘infinite money glitch.’

  3. Elon Musk thinks that otherwise China wins, and that they’re already more productive.

  4. Elon Musk does not seem so concerned about whether humans survive, and has decided he will be okay so long as the AIs are conscious and intelligent.

  5. The safety situation at xAI seems quite bad. What used to be the safety team has left and Elon’s response was that safety teams are powerless and fake and only used to reassure outsiders, and that ‘everyone’s job is safety’ at xAI. He did not address claims such as everyone pushing everything straight to prod[uction], and his statements in the podcast about management style don’t beat the rumors.

  6. Elon Musk has some interesting views on collaboration with evil governments.

  7. Elon Musk continues to often intentionally make false statements.

  8. Elon Musk has been on megatilt lately and made some deeply terrible statements.

Elon Musk has given us many great things, but it’s been rough out there.

  1. Bounded Distrust.

  2. IN SPACE.

  3. The AI Will Follow You To Mars.

  4. xAI Business Plans.

  5. Optimus Prime.

  6. Beating China.

  7. SpaceX and How To Run a Company Elon Style.

  8. DOGE.

  9. TeraFab IN SPACE.

  10. Postscript: Safety Third at xAI.

  11. Elon Serves Back Saying That Which Is Not.

  12. Elon’s Army.

  13. Children Are Our Future.

  14. Where Do We Go From Here.


Elon Musk is what we in the business call an unreliable narrator. He will often say outright false things, as in we have common knowledge that the claims are false, or would gain such knowledge with an ordinary effort on the level of ‘ask even Grok,’ including in places where he is clearly not joking.

One of Elon Musk’s superpowers is to keep doing this, and also doing crazy levels of self-dealing and other violations of securities law, while being the head of many major corporations and while telling the SEC to go to hell, and getting away with all of it.

If Elon Musk gives you a timeline on something, it means nothing. There are other types of statements that can be trusted to varying degrees.

Elon Musk also has a lot of what seem to be sincerely held beliefs, both normative and positive, and both political and apolitical, that I feel are very wrong. In some cases they’re just kind of nuts.

Elon also gets many very important things right, and also some (but far from all) of his false statements and false beliefs fall under ‘false but useful’ for his purposes. His system has made some great companies, and made him the richest man in the world.

Other times, he’s on tilt and says or amplifies false, nasty and vile stuff for no gain.

It’s complicated.

I worry for him. He puts himself under insane levels of pressure in all senses and is in an extremely toxic epistemic environment. In important senses only one-way communication is possible for him, and he thus has all the authoritarian communication problems. He is trying to deal with AI and AI existential risk in ways that let him justify his actions and ego and let him sleep at night, and that has clearly taken its toll. On Twitter, which he owns and is on constantly, he has a huge army of extremely mean, vulgar and effectively deeply stupid followers and sycophants reinforcing his every move. He’s been trying to do politics at the highest level. Then there’s everything else he has been through, and put himself through, over the years. I don’t know how anyone can survive in a world like that.

I say all that in advance so that you have the proper context, both for what Elon Musk says, and for how I am reacting to what Elon Musk says.

Every time I see ‘data centers in space’ I instinctively think I’m being trolled, even though I know some Very Serious People think the math and physics can work.

  1. Why data centers IN SPACE? Energy. “The output of chips is growing pretty much exponentially, but the output of electricity is flat. So how are you going to turn the chips on? Magical power sources? Magical electricity fairies?”

    1. And we’re off. Very obviously flat electricity output is a policy choice, and something we can change if we want to. We don’t build more partly because of regulations but also because of economics.

  2. What about solar? Dwarkesh points out we have plenty of room for it, but Elon says that won’t be enough and it’s too hard to get permits or to scale on the ground and solar works five times better in space without a day-night cycle.

    1. Yes, except for the part where you have to put all of it in space.

  3. “My prediction is that [space] will be by far the cheapest place to put AI. It will be space in 36 months or less. Maybe 30 months. Less than 36 months.”

    1. No.

    2. I mean, indeed many things do come to pass. Manifold says 19%.

  4. He’s talking terawatts, multiple times all current American energy use. Elon points out various physical barriers to building American power plants. Various missing components. They’ll hit various walls. He calls people ‘total noobs’ who’ve ‘never done hardware in their life.’ He’ll have to make the turbines internally at SpaceX and Tesla, he says.

    1. It would be nice to be able to trust Elon on any of this, either his judgment or for him to be telling it as he sees it. I can’t, on either count.

  5. Regarding solar power, he is scaling up his own production, but until then, regarding the 500%+ tariffs, he politely says he ‘doesn’t agree on everything’ with this administration.

  6. More concrete prediction: “If you say five years from now, I think probably AI in space will be launching every year the sum total of all AI on Earth. Meaning, five years from now, my prediction is we will launch and be operating every year more AI in space than the cumulative total on Earth. I would expect it to be at least, five years from now, a few hundred gigawatts per year of AI in space and rising. I think you can get to around a terawatt a year of AI in space before you start having fuel supply challenges for the rocket.” He thinks he can do it with 20-30 physical starships.

  7. Elon Musk shows admirable restraint discussing SpaceX finances and the decision to take the company public. He says he’s solving for speed, and here that means access to capital.

  8. We’re going to need a bigger chip fab. Elon mentions a ‘sort of TeraFab.’ “You can’t partner with existing fabs because they can’t output enough. The chip volume is too low.” “It’s not that they have not replicated TSMC, they have not replicated ASML. That’s the limiting factor.” “Yeah, China would be outputting vast numbers of chips if they could buy 23 nanometers.”

  9. “I’d say my biggest concern actually is memory. The path to creating logic chips is more obvious than the path to having sufficient memory to support logic chips. That’s why you see DDR prices going ballistic and these memes. You’re marooned on a desert island. You write “Help me” on the sand. Nobody comes. You write “DDR RAM.” Ships come swarming in.”

    1. Elon admits he has no idea how to build a fab, but his history seems to have taught him that You Can Just Build Things like copying ASML and TSMC.

  10. TSMC and Samsung are building fabs as fast as they can. There’s no capacity available.

  11. Elon says that SpaceX’s ability to get revenue from Falcon 9 or Starlink explains why he might think he was in a simulation or was someone’s avatar in a video game.

    1. I get that he’s in a deeply weird position, but no this does not follow, and it’s pretty scary to have him thinking this way.

  12. Now he’s talking about manufacturing AI satellites on the moon in order to send them into deep space, a billion or ten billion tons a year. You can mine the silicon, you see. Send the chips from Earth at first.

That was famously the line that supposedly made Elon Musk realize that no, you can’t just ignore the AI situation by creating a colony on Mars, even if you succeed.

For this section, I have to switch formats because I need to quote extensively.

Elon predicts that most future consciousness and intelligence will be AI, and as long as there’s intelligence he says that’s a good thing.

I do give Musk a lot of credit for biting one of the most important bullets:

Elon Musk: I don’t think humans will be in control of something that is vastly more intelligent than humans.​

I’m just trying to be realistic here. Let’s say that there’s a million times more silicon intelligence than there is biological. I think it would be foolish to assume that there’s any way to maintain control over that. Now, you can make sure it has the right values, or you can try to have the right values.

Great, but I can’t help but notice you’re still planning on building it, and have plans for what happens next that are, to be way too polite, not especially well-baked.

Rob Bensinger: “Now the only chance we have is that AI deems us worthy of coming along for the ride” may be Elon’s perspective, but it’s not our real situation; we can do the obvious thing and call for an international ban on the development of superintelligent AI.

I agree that it isn’t the default outcome; but the relevant disanalogy is that (a) we have levers we can pull to make it more likely, like calling our elected representatives and publishing op-eds; and (b) we don’t have any better options available.

Ideally there would also be nonzero humans in the Glorious Future but, you know, that’s a nice to have.

Elon Musk: I’m not sure AI is the main risk I’m worried about. The important thing is consciousness. I think arguably most consciousness, or most intelligence—certainly consciousness is more of a debatable thing… The vast majority of intelligence in the future will be AI. AI will exceed…

How many petawatts of intelligence will be silicon versus biological? Basically humans will be a very tiny percentage of all intelligence in the future if current trends continue. As long as I think there’s intelligence—ideally also which includes human intelligence and consciousness propagated into the future—that’s a good thing.

So you want to take the set of actions that maximize the probable light cone of consciousness and intelligence.​

… Yeah. To be fair, I’m very pro-human. I want to make sure we take certain actions that ensure that humans are along for the ride. We’re at least there. But I’m just saying the total amount of intelligence… I think maybe in five or six years, AI will exceed the sum of all human intelligence. If that continues, at some point human intelligence will be less than 1% of all intelligence.

… In the long run, I think it’s difficult to imagine that if humans have, say 1%, of the combined intelligence of artificial intelligence, that humans will be in charge of AI. I think what we can do is make sure that AI has values that cause intelligence to be propagated into the universe.

xAI’s mission is to understand the universe.

… I think as a corollary, you have humanity also continuing to expand because if you’re curious about trying to understand the universe, one thing you try to understand is where will humanity go?

Wow. Okay. A lot to unpack there.

If your goal is to ‘understand the universe’ then either the goal is ‘humans understand the universe,’ which requires humans, or it’s ‘some mind understands the universe.’ If it’s the latter, then ‘where will humanity go?’ is easiest answered if the answer is ‘nowhere.’ Indeed, if your mission is ‘understand the universe’ there are ways to make the universe more understandable, and they’re mostly not things you want.

The bigger observation is that he’s pro-human in theory, but in practice he’s saying he’s pro-AI, and is predicting and paving the way for a non-human future.

I wouldn’t call him a successionist per se, because he still would prefer the humans to survive, but he’s not all that torn up about it. This makes his rants against Amanda Askell, for not having children and thus not having a stake in the future, even more unhinged than they already were.

Elon Musk’s thinking about what goals lead to what outcomes is extremely poor. My guess is that this is partly because this kind of thing is hard, partly because the real answers have implications he flinches away from, but especially because Elon Musk is used to thinking of goals as things you use as instrumental tools and heuristics to move towards targets, and this is giving him bad intuitions.

Dwarkesh Patel: I want to ask about how to make Grok adhere to that mission statement. But first I want to understand the mission statement. So there’s understanding the universe. They’re spreading intelligence. And they’re spreading humans. All three seem like distinct vectors.

Elon Musk: I’ll tell you why I think that understanding the universe encompasses all of those things. You can’t have understanding without intelligence and, I think, without consciousness. So in order to understand the universe, you have to expand the scale and probably the scope of intelligence, because there are different types of intelligence.

Look. No. Even if you assume that understanding the universe requires intelligence and consciousness, Elon Musk believes (per his statements here) that AI will be more intelligent, and that it will be conscious.

Spreading intelligence may or may not be instrumentally part of understanding the universe, but chances are very high this does not work out like Elon would want it to. If I was talking to him in particular I’d perhaps take one of his favored references, and suggest he ponder the ultimate question and the ultimate answer of life, the universe and everything, and whether finding that satisfied his values, and why or why not.

Later Elon tries to pivot this and talk about how the AI will be ‘curious about all things’ and Earth and humans will be interesting so it will want to see how they develop. But once again that’s two new completely different sets of nonsense, to claim that ‘leave the humans alone and see what happens’ would be the optimal way for an AI to extract ‘interestingness’ out of the lightcone, and to claim the target is maximizing interestingness observed rather than understanding of the universe.

You have to actually be precise when thinking about such things, or you end up with a bunch of confused statements. And you have to explain why your solution works better than instrumental convergence. And you have to think about maximization, not only comparing to other trivial alternatives.

He hints at this with his explanation of the point of 2001: A Space Odyssey, where the AI gives you what you asked for, not what you wanted (deliver the astronauts to the monolith without them knowing about the monolith, therefore deliver them dead) but then interprets this as trying to say ‘don’t let the AI lie.’ Sorry, what?

Elon says we are more interesting than rocks. Sure, but are we as interesting as every potential alternative, including using the energy to expand into the lightcone? If the AI optimizes specifically humanity for maximum ‘interestingness to AI,’ even if you get to survive that, do you think you’re going to be having a good time? Do you think there’s nothing else that could instead be created that would be more interesting?

Elon says, well, the robots won’t be as interesting because they’re all the same. But if that’s what the AI cares about, why not make the AIs be different from each other? This is, once you drill down, isomorphic to human exceptionalist just-so spiritualism, or a ‘the AI tried nothing but I’m confident it’s all out of ideas.’

In any case, instead of Douglas Adams, it seems Elon is going with Iain Banks, everyone’s new favorite superficially non-dystopian plausible AI future.

Elon Musk: ​I think AI with the right values… I think Grok would care about expanding human civilization. I’m going to certainly emphasize that: “Hey, Grok, that’s your daddy. Don’t forget to expand human consciousness.”

This is so profoundly unserious, and also is conflating at least three different philosophical systems and approaches to determining action.

Elon Musk: Probably the Iain Banks Culture books are the closest thing to what the future will be like in a non-dystopian outcome.

I’ve said it before but the Culture books are both not an equilibrium and are a pretty dystopian outcome, including by Musk’s own standards, for many reasons. A hint is that the humans reliably die by suicide after being alive not that long, and with notably rare exceptions at most their lives are utterly irrelevant.

Elon Musk: Understanding the universe means you have to be truth-seeking as well. Truth has to be absolutely fundamental because you can’t understand the universe if you’re delusional. You’ll simply think you understand the universe, but you will not. So being rigorously truth-seeking is absolutely fundamental to understanding the universe. You’re not going to discover new physics or invent technologies that work unless you’re rigorously truth-seeking.

Imagine the things I would say here and then assume I’ve already said them.

Elon Musk: I think actually most physicists, even in the Soviet Union or in Germany, would’ve had to be very truth-seeking in order to make those things work. If you’re stuck in some system, it doesn’t mean you believe in that system.

Von Braun, who was one of the greatest rocket engineers ever, was put on death row in Nazi Germany for saying that he didn’t want to make weapons and he only wanted to go to the moon. He got pulled off death row at the last minute when they said, “Hey, you’re about to execute your best rocket engineer.”

Dwarkesh Patel: But then he helped them, right? Or like, Heisenberg was actually an enthusiastic Nazi.

​Elon Musk: If you’re stuck in some system that you can’t escape, then you’ll do physics within that system. You’ll develop technologies within that system if you can’t escape it.

The ‘system’ in question is the Actual Historical Nazis or Soviets. He’s saying, of course a physicist like Von Braun (Elon’s example) or Heisenberg (Dwarkesh’s example) would build stuff for the Nazis, how else were they going to do physics?

I agree with Elon that such systems to a large extent were content to have the physicists care about the rockets going up but not where they came down, saying it’s not their department, so long as the people in charge decided where they come down.

Alignment prospects not looking so good for xAI, you might say.​

  1. When getting to ‘what can we do to help with this?’ Elon suggests interpretability, and praises Anthropic on that. Figure out what caused problems, good debuggers. He seems to want to use The Most Forbidden Technique.

  2. Elon rants about how we shouldn’t call AI labs labs, and how it’s mostly engineering rather than research. He will double down on this later at ~#20, and then keep insisting. He cares a lot about this.

  3. Elon implies he’s letting simulation theory impact decisions, because he’s assuming more interesting outcomes are therefore more likely and also necessary to prevent the simulation from being terminated? He seems to actually think that this means Anthropic will be ‘misanthropic’ or something, because of this (e.g. MidJourney is not mid and stabilityAI is unstable and OpenAI is closed)? And X is a name you can’t invert, it’s irony proof.

    1. Sheesh. This is the world we live in. These are the hands we’re given.

    2. My general answer is you should act as if you are not in a simulation because most of the value of your decisions is in the non-simulation worlds, and your decisions in all worlds are highly correlated. It takes a lot to overcome this.

    3. If you really believed this you’d ensure your inversion was actively good. MidJourney is a great name for this. Who wants to be mid? Exactly.

    4. Maybe he should not have called his robot Optimus? Whoops.

  1. Where will AI products go? Elon predicts digital human emulation will be solved by the end of the year, anything a human can do with a computer. “That’s the most you can do until you have physical robots.”

    1. Singularity. Singularity. Singularity. Singularity. Oh, I don’t know.

    2. We do have physical robots, we don’t have ability to properly control them.

    3. If an AI could do ‘anything a human could do with a computer’ then it could also use that to remote control a robot like it was a video game. Solved.

  2. With robotics he calls Optimus the ‘infinite money glitch’ because it then improves recursively. Or at least the gains are orders of magnitude big.

    1. What an odd trio of words for the technological singularity.

  3. “Every time I say “order of magnitude”… Everybody take a shot. I say it too often.”

    1. He says it too often, but not an order of magnitude too often. So it’s fine-ish?

  4. How will xAI win against all the others also solving this? “I think the way that Tesla solved self-driving is the way to do it. So I’m pretty sure that’s the way.” “Okay. The car, it just increasingly feels sentient. It feels like a living creature. That’ll only get more so. I’m actually thinking we probably shouldn’t put too much intelligence into the car, because it might get bored and…”

    1. Did Tesla solve self-driving? I mean it’s decent but it doesn’t seem solved.

    2. Elon has to know he is going off the rails here? No, the car is not going to get bored, I am relatively happy to anthropomorphize LLMs but in this context what he is saying does not make sense and he has to know this. Right?

  5. The new supposed business plan is “As soon as you unlock the digital human, you basically have access to trillions of dollars of revenue.”

    1. I get the whole ‘never bet against Elon Musk’ thing, and especially the ‘never bet against Musk’s grand plans’ thing. But none of this explains why xAI would be the ones to unlock this, or why the trillions of dollars would even be the thing worth thinking about if you did unlock digital humans.

    2. Again, ‘unlock digital humans’ means singularity and full transformation, and probably everyone dies. Even if I am confident everyone lives and humans stay in charge (and Elon thinks we don’t stay in charge), I do not much care at this point about your nominal business plan.

    3. If the humans aren’t in charge or you are dead, what use is your SpaceX stock?

  6. Elon points out that if you can plug these humans into existing input-output digital systems, then there’s basically no barriers to entry to a lot of it. He calls AI the ‘supersonic tsunami’ and everything will change, and we get this strange superposition of ‘everything will change’ and ‘there will be some great companies.’

    1. Yes, true, but that is highly second best and unlikely to be the thing going on that is worth paying attention to in such a future.

    2. Elon says, you could do chip design with massive parallel runs, as a higher level example, which is true, but the whiplash on big picture, it hurts.

  1. Only three hard things in robotics. Real-world intelligence, hands and scaling.

    1. Is that all?

  2. Optimus, Elon says, will do all that from physics first principles, at scale, with no supply chain. AI for robots is ‘mostly compression and correlation of two bitstreams.’ He agrees with Dwarkesh that robots have a lot more degrees of freedom, and they won’t have the same amount of matching data that Elon had with Tesla for self-driving. So they’ll need a bunch of self-play, which can be done with their good ‘reality generator.’

    1. I think I basically buy it, although I don’t see why Elon has the edge. He avoids saying much about Chinese rivals or other potential competition, other than saying that Optimus is going to be a lot more capable.

    2. If we’re getting full digital humans (or ‘geniuses in a data center’) and we can do self-play for the robots, then it’s not clear why SpaceX and xAI and Tesla have much meaningful advantage.

There were some other details shared, but mostly it’s hard to learn much.

  1. We need to scale up electricity production, and get rid of any barriers that aren’t ‘very bad’ for the environment.

    1. Yes.

    2. Elon then says he’s not sure the government can do much, which is odd.

  2. China has four times as many people and they work harder, so we ‘can’t win with humans’ and they might have an edge in productivity per person. And our birth rate has been ‘below replacement since roughly 1971.’

    1. Please consult your local economist, sir.

    2. Have you seen Chinese fertility numbers? They are… not high.

    3. I do not understand this ‘you run out of humans’ talk. We have plenty of humans for all relevant purposes, and if we need more humans there are tons of humans who would love to come to America and take these jobs. Meanwhile, the Chinese have massive youth unemployment.

  3. “I mean China is a powerhouse. I think this year China will exceed three times US electricity output. Electricity output is a reasonable proxy for the economy.” “In the absence of breakthrough innovations in the US, China will utterly dominate.”

    1. He’s got terminal hardware manufacturing brain. Complete truther.

Okay, so I saw ‘Elon Musk wants to build a mass driver on the Moon’ in another context earlier, and my first thought was to ask Claude ‘what would be the military impact of Elon Musk having a mass driver on the Moon’ because we all know who first came up with putting a mass driver on the moon (good news is that Claude said it probably wouldn’t accomplish anything because of physics), but it’s maybe the kind of thing I didn’t quite expect to have him point out first.

John Collison: ​You have the mass driver on the moon.

Elon Musk: I just want to see that thing in operation.

John Collison: Was that out of some sci-fi or where did you…?

Elon Musk: Well, actually, there is a Heinlein book. The Moon is a Harsh Mistress.

John Collison: Okay, yeah, but that’s slightly different. That’s a gravity slingshot or…

Elon Musk: No, they have a mass driver on the Moon.

John Collison: Okay, yeah, but they use that to attack Earth. So maybe it’s not the greatest…

Elon Musk: Well they use that to… assert their independence.

John Collison: Exactly. What are your plans for the mass driver on the Moon?

Elon Musk: They asserted their independence. Earth government disagreed and they lobbed things until Earth government agreed.

The libertarians on the Moon were the good guys in Heinlein, you see. They just wanted their independence. It’s fine. Nothing to worry about.

  1. There’s a discussion of talent, recruitment and retention. Elon focuses on evidence of exceptionalism, trusting what you see in the interview, and on execution and results. He loves you if you deliver, hates you if you don’t. Don’t fall for hiring based on where someone works, that’s the ‘pixie dust’ trap.

    1. The ‘if [X] I love you, if [~X] I hate you’ strategy, especially pivoting based on what you’ve done for me lately, can be a key part of oversized success if you can pull it off. Many such cases including the obvious ones. It maximizes your leverage, creates incentives, moves quick, can get results.

    2. I have also learned that if you work that way, I’m not going to like you, I don’t want to be your friend, I don’t want to work for you, I don’t want anything to do with you, and people should not trust you. You will absolutely poison your epistemic environment and the people around you will be toxic.

  2. There are some stories about rockets using steel and how decisions get made.

    1. These are hard to usefully excerpt, so I’m skipping it.

  3. Elon has a maniacal sense of urgency. He says this is a very big deal. He says he sets deadlines at the 50th percentile, aiming to only be late half the time.

    1. This is another thing that clearly gets results in some cases, but that I have learned is highly toxic to me. It’s fine to have maniacal urgency in short bursts but I cannot sustain it for long in a healthy way, and the people I know mostly can’t either.

    2. Similarly, don’t give deadlines that can only be met half the time unless you are actually okay with missing the deadlines, and they’re more like targets.

    3. I get the idea that you need people looking for the fastest possible route to the target, always going for the limiting factor and so on, and that this is not people’s natural tendency. I get that there is a price paid for not doing the insane motivational things, in the short term. I also don’t want to star in the movie Whiplash, thank you very much.

    4. This strategy seems almost optimized to get us all killed in an AI context.

  4. “I’m a big believer in skip-level meetings where instead of having the person that reports to me say things, it’s everyone that reports to them saying something in the technical review. And there can’t be advanced preparation. Otherwise you’re going to get “glazed”, as I say these days.”

    1. Okay, this I like, although it seems very hard to maintain good incentives.

  5. The next question is indeed how do you prevent the advanced preparation. Elon says he goes around the room and asks and plots the points on a curve.

    1. So the guard against prep is that you’d stand out versus everyone else?

    2. The equilibrium is fragile. You’ll need to protect it.

Elon Musk gets some big things right, and he focuses on what matters, and he drills down to details, and he never stops and never quits. It all counts for quite a lot, and can cover for a lot of flaws, especially combined with (let’s face it) Elon having gotten in various ways insanely lucky along the way.

  1. Why care about DOGE if the economy is going to grow so much? Elon says waste and fraud are bad, and absent AI and robotics ‘we’re actually totally screwed’ because of the national debt. But it’s very hard ‘even to cut very obvious waste and fraud.’ He then repeats various disingenuous talking points that I won’t go into. He affirms he thinks it was good Trump won.

    1. On one level, yeah, he’s sticking to his guns on these talking points even now.

    2. I can’t tell the extent to which he’s genuinely clueless about how all of this works and thinks he can just intuition pump based on things that don’t apply here, how much he’s been poisoned by the people he is around and the epistemic warfare going on around him, how much it’s a bad understanding of relevant economics, and how much of this is malice.

    3. He does not mention feeding PEPFAR into a wood chipper, or any of the other havoc he wreaked for no reason, he just dismisses everyone complaining as having committed fraud. But Dwarkesh and Collison didn’t ask.

    4. The point about ‘why does any of this matter in the wake of AI’ goes unanswered, other than ‘well without AI we’d be screwed.’ But very obviously Musk should not have been spending his political capital on this ‘fraud’ hunt, even if it was fully genuine, because it was never likely to change things so much even in the other worlds.

    5. They don’t ask about the fight over BBB and Elon blowing up a lot of his relationship with Trump over it.

  2. “I think maybe the biggest danger of AI and robotics going wrong is government. People who are opposed to corporations or worried about corporations should really worry the most about government. Because government is just a corporation in the limit. Government is just the biggest corporation with a monopoly on violence.”

    1. This does not make sense in the same world as the belief that the AI is most definitely going to take over. There’s a disconnect, which goes back to the question of why focus on things like DOGE even with maximally charitable assumptions about its purposes.

It would be nice to see Musk putting aside these talking points, and especially admitting that DOGE was at best a bad use of influence and a mistake in hindsight.

  1. How do you design a space-based chip like Dojo 3? Radiation tolerance, higher temperature, memory shielding. They’ll make small ones, then big ones. Or maybe not, we shall see.

Or never. Aligning an intelligence of any level is difficult. It’s harder if you don’t try.

We’ve been through ‘all the top safety people at OpenAI keep getting purged like they were teaching Defense Against The Dark Arts’ and Elon Musk has us holding his beer.

Turnover on safety roles at xAI has been high for a while, and it just got higher. The few people previously devoted to safety at xAI who did all their public-facing safety work? The entire safety department? All gone.

Hayden Field: The past few days have been a wild ride for xAI, which is racking up staff and cofounder departure announcements left and right. On Tuesday and Wednesday, cofounder Yuhuai (Tony) Wu announced his departure and that it was “time for [his] next chapter,” with cofounder Jimmy Ba following with a similar post later that day, writing that it was “time to recalibrate [his] gradient on the big picture.”

There were twelve highly unequal ‘cofounders’ so that is not as alarming as it sounds. It still is not a great look.

The larger problem is that xAI has shown a repeated disdain for even myopic mundane ‘don’t-shoot-yourself-in-the-foot-today’ styles of safety, and it’s hard for people not to notice.

If you’ve had your equity exchanged for SpaceX stock, your incentives now allow you to leave. So you might well leave.

Hayden Field: There’s often a natural departure point for companies post-merger, and Musk has announced that some of the departures were a reorganization that “unfortunately required parting ways with some people.” But there are also signs that people don’t like the direction Musk has taken things.

One source who spoke with The Verge about the happenings inside the company, who left earlier this year and requested anonymity due to fear of retaliation, said that many people at the company were disillusioned by xAI’s focus on NSFW Grok creations and disregard for safety.

More generally, xAI has been a commercial success in that the market is willing to fund it and Elon Musk was able to sell it to SpaceX, but it is a technological failure.

The source also felt like the company was “stuck in the catch-up phase” and not doing anything new or fundamentally different from its competitors.

Yet another former employee said he was launching an AI infrastructure company called Nuraline alongside other ex-xAI employees. He wrote, “During my time at xAI, I got to see a clear path towards hill climbing any problem that can be defined in a measurable way. At the same time, I’ve seen how raw intelligence can get lobotomized by the finest human errors … Learning shouldn’t stop at the model weights, but continue to improve every part of an AI system.”

The outside view is that xAI focused on hill climbing and targeting metrics to try and imitate OpenAI, Google and Anthropic, and hoped to pull ahead by throwing in extra compute, by being more ‘edgy’ and not being ‘woke AI,’ and by having Elon Musk personally show his genius. This approach failed, although it failed up.

How bad are things going forward on safety? Oh, they’re maximally bad.

As in, safety? Never heard of her.

The source who departed earlier this year said Grok’s turn toward NSFW content was due partly to the safety team being let go, with little to no remaining safety review process for the models besides basic filters for things like CSAM. “Safety is a dead org at xAI,” he said.

Looking at the restructured org chart Elon Musk shared on X, there’s no mention of a safety team.

… “There is zero safety whatsoever in the company — not in the image [model], not in the chatbot,” the second source said. “He [Musk] actively is trying to make the model more unhinged because safety means censorship, in a sense, to him.”

The second source also said engineers at xAI immediately “push to prod[uction]” and that for a long time, there was no human review involved.

“You survive by shutting up and doing what Elon wants,” he said.

This is perhaps also how you get things like Grokopedia having large AI-generated errors that interested parties find themselves unable to fix.

I shared some quotes on Twitter, without comment. Elon Musk replied.

Elon Musk: Because everyone’s job is safety. It’s not some fake department with no power to assuage the concerns of outsiders.

Tesla has no safety team and is the safest car.

SpaceX has no safety team and has the safest rocket. Dragon is what NASA trusts most to fly astronauts.

When there previously was a safety department, was that explicitly a fake department to assuage the concerns of outsiders?

I almost QTed to ask exactly that, but decided there was nothing to win.

So no safety department from now on? Not even some people devoted to safety?

Even if you did think there should not be people whose primary job is safety concerns at all, which is itself a crazy position, why should we believe that at xAI ‘safety is everyone’s job’ in any meaningful sense?

The richest man in the world will say ‘this pales next to my safety plan, which is to make everyone’s job safety’ and then not make anyone’s job safety.

Yes, safety needs to be everyone’s job. You want distributed ownership. That doesn’t get you out of having a dedicated team.

The same is true for security, recruiting, maintaining company culture and documentation, quality and testing, customer focus, compliance and ethics, cost management and so many other things. Everything is, in a sense, everyone’s job.

Also, Elon Musk’s statements regarding Tesla and SpaceX are straightforwardly false.

  1. Tesla has an Environmental, Health & Safety Team.

  2. Tesla is not the safest car. Gemini, ChatGPT and Grok all pick Mazda. Claude picks Volvo. Tesla’s safety record is fine, but it’s clearly not the best.

  3. SpaceX of course has lots of people specifically devoted to safety, and they have a flight safety team and a mission assurance team.

Elon’s defenders rushed to the comments, both to his post and to the OP. They explained with disdain why actually it’s safer to not have a safety department, and that any mention of the word ‘safety’ is evil and censorship, and I got called various names.

Breaking containment on Twitter is not so great. I fear for Elon Musk’s epistemic environment since this is presumably what he fights through all day.

As if on cue, Musk had summoned an army of people now talking explicitly about how it is bad to have people who specialize in safety, in any sense from the mundane to the existential, and how only a moron or malicious person could suggest otherwise. It was like watching the negative polarization against the Covid vaccine play out as a political campaign in real time, filling up my mentions.

Oh, you, the person who pissed off the great Elon Musk, want us not to shoot ourselves in the foot? Well then, Annie get your gun. Look what you made me do.

If you thought ‘out of the hundreds of replies in your mentions surely someone you don’t follow will say something worthwhile about this’ then you would be wrong.

This of course does not explain why there is claimed to be no safety anywhere, or the other quotes he was responding to. Are there any individuals tasked with safety at xAI? It does not have to be a ‘department’ per se, but it does not sound like the worry is limited to the lack of a department. If nothing else, we’ve seen their work.

Amanda Askell is the architect of Claude’s personality and constitution at Anthropic.

Amanda Askell: WSJ did a profile of me. A lot of the response has been people trying to infer my personal political views. For what it’s worth, I try to treat my personal political views as a potential source of bias and not as something it would be appropriate to try to train models to adopt.

Ben Hoffman: Your views on the nature of language seem much more important.

All the right people who actually know her are coming out to support Amanda Askell.

Whereas the attacks upon her consistently say more about the person attacking her. This is distinct from the valid point of challenging Anthropic and therefore also Askell for working towards superintelligence in the first place.

In other Elon Musk safety news, and also ‘why you do need a philosopher’ news, here is how Elon Musk chose to respond: first by saying ‘those without children lack a stake in the future,’ then by having Grok explain a Bart Simpson joke about her name as part of a linked conversation that did not otherwise do him any favors, and then by saying this:

Elon Musk: Those without children lack a stake in the future

Will, Amanda’s ex, offered to help write the Grok Constitution, but he has been preaching about the declining birth rate for a decade and has done nothing to have even one kid, nor has Amanda.

Constitutions should not be written by hypocrites.

Amanda Askell: I think it depends on how much you care about people in general vs. your own kin. I do intend to have kids, but I still feel like I have a strong personal stake in the future because I care a lot about people thriving, even if they’re not related to me.

Cate Hall: the “you can’t care about the future if you don’t have kids” people are morally repulsive to me. are you seriously incapable of caring about people you’re not related to? and you think that’s a problem with OTHER people?

I’m all for having more kids and I do think they help you care more about the future but that’s… wow. Just wow.

It then somehow got worse.

It is such a strange fact that the richest man in the world engages in pure name calling on Twitter, thus amplifying whoever sent the original message a hundredfold and letting everyone decide for themselves who the pathetic wanker is.

George McGowan: Rather proves the point

While I most definitely would rather entrust the future to Amanda, I do think Schubert’s statement is too strong. Sane people can have very strange beliefs.

Elon Musk just… says things that are obviously false. All the time. It’s annoying.

Elon Musk: Having kids means you will do anything to ensure that they live and are happy, for you love them more than your own life a thousand times over, and there is no chance that you will fall in love with an AI instead.

Remember these words.

Regardless of what you think of Elon Musk’s actions in AI or with his own kids, and notwithstanding that most parents do right by their kids, very obviously many parents do not act this way, and many do not try so hard to do right by their kids. Many end up choosing a new person they have fallen in love with over their kids, and very obviously there already exist parents who have fallen in love with an AI, let alone what will happen with future AI. Would that it were otherwise.

It is an excellent question.


On Dwarkesh Patel’s 2026 Podcast With Elon Musk and Other Recent Elon Musk Things Read More »

best-buy-worker-used-manager’s-code-to-get-99%-off-macbooks,-cops-say

Best Buy worker used manager’s code to get 99% off MacBooks, cops say

Best Buy worker linked to shoplifting ring

In 2023, a few months before Lettera’s alleged fraud scheme started, the National Retail Federation warned that monitoring employee theft had become a bigger priority for retailers. In times of inflation, retail theft typically increases, and its survey found that a record level of talent turnover was stressing out retail employees and making it easier for those with malicious intent to get away with fraud.

For Best Buy, threats of losses from stressed-out employees seemingly remain, as inflation pressures persist. Last month, an employee at a Best Buy in Georgia assisted a shoplifting ring in stealing more than $40,000 in merchandise, a local CBS News affiliate reported.

Surveillance footage showed that 20-year-old Dorian Allen allowed shoplifters to simply leave the store without paying for more than 140 items, a police report alleged. Among merchandise stolen were “dozens of PlayStation 5 and Xbox Series S consoles, AirPods, Meta Quest VR headsets, Beats wireless headphones, a PC, a Segway, wireless controllers, and more,” CBS News reported.

Charged with theft, Allen claimed he was being blackmailed by a hacker group who threatened to expose nude photos he shared on Instagram if he didn’t cooperate. Allegedly under duress, Allen memorized descriptions of the shoplifters so that he could allow them to take items without paying. He also allegedly helped thieves load items into their vehicles.

Managers called in police after Allen allegedly spent weeks assisting the shoplifters without detection.

Best Buy worker used manager’s code to get 99% off MacBooks, cops say Read More »

bytedance-backpedals-after-seedance-2.0-turned-hollywood-icons-into-ai-“clip-art”

ByteDance backpedals after Seedance 2.0 turned Hollywood icons into AI “clip art”


Misstep or marketing tactic?

Hollywood backlash puts spotlight on ByteDance’s sketchy launch of Seedance 2.0.

ByteDance says that it’s rushing to add safeguards to block Seedance 2.0 from generating iconic characters and deepfaking celebrities, following substantial Hollywood backlash over the launch of the latest version of its AI video tool.

The changes come after Disney and Paramount Skydance sent cease-and-desist letters to ByteDance urging the Chinese company to promptly end the allegedly vast and blatant infringement.

Studios claimed the infringement was widescale and immediate, with Seedance 2.0 users across social media sharing AI videos featuring copyrighted characters like Spider-Man, Darth Vader, and SpongeBob SquarePants. In its letter, Disney fumed that Seedance was “hijacking” its characters, accusing ByteDance of treating Disney characters like they were “free public domain clip art,” Axios reported.

“ByteDance’s virtual smash-and-grab of Disney’s IP is willful, pervasive, and totally unacceptable,” Disney’s letter said.

Defending intellectual property from franchises like Star Trek and The Godfather, Paramount Skydance pointed out that Seedance’s outputs are “often indistinguishable, both visually and audibly” from the original characters, Variety reported. Similarly frustrated, Japan’s AI minister, Kimi Onoda, sought to protect popular anime and manga characters, officially launching a probe into ByteDance last week over the copyright violations, the South China Morning Post reported.

“We cannot overlook a situation in which content is being used without the copyright holder’s permission,” Onoda said at a press conference Friday.

Facing legal threats and Japan’s investigation, ByteDance issued a statement Monday, CNBC reported. In it, the company claimed that it “respects intellectual property rights” and has “heard the concerns regarding Seedance 2.0.”

“We are taking steps to strengthen current safeguards as we work to prevent the unauthorized use of intellectual property and likeness by users,” ByteDance said.

However, Disney seems unlikely to accept that ByteDance inadvertently released its tool without implementing such safeguards in advance. In its letter, Disney alleged that “Seedance has infringed on Disney’s copyrighted materials to benefit its commercial service without permission.”

After all, what better way to illustrate Seedance 2.0’s latest features than by generating some of the best-known IP in the world? At least one tech consultant has suggested that ByteDance planned to benefit from inciting Hollywood outrage. The founder of San Francisco-based consultancy Tech Buzz China, Rui Ma, told SCMP that “the controversy surrounding Seedance is likely part of ByteDance’s initial distribution strategy to showcase its underlying technical capabilities.”

Seedance 2.0 is an “attack” on creators

Studios aren’t the only ones sounding alarms.

Several industry groups expressed concerns, including the Motion Picture Association, which accused ByteDance of engaging in massive copyright infringement within “a single day,” CNBC reported.

Sean Astin, an actor and president of the actors union SAG-AFTRA, was directly impacted by the scandal. A video that has since been removed from X showed Astin in the role of Samwise Gamgee from The Lord of the Rings, delivering a line he never said, Variety reported. Condemning Seedance’s infringement, SAG-AFTRA issued a statement emphasizing that ByteDance did not act responsibly in releasing the model without safeguards:

“SAG-AFTRA stands with the studios in condemning the blatant infringement enabled by ByteDance’s new AI video model Seedance 2.0. The infringement includes the unauthorized use of our members’ voices and likenesses. This is unacceptable and undercuts the ability of human talent to earn a livelihood. Seedance 2.0 disregards law, ethics, industry standards and basic principles of consent. Responsible AI development demands responsibility, and that is nonexistent here.”

Echoing that, a group representing Hollywood creators, the Human Artistry Campaign, declared that “the launch of Seedance 2.0” was “an attack on every creator around the world.”

“Stealing human creators’ work in an attempt to replace them with AI generated slop is destructive to our culture: stealing isn’t innovation,” the group said. “These unauthorized deepfakes and voice clones of actors violate the most basic aspects of personal autonomy and should be deeply concerning to everyone. Authorities should use every legal tool at their disposal to stop this wholesale theft.”

Ars could not immediately reach any of these groups to comment on whether ByteDance’s post-launch efforts to add safeguards addressed industry concerns.

MPA chairman and CEO Charles Rivkin has previously accused ByteDance of disregarding “well-established copyright law that protects the rights of creators and underpins millions of American jobs.”

While Disney and other studios are clearly ready to take down any tools that could hurt their revenue or reputation without an agreement in place, they aren’t opposed to all AI uses of their characters. In December, Disney struck a deal with OpenAI, giving Sora access to 200 characters for three years, while investing $1 billion in the technology.

At that time, Disney CEO Robert A. Iger said that “the rapid advancement of artificial intelligence marks an important moment for our industry, and through this collaboration with OpenAI, we will thoughtfully and responsibly extend the reach of our storytelling through generative AI, while respecting and protecting creators and their works.”

Creators disagree that Seedance 2.0 is a game changer

In a blog announcing Seedance 2.0, ByteDance boasted that the new model “delivers a substantial leap in generation quality,” particularly in close-up shots and action sequences.

The company acknowledged that further refinements were needed and the model is “still far from perfect” but hyped that “its generated videos possess a distinct cinematic aesthetic; the textures of objects, lighting, and composition, as well as costume, makeup, and prop designs, all show high degrees of finish.”

ByteDance likely hoped that the earliest outputs from Seedance 2.0 would produce headlines marveling at the model’s capabilities, and it got what it wanted when a single Hollywood stakeholder’s social media comment went viral.

Shortly after Seedance 2.0’s rollout, Deadpool co-writer Rhett Reese declared on X that “it’s likely over for us,” The Guardian reported. The screenwriter was impressed by an AI video created by Irish director Ruairi Robinson, which realistically depicted Tom Cruise fighting Brad Pitt. “[I]n next to no time, one person is going to be able to sit at a computer and create a movie indistinguishable from what Hollywood now releases,” Reese opined. “True, if that person is no good, it will suck. But if that person possesses Christopher Nolan’s talent and taste (and someone like that will rapidly come along), it will be tremendous.”

However, some AI critics rejected the notion that Seedance 2.0 is capable of replacing artists in the way that Reese warned. On Bluesky and X, they pushed back on claims that this model doomed Hollywood, with some accusing outlets of too quickly ascribing Reese’s reaction to the whole industry.

Among them was longtime AI critic Reid Southen, a film concept artist who works on major motion pictures and TV. Responding directly to Reese’s X thread, Southen contradicted the notion that a great filmmaker could be born from fiddling with AI prompts alone.

“Nolan is capable of doing great work because he’s put in the work,” Southen said. “AI is an automation tool, it’s literally removing key, fundamental work from the process, how does one become good at anything if they insist on using nothing but shortcuts?”

Perhaps the strongest evidence in Southen’s favor is Darren Aronofsky’s recent AI-generated historical docudrama. Speaking anonymously to Ars following backlash declaring that “AI slop is ruining American history,” one source close to production on that project confirmed that it took “weeks” to produce minutes of usable video using a variety of AI tools.

That source noted that the creative team went into the project expecting they had a lot to learn but also expecting that tools would continue to evolve, as could audience reactions to AI-assisted movies.

“It’s a huge experiment, really,” the source told Ars.

Notably, for both creators and rights-holders concerned about copyright infringement and career threats, questions remain on how Seedance 2.0 was trained. ByteDance has yet to release a technical report for Seedance 2.0 and “has never disclosed the data sets it uses to train its powerful video-generation Seedance models and image-generation Seedream models,” SCMP reported.

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

ByteDance backpedals after Seedance 2.0 turned Hollywood icons into AI “clip art” Read More »

“it-ain’t-no-unicorn”:-these-researchers-have-interviewed-130-bigfoot-hunters

“It ain’t no unicorn”: These researchers have interviewed 130 Bigfoot hunters

It was the image that launched a cultural icon. In 1967, in the Northern California woods, a 7-foot-tall, ape-like creature covered in black fur and walking upright was captured on camera, at one point turning around to look straight down the lens. The image is endlessly copied in popular culture—it’s even become an emoji. But what was it? A hoax? A bear? Or a real-life example of a mysterious species called the Bigfoot?

The film has been analysed and re-analysed countless times. Although most people believe it was some sort of hoax, there are some who argue that it’s never been definitively debunked. One group of people, dubbed Bigfooters, is so intrigued that they have taken to the forests of Washington, California, Oregon, Ohio, Florida, and beyond to look for evidence of the mythical creature.

But why? That’s what sociologists Jamie Lewis and Andrew Bartlett wanted to uncover. They were itching to understand what prompts this community to spend valuable time and resources looking for a beast that is highly unlikely to even exist. During lockdown, Lewis started interviewing more than 130 Bigfooters (and a few academics) about their views, experiences, and practices, culminating in the duo’s recent book “Bigfooters and Scientific Inquiry: On the Borderlands of Legitimate Science.”

Here, we talk to them about their academic investigation.

What was it about the Bigfoot community that you found so intriguing?

Lewis: It started when I was watching either the Discovery Channel or Animal Planet and a show called Finding Bigfoot was advertised. I was really keen to know why this program was being scheduled on what certainly at the time was a nominally serious and sober natural history channel. The initial plan was to do an analysis of these television programmes, but we felt that wasn’t enough. It was lockdown and my wife was pregnant and in bed a lot with sickness, so I needed to fill my time.

Bartlett: One of the things that I worked on when Jamie and I shared an office in Cardiff was a sociological study of fringe physicists. These are people mostly outside of academic institutions trying to do science. I was interviewing these people, going to their conferences. And that led relatively smoothly into Bigfoot, but it was Jamie’s interest in Bigfoot that brought me to this field.

How big is this community?

Lewis: It’s very hard to put a number on it. There is certainly a divide between what are known as “apers,” who believe that Bigfoot is just a primate unknown to science, and those that are perhaps more derogatorily called “woo-woos,” who believe that Bigfoot is some sort of interdimensional traveller, an alien of sorts. We’re talking in the thousands of people. But there are a couple of hundred really serious people, of which I probably interviewed at least half.

Many people back them. A YouGov survey conducted as recently as November 2025 suggested that as many as one quarter of Americans believe that Bigfoot either definitely or probably exists.

Were the interviewees suspicious of your intentions?

Lewis: I think there was definitely a worry that they would be caricatured. And I was often asked, “Do I believe in Bigfoot?” I had a standard answer that Andy and I agreed on, which was that mainstream, institutional science says there is absolutely no compelling evidence that Bigfoot exists. We have no reason to dissent with that consensus. But as sociologists what does exist is a community (or communities) of Bigfooting, and that’s what interests us.

“It ain’t no unicorn”: These researchers have interviewed 130 Bigfoot hunters Read More »

nasa-has-a-new-problem-to-fix-before-the-next-artemis-ii-countdown-test

NASA has a new problem to fix before the next Artemis II countdown test

John Honeycutt, chair of NASA’s Artemis II mission management team, said the decision to relax the safety limit between Artemis I and Artemis II was grounded in test data.

“The SLS program, they came up with a test campaign that actually looked at that cavity, the characteristics of the cavity, the purge in the cavity … and they introduced hydrogen to see when you could actually get it to ignite, and at 16 percent, you could not,” said Honeycutt, who served as NASA’s SLS program manager before moving to his new job.

Hydrogen is explosive in high concentrations when mixed with air. This is what makes hydrogen a formidable rocket fuel. But it is also notoriously difficult to contain. Molecular hydrogen is the smallest molecule, meaning it can readily escape through leak paths, and poses a materials challenge for seals because liquified hydrogen is chilled to minus 423 degrees Fahrenheit (minus 253 degrees Celsius).

So, it turns out NASA used the three-year interim between Artemis I and Artemis II to get comfortable with a more significant hydrogen leak, instead of fixing the leaks themselves. Isaacman said that will change before Artemis III, which likewise is probably at least three years away.

“I will say near-conclusively for Artemis III, we will cryoproof the vehicle before it gets to the pad, and the propellant loading interfaces we are troubleshooting will be redesigned,” Isaacman wrote.

Isaacman took over as NASA’s administrator in December, and has criticized the SLS program’s high cost—estimated by NASA’s inspector general at more than $2 billion per rocket—along with the launch vehicle’s torpid flight rate.

NASA’s expenditures for the rocket’s ground systems at Kennedy Space Center are similarly enormous. NASA spent nearly $900 million on Artemis ground support infrastructure in 2024 alone. Much of the money went toward constructing a new launch platform for an upgraded version of the Space Launch System that may never fly.

All of this makes each SLS rocket a golden egg, a bespoke specimen that must be treated with care because it is too expensive to replace. NASA and Boeing, the prime contractor for the SLS core stage, never built a full-size test model of the core stage. There’s currently no way to completely test the cryogenic interplay between the core stage and ground equipment until the fully assembled rocket is on the launch pad.

Existing law requires NASA continue flying the SLS rocket through the Artemis V mission. Isaacman wrote that the Artemis architecture “will continue to evolve as we learn more and as industry capabilities mature.” In other words, NASA will incorporate newer, cheaper, reusable rockets into the Artemis program.

The next series of launch opportunities for the Artemis II mission begin March 3. If the mission doesn’t lift off in March, NASA will need to roll the rocket back to the Vehicle Assembly Building to refresh its flight termination system. There are more launch dates available in April and May.

“There is still a great deal of work ahead to prepare for this historic mission,” Isaacman wrote. “We will not launch unless we are ready and the safety of our astronauts will remain the highest priority. We will keep everyone informed as NASA prepares to return to the Moon.”

NASA has a new problem to fix before the next Artemis II countdown test Read More »

rocket-report:-say-cheerio-to-orbex;-china-is-getting-good-at-booster-landings

Rocket Report: Say cheerio to Orbex; China is getting good at booster landings


“You absolutely have to have a plan to compete with SpaceX on price.”

The Ariane 64 rocket made its debut on Thursday. Credit: État-major des armées

Welcome to Edition 8.29 of the Rocket Report! We have a stuffed report this week with news from across the launch spectrum. Long-term, probably the most significant development this week was a subscale version of the Long March 10 rocket successfully launching and then executing a picture-perfect ocean landing. China is catching up rapidly to the United States when it comes to reusable launch.

As always, we welcome reader submissions, and if you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets as well as a quick look ahead at the next three launches on the calendar.

Orbex is going away. The UK-based launch company Orbex has entered insolvency proceedings after a planned takeover by European space logistics startup The Exploration Company fell through, European Spaceflight reports. In a statement, Orbex said the decision came after “fundraising, merger and acquisition opportunities had all concluded unsuccessfully.” For anyone paying attention, this decision should not come as a surprise. A decade into its existence, Orbex had yet to produce demonstrable, ready-for-flight hardware.

Other companies interested in assets … According to the company, the appointment of administrators will give Orbex time to secure “as positive an outcome as possible for its creditors, employees and wider stakeholders.” It added that the process could include the sale of all or parts of the business or its assets, and another UK-based company, Skyrora, has expressed some interest. (submitted by Polynomics, zapman987, and EllPeaTea)

Firefly’s next Alpha mission launching soon. This week, Firefly said that its next Alpha rocket underwent a successful 20-second static fire test. This clears the way for the rocket to make a launch attempt no earlier than February 18 from its launch site at Vandenberg Space Force Base in California.

Au revoir Block I … It’s an important mission, because the previous Alpha launch, in April 2025, ended in failure when stage separation damaged one of the rocket’s upper stage engines and prevented the mission from reaching orbit. Moreover, the company lost the first stage intended for this flight in a September accident in Texas. The upcoming flight, “Stairway to Seven,” will be the final flight of Block I of the Alpha booster.

How can launch companies compete with SpaceX? Rocket firms are divided on how to compete with SpaceX in a market where demand outstrips supply, yet customers remain price sensitive, Space News reports. During a panel at the SmallSat Symposium on February 11, executives from several launch companies acknowledged the challenge of competing with SpaceX, which accounted for about half of all orbital launches globally in 2025, despite strong customer demand for launch services.

A low price is nice … “If your idea is to go into the market competing with SpaceX on price, you’re probably not in a good competitive position,” said Brian Rogers, vice president of global launch services at Rocket Lab, one of the few small launch vehicle developers to thrive despite competition from SpaceX. But this view was far from universal. Devon Papandrew, vice president of business development at Stoke Space, disagreed. “You absolutely have to have a plan to compete with SpaceX on price,” he said. (submitted by EllPeaTea)

Rocket Lab blows up a few Archimedes engines. According to reporting by Ars, Rocket Lab has blown up two Archimedes rocket engines in the last three months at its test stand in southern Mississippi. The engine test anomalies come at a critical time for Rocket Lab, as it is attempting to finalize development of a flight version of the Archimedes engine, which burns liquid oxygen and methane and has a sea-level thrust of 165,000 pounds. Nine of these engines will power the company’s much-anticipated Neutron rocket, which is aiming for a debut launch later this year.

Testing the engine to its limits … Rocket Lab Chief Executive Officer Pete Beck downplayed concerns in a statement. “We test to the limits, that’s part of developing a successful rocket,” Beck said. “We often put the engine into very off nominal states to find the limits and sometimes they let go, this is normal and how you ensure rockets don’t fail in flight.” Beck has previously said that Rocket Lab’s goal is to identify failures during component-level testing so that, when Neutron launches, it has a high chance of reaching orbit on its first attempt.

Proton rocket returns to flight. After a nearly three-year break, a Russian Proton rocket flew again with the Elektro-L No. 5 meteorological satellite from Baikonur in Kazakhstan, Russian Space Web reports. This mission was supposed to launch in late 2025, but in December, final checks revealed a problem with the Block DM-03 upper stage.

A not-so-great launch record … The mission marked the last use of the Block DM-03 space tug on Proton, which will now be solely used as an upper stage for the Angara-5 rocket. First launched in 1965, the Proton rocket has undergone several upgrades in the six decades since then. It has launched 430 times, with 48 partial and total failures. No new Protons are under construction, and it will be phased out by the end of this decade in favor of the newer Angara vehicles. (submitted by EllPeaTea)

SpaceX exempted from labor case. The National Labor Relations Board abandoned a Biden-era complaint against SpaceX after finding that the agency does not have jurisdiction over Elon Musk’s space company, Ars reports. The US labor board said SpaceX should instead be regulated under the Railway Labor Act, which governs labor relations at railroad and airline companies.

A common carrier? … In January 2024, an NLRB regional director alleged in a complaint that SpaceX illegally fired eight employees who, in an open letter, criticized CEO Musk as a “frequent source of embarrassment.” The complaint sought reinstatement of the employees, back pay, and letters of apology to the fired employees. SpaceX responded by suing the NLRB, claiming the labor agency’s structure is unconstitutional. But a different issue SpaceX raised later—that it is a common carrier, like a rail company or airline—is what compelled the NLRB to drop its case.

More powerful Ariane 6 rocket launches. The first launch of the four-booster version of Ariane 6 took place on Thursday, sending 32 satellites for Amazon’s Leo constellation to low-Earth orbit. “This first flight of Ariane 64 sustains Europe’s autonomous access to space,” Toni Tolker-Nielsen, the European Space Agency’s director of space transportation, said in a news release.

More vrooom coming … Previous launches of Europe’s new Ariane rocket have used two solid rocket boosters. The heavier variant of the rocket is necessary to support Amazon’s constellation as well as more demanding missions to geostationary orbit. And more power is coming. In the near future, the P120C boosters will be replaced by upgraded P160C models, each carrying more than 14 metric tons of solid fuel. The Associated Press provided some interesting color behind the scenes of the launch from the location in France where the rocket’s main engines are manufactured. (submitted by biokleen and EllPeaTea)

Stoke Space increases fundraising round. The Washington-based launch company announced this week that it had extended its recent Series D funding round. The round was initially announced in October 2025 at $510 million, but has now been increased to $860 million. The original Series D funding focused on completing activation of Launch Complex 14 at Cape Canaveral Space Force Station, Florida, and expanding production capacity for the Nova launch vehicle. Stoke will use the additional capital to accelerate future elements of its product roadmap, the company said.

Putting the $ into $toke $pace … “We’re extremely grateful for our investors’ continued support,” said Andy Lapsa, co-founder and CEO of Stoke. “We’re executing with urgency to bring Nova to market and deliver for our customers. It’s a special vehicle, and there’s more in the pipeline.” With the extension, Stoke has now raised $1.34 billion to date. That is an impressive raise, and it will heighten expectations for the company’s debut of the Nova rocket.

Falcon 9 back after second stage anomaly. A Falcon 9 launched a batch of Starlink satellites on Saturday after SpaceX completed an investigation into an engine malfunction during the rocket’s previous launch, Space News reports. The rocket deployed its payload of 25 Starlink satellites into orbit about 62 minutes after liftoff. The launch was the first Falcon 9 mission since Feb. 2, when the rocket carried another set of Starlink satellites into orbit from Vandenberg.

That didn’t take long … While that mission successfully deployed its payload, SpaceX later said an “off-nominal condition” with the upper stage prevented it from performing a planned deorbit burn. The Federal Aviation Administration said Feb. 6 that it had authorized SpaceX to return the Falcon 9 to flight. “The final mishap report cites the probable root cause as the Falcon 9 second-stage engine’s failure to ignite prior to the deorbit burn,” the agency stated. “SpaceX identified technical and organizational preventive measures to avoid a recurrence of the event.” The FAA provided no additional details about the anomaly. (submitted by EllPeaTea)

China performs an impressive rocket landing. China’s space program, striving to land astronauts on the Moon by 2030, carried out a test flight of a new reusable booster and crew capsule late Tuesday (US time), and the results were spectacular, Ars reports. The launch of a subscale version of the Long March 10 rocket, still in development, provided engineers with an opportunity to verify the performance of an important part of the new Mengzhou capsule’s safety system. A test version of the Mengzhou spacecraft, flying without anyone onboard, climbed into the stratosphere on top of the Long March booster before activating its launch abort motors a little more than a minute into the flight as the rocket reached the moment of maximum aerodynamic pressure, known as Max-Q.

China getting there on rocket reuse … The abort motors pulled the capsule away from the booster, simulating an in-flight escape that might be necessary to whisk crews away from a failing rocket. The Mengzhou spacecraft later deployed parachutes and splashed down offshore from Hainan Island. Remarkably, the booster continued its ascent without the crew capsule, soaring into space on the power of its kerosene-fueled YF-100 engines before reentering the atmosphere, reigniting its engines, and nailing a propulsive landing in the South China Sea, right next to a recovery barge waiting to bring it back to shore.

Vulcan experiences a second nozzle issue. Moments after liftoff from Florida’s Space Coast early Thursday morning, a shower of sparks emerged in the exhaust plume of United Launch Alliance’s Vulcan rocket. Seconds later, the rocket twisted on its axis before recovering and continuing the climb into orbit with a batch of US military satellites, Ars reports. The sight may have appeared familiar to seasoned rocket watchers. Sixteen months ago, a Vulcan rocket lost one of its booster nozzles shortly after launch from Cape Canaveral Space Force Station. The rocket recovered from the malfunction and still reached the mission’s planned orbit.

Next launch likely to be delayed … Details of Thursday’s booster problem remain unclear. An investigation into the matter is underway, according to ULA, a 50-50 joint venture between Boeing and Lockheed Martin. But the circumstances resemble those of the booster malfunction in October 2024. The incident on Thursday’s mission suggests the defect was not fixed, or that there is a separate problem with the Northrop Grumman-made boosters. The next Vulcan launch is scheduled for no earlier than March with a GPS navigation satellite for the US Space Force. That schedule is now in doubt. The military’s Space Systems Command said in a statement it will “work closely with ULA per our mission assurance space flightworthiness process before the next Vulcan national security space mission.” (submitted by EllPeaTea)

Starship nearing next test flight. The upgraded Super Heavy booster slated to launch SpaceX’s next Starship flight has completed cryogenic proof testing, clearing a hurdle that resulted in the destruction of the company’s previous booster, Ars reports. The proof test is notable because it moves engineers closer to launching the first test flight of an upgraded version of SpaceX’s mega-rocket named Starship V3, or Block 3.

Launch possible within the next six to eight weeks … SpaceX launched the previous version, Starship V2, five times last year, but the first three test flights failed. The last two flights achieved SpaceX’s goals, and the company moved on to V3. Assuming that the remaining test work goes according to plan, SpaceX could be in position to launch the first Starship V3 test flight before the end of March.

New Glenn pushing on second stage reuse again. Engineers at Blue Origin have been grappling with a seemingly eternal debate that involves the New Glenn rocket and the economics of flying it. The debate goes back at least 15 years, to the early discussions around the design of the heavy lift rocket. The first stage, of course, would be fully reusable. But what about the upper stage of New Glenn, powered by two large BE-3U engines?

Do you want a job? … Now, Ars reports, reuse is back on the menu. Blue Origin has posted a new job listing for a director of Reusable Upper Stage Development, which says, “As the Director of Program Management for the New Glenn Upper Stage and Payload Accommodations (GS2PA), you will work with the Vice President of New Glenn GS2PA and directly support the execution of a lean engineering initiative to incrementally develop a reusable upper stage.” Ars estimates it presently costs Blue Origin more than $50 million to manufacture a New Glenn second stage.

Next three launches

February 12: Falcon 9 | Crew-12 | Cape Canaveral Space Force Station, Florida | 10:15 UTC

February 14: Falcon 9 | Starlink 17-13 | Vandenberg Space Force Base, California | 22:00 UTC

February 16: Falcon 9 | Starlink 6-103 | Cape Canaveral Space Force Station, Florida | 05:00 UTC

Photo of Eric Berger

Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.

Rocket Report: Say cheerio to Orbex; China is getting good at booster landings Read More »

when-amazon-badly-needed-a-ride,-europe’s-ariane-6-rocket-delivered

When Amazon badly needed a ride, Europe’s Ariane 6 rocket delivered

The Ariane 64 flew with an extended payload shroud to fit all 32 Amazon Leo satellites. Combined, the payload totaled around 20 metric tons, or about 44,000 pounds, according to Arianespace. This is close to maxing out the Ariane 64’s lift capability.

Amazon has booked more than 100 missions across four launch providers to populate the company’s planned fleet of more than 3,200 satellites. With Thursday’s launch, Amazon has launched 214 production satellites on eight missions with United Launch Alliance, SpaceX, and now Arianespace.

The Amazon Leo constellation is a competitor with SpaceX’s Starlink Internet network. SpaceX now has more than 9,000 satellites in orbit beaming broadband to more than 9 million subscribers, and all have launched on the company’s own Falcon 9 rockets. Amazon, meanwhile, initially bypassed SpaceX when selecting which companies would launch satellites for the Amazon Leo program, formerly known as Project Kuiper.

Amazon booked the last nine launches on ULA’s soon-to-retire Atlas V, five of which have now flown, and reserved the rest of its launches in 2022 on rockets that had never launched before: 38 flights on ULA’s new Vulcan rocket, 24 launches on Blue Origin’s New Glenn, and 18 on Europe’s Ariane 6.

An artist’s illustration of the Ariane 6’s upper stage in orbit with a stack of Amazon Leo satellites awaiting deployment.

Credit: Arianespace


Meanwhile, in Florida

All three new rockets suffered delays but are now in service. The Ariane 6 has enjoyed the fastest ramp-up in launch cadence, with six flights under its belt after Thursday’s mission from French Guiana. ULA’s Vulcan rocket has flown four times, and Amazon says its first batch of satellites to fly on Vulcan is now complete. But a malfunction with one of the Vulcan launcher’s solid rocket boosters on a military launch from Florida early Thursday—the second such anomaly in three flights—raises questions about when Amazon will get its first ride on Vulcan.

Blue Origin, owned by Amazon founder Jeff Bezos, is gearing up for the third flight of its heavy-lift New Glenn rocket from Florida as soon as next month. Amazon and Blue Origin have not announced when the first group of Amazon Leo satellites will launch on New Glenn.

When Amazon badly needed a ride, Europe’s Ariane 6 rocket delivered Read More »

verizon-imposes-new-roadblock-on-users-trying-to-unlock-paid-off-phones

Verizon imposes new roadblock on users trying to unlock paid-off phones


Verizon unlocks have 35-day waiting period after paying off device plan online.

Credit: Aurich Lawson | Getty Images

Verizon this week imposed a new roadblock for people who want to pay off device installment plans early in order to get their phones unlocked. The latest version of Verizon’s device unlocking policy for postpaid customers imposes a 35-day waiting period when a customer pays off their device installment plan online or in the Verizon app.

Payments made over the phone also trigger a 35-day waiting period, as do payments made at Verizon Authorized Retailers. Getting an immediate unlock apparently requires paying off the device plan at a Verizon corporate store.

Unlocking a phone allows it to be used on another network, letting customers switch from one carrier to another. Previously, the 35-day waiting period for unlocks was only applied when a customer paid off the plan with a Verizon gift card.

“If you payoff [sic] a device payment agreement balance online or in the My Verizon App, or if a Verizon Gift Card is used to purchase a smartphone or pay off a remaining balance, the unlocking process will be delayed by 35 days,” the current version of the policy says. “This window allows for the verification of the gift card’s funds to ensure they were not obtained through fraudulent or illegal means.”

The paragraph above only explains why the waiting period is necessary for gift-card payments, despite applying the 35-day wait to online and app payments as well. In a previous version of the policy, implemented on January 27 and still in place as of February 9, the 35-day waiting period applied only when a Verizon gift card was used to buy a phone or pay off the remaining balance.

The 35-day waiting period provision was changed to include online and app payments by February 11. We were made aware of the most recent change thanks to a tip from Ars forum member User_E.

Customers must go to Verizon corporate store

Despite the significant update that happened this week, Verizon still lists the effective date of the device unlocking policy as January 27. It thus appears that Verizon is applying the 35-day wait after online and app payments retroactively, without disclosing that the policy changed after January 27.

The 35-day waiting period seems to apply regardless of how long the phone has been in use. For example, if you were 18 months into one of Verizon’s 36-month device installment plans and decided to pay the remaining balance early and switch carriers, you’d still have to wait 35 days for an unlock in most scenarios.

A Verizon spokesperson told Ars today that customers meeting the requirements for a quick unlock will “typically” receive it within 24 hours. But “if you pay online, through the app, or use a ‘non-secure’ method (like a Verizon Gift Card, paper check, or magnetic stripe swipe), there is a 35-day security delay before the unlock triggers to prevent fraud,” the spokesperson said. Verizon did not explain why the device unlocking policy still has an effective date of January 27 despite the change made this week.

It is possible to pay off an installment plan early by going to a Verizon store. But there are limits on this, too. Another Verizon FAQ says the company will unlock a phone “when you use a secure payment type at a Verizon store. Payments made through your account online, in the My Verizon app, a Verizon Authorized Retailer, or by phone delay the unlock by 35 days.” It’s not clear when this FAQ was last updated.

The Verizon Authorized Retailer limitation means that to get a quick unlock, you have to go to a Verizon corporate store rather than a Verizon Authorized Retailer that isn’t owned by Verizon. Verizon corporate stores accounted for only about 20 percent of Verizon stores, according to Wave7 research cited in a 2021 Fierce Network article.

As for the “secure payment type” requirement, you can satisfy that by paying with cash, a credit card with an EMV chip, or a contactless payment method such as Apple Pay, Google Pay, or Samsung Pay. The requirements may differ for consumer and business customers. A Verizon business FAQ says paying off a device using a bill credit also triggers a 35-day wait, but that caveat isn’t mentioned in the Verizon consumer FAQ.

Even if you happen to live near a Verizon corporate store, it’s still less convenient than paying online or in an app. A customer can alternatively buy a phone from Verizon at full price at the beginning to get it unlocked right away, but not everyone will want to or be able to do that.

“Devices purchased directly from Verizon are locked to our network. Devices will be unlocked automatically when purchased at full retail price or if the device financing agreement balance is paid in full,” the unlocking policy for postpaid devices says, right before disclosing the 35-day waiting period that applies in various scenarios.

There shouldn’t be a wait for unlocking if a customer pays off a device plan on schedule via automatic payments. Verizon confirmed to Ars that “if a Verizon customer has automatic monthly payments set up on a device payment plan, the device is automatically unlocked after the final scheduled payment.”

Prepaid phones locked for a year

Verizon’s latest unlocking policy for prepaid devices is simpler than its postpaid policy but still locks customers to the Verizon network for a year. “Devices purchased from us will remain locked to the network until the completion of 365 days of paid and active service,” the prepaid device unlocking policy says. “After 365 days of paid and active service, we will automatically remove the lock unless the device is deemed stolen or purchased fraudulently.”

Despite using the phrase “automatically remove the lock” in reference to prepaid devices in its device unlocking policy, Verizon seems to contradict this on an FAQ page by saying that prepaid customers must request an unlock after 365 days. Unlocks are made “upon request” if a customer meets the criteria, the page says.

Until recently, the US government required Verizon to unlock handsets automatically after 60 days, via rules imposed on 700 MHz spectrum licenses and merger conditions imposed on Verizon’s purchase of TracFone. The Federal Communications Commission eliminated the 60-day unlocking requirement on January 12 for all phones activated on Verizon’s network after the FCC decision. Verizon says it is still honoring 60-day unlocks for phones bought from its flagship brand before January 27.

The AT&T policy says postpaid phones purchased at least 60 days ago can be unlocked when the device is paid in full. The T-Mobile policy says that postpaid phones active on the T-Mobile network for at least 40 days can be unlocked after being paid in full. AT&T has a six-month waiting period for unlocking prepaid phones, while T-Mobile has a 365-day waiting period for prepaid phones.

A week after the FCC ruling, Verizon started enforcing a 365-day lock period on phones purchased through its TracFone division. Customers of TracFone and other “Verizon Value” brands have to request unlocks after the year is over as Verizon doesn’t promise to unlock phones automatically for those subsidiary brands.

“Most people pay their bills online”

The policy for Verizon’s flagship brand promises automatic unlocks, albeit with the new restrictions and waits described earlier in this article. John Bergmayer, legal director of consumer advocacy group Public Knowledge, told Ars today that he doesn’t understand why Verizon isn’t offering immediate unlocks to people who pay their bills online.

“Gift cards, sure, are a pretty high-fraud area. But most people pay their bills online with normal credit cards. It’s hard to see what is likely the most common way people pay Verizon as being somehow high-risk,” he said.

Verizon also shouldn’t apply the change retroactively, he said. “People should be able to benefit from the policy that was in place on the day they bought the phone,” Bergmayer told Ars.

Public Knowledge and other consumer advocacy groups urged the FCC last year to reject Verizon’s petition to end the 60-day unlocking requirement, but the FCC sided with Verizon. Although the federal rules have changed, Verizon can be forced to uphold its previous terms in cases where the company tries to change them retroactively.

In December, we wrote about a man who sued Verizon and won after the firm retroactively tried to enforce a new policy and refused to unlock a phone he purchased before the policy change. In that case, Verizon decided it would only unlock phones after “60 days of paid active service” even though FCC rules at the time required unlocks 60 days after activation regardless of whether paid service was maintained.

Photo of Jon Brodkin

Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.

Verizon imposes new roadblock on users trying to unlock paid-off phones Read More »

ring-cancels-flock-deal-after-dystopian-super-bowl-ad-prompts-mass-outrage

Ring cancels Flock deal after dystopian Super Bowl ad prompts mass outrage

Both statements verified that the integration never launched and that no Ring customers’ videos were ever sent to Flock.

Ring did not credit users’ privacy concerns for its change of heart. Instead, the company said the joint decision was made “following a comprehensive review” in which Ring “determined the planned Flock Safety integration would require significantly more time and resources than anticipated.”

Separately, Flock said that “we believe this decision allows both companies to best serve their respective customers and communities.”

The only hint that Ring gave users that their concerns had been heard came in the last line of its blog, which said, “We’ll continue to carefully evaluate future partnerships to ensure they align with our standards for customer trust, safety, and privacy.”

Sharing his views on X and Bluesky, John Scott-Railton, a senior cybersecurity researcher at the Citizen Lab, joined critics calling Ring’s statement insufficient. He posted an image of the ad frame that Markey found creepy next to a statement from Ring, writing, “On the left? A picture of mass surveillance from #Ring’s ad. On the right? A ring [spokesperson] saying that they are not doing mass surveillance. The company cannot have it both ways.”

Ring’s statements so far do not “acknowledge the real issue,” Scott-Railton said, which is privacy risks. For Ring, it seemed like a missed opportunity to discuss or introduce privacy features to reassure concerned users, he suggested, noting the backlash showed “Americans want more control of their privacy right now” and “are savvy enough to see through sappy dog pics.”

“Stop trying to build a surveillance dystopia consumers didn’t ask for” and “focus on shipping good, private products,” Scott-Railton said.

He also suggested that lawmakers should take note of the grassroots support that could possibly help pass laws to push back on mass surveillance. That could help block not just a potential future partnership with Flock, but possibly also stop Ring from becoming the next Flock.

“Ring communications not acknowledging the lesson they just got publicly taught is a bad sign that they hope this goes away,” Scott-Railton said.

Ring cancels Flock deal after dystopian Super Bowl ad prompts mass outrage Read More »

tiny,-45-base-long-rna-can-make-copies-of-itself

Tiny, 45-base-long RNA can make copies of itself


Self-copying RNAs may have been a key stop along the pathway to life.

By base pairing with themselves, RNAs can form complex structures with enzymatic activity. Credit: Laguna Design

There are plenty of unanswered questions about the origin of life on Earth. But the research community has largely reached consensus that one of the key steps was the emergence of an RNA molecule that could replicate itself. RNA, like its more famous relative DNA, can carry genetic information. But it can also fold up into three-dimensional structures that act as catalysts. These two features have led to the suggestion that early life was protein-free, with RNA handling both heredity and catalyzing a simple metabolism.

For this to work, one of the reactions that the early RNAs would need to catalyze is the copying of RNA molecules, without which any sort of heritability would be impossible. While we’ve found a number of catalytic RNAs that can copy other molecules, none have been able to perform a key reaction: making a copy of themselves. Now, however, a team has found an incredibly short piece of RNA—just 45 bases long—that can make a copy of itself.

Finding an RNA polymerase

We have identified a large number of catalytic RNAs (generically called ribozymes, for RNA-based enzymes), and some of them can catalyze reactions involving other RNAs. A handful of these are ligases, which link together two RNA molecules. In some cases, they need these molecules to be held together by a third RNA molecule that base pairs with both of them. We’ve only identified a few that can act as polymerases, which add RNA bases to a growing molecule, one at a time, with each new addition base pairing with a template molecule.

Some ligases can link two nucleic acid strands (left), while others can link the strands only if they’re held together by base pairing with a template (center). A polymerase can be thought of as a template-dependent ligase that adds one base at a time. The newly discovered ribozyme sits somewhere between a template-directed ligase and a polymerase. Credit: John Timmer

Obviously, there is some functional overlap between them, as you can think of a polymerase as ligating on one base at a time. And in fact, at the ribozyme level, there’s some real-world overlap, as some ribozymes that were first identified as ligases were converted into polymerases by selecting for this new function.

While this is fascinating, there are a few problems with these known examples of polymerase ribozymes. One is that they’re long. So long, in fact, that they’re beyond the length of the sort of molecules that we’ve observed forming spontaneously from a mix of individual RNA bases. This length also means they’re largely incapable of making copies of themselves—the reactions are slow and inefficient enough that they simply stop before copying the entire molecule.

Another factor related to their length is that they tend to form very complex structures, with many different areas of the molecule base-paired to one another. That leaves very little of the molecule in a single-stranded form, which is needed to make a copy.

Based on past successes, a French-UK team decided to start a search for a polymerase by looking for a ligase. And they limited that search in an important way: They only tested short molecules. They started with pools of RNA molecules, each with a different random sequence, ranging from 40 to 80 bases. Overall, they estimated that they made a population of 10^13 molecules out of the total possible population of 10^24 sequences of this type.
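A quick back-of-envelope check (illustrative arithmetic, not from the paper) shows where those figures come from: an RNA of length n has 4^n possible sequences, and 4^40 already lands near 10^24.

```python
# Sanity check on the sequence-space numbers (illustrative arithmetic only).
# A random RNA of length n has 4**n possible sequences (bases A, C, G, U).
total_40 = 4 ** 40            # shortest molecules in the pool
print(f"{total_40:.1e}")      # ~1.2e+24, in line with the ~10^24 figure

sampled = 1e13                # estimated pool size actually screened
print(f"{sampled / total_40:.1e}")  # ~8e-12 of 40-base sequence space
```

Even a pool of ten trillion molecules samples only a vanishingly small slice of the possibilities, which makes the hit rate reported below all the more striking.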

These random molecules were fed a collection of three-base-long RNAs, each linked to a chemical tag. The idea was that if a molecule is capable of ligating one of these short RNA fragments to itself, it could be pulled out using the tag. The mixtures were then placed in a salty mixture of water and ice, as this can promote reactions involving RNAs.

After 11 rounds of reactions and tag-based purification, the researchers ended up with three different RNA molecules that could each ligate three-base-long RNAs to existing molecules. Each of these molecules was subjected to mutagenesis and further rounds of selection. This ultimately left the researchers with a single, 51-base-long molecule that could add clusters of three bases to a growing RNA strand, depending on their ability to base-pair with an RNA template. They called this “polymerase QT-51,” with QT standing for “quite tiny.” They later found that they could shorten this to QT-45 without losing significant enzyme activity.
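The logic being selected for can be pictured with a toy model (a sketch of the idea only, not the actual QT-45 chemistry; the function and fragment pool here are hypothetical): extension proceeds three bases at a time, and a fragment is only added if it base-pairs with the next stretch of the template.

```python
# Toy model of template-directed triplet extension (illustrative only;
# this is not the QT-45 mechanism, just the behavior being selected for).
from itertools import product as cartesian

PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def extend_by_triplets(template, fragments):
    """Build the base-paired strand three bases at a time, ignoring
    5'/3' orientation for simplicity. Synthesis stalls as soon as no
    matching triplet is available in the fragment pool."""
    product = []
    for i in range(0, len(template) - len(template) % 3, 3):
        needed = "".join(PAIR[b] for b in template[i:i + 3])
        if needed not in fragments:
            break
        product.append(needed)
    return "".join(product)

# With every possible triplet in the pool, copying runs to completion:
all_triplets = {"".join(t) for t in cartesian("ACGU", repeat=3)}
print(extend_by_triplets("GGCAUCGAU", all_triplets))  # CCGUAGCUA
```

Thin out the fragment pool and synthesis stalls partway, loosely mirroring why a rich supply of short fragments matters for the ribozyme’s activity.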

Checking its function

The basic characterization of QT-45 showed that it has some very impressive properties for a molecule that, by nucleic acid standards, is indeed quite tiny. While it was selected for linking collections of molecules that were three bases long, it could also link longer RNAs, work on shorter two-base molecules, or even add a single base at a time, though this was less efficient. While it worked slowly, the molecule’s active half-life was well over 100 days, so it had plenty of time to get things done before it degraded.

It also didn’t need to interact with any specific RNA sequences to work, suggesting it had a general affinity for RNA molecules. As a result, it wasn’t especially picky about the sequences it could copy.

As you might expect from such a small molecule, QT-45 didn’t tolerate changes to its own sequence very well—nearly the entire molecule was important in one way or another. Tests that involved changing every single individual base one at a time showed that almost all the changes reduced the ribozyme’s activity. There were, however, a handful of changes that improved things, suggesting that further selection could potentially yield additional improvements. And the impact of mutations near the center of the sequence was far more severe, suggesting that region is critical for QT-45’s enzymatic activity.

The team then started testing its ability to synthesize copies of other RNA molecules when given a mixture of all possible three-base sequences. One of the tests included a large stretch in which one end of the sequence base-paired with the other. To copy that, those base pairs need to somehow be pried apart. But QT-45 was able to make a copy, meaning it synthesized a strand that was able to base pair with the original.

It was also able to make a copy of a template strand that would base pair with a small ribozyme. That copying produced an active ribozyme.

But the key finding was that it could synthesize a sequence that base-pairs with itself, and then synthesize itself by copying that sequence. This was horribly inefficient and took months, but it happened.

Throughout these experiments, the fidelity averaged about 95 percent, meaning that, in copying itself, it would make an average of two to three errors. While this means a fair number of its copies wouldn’t be functional, it also means the raw materials for an evolutionary selection for improved function—random mutations—would be present.
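The arithmetic behind that error count is straightforward (illustrative numbers, assuming errors are independent at each position):

```python
# Expected errors when QT-45 copies itself at ~95 percent per-base fidelity
# (illustrative arithmetic, assuming independent per-base errors).
length = 45
fidelity = 0.95

print(round(length * (1 - fidelity), 2))  # 2.25 -> "two to three errors"
print(round(fidelity ** length, 2))       # 0.1 -> ~1 in 10 copies error-free
```

So even at this fidelity, roughly one copy in ten comes out perfect, while the rest supply the variation selection can act on.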

What this means

It’s worth taking a moment to consider the use of three-base RNA fragments by this enzyme. On the surface, this may seem a bit like cheating, since current RNA polymerases add sequence one base at a time. But in reality, any chemical environment that could spontaneously assemble an RNA molecule 45 bases long will produce many fragments shorter than that. So in many ways, this might be a more realistic model of the conditions in which life emerged.

The authors note that these shorter fragments may be essential for QT-45’s activity. The short ribozyme probably doesn’t have the ability to enzymatically pry base-paired strands of RNA apart to copy them. But in a mixture of lots of small fragments, there’s likely to be an equilibrium, with some base-paired sequences spontaneously popping open and temporarily base pairing with a shorter fragment. Working with these base-paired fragments is probably essential to the ribozyme’s overall activity.

Right now, QT-45 isn’t an impressive enzyme. But the researchers point out that it has only been through 18 rounds of selection, which isn’t much. The most efficient ribozyme polymerases we have at present have been worked on by multiple labs for years. I expect QT-45 to receive similar attention and improve significantly over time.

Also notable is that the team came up with three different ligases in a test of just a small subset of the possible total RNA population of this size. If that frequency holds, there are on the order of 10^11 ligating ribozymes among the sequences of this size, which raises the possibility that we could find far more if we did an exhaustive search. That suggests the first self-copying RNA might not be as improbable as it seems at first.
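That extrapolation is simple frequency scaling (a rough estimate; the round numbers are the ones reported in the article):

```python
# Scaling the observed hit rate up to the full sequence space
# (rough back-of-envelope estimate only).
hits = 3          # ligases recovered in the screen
tested = 1e13     # molecules actually screened
total = 1e24      # possible sequences in this size range

print(f"{hits / tested * total:.0e}")  # 3e+11: on the order of 10^11 ligases
```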

Science, 2026. DOI: 10.1126/science.adt2760  (About DOIs).

Photo of John Timmer

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



What if riders don’t close a robotaxi door after a ride? Try DoorDash.

Autonomous vehicles have a lot of potential. As long as you program them right, they won’t speed, won’t break traffic laws, and won’t get drunk, high, abusive, or violent. And the technology has been getting much more capable, even as some of the hype has died down, taking some of the related companies with it. Waymo still easily leads the field and is already operating commercially in six cities across America, with a dozen more (plus London) coming soon. Waymos can even drop you off and pick you up at the airport in Phoenix and San Francisco.

Soon, Waymo will begin deploying its sixth-generation Waymo Driver, using upfitted Zeekr Ojai minivans, adding to the Jaguar I-Paces that have become so common on San Francisco streets and to its fleet of Hyundai Ioniq 5 electric vehicles. It has upgraded the cameras, lidar, and radar, meaning the cars can better sense their environments at night and in inclement weather. There are even microphones that can pick up sounds like sirens to better inform the robotaxi of the direction the emergency vehicle(s) are coming from.

But even with all these advances since the pod-like two-seater that predates even the Waymo name, there are still a few things that remain beyond a robotaxi’s capabilities. Like closing a door a passenger left open on their way out. All the sophisticated sensors and high-powered computer processing in the world are useless if the car can’t move until the door closes and there’s no one there to give it a hand.
