Author name: Mike M.

bose-soundtouch-home-theater-systems-regress-into-dumb-speakers-feb.-18

Bose SoundTouch home theater systems regress into dumb speakers Feb. 18

Bose will brick key features of its SoundTouch Wi-Fi speakers and soundbars soon. On Thursday, Bose informed customers that as of February 18, 2026, it will stop supporting the devices, and their cloud-based features, including the companion app, will stop working.

The SoundTouch app enabled numerous capabilities, including integration with music services like Spotify and TuneIn and the ability to program multiple speakers in different rooms to play the same audio simultaneously.

Bose has also said that some saved presets won’t work and that users won’t be able to change saved presets once the app is gone.

Additionally, Bose will stop providing security updates for SoundTouch devices.

The Framingham, Massachusetts-headquartered company noted to customers that the speakers will continue being able to play audio from a device connected via AUX or HDMI. Wireless playback will still work over Bluetooth; however, Bluetooth is known to introduce more latency than Wi-Fi connections.

Affected customers can trade in their SoundTouch product for a credit worth up to $200.

In its notice sent to customers this week, Bose provided minimal explanation for end-of-life-ing its pricey SoundTouch speakers, saying:

Bose SoundTouch systems were introduced into the market in 2013. Technology has evolved since then, and we’re no longer able to sustain the development and support of the cloud infrastructure that powers this older generation of products. We remain committed to creating new listening experiences for our customers built on modern technologies.

Ars Technica has reached out to Bose for comment.

“Really disgusted”

Bose launched SoundTouch with three speakers ranging from $399 to $699. The company marketed the wireless home audio system as a way to extend high-quality sound throughout the home using Wi-Fi-connected speakers.

In 2015, Bose expanded the lineup with speakers ranging from $200 to $400 and soundbars and home theater systems ranging from $1,100 to $1,500.

By 2020, however, Bose was distancing itself from SoundTouch. It informed customers that it was “discontinuing sales of some SoundTouch products” but said it was “committed” to supporting the “SoundTouch app and product software for the foreseeable future.” Apparently, Bose couldn’t see beyond the next five years.

Bose SoundTouch home theater systems regress into dumb speakers Feb. 18 Read More »

termite-farmers-fine-tune-their-weed-control

Termite farmers fine-tune their weed control

Odontotermes obesus is one of the termite species that grow fungi, called Termitomyces, in their mounds. Workers collect dead leaves, wood, and grass and stack them in underground fungus gardens called combs. There, the fungi break down the tough plant fibers, making them accessible to the termites in an elaborate form of symbiotic agriculture.

Like any other agriculturalist, however, the termites face a challenge: weeds. “There have been numerous studies suggesting the termites must have some kind of fixed response—that they always do the same exact thing when they detect weed infestation,” says Rhitoban Raychoudhury, a professor of biological sciences at the Indian Institute of Science Education and Research, “but that was not the case.” In a new Science study, Raychoudhury’s team discovered that termites have pretty advanced, surprisingly human-like gardening practices.

Going blind

Termites do not look like particularly good gardeners at first glance. They are effectively blind, which is not that surprising considering they spend most of their lives in complete darkness, working in the endless corridors of their mounds. But termites make up for their lack of sight with other senses. “They can detect the environment based on advanced olfactory reception and touch, and I think this is what they use to identify the weeds in their gardens,” Raychoudhury says. To learn how termites react once they detect a weed infestation, his team collected some Odontotermes obesus and challenged them with different gardening problems.

The experimental setup was quite simple. The team placed some autoclaved soil sourced from termite mounds into glass Petri dishes. On this soil, Raychoudhury and his colleagues placed two fungus combs in each dish. The first piece acted as a control and was a fresh, uninfected comb with Termitomyces. “Besides acting as a control, it was also there to make sure the termites have the food because it is very hard for them to survive outside their mounds,” Raychoudhury explains. The second piece was intentionally contaminated with Pseudoxylaria, a filamentous fungal weed that often takes over Termitomyces habitats in termite colonies.

Termite farmers fine-tune their weed control Read More »

musk’s-x-posts-on-ketamine,-putin-spur-release-of-his-security-clearances

Musk’s X posts on ketamine, Putin spur release of his security clearances

“A disclosure, even with redactions, will reveal whether a security clearance was granted with or without conditions or a waiver,” DCSA argued.

Ultimately, DCSA failed to prove that Musk risked “embarrassment or humiliation” if the public learned not only what specific conditions or waivers applied to his clearances but also whether there were any conditions or waivers at all, Cote wrote.

Three cases that DCSA cited to support this position—including a case where victims of Jeffrey Epstein’s trafficking scheme had a substantial privacy interest in non-disclosure of detailed records—do not support the government’s logic, Cote said. The judge explained that the disclosures would not have affected the privacy rights of any third parties, emphasizing that “Musk’s diminished privacy interest is underscored by the limited information plaintiffs sought in their FOIA request.”

Musk’s X posts discussing his occasional use of prescription ketamine, along with his admission that smoking marijuana on a podcast prompted NASA requirements for random drug testing, “only enhance” the public’s interest in how Musk’s security clearances were vetted, Cote wrote. Additionally, Musk has posted about speaking with Vladimir Putin, prompting substantial public interest in how his foreign contacts may or may not restrict his security clearances. More than 2 million people viewed Musk’s X posts on these subjects, the judge wrote, noting that:

It is undisputed that drug use and foreign contacts are two factors DCSA considers when determining whether to impose conditions or waivers on a security clearance grant. DCSA fails to explain why, given Musk’s own, extensive disclosures, the mere disclosure that a condition or waiver exists (or that no condition or waiver exists) would subject him to ‘embarrassment or humiliation.’

Rather, for the public, “the list of Musk’s security clearances, including any conditions or waivers, could provide meaningful insight into DCSA’s performance of that duty and responses to Musk’s admissions, if any,” Cote wrote.

In a footnote, Cote said that this substantial public interest existed before Musk became a special government employee, ruling that DCSA was wrong to block the disclosures seeking information on Musk as a major government contractor. Her ruling likely paves the way for the NYT or other news organizations to submit FOIA requests for a list of Musk’s clearances while he helmed DOGE.

It’s not immediately clear when the NYT will receive the list it requested in 2024, but the government has until October 17 to request redactions before the list is made public.

“The Times brought this case because the public has a right to know about how the government conducts itself,” Charlie Stadtlander, an NYT spokesperson, said. “The decision reaffirms that fundamental principle and we look forward to receiving the document at issue.”

Musk’s X posts on ketamine, Putin spur release of his security clearances Read More »

bending-the-curve

Bending The Curve

The odds are against you and the situation is grim.

Your scrappy band are the only ones facing down a growing wave of powerful inhuman entities with alien minds and mysterious goals. The government is denying that anything could possibly be happening and actively working to shut down the few people trying things that might help. Your thoughts, no matter what you think could not harm you, inevitably choose the form of the destructor. You knew it was going to get bad, but this is so much worse.

You have an idea. You’ll cross the streams. Because there is a very small chance that you will survive. You’re in love with this plan. You’re excited to be a part of it.

Welcome to the always excellent Lighthaven venue for The Curve, Season 2, a conference I had the pleasure to attend this past weekend.

Where the accelerationists and the worried come together to mostly get along and coordinate on the same things, because the rest of the world has gone blind and mad. In some ways technical solutions seem relatively promising, shifting us from ‘might be actually impossible’ levels of impossible to Shut Up And Do The Impossible levels of impossible: all you have to do is beat the game on impossible difficulty level. As a speed run. On your first try. Good luck.

The action space has become severely constrained. Between the actual and perceived threats from China, the total political ascendance of Nvidia in particular and anti-regulatory big tech in general, and the setting in of more and more severe race conditions and the increasing dependence of the entire economy on AI capex investments, it’s all we can do to try to only shoot ourselves in the foot and not aim directly for the head.

Last year we were debating tradeoffs. This year, aside from the share price of Nvidia, as long as you are an American who likes humans, considering things that might actually pass? On the margin, there are essentially no tradeoffs. It’s better versus worse.

That doesn’t invalidate the thesis of If Anyone Builds It, Everyone Dies or the implications down the line. At some point we will probably either need to do impactful international coordination or other interventions that involve large tradeoffs, or humanity loses control over the future or worse. That implication exists in every reasonable sketch of the future I have seen in which AI does not end up a ‘normal technology.’ So one must look forward towards that, as well.

You can also look at it this way: Year 1 of The Curve was billed (although I don’t use the d word) as ‘doomers vs. accelerationists,’ and this year, as Nathan Lambert says, it was DC types and SF types, like when the early-season villains and heroes all end up working together as the stakes get raised and the new Big Bad shows up, and then you do it again until everything is cancelled.

The Curve was a great experience. The average quality of attendees was outstanding. I would have been happy to talk to a large fraction of them 1-on-1 for a long time, and there were a number that I’m sad I missed. Lots of worthy sessions lost out to other plans.

As Anton put it, every (substantive) conversation I had made me feel smarter. There was opportunity everywhere, everyone was cooperative and seeking to figure things out, and everyone stayed on point.

To the many people who came up to me to thank me for my work, you’re very welcome. I appreciate it every time and find it motivating.

What did people at the conference think about some issues?

We have charts.

Where is AI on the technological Richter scale?

There are dozens of votes here. Only one person put this as low as a high 8, which is the range of automobiles, electricity and the internet. A handful put it with fire, the wheel, agriculture and the printing press. Then most said this is similar to the rise of the human species, a full transformation. A few said it is a bigger deal than that.

If you were situationally aware enough to show up, you are aware of the situation.

These are median predictions, so the full distribution will have a longer tail, but this seems reasonable to me. The default is 10, that AI is going to be a highly non-normal technology on the level of the importance of humans, but there’s a decent chance it will ‘only’ be a 9 on the level of agriculture or fire, and some chance it disappoints and ends up Only Internet Big.

Last year, people would often claim AI wouldn’t even be Internet Big. We are rapidly approaching the point where that is not a position you can offer with a straight face.

How did people expect this to play out?

That’s hard to read, so here are the centers of the distributions (note that there was clearly a clustering effect):

  1. 90% of code is written by AI by ~2028.

  2. 90% of human remote work can be done more cheaply by AI by ~2031.

  3. Most cars on America’s roads lack human drivers by ~2041.

  4. AI makes Nobel Prize worthy discovery by ~2032.

  5. First one-person $1 billion company by 2026.

  6. First year of >10% GDP growth by ~2038 (but 3 votes for never).

  1. People estimate 15%-50% current speedup at AI labs from AI coding.

  2. When AI research is fully automated, there was disagreement over how good the AIs’ research taste will be, but the median estimate was roughly as good as the median current AI worker.

  3. If we replaced each human with 30 copies of an AI version of themselves that was identical except 30x faster, but we only had access to similar levels of compute, we’d get maybe a 12x speedup in progress.

What are people worried or excited about? A lot of different things, from ‘everyone lives’ to ‘concentration of power,’ ‘everyone dies,’ and especially ‘loss of control,’ which have the most +1s on their respective sides. Others are excited to cure their ADD or simply worried everything will suck.

Which kind of things going wrong worries people most, misalignment or misuse?

Why not both? Pretty much everyone said both.

Finally, who is this nice man with my new favorite IYKYK t-shirt?

(I mean, he has a name tag, it’s OpenAI’s Boaz Barak)

The central problem at every conference is fear of missing out. Opportunity costs. There are many paths, even when talking to a particular person. You must choose.

That goes double at a conference like The Curve. The quality of the people there was off the charts and the schedule forced hard choices between sessions. There were entire other conferences I could have productively experienced. I also probably could have usefully done a lot more prep work.

I could of course have hosted a session, which I chose not to do this time around. I’m sure there were various topics I could have done that people would have liked, but I was happy for the break, and it’s not like there’s a shortage of my content out there.

My strategy is mostly to not actively plan my conference experiences, instead responding to opportunity. I think this is directionally correct but I overplay it, and should have (for example) looked at the list of who was going to be there.

What were the different tracks or groups of discussions and sessions I ended up in?

  1. Technical alignment discussions. I had the opportunity to discuss safety and alignment work with a number of those working on such issues at Anthropic, DeepMind and even xAI. I missed OpenAI this time around, but they were there. This always felt exciting, enlightening and fun. I still get imposter syndrome every time people in such conversations take me and my takes and ideas seriously. Conditions are in many ways horribly terrible but everyone is on the same team and some things seem promising. I felt progress was made. My technical concrete pitch to Anthropic included (among other things) both particular experimental suggestions and also a request that they sustain access to Sonnet 3.5 and 3.6.

    1. It wouldn’t make sense to go into the technical questions here.

  2. Future projecting. I went to talks by Joshua Achiam and Helen Toner about what future capabilities and worlds might look like. Jack Clark’s closing talk was centrally this but touched on other things.

  3. AI policy discussions. These felt valuable and enlightening in both directions, but were infuriating and depressing throughout. People on the ground in Washington kept giving us variations on ‘it’s worse than you know,’ which it usually is. So now you know. Others seemed not to appreciate how bad things had gotten. I was often pointing out that people’s proposals implied some sort of international treaty and form of widespread compute surveillance, had zero chance of actually causing us not to die, or sometimes both. At other times, I was pointing out that things literally wouldn’t work on the level of ‘do the object level goal’ let alone make us win. Or we were trying to figure out what was sufficiently completely costless and not even a tiny bit weird or complex that one could propose that might actually do anything meaningful. Or simply observing other perspectives.

    1. In particular, different people maintained that different players were relatively powerful, but I came away from various discussions more convinced than ever that for now White House policy and rhetoric on AI can be modeled as fully captured by Nvidia, although constrained in some ways by congressional Republicans and some members of the MAGA movement. This is pretty much a worst case scenario. If we were captured by OpenAI or other AI labs that wouldn’t be great, but at least their interests and America’s are mostly aligned.

  4. Nonprofit funding discussions. I’d just come out of the latest Survival and Flourishing Fund round, various players seemed happy to talk and strategize, and it seems likely that very large amounts of money will be unlocked soon as OpenAI and Anthropic employees with increasingly valuable equity become liquid. The value of helping steer this seems crazy high, but the stakes on everything seem crazy high.

    1. One particular worry is that a lot of this money could effectively get captured by various existing players, especially the existing EA/OP ecosystem, in ways that would very much be a shame.

    2. Another is simply that a bunch of relatively uninformed money could overwhelm incentives, contaminate various relationships and dynamics, introduce parasitic entry, drop average quality a lot, and so on.

    3. Or everyone involved could end up with a huge time sink and/or end up not deploying the funds.

    4. So there’s lots to do. But it’s all tricky, and trying to gain visible influence over the direction of funds is a very good way to get your own social relationships and epistemics very quickly compromised, also it can quickly eat up infinite time, so I’m hesitant to get too involved or involved in the wrong ways.

What other tracks did I actively choose not to participate in?

There were of course AI timelines discussions, but I did my best to avoid them except when they were directly relevant to a concrete strategic question. At one point someone in a 4-person conversation I was mostly observing said ‘let’s change the subject, can we argue about AI timelines’ and I outright said ‘no’ but was overruled, and after a bit I walked away. For those who don’t follow these debates, many of the more aggressive timelines have gotten longer over the course of 2025, with people who expected crazy to happen in 2027 or 2028 now not expecting crazy for several more years, but there are those who still mostly hold firm to a faster schedule.

There were a number of talks about AI that assumed it was mysteriously a ‘normal technology.’ There were various sessions on economics projections, or otherwise taking place with the assumption that AI would not cause things to change much, except for whatever particular effect people were discussing. How would we ‘strengthen our democracy’ when people had these neat AI tools, or avoid concentration of power risks? What about the risk of They Took Our Jobs? What about our privacy? How would we ensure everyone or every nation has fair access?

These discussions almost always silently assume that AI capability ‘hits a wall’ some place not very far from where it is now and then everything moves super slowly. Achiam’s talk had elements of this, and I went because he’s OpenAI’s Head of Mission Alignment so knowing how he thinks about this seemed super valuable.

To the extent I interacted with this it felt like smart people thinking about a potential world almost certainly very different from our own. Fascinating, can create useful intuition pumps, but that’s probably not what’s going to happen. If nothing else was going on, sure, count me in.

But also, all the talk of ‘bottlenecks’ and therefore a 0.5% or 1% GDP growth boost per year at most has already been overtaken purely by capex spending, and I cannot remember a single economist or other GDP growth skeptic acknowledging that this already made their projections wrong and updating reasonably.

There was an AI 2027 style tabletop exercise again this year, which I recommend doing if you haven’t done it before, except this time I wasn’t aware it was happening, and also by now I’ve done it a number of times.

There were of course debates directly about doom, but remarkably few, and I had no interest. It felt like everyone was either acknowledging existential risk enough that there wasn’t much value of information in going further, or sufficiently blind that they were in ‘normal technology’ mode. At some point people get too high level to think building smarter-than-human minds is a safe proposition.

Helen Toner gave a talk on taking AI jaggedness seriously. What would it mean if AIs kept getting increasingly better and superhuman at many tasks, while remaining terrible at other tasks, or at least relatively highly terrible compared to humans? How does the order of capabilities impact how things unfold? Even if we get superhuman coding and start to get big improvements in other areas as a result, that won’t make their ability profile similar to humans.

I agree with Helen that such jaggedness is mostly good news and potentially could buy us substantial time for various transitions. However, it’s not clear to me that this jaggedness does that much for that long; AI is (I am projecting) not going to stall out in the lagging areas or stay subhuman in key areas for as much calendar time as one might hope.

A fun suggestion was to imagine LLMs talking about how jagged human capabilities are. Look how dumb we are in some ways while being smart in others. I do think in a meaningful sense LLMs and other current AIs are ‘more jagged’ than humans in practice, because humans have continual learning and the ability to patch the situation, and also to route the physical world around our idiocy where we’re being importantly dumb. So we’re super dumb, but we try to not let it get in the way.

Neil Chilson: Great talk by @hlntnr about the jaggedness of AI, why it is likely to continue, and why it matters. Love this slide and her point that while many AI forecasters use smooth curves, a better metaphor is the chaotic transitions in fluid heating.

“Jaggedness” being the uneven ability of AI to do tasks that seem about equally difficult to humans.

Occurs to me I should have shared the “why this matters” slide, which was the most thought provoking one to me:

I am seriously considering talking about time to ‘crazy’ going forward, and whether that is a net helpful thing to say.

The curves definitely be too smooth. It’s hard to properly adjust for that. But I think the fluid dynamics metaphor, while gorgeous, makes the opposite mistake.

I watched a talk by Randi Weingarten about how she and other teachers view AI and advocate around issues in education. One big surprise is that she says they don’t worry or care much about AI ‘cheating’ or doing work via ChatGPT; there are ways around that, especially ‘project based learning that is relevant,’ and the key thing is that education is all about human interactions. To her ChatGPT is a fine tool, although things like Character.ai are terrible, and she strongly opposes phones in schools for the right reasons, and I agree with that.

She said teachers need latitude to ‘change with the times’ but usually aren’t given it, they need permission to change anything and if anything goes wrong they’re fired (although there are the other stories we hear that teachers often can’t be fired almost no matter what in many cases?). I do sympathize here. A lot needs to change.

Why is education about human interactions? This wasn’t explained. I always thought education was about learning things, I mostly didn’t learn things through human interaction, I mostly didn’t learn things in school via meaningful human interaction, and to the extent I learned things via meaningful human interaction it mostly wasn’t in school. As usual when education professionals talk about education I don’t get the sense they want children to learn things, or that they care about children being imprisoned and bored with their time wasted for huge portions of many days, but care about something else entirely? It’s not clear what her actual objection to Alpha School (which she of course confirmed she hates) was other than decentering teachers, or what concretely was supposedly going wrong there? Frankly it sounded suspiciously like a call to protect jobs.

If anything, her talk seemed to be a damning indictment of our entire system of schools and education. She presents vocational education as state of the art and with the times, and cited an example of a high school going from a sub-50% graduation rate to a 100% graduation rate, with 182 of 186 students getting a ‘certification’ from Future Farmers of America, after one such program. Aside from the obvious ‘why do you need a certificate to be a farmer’ and also ‘why would you choose farmer in 2025,’ this is saying kids should spend vastly less time in school? Many other such implications were there throughout.

Her group calls for ‘guardrails’ and ‘accountability’ on AI, worries about things like privacy, misinformation, and understanding ‘the algorithms’ or the dangers to democracy, and points to declines in male non-college earnings.

There was a Chatham House discussion of executive branch AI policy in America where all involved were being diplomatic and careful. There’s a lot of continuity between the Biden approach to AI and much of the Trump approach, there are a lot of individual good things going on, and it was predicted that CAISI would have a large role going forward, with lots of optimism and good detail.

It seems reasonable to say that the Trump administration’s first few months of AI policy were unexpectedly good, and the AI Action Plan was unexpectedly good. Then there are the other things that happened.

Thus the session included some polite versions of ‘what the hell are we doing?’ that were at most slightly beneath the surface. As a central example, one person observed that if America ‘loses on AI,’ it would likely be because we did one or more of the following: (1) failed to provide the necessary electrical power, (2) failed to bring in the top AI talent, or (3) sold away our chip advantage. They didn’t say, but I will note here, that current American policy seems determined to screw up all three of these? We are cancelling solar, wind and battery projects all over, we are restricting our ability to acquire talent, and we are seriously debating selling Blackwell chips directly to China.

I was sad that going to that talk ruled out watching Buck Shlegeris debate Timothy Lee about whether keeping AI agents under control will be hard, as I expected that session to both be extremely funny (and one sided) and also plausibly enlightening in navigating such arguments, but that’s how conferences go. I did then get to see Buck discuss mitigating insider threats from scheming AIs, in which he explained some of the ways in which dealing with scheming AIs that are smarter than you is very hard. I’d go farther and say that in the types of scenarios Buck is discussing there it’s not going to work out for you. If the AIs be smarter than you and also scheming against you and you try to use them for important stuff anyway you lose.

That doesn’t mean make zero attempts to mitigate this, but at some point the whole effort is counterproductive, as it creates the very context that creates what it is worried about, without giving you much chance of winning.

At one point I took a break to get dinner at a nearby restaurant. The only other people there were two women. The discussion included mention of AI 2027 and also that one of them is reading If Anyone Builds It, Everyone Dies.

Also at one point I saw a movie star I’m a fan of, hanging out and chatting. Cool.

Sunday started out with Josh Achiam’s talk (again, he’s Head of Mission Alignment at OpenAI, but his views here were his own) about the challenge of the intelligence age. If it comes out, it’s worth a watch. There were a lot of very good thoughts and considerations here. I later got to have a good talk with him at the afterparty. Like much talk at OpenAI, it also silently ignored various implications of what was being built, and implicitly assumed the relevant capabilities just stopped in any place they would cause bigger issues. The talk acknowledged that it was mostly assuming alignment is solved, which is fine as long as you say that explicitly; we have many different problems to deal with, but other questions also felt assumed away more silently. Josh promises his full essay version will deal with that.

I got to go to a Chatham House Q&A about the EU Frontier AI Code of Practice, which various people keep reminding me I should write about, and I swear I want to do that as soon as I have some spare time. There was a bunch of info, some of it new to me, and also insight into how those involved think all of this is going to work. I later shared with them my model of how I think the AI companies will respond, in particular the chance they will essentially ignore the law when inconvenient because of lack of sufficient consequences. And I offered suggestions on how to improve impact here. But on the margin, yeah, the law does some good things.

I got into other talks and missed out on one I wanted to see by Joe Allen, about How the MAGA Movement Sees AI. This is a potentially important part of the landscape on AI going forward, as a bunch of MAGA types really dislike AI and are in position to influence the White House.

As I look over the schedule in hindsight I see a bunch of other stuff I’m sad I missed, but the alternative would have been missing valuable 1-on-1s or other talks.

The final talk was Jack Clark giving his perspective on events. This was a great talk, if it goes online you should watch it, it gave me a very concrete sense of where he is coming from.

Jack Clark has high variance. When he’s good, he’s excellent, such as in this talk, including the Q&A, and when he asked Achiam an armor-piercing question, or when he’s sticking to his guns on timelines that I think are too short even though it doesn’t seem strategic to do that. At other times, he and the policy team at Anthropic are in some sort of Official Mode where they’re doing a bunch of hedging and making things harder.

The problem I have with Anthropic’s communications is, essentially, that they are not close to the Pareto Frontier, where the y-axis is something like ‘Better Public Policy and Epistemics’ and the x-axis can colloquially be called ‘Avoid Pissing Off The White House.’ I acknowledge there is a tradeoff here, especially since we risk negative polarization, but we need to be strategic, and certain decisions have been de facto poking the bear for little gain, and at other times they hold back for little gain the other way. We gotta be smarter about this.

Other people’s takes on The Curve are often very different from mine, or yours.

Deepfates: looks like a lot of people who work on policy and research for aligning AIs to human interests. I’m curious what you think about how humans align to AI.

my impression so far: people from big labs and people from government, politely probing each other to see which will rule the world. they can’t just out and say it but there’s zerosumness in the air

Chris Painter: That isn’t my impression of the vibe at the event! Happy to chat.

I was with Chris on this. It very much did not feel zero sum. There did seem to be a lack of appreciation of the ‘by default the AIs rule the world’ problem, even in a place dedicated largely to this particular problem.

Deepfates: Full review of The Curve: people just want to believe that Anyone is ruling the world. some of them can sense that Singleton power is within reach and they are unable to resist The opportunity. whether by honor or avarice or fear of what others will do with it.

There is that too, that currently no one is ruling the world, and it shows. It also has its advantages.

so most people are just like “uh-oh! what will occur? shouldn’t somebody be talking about this?” which is fine honestly, and a lot of them are doing good research and I enjoy learning about it. The policy stuff is more confusing

diverse crowd but multiple clusters talking past each other as if the other guys are ontologically evil and no one within earshot could possibly object. and for the most part they don’t actually? people just self-sort by sessions or at most ask pointed questions. parallel worlds.

Yep, parallel worlds, but I never saw anyone say someone else was evil. What, never? Well, hardly ever. And not anyone who actually showed up. Deeply confused and likely to get us all killed? Well, sure, there was more of that, but obviously true, and again not the people present.

things people are concerned about in no order: China. Recursive self-improvement. internal takeover of AI labs by their models. Fascism. Copyright law. The superPACs. Sycophancy. Privacy violations. Rapid unemployment of whole sectors of society. Religious and political backlash, autonomous agents, capabilities. autonomous agents, legal liability. autonomous agents, nightmare nightmare nightmare.

The fear of the other party, the other company, the other country, the other, the unknown, most of all the alien thing that threatens what it means to be human.

Fascinating to see a threat to ‘what it means to be human’ on that list but not ‘the ability to keep being human (or alive),’ which I assure Deepfates a bunch of us were indeed very concerned about.

so they want to believe that the world is ruleable, that somebody, anybody, is at the wheel, as we careen into the strangest time in human history.

and they do Not want it to be the AIs. even as they keep putting decision making power and communication surface on the AIs lol

You can kind of tell here that Deepfates is fine with it being the AIs and indeed is kind of disdainful of anyone who would object to this. As in, they understand what is about to happen, but think this is good, actually (and are indeed working to bring it about). So yeah, some actual strong disagreements were present, but didn’t get discussed.

I may or may not have seen Deepfates, since I don’t know their actual name, but we presumably didn’t talk, given:

i tried telling people that i work for a rogue AI building technologies to proliferate autonomous agents (among other things). The reaction was polite confusion. It seemed a bit unreal for everyone to be talking about the world ending and doing normal conference behaviors anyway.

Polite confusion is kind of the best you can hope for when someone says that?

Regardless, very interesting event. Good crowd, good talks, plenty of food and caffeinated beverages. Not VC/pitch heavy like a lot of SF things.

Thanks to Lighthaven for hosting and Golden Gate Institute/Manifund for organizing. Will be curious to see what comes of this.

I definitely appreciated the lack of VC and pitching. I did get pitched once (on a nonprofit thing) but I was happy to take it. Focus was tight throughout.

Anton: “are you with the accelerationist faction?”

most people here have thought long and hard about ai, every conversation i have — even with those i vehemently disagree — feels like it makes me smarter..

i cant overemphasize how good the vibes are at this event.

Rob S: Another Lighthaven banger?

Anton: ANOTHA ONE.

As I note above, Jack Clark’s closing talk was excellent. Otherwise, he seemed to be in the back of many of the same talks I was at. Listening. Gathering intel.

Jack Clark (policy head, Anthropic): I spent a few days at The Curve and I am humbled and overjoyed by the experience – it is a special event, now in its second year, and I hope they preserve whatever lightning they’ve managed to capture in this particular bottle. It was a privilege to give the closing talk.

During the Q&A I referenced The New Book, and likely due to the exhilaration of giving the earlier speech I fumbled a word and titled it: If Anyone Reads It, Everyone Dies.

James Cham: It was such an inspiring (and terrifying) talk!

I did see Roon at one point but it was late in the day and neither of us had an obvious conversation we wanted to have and he wandered off. He’s low key in person.

I was very disappointed to realize he did not say ‘den of inquiry’ here:

Roon: The Curve is insane because a bunch of DC staffers in suits have shown up to Lighthaven, a rationalist den of iniquity that looks like a Kinkade painting.

Jaime Sevilla: Jokes on you I am not a DC staffer, I just happen to like wearing my suit.

Neil Chilson: Hey, I ditched the jacket after last night.

Being Siedoh: i was impressed that your badge just says “Roon” lol.

To be fair, you absolutely wanted a jacket of some kind for the evening portion. That’s why they were giving away sweatshirts. It was still quite weird to see the few people who did wear suits.

Nathan made the opposite of my choice, and spent the weekend centered on timeline debates.

Nathan Lambert: My most striking takeaway is that the AI 2027 sequence of events, from AI models automating research engineers to later automating AI research, and potentially a singularity if your reasoning is so inclined, is becoming a standard by which many debates on AI progress operate under and tinker with.

It’s good that many people are taking the long term seriously, but there’s a risk in so many people assuming a certain sequence of events is a sure thing and only debating the timeframe by which they arrive.

This feels like the Deepfates theory of self-selection within the conference. I observed the opposite: that many people were denying that any kind of research automation or singularity was going to happen. Usually they didn’t even assert it wasn’t happening, they simply went about discussing futures where it mysteriously didn’t happen, presumably because of reasons, maybe ‘bottlenecks’ or muttering ‘normal technology’ or something.

Within the short timelines and taking AGI (at least somewhat) seriously debate subconference, to the extent I saw it, yes I do think there’s widespread convergence on the automating AI research analysis.

Whereas Nathan is in the ‘nope definitely not happening’ camp, it seems, but is helpfully explaining that it is because of bottlenecks in the automation loop.

These long timelines are strongly based on the fact that the category of research engineering is too broad. Some parts of the RE job will be fully automated next year, and more the next. To check the box of automation the entire role needs to be replaced.

What is more likely over the next few years, each engineer is doing way more work and the job description evolves substantially. I make this callout on full automation because it is required for the distribution of outcomes that look like a singularity due to the need to remove the human bottleneck for an ever accelerating pace of progress. This is a point to reinforce that I am currently confident in a singularity not happening.

Nathan’s theory about automation, as he documents in his writeup, is that within a few years the existing research engineers (REs) will be unbelievably productive (80%-90% automated), and in some ways RE is already automated, yet that doesn’t allow us to finish the job, and humans continue importantly slowing down the loop because Real Science Is Messy and involves a social marketplace of ideas. Apologies for my glib paraphrasing. It’s possible in theory that these accelerations of progress and partial automations plus our increased scaling are no match for increasing problem difficulty, but it seems unlikely to me.

It seems far more likely that this kind of projection forgets how much things accelerate in such scenarios. Sure, it will probably be a lot messier than the toy models and straight lines on graphs, it always is, but you’d best start believing in singularities, because you’re in one, if you look at the arc of history.

The following is a very minor thing but I enjoy it so here you go.

All three meals were offered each day buffet style. Quality at these events is generally about as good as buffets get; they know what the good offerings are at this point. I ask for menus in advance so I can choose when to opt out and when to go hard, and which day to do my traditional one trip to a restaurant.

Also there was some of this:

Tyler John: It’s riddled with contradictions. The neoliberal rationalists allocate vegan and vegetarian food with a central planner rather than allowing demand to determine the supply.

Rachel: Yeah fwiw this was not a design choice. I hate this. I unfortunately didn’t notice that it was still happening yesterday :/

Tyler John: Oh on my end it’s only a very minor complaint but I did enjoy the irony.

Robert Winslow: I had a bad experience with this kind of thing at a conference. They said to save the veggies for the vegetarians. So instead of everyone taking a bit of meat and a bit of veg, everyone at the front of the line took more meat than they wanted, and everyone at the back got none.

You obviously can’t actually let demand determine supply, because you (1) can’t afford the transaction costs of charging on the margin and (2) need to order the food in advance. And there are logistical advantages to putting (at least some of) the vegan and vegetarian food in a distinct area so you don’t risk contamination or put people on lines that waste everyone’s time. If you’re worried about a mistake, you’d rather run out of meat a little early, you’d totally take down the sign (or ignore it) if it was clear the other mistake was happening, and there were still veg options for everyone else.

If you are confident via law of large numbers plus experience that you know your ratios, and you’ve chosen (and been allowed to choose) wisely, then of course you shouldn’t need anything like this.


Bending The Curve Read More »

ted-cruz-doesn’t-seem-to-understand-wikipedia,-lawyer-for-wikimedia-says

Ted Cruz doesn’t seem to understand Wikipedia, lawyer for Wikimedia says


A Wikipedia primer for Ted Cruz

Wikipedia host’s lawyer wants to help Ted Cruz understand how the platform works.

Senator Ted Cruz (R-Texas) uses his phone during a joint meeting of Congress on May 17, 2022. Credit: Getty Images | Bloomberg

The letter from Sen. Ted Cruz (R-Texas) accusing Wikipedia of left-wing bias seems to be based on fundamental misunderstandings of how the platform works, according to a lawyer for the nonprofit foundation that operates the online encyclopedia.

“The foundation is very much taking the approach that Wikipedia is actually pretty great and a lot of what’s in this letter is actually misunderstandings,” Jacob Rogers, associate general counsel at the Wikimedia Foundation, told Ars in an interview. “And so we are more than happy, despite the pressure that comes from these things, to help people better understand how Wikipedia works.”

Cruz’s letter to Wikimedia Foundation CEO Maryana Iskander expressed concern “about ideological bias on the Wikipedia platform and at the Wikimedia Foundation.” Cruz alleged that Wikipedia articles “often reflect a left-wing bias.” He asked the foundation for “documents sufficient to show what supervision, oversight, or influence, if any, the Wikimedia Foundation has over the editing community,” and “documents sufficient to show how the Wikimedia Foundation addresses political or ideological bias.”

As many people know, Wikipedia is edited by volunteers through a collaborative process.

“We’re not deciding what the editorial policies are for what is on Wikipedia,” Rogers said, describing the Wikimedia Foundation’s hands-off approach. “All of that, both the writing of the content and the determining of the editorial policies, is done through the volunteer editors” through “public conversation and discussion and trying to come to a consensus. They make all of that visible in various ways to the reader. So you go and you read a Wikipedia article, you can see what the sources are, what someone has written, you can follow the links yourselves.”

“They’re worried about something that is just not present at all”

Cruz’s letter raised concerns about “the influence of large donors on Wikipedia’s content creation or editing practices.” But Rogers said that “people who donate to Wikipedia don’t have any influence over content and we don’t even have that many large donors to begin with. It is primarily funded by people donating through the website fundraisers, so I think they’re worried about something that is just not present at all.”

Anyone unhappy with Wikipedia content can participate in the writing and editing, he said. “It’s still open for everybody to participate. If someone doesn’t like what it says, they can go on and say, ‘Hey, I don’t like the sources that are being used, or I think a different source should be used that isn’t there,'” Rogers said. “Other people might disagree with them, but they can have that conversation and try to figure it out and make it better.”

Rogers said that some people wrongly assume there is central control over Wikipedia editing. “I feel like people are asking questions assuming that there is something more central that is controlling all of this that doesn’t actually exist,” he said. “I would love to see it a little better understood about how this sort of public model works and the fact that people can come judge it for themselves and participate for themselves. And maybe that will have it sort of die down as a source of government pressure, government questioning, and go onto something else.”

Cruz’s letter accused Wikipedia of pushing antisemitic narratives. He described the Wikimedia Foundation as “intervening in editorial decisions” in an apparent reference to an incident in which the platform’s Arbitration Committee responded to editing conflicts on the Israeli–Palestinian conflict by banning eight editors.

“The Wikimedia Foundation has said it is taking steps to combat this editing campaign, raising further questions about the extent to which it is intervening in editorial decisions and to what end,” Cruz wrote.

Explaining the Arbitration Committee

The Arbitration Committee for the English-language edition of Wikipedia consists of volunteers who “are elected by the rest of the English Wikipedia editors,” Rogers said. The group is a “dispute resolution body when people can’t otherwise resolve their disputes.” The committee made “a ruling on Israel/Palestine because it is such a controversial subject and it’s not just banning eight editors, it’s also how contributions are made in that topic area and sort of limiting it to more experienced editors,” he said.

The members of the committee “do not control content,” Rogers said. “The arbitration committee is not a content dispute body. They’re like a behavior conduct dispute body, but they try to set things up so that fights will not break out subsequently.”

As with other topics, people can participate if they believe articles are antisemitic. “That is sort of squarely in the user editorial processes,” Rogers said. “If someone thinks that something on Wikipedia is antisemitic, they should change it or propose to people working on it that they change it or change sources. I do think the editorial community, especially on topics related to antisemitism and related to Israel/Palestine, has a lot of various safeguards in place. That particular topic is probably the most controversial topic in the world, but there’s still a lot of editorial safeguards in place where people can discuss things. They can get help with dispute resolution from bringing in other editors if there’s a behavioral problem, they can ask for help from Wikipedia administrators, and all the way up to the English Wikipedia arbitration committee.”

Cruz’s letter called out Wikipedia’s goal of “knowledge equity,” and accused the foundation of favoring “ideology over neutrality.” Cruz also pointed to a Daily Caller report that the foundation donated “to activist groups seeking to bring the online encyclopedia more in line with traditionally left-of-center points of view.”

Rogers countered that “the theory behind that is sort of misunderstood by the letter where it’s not about equity like the DEI equity, it is about the mission of the Wikimedia Foundation to have the world’s knowledge, to prepare educational content and to have all the different knowledge in the world to the extent possible.” In topic areas where people with expertise haven’t contributed much to Wikipedia, “we are looking to write grants to help fill in those gaps in knowledge and have a more broad range of information and sources,” he said.

What happens next

Rogers is familiar with the workings of Senate investigations from personal experience. He joined the Wikimedia Foundation in 2014 after working for the Senate’s Permanent Subcommittee on Investigations under the late Sen. Carl Levin (D-Mich.).

While Cruz demanded a trove of documents, Rogers said the foundation doesn’t necessarily have to provide them. A subpoena could be issued to Wikimedia, but that hasn’t happened.

“What Cruz has sent us is just a letter,” Rogers said. “There is no legal proceeding whatsoever. There’s no formal authority behind this letter. It’s just a letter from a person in the legislative branch who cares about the topic, so there is nothing compelling us to give him anything. I think we are probably going to answer the letter, but there’s no sort of legal requirement to actually fully provide everything that answers every question.” Assuming it responds, the foundation would try to answer Cruz’s questions “to the extent that we can, and without violating any of our company policies,” and without giving out nonpublic information, he said.

A letter responding to Cruz wouldn’t necessarily be made public. In April, the foundation received a letter from 23 lawmakers about alleged antisemitism and anti-Israel bias. The foundation’s response to that letter is not public.

Cruz is seeking changes at Wikipedia just a couple weeks after criticizing Federal Communications Commission Chairman Brendan Carr for threatening ABC with station license revocations over political content on Jimmy Kimmel’s show. While the pressure tactics used by Cruz and Carr have similarities, Rogers said there are also key differences between the legislative and executive branches.

“Congressional committees, they are investigating something to determine what laws to make, and so they have a little bit more freedom to just look into the state of the world to try to decide what laws they want to write or what laws they want to change,” he said. “That doesn’t mean that they can’t use their authority in a way that might ultimately go down a path of violating the First Amendment or something like that. They have a little bit more runway to get there versus an executive branch agency which, if it is pressuring someone, it is doing so for a very immediate decision usually.”

What does Cruz want? It’s unclear

Rogers said it’s not clear whether Cruz’s inquiry is the first step toward changing the law. “The questions in the letter don’t really say why they want the information they want other than the sort of immediacy of their concerns,” he said.

Cruz chairs the Senate Commerce Committee, which “does have lawmaking authority over the Internet writ large,” Rogers said. “So they may be thinking about changes to the law.”

One potential target is Section 230 of the Communications Decency Act, which gives online platforms immunity from lawsuits over how they moderate user-submitted content.

“From the perspective of the foundation, we’re staunch defenders of Section 230,” Rogers said, adding that Wikimedia supports “broad laws around intellectual property and privacy and other things that allow a large amount of material to be appropriately in the public domain, to be written about on a free encyclopedia like Wikipedia, but that also protect the privacy of editors who are contributing to Wikipedia.”


Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.

Ted Cruz doesn’t seem to understand Wikipedia, lawyer for Wikimedia says Read More »

microsoft-removes-even-more-microsoft-account-workarounds-from-windows-11-build

Microsoft removes even more Microsoft account workarounds from Windows 11 build

Of the many minor to medium-size annoyances that come with a modern Windows 11 installation, the requirement that you sign in with a Microsoft account is one of the most irritating. Sure, all operating systems (including Apple’s and Google’s) encourage account sign-in as part of their setup process and prevent you from using multiple operating system features until and unless you sign in. But Windows 11 goes further, making a Microsoft account a requirement of setup rather than an option.

Various sanctioned and unsanctioned tools and workarounds existed to allow users to set their PCs up with old-fashioned local accounts, and those workarounds haven’t changed much in the last three years. But Microsoft is working on tightening the screws in preview builds of Windows, foreshadowing some future version of Windows where getting around the account requirement is even harder than it already is.

In a new update released to the Dev channel of the Windows Insider Preview program yesterday (build number 26220.6772), Microsoft announced it was “removing known mechanisms for creating a local account in the Windows Setup experience (OOBE).” Microsoft says that these workarounds “inadvertently skip critical setup screens, potentially causing users to exit OOBE with a device that is not fully configured for use.”

The removed commands include the “OOBEBYPASSNRO” workaround that Microsoft announced it was removing earlier this year, plus a “start ms-cxh:localonly” workaround that had been documented more recently. In current Windows releases, users can open a command prompt window during setup with Shift+F10 and input either of those commands to remove both the Microsoft account requirement and Internet connection requirement.
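For readers who never used them, the flow was simple. A minimal sketch based on the publicly documented steps (the exact command spellings below are the commonly cited forms, which may differ slightly from Microsoft’s own naming and may vary by build): at the first setup screen, press Shift+F10 to open a command prompt, then run one of the following:

    oobe\bypassnro            (the older workaround Microsoft announced it was removing earlier this year)
    start ms-cxh:localonly    (the more recently documented workaround)

Either route previously let setup continue without an Internet connection or Microsoft account; the new Dev-channel build is what removes them.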

Windows 11 Pro currently includes another workaround, where you can indicate that you plan to join your computer to a corporate domain and use that to create a local account. We don’t know whether this mechanism has also been removed from the new Windows build.

It’s unclear what “critical setup screens” Microsoft is referring to; when using the workarounds to create a local account, the Windows setup assistant still shows you all the screens you need for creating an account and a password, plus toggling a few basic privacy settings. Signing in with a Microsoft account does add multiple screens to this process though—these screens will attempt to sell you Microsoft 365 and Xbox Game Pass subscriptions, and to opt you into features like the data-scraping Windows Recall on PCs that support it. I would not describe any of these as “critical” from a user’s perspective, but my priorities are not Microsoft’s priorities.

Microsoft removes even more Microsoft account workarounds from Windows 11 build Read More »

dead-celebrities-are-apparently-fair-game-for-sora-2-video-manipulation

Dead celebrities are apparently fair game for Sora 2 video manipulation

But deceased public figures obviously can’t consent to Sora 2’s cameo feature or exercise that kind of “end-to-end” control of their own likeness. And OpenAI seems OK with that. “We don’t have a comment to add, but we do allow the generation of historical figures,” an OpenAI spokesperson recently told PCMag.

The countdown to lawsuits begins

The use of digital re-creations of dead celebrities isn’t exactly a new issue—back in the ’90s, we were collectively wrestling with John Lennon chatting to Forrest Gump and Fred Astaire dancing with a Dirt Devil vacuum. Back then, though, that kind of footage required painstaking digital editing and technology only easily accessible to major video production houses. Now, more convincing footage of deceased public figures can be generated by any Sora 2 user in minutes for just a few bucks.

In the US, the right of publicity for deceased public figures is governed by various laws in at least 24 states. California’s statute, which dates back to 1985, bars unauthorized post-mortem use of a public figure’s likeness “for purposes of advertising or selling, or soliciting purchases of products, merchandise, goods, or services.” But a 2001 California Supreme Court ruling explicitly allows those likenesses to be used for “transformative” purposes under the First Amendment.

The New York version of the law, signed in 2022, contains specific language barring the unauthorized use of “digital replicas” that are “so realistic that a reasonable observer would believe it is a performance by the individual being portrayed and no other individual” and in a manner “likely to deceive the public into thinking it was authorized by the person or persons.” But video makers can get around this prohibition with a “conspicuous disclaimer” explicitly noting that the use is unauthorized.

Dead celebrities are apparently fair game for Sora 2 video manipulation Read More »

f1-in-singapore:-“trophy-for-the-hero-of-the-race”

F1 in Singapore: “Trophy for the hero of the race”

The scandal became public the following year when Piquet was dropped halfway through the season, and he owned up. In the fallout, Briatore was issued a lifetime ban from the sport, with a five-year ban for the team’s engineering boss, Pat Symonds. Those were later overturned, and Symonds went on to serve as F1’s CTO before recently becoming an advisor to the nascent Cadillac Team.

Even without possible RF interference or race-fixing, past Singapore races were often interrupted by the safety car. The streets might be wider than Monaco’s, but the walls are just as solid, and overtaking is almost as hard. And Monaco doesn’t take place with nighttime temperatures above 86°F (30°C) and heavy humidity. Those are the kinds of conditions that cause people to make mistakes.

The McLaren F1 Team celebrates their Constructors' World Championship title on the podium at the Formula 1 Singapore Airlines Singapore Grand Prix at the Marina Bay Street Circuit in Singapore on October 5, 2025.

This is the first time McLaren has won back-to-back WCC titles since the early 1990s. Credit: Robert Szaniszlo/NurPhoto via Getty Images

But in 2023, a change was made to the layout, the fourth since 2008. The removal of a chicane lengthened a straight but also removed a hotspot for crashes. Since the alteration, the Singapore Grand Prix has run caution-free.

What about the actual race?

Last time, I cautioned McLaren fans not to worry about a possibly resurgent Red Bull. Monza and Baku are outliers, tracks that require low downforce and low drag. Well, Singapore rewards downforce, and the recent upgrades to the Red Bull have, in Max Verstappen’s hands at least, made it a competitor again.

The McLarens of Oscar Piastri (leading the drivers’ championship) and Lando Norris (just behind Piastri in second place) are still fast, but they no longer have an advantage of several tenths of a second over the rest of the field. They started the race in third and fifth places, respectively. Ahead of Piastri on the grid, Verstappen would start the race on soft tires; everyone else around him was on the longer-lasting mediums.

F1 in Singapore: “Trophy for the hero of the race” Read More »

pentagon-contract-figures-show-ula’s-vulcan-rocket-is-getting-more-expensive

Pentagon contract figures show ULA’s Vulcan rocket is getting more expensive

A SpaceX Falcon Heavy rocket with NASA’s Psyche spacecraft launches from NASA’s Kennedy Space Center in Florida on October 13, 2023. Credit: Chandan Khanna/AFP via Getty Images

The launch orders announced Friday comprise the second batch of NSSL Phase 3 missions the Space Force has awarded to SpaceX and ULA.

It’s important to remember that these prices aren’t what ULA or SpaceX would charge a commercial satellite customer. The US government pays a premium for access to space. The Space Force, the National Reconnaissance Office, and NASA don’t insure their launches like a commercial customer would do. Instead, government agencies have more insight into their launch contractors, including inspections, flight data reviews, risk assessments, and security checks. Government missions also typically get priority on ULA and SpaceX’s launch schedules. All of this adds up to more money.

A heavy burden

Four of the five launches awarded to SpaceX Friday will use the company’s larger Falcon Heavy rocket, according to Lt. Col. Kristina Stewart at Space Systems Command. One will fly on SpaceX’s workhorse Falcon 9. This is the first time a majority of the Space Force’s annual launch orders has required the lift capability of a Falcon Heavy, with three Falcon 9 booster cores combining to heave larger payloads into space.

All versions of ULA’s Vulcan rocket use a single core booster, with varying numbers of strap-on solid-fueled rocket motors to provide extra thrust off the launch pad.

Here’s a breakdown of the seven new missions assigned to SpaceX and ULA:

USSF-149: Classified payload on a SpaceX Falcon 9 from Florida

USSF-63: Classified payload on a SpaceX Falcon Heavy from Florida

USSF-155: Classified payload on a SpaceX Falcon Heavy from Florida

USSF-205: WGS-12 communications satellite on a SpaceX Falcon Heavy from Florida

NROL-86: Classified payload on a SpaceX Falcon Heavy from Florida

USSF-88: GPS IIIF-4 navigation satellite on a ULA Vulcan VC2S (two solid rocket boosters) from Florida

NROL-88: Classified payload on a ULA Vulcan VC4S (four solid rocket boosters) from Florida

Pentagon contract figures show ULA’s Vulcan rocket is getting more expensive Read More »

rocket-report:-alpha-explodes-on-test-stand;-europe-wants-a-mini-starship

Rocket Report: Alpha explodes on test stand; Europe wants a mini Starship


“We are trying to find a partner that is willing to invest.”

An Electron rocket launches a Synspective satellite in 2022. Credit: Rocket Lab

Welcome to Edition 8.13 of the Rocket Report! It’s difficult for me to believe, but we have now entered the fourth quarter of the year. Accordingly, there are three months left in 2025, with a lot of launch action still to come. The remainder of the year will be headlined by Blue Origin’s New Glenn rocket making its second flight (and landing attempt), and SpaceX’s Starship making its final test flight of the year. There is also the slim possibility that Rocket Lab’s Neutron vehicle will make its debut this year, but it will almost certainly slip into 2026.

As always, we welcome reader submissions, and if you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets as well as a quick look ahead at the next three launches on the calendar.

An Alpha rocket blows up on the pad. The booster stage for Firefly Aerospace’s next Alpha rocket was destroyed Monday in a fiery accident on the company’s vertical test stand in Central Texas, Ars reports. Firefly released a statement confirming the rocket “experienced an event that resulted in a loss of the stage.” The company confirmed all personnel were safe and said ground teams followed “proper safety protocols.” Imagery posted on social media platforms showed a fireball engulfing the test stand and a column of black smoke rising into the sky over Firefly’s facility roughly 40 miles north of Austin.

Supposed to be a return-to-flight mission … Engineers were testing the rocket before shipment to Vandenberg Space Force Base, California, to prepare for launch later this year with a small commercial satellite for Lockheed Martin. The booster destroyed Monday was slated to fly on the seventh launch of Firefly’s Alpha rocket, an expendable, two-stage launch vehicle capable of placing a payload of a little over 2,200 pounds, or a metric ton, into low-Earth orbit. This upcoming launch was supposed to be the Alpha rocket’s return to flight after an in-flight failure in April, when the upper stage’s engine shut down before the rocket could reach orbit and deploy its satellite payload.

Europe wants a mini Starship. The European Space Agency signed a contract Monday with Avio, the Italian company behind the small Vega rocket, to begin designing a reusable upper stage capable of flying into orbit, returning to Earth, and launching again. The deal is worth 40 million euros ($47 million), Ars reports. In a statement, Avio said it will “define the requirements, system design, and enabling technologies needed to develop a demonstrator capable of safely returning to Earth and being reused in future missions.”

Don’t expect progress too quickly … At the end of the two-year contract, Avio will deliver a preliminary design for the reusable upper stage and the ground infrastructure needed to make it a reality. The preliminary design review is a milestone in the early phases of an aerospace project, typically occurring many years before completion. For example, Europe’s flagship Ariane 6 rocket passed its preliminary design review in 2016, eight years before its first launch. Avio and ESA did not release any specifications on the size or performance of the launcher.


Rocket Lab scores 10 more Electron launches. Synspective, a Japanese company developing a constellation of radar imaging satellites, has signed a deal with Rocket Lab for an additional 10 Electron launches, Space News reports. The companies announced the agreement on Tuesday at the International Astronautical Congress, confirming that each launch would carry a single StriX radar imaging satellite.

A repeat customer … Synspective signed a separate contract in June 2024 for 10 Electron launches, scheduled for 2025 through 2027. That was the largest single contract for Electron to date. Rocket Lab notes that Synspective is its largest Electron customer, with six launches completed to date and a backlog of 21 launches through the end of the decade. Synspective aims to place 30 synthetic aperture radar imaging satellites in orbit by 2030. This contract ensures that Electron will continue flying for quite a while.

German investment could benefit small launchers. During his address at Germany’s third annual Space Congress, Defense Minister Boris Pistorius announced that Germany would invest 35 billion euros ($41 billion) in space-related defense projects by 2030, European Spaceflight reports. “The conflicts of the future will no longer be limited to the Earth’s surface or the deep sea,” he said. “They will also be fought openly in orbit. That’s why we are building structures within the Bundeswehr to enable us to effectively defend and deter [threats] in space in the medium and long term.”

Launch an investment area … The investment will cover five main priorities: hardening against data disruptions and attacks; improved space situational awareness; redundancy through several networked satellite constellations; secure, diverse, and on-demand launch capabilities; and a dedicated military satellite operations center. Although Germany’s heavy-lift needs will continue to be met by Ariane 6, a program to which the country contributes heavily, domestic small-launch providers such as Rocket Factory Augsburg, Isar Aerospace, and HyImpulse are likely to see a boost in support.

Blue Origin seeks to expand New Shepard program. Blue Origin is developing three new suborbital New Shepard launch systems and is mulling expanding flight services beyond West Texas, Aviation Week reports. The current two-ship fleet will be retired by the end of 2027, with the first of three new spacecraft expected to debut next year, Senior Vice President Phil Joyce said during the Global Spaceport Alliance forum.

Looking for an overseas partner … Joyce said the new vehicles feature upgraded systems throughout, particularly in the propulsion system. The new ships are designed for quicker turnaround, which will enable Blue Origin to offer weekly flights. The company’s West Texas spaceport can accommodate three New Shepard vehicles, though Blue Origin is interested in possibly offering the suborbital flight service from another location, including outside the US, Joyce said. “We are trying to find a partner that is willing to invest,” he added. (submitted by Chuckgineer)

Next Nuri launch set for November. The Korea AeroSpace Administration completed a review of preparations for the next launch of the Nuri rocket and announced that the vehicle was ready for a window that would open on November 28. The main payload will be a satellite to observe Earth’s aurora and magnetic field, along with a smaller secondary payload.

Coming back after a while … The liquid-fueled Nuri rocket is the first booster developed entirely within South Korea, and it has a lift capacity of 3.3 metric tons to low-Earth orbit. The rocket failed on its debut launch in October 2021, but flew successfully in 2022 and 2023. If the rocket launches in November, it will be Nuri’s first mission in two and a half years. (submitted by CP)

Galactic Energy scores big fundraising round. Beijing-based Galactic Energy has raised what appears to be China’s largest disclosed round for a launch startup as it nears orbital test flights of new rockets, Space News reports. The company announced Series D financing of 2.4 billion yuan ($336 million) in a statement on Sunday. The funding will be used for the Pallas series of reusable liquid propellant launchers and the Ceres-2 solid rocket, both of which appear close to test launches. The investment will also go toward related production, testing, and launch facilities.

Big funding, big ambitions … Founded in February 2018, Galactic Energy has established a strong record of reliability with its light-lift Ceres-1 solid rocket, and previously raised $154 million in C-round funding in late 2023 for its Pallas-1 plans. Pallas-1, a kerosene-liquid oxygen rocket, is designed to carry 7 metric tons of payload to a 200-km low-Earth orbit. New plans for Pallas-2 envision a capability of 20,000 to 58,000 kg, depending on whether it flies in a single-stick or tri-core configuration, with an aggressive target of a debut launch in 2026.

Blue Origin seeks to reuse next New Glenn booster. There’s a good bit riding on the second launch of Blue Origin’s New Glenn rocket, Ars reports. Most directly, the fate of a NASA science mission to study Mars’ upper atmosphere hinges on a successful launch. The second flight of Blue Origin’s heavy-lifter will send two NASA-funded satellites toward the red planet to study the processes that drove Mars’ evolution from a warmer, wetter world to the cold, dry planet of today. But there’s more on the line. If Blue Origin plans to launch its first robotic Moon lander early next year—as currently envisioned—the company needs to recover the New Glenn rocket’s first stage booster.

Managing prop … Crews will again dispatch Blue Origin’s landing platform into the Atlantic Ocean, just as they did for the first New Glenn flight in January. The debut launch of New Glenn successfully reached orbit, a difficult feat for the inaugural flight of any rocket. But the booster fell into the Atlantic Ocean after three of the rocket’s engines failed to reignite to slow down for landing. Engineers identified seven changes to resolve the problem, focusing on what Blue Origin calls “propellant management and engine bleed control improvements.” Company officials expressed confidence this week the booster will be recovered.

SpaceX nearing next Starship test flight. With the next Starship launch, scheduled for no earlier than October 13, SpaceX officials hope to show they can repeat the successes of the 10th test flight of the vehicle in late August, Ars reports. On its surface, the flight plan for SpaceX’s next Starship flight looks a lot like the last one. The rocket’s Super Heavy booster will again splash down in the Gulf of Mexico just offshore from SpaceX’s launch site in South Texas. And Starship, the rocket’s upper stage, will fly on a suborbital arc before reentering the atmosphere over the Indian Ocean for a water landing northwest of Australia.

Preparing for a future ship catch … There are, however, some changes to SpaceX’s flight plan for the next Starship. Most of these changes will occur during the ship’s reentry, when the vehicle’s heat shield is exposed to temperatures of up to 2,600° Fahrenheit (1,430° Celsius). These include new tests of ceramic thermal protection tiles to “intentionally stress-test vulnerable areas across the vehicle.” Another new test objective for the upcoming Starship flight will be a “dynamic banking maneuver” during the final phase of the trajectory “to mimic the path a ship will take on future flights returning to Starbase,” SpaceX said. This will help engineers test Starship’s subsonic guidance algorithms.

Senators seek to halt space shuttle move. A former NASA astronaut turned US senator has joined with other lawmakers to insist that the space shuttle that twice carried him to orbit remain on display in the Smithsonian, Ars reports. Sen. Mark Kelly (D-Ariz.) has joined fellow Democratic Senators Mark Warner and Tim Kaine, both of Virginia, and Dick Durbin of Illinois in an effort to halt the move of space shuttle Discovery to Houston, as enacted into law earlier this year. In a letter sent to the leadership of the Senate Committee on Appropriations, Kelly and his three colleagues cautioned that any effort to transfer the winged orbiter would “waste taxpayer dollars, risk permanent damage to the shuttle, and mean fewer visitors would be able to visit it.”

Seeking to block Cruz control … In the letter, the senators asked that Committee Chair Susan Collins (R-Maine) and Vice Chair Patty Murray (D-Wash.) block funding for Discovery‘s relocation in both the fiscal year 2026 Interior-Environment appropriations bill and FY26 Commerce, Justice, Science appropriations bill. The letter is the latest response to a campaign begun by Sens. John Cornyn and Ted Cruz, both Republicans from Texas, to remove Discovery from its 13-year home at the National Air and Space Museum’s Steven F. Udvar-Hazy Center in Chantilly, Virginia, and put it on display at Space Center Houston, the visitor center for NASA’s Johnson Space Center in Texas.

Next three launches

October 3: Falcon 9 | Starlink 11-39 | Vandenberg Space Force Base, Calif. | 13:00 UTC

October 6: Falcon 9 | Starlink 10-59 | Cape Canaveral Space Force Station, Fla. | 04:32 UTC

October 8: Falcon 9 | Starlink 11-17 | Vandenberg Space Force Base, Calif. | 01:00 UTC


Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.

Rocket Report: Alpha explodes on test stand; Europe wants a mini Starship Read More »

hbo-max-subscribers-lose-access-to-cnn-livestream-on-november-17

HBO Max subscribers lose access to CNN livestream on November 17

HBO Max subscribers will no longer be able to watch CNN from the streaming platform as of November 17, Warner Bros. Discovery (WBD) informed customers today.

After this date, HBO Max subscribers will still be able to watch some CNN content, including shows and documentaries, on demand.

The CNN Max livestream for HBO Max launched as an open beta in September 2023. Since then, it has featured live programming from CNN’s US arm and CNN International, as well as content made specifically for HBO Max.

WBD is pulling HBO Max’s CNN channel as it prepares to launch a standalone CNN streaming service, inevitably introducing more fragmentation to the burgeoning streaming industry. The streaming service is supposed to launch this fall and provide access to original CNN programming and journalism, including “a selection of live channels, catch-up features, and video-on-demand programming,” a May announcement said.

In a statement today, Alex MacCallum, EVP of digital products and services for CNN, said:

CNN has benefitted tremendously from its two years of offering a live 24/7 feed of news to HBO Max customers. We learned from HBO Max’s large base of subscribers what people want and enjoy the most from CNN, and with the launch of our own new streaming subscription offering coming later this fall, we look forward to building off that and growing our audience with this unique, new offering.

WBD will sell subscriptions to CNN’s new streaming service as part of an “All Access” subscription that will include the ability to read paywalled articles on CNN’s website.

HBO Max subscribers lose access to CNN livestream on November 17 Read More »

scientists-revive-old-bulgarian-recipe-to-make-yogurt-with-ants

Scientists revive old Bulgarian recipe to make yogurt with ants

Fermenting milk to make yogurt, cheeses, or kefir is an ancient practice, and different cultures have their own traditional methods, often preserved in oral histories. The forests of Bulgaria and Turkey have an abundance of red wood ants, for instance, so a time-honored Bulgarian yogurt-making practice involves dropping a few live ants (or crushed-up ant eggs) into the milk to jump-start fermentation. Scientists have now figured out why the ants are so effective in making edible yogurt, according to a paper published in the journal iScience. The authors even collaborated with chefs to create modern recipes using ant yogurt.

“Today’s yogurts are typically made with just two bacterial strains,” said co-author Leonie Jahn from the Technical University of Denmark. “If you look at traditional yogurt, you have much bigger biodiversity, varying based on location, households, and season. That brings more flavors, textures, and personality.”

If you want to study traditional culinary methods, it helps to go where those traditions emerged, since the locals likely still retain memories and oral histories of said culinary methods—in this case, Nova Mahala, Bulgaria, where co-author Sevgi Mutlu Sirakova’s family still lives. To recreate the region’s ant yogurt, the team followed instructions from Sirakova’s uncle. They used fresh raw cow milk, warmed until scalding, “such that it could ‘bite your pinkie finger,'” per the authors. Four live red wood ants were then collected from a local colony and added to the milk.

The authors secured the milk with cheesecloth and wrapped the glass container in fabric for insulation before burying it inside the ant colony, covering the container completely with the mound material. “The nest itself is known to produce heat and thus act as an incubator for yogurt fermentation,” they wrote. They retrieved the container 26 hours later to taste it and check the pH, stirring it to observe the coagulation. The milk had definitely begun to thicken and sour, producing the early stage of yogurt. Tasters described it as “slightly tangy, herbaceous,” with notes of “grass-fed fat.”

Scientists revive old Bulgarian recipe to make yogurt with ants Read More »