Science


The first stars may not have been as uniformly massive as we thought


Collapsing gas clouds in the early universe may have formed lower-mass stars as well.

Stars form in the universe from massive clouds of gas. Credit: European Southern Observatory, CC BY-SA

For decades, astronomers have wondered what the very first stars in the universe were like. These stars formed new chemical elements, which enriched the universe and allowed the next generations of stars to form the first planets.

The first stars were initially composed of pure hydrogen and helium, and they were massive—hundreds to thousands of times the mass of the Sun and millions of times more luminous. Their short lives ended in enormous explosions called supernovae, so they had neither the time nor the raw materials to form planets, and they should no longer exist for astronomers to observe.

At least that’s what we thought.

Two studies published in the first half of 2025 suggest that collapsing gas clouds in the early universe may have formed lower-mass stars as well. One study uses a new astrophysical computer simulation that models turbulence within the cloud, causing fragmentation into smaller, star-forming clumps. The other study—an independent laboratory experiment—demonstrates how molecular hydrogen, a molecule essential for star formation, may have formed earlier and in larger abundances. The process involves a catalyst that may surprise chemistry teachers.

As an astronomer who studies star and planet formation and their dependence on chemical processes, I am excited at the possibility that chemistry in the first 50 million to 100 million years after the Big Bang may have been more active than we expected.

These findings suggest that the second generation of stars—the oldest stars we can currently observe and possibly the hosts of the first planets—may have formed earlier than astronomers thought.

Primordial star formation

Video illustration of the star and planet formation process. Credit: Space Telescope Science Institute.

Stars form when massive clouds of hydrogen many light-years across collapse under their own gravity. The collapse continues until a luminous sphere surrounds a dense core that is hot enough to sustain nuclear fusion.

Nuclear fusion happens when two or more atoms gain enough energy to fuse together. This process creates a new element and releases an incredible amount of energy, which heats the stellar core. In the first stars, hydrogen atoms fused together to create helium.
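
To put a rough number on that energy release (a standard figure from nuclear physics, not one quoted in the article): fusing four hydrogen nuclei into a single helium nucleus converts about 0.7 percent of their mass directly into energy, following

E = Δm c²

which is why a star can shine for millions or even billions of years on its hydrogen supply alone.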

The new star shines because its surface is hot, but the energy fueling that luminosity percolates up from its core. The luminosity of a star is its total energy output in the form of light. The star’s brightness is the small fraction of that luminosity that we directly observe.
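
As a back-of-the-envelope illustration (not a figure from the article), the two quantities are tied together by the inverse-square law:

apparent brightness = L / (4πd²)

where L is the star’s luminosity and d is its distance from the observer, so even an enormously luminous first-generation star would look exceedingly faint from billions of light-years away.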

This process where stars form heavier elements by nuclear fusion is called stellar nucleosynthesis. It continues in stars after they form as their physical properties slowly change. The more massive stars can produce heavier elements such as carbon, oxygen, and nitrogen, all the way up to iron, in a sequence of fusion reactions that end in a supernova explosion.

Supernovae can create even heavier elements, completing the periodic table of elements. Lower-mass stars like the Sun, with their cooler cores, can sustain fusion only up to carbon. As they exhaust the hydrogen and helium in their cores, nuclear fusion stops, and the stars slowly evaporate.


The remnant of a high-mass star supernova explosion imaged by the Chandra X-ray Observatory, left, and the remnant of a low-mass star evaporating in a blue bubble, right. Credit: CC BY 4.0

High-mass stars have high pressure and temperature in their cores, so they burn bright and use up their gaseous fuel quickly. They last only a few million years, whereas low-mass stars—those less than two times the Sun’s mass—evolve much more slowly, with lifetimes of billions or even trillions of years.
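
A rough scaling argument, offered here as an illustration rather than a number from the new studies, shows why: a star’s fuel supply grows roughly in proportion to its mass M, while its luminosity climbs far more steeply, roughly as M^3.5 for Sun-like stars. Its lifetime therefore scales approximately as

t ∝ M / L ∝ M^(-2.5)

so a star 10 times the Sun’s mass burns out in roughly 1/300th of the Sun’s lifetime.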

If the earliest stars were all high-mass stars, then they would have exploded long ago. But if low-mass stars also formed in the early universe, they may still exist for us to observe.

Chemistry that cools clouds

The first star-forming gas clouds, called protostellar clouds, were warm—roughly room temperature. Warm gas has internal pressure that pushes outward against the inward force of gravity trying to collapse the cloud. A hot air balloon stays inflated by the same principle. If the flame heating the air at the base of the balloon stops, the air inside cools, and the balloon begins to collapse.

Stars form when clouds of dust collapse inward and condense around a small, bright, dense core. Credit: NASA, ESA, CSA, and STScI, J. DePasquale (STScI), CC BY-ND

Only the most massive protostellar clouds with the most gravity could overcome the thermal pressure and eventually collapse. In this scenario, the first stars were all massive.

The only way to form the lower-mass stars we see today is for the protostellar clouds to cool. Gas in space cools by radiation, which transforms thermal energy into light that carries the energy out of the cloud. Hydrogen and helium atoms are not efficient radiators below several thousand degrees, but molecular hydrogen, H₂, is great at cooling gas at low temperatures.

When energized, H₂ emits infrared light, which cools the gas and lowers the internal pressure. That process would make gravitational collapse more likely in lower-mass clouds.
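
The underlying threshold can be sketched with the Jeans mass, the minimum mass a gas cloud needs in order to collapse under its own gravity (a standard textbook relation, not a result from the new studies):

M_J ∝ T^(3/2) / √ρ

where T is the gas temperature and ρ is its density. Halving the temperature lowers that threshold by nearly a factor of three, so efficient molecular cooling lets much smaller clouds win the tug-of-war against their own pressure.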

For decades, astronomers have reasoned that a low abundance of H₂ early on resulted in hotter clouds whose internal pressure was too high for them to easily collapse into stars. They concluded that only clouds with enormous masses, and therefore stronger gravity, would collapse, producing only massive stars.

Helium hydride

In a July 2025 journal article, physicist Florian Grussie and collaborators at the Max Planck Institute for Nuclear Physics demonstrated that the first molecule to form in the universe, helium hydride, HeH⁺, could have been more abundant in the early universe than previously thought. They used a computer model and conducted a laboratory experiment to verify this result.

Helium hydride? In high school science you probably learned that helium is a noble gas, meaning it does not react with other atoms to form molecules or chemical compounds. As it turns out, it does—but only under the extremely sparse and dark conditions of the early universe, before the first stars formed.

HeH⁺ reacts with hydrogen deuteride—HD, which is one normal hydrogen atom bonded to a heavier deuterium atom—to form H₂. In the process, HeH⁺ also acts as a coolant, radiating energy away as light. So a higher abundance of both molecular coolants early on may have allowed smaller clouds to cool faster and collapse to form lower-mass stars.

Gas flow also affects stellar initial masses

In another study, published in July 2025, astrophysicist Ke-Jung Chen led a research group at the Academia Sinica Institute of Astronomy and Astrophysics using a detailed computer simulation that modeled how gas in the early universe may have flowed.

The team’s model demonstrated that turbulence, or irregular motion, in giant collapsing gas clouds can form lower-mass cloud fragments from which lower-mass stars condense.

The study concluded that turbulence may have allowed these early gas clouds to form stars ranging from roughly the mass of the Sun up to about 40 times more massive.


The galaxy NGC 1140 is small and contains large amounts of primordial gas with far fewer elements heavier than hydrogen and helium than are present in our Sun. This composition makes it similar to the intensely star-forming galaxies found in the early universe. These early universe galaxies were the building blocks for large galaxies such as the Milky Way. Credit: ESA/Hubble & NASA, CC BY-ND

The two new studies both predict that the first population of stars could have included low-mass stars. Now, it is up to us observational astronomers to find them.

This is no easy task. Low-mass stars have low luminosities, so they are extremely faint. Several observational studies have recently reported possible detections, but none are yet confirmed with high confidence. If they are out there, though, we will find them eventually.

Luke Keller is a professor of physics and astronomy at Ithaca College.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The Conversation is an independent source of news and views, sourced from the academic and research community. Our team of editors work with these experts to share their knowledge with the wider public. Our aim is to allow for better understanding of current affairs and complex issues, and hopefully improve the quality of public discourse on them.



Bluesky now platform of choice for science community


It’s not just you. Survey says: “Twitter sucks now and all the cool kids are moving to Bluesky”

Credit: Getty Images | Chris Delmas

Marine biologist and conservationist David Shiffman was an early power user and evangelist for science engagement on the social media platform formerly known as Twitter. Over the years, he trained more than 2,000 early career scientists on how to best use the platform for professional goals: networking with colleagues, sharing new scientific papers, and communicating with interested members of the public.

But when Elon Musk bought Twitter in 2022, renaming it X, changes to both the platform’s algorithm and moderation policy soured Shiffman on the social media site. He started looking for a viable alternative among the fledgling platforms that had begun to pop up: most notably Threads, Post, Mastodon, and Bluesky. He was among the first wave of scientists to join Bluesky and found that, even in its infancy, it had many of the features he had valued in “golden age” Twitter.

Shiffman also noticed that he wasn’t the only one in the scientific community having issues with Twitter. This impression was further bolstered by news stories in outlets like Nature, Science, and the Chronicle of Higher Education noting growing complaints about Twitter and increased migration over to Bluesky by science professionals. (Full disclosure: I joined Bluesky around the same time as Shiffman, for similar reasons: Twitter had ceased to be professionally useful, and many of the science types I’d been following were moving to Bluesky. I nuked my Twitter account in November 2024.)

A curious Shiffman decided to conduct a scientific survey, announcing the results in a new paper published in the journal Integrative and Comparative Biology. The findings confirm that, while Twitter was once the platform of choice for a majority of science communicators, those same people have since abandoned it in droves. And of the alternatives available, Bluesky seems to be their new platform of choice.

Shiffman, the author of Why Sharks Matter, described early Twitter recently on the blog Southern Fried Science as “the world’s most interesting cocktail party.”

“Then it stopped being useful,” Shiffman told Ars. “I was worried for a while that this incredibly powerful way of changing the world using expertise was gone. It’s not gone. It just moved. It’s a little different now, and it’s not as powerful as it was, but it’s not gone. It was for me personally, immensely reassuring that so many other people were having the same experience that I was. But it was also important to document that scientifically.”

Eager to gather solid data on the migration phenomenon to bolster his anecdotal observations, Shiffman turned to social scientist Julia Wester, one of the scientists who had joined Twitter at Shiffman’s encouragement years earlier, before she too became fed up and migrated to Bluesky. Despite being “much less online” than the indefatigable Shiffman, Wester was intrigued by the proposition. “I was interested not just in the anecdotal evidence, the conversations we were having, but also in identifying the real patterns,” she told Ars. “As a social scientist, when we hear anecdotal evidence about people’s experiences, I want to know what that looks like across the population.”

Shiffman and Wester targeted scientists, science communicators, and science educators who used (or had used) both Twitter and Bluesky. Questions explored user attitudes toward, and experiences with, each platform in a professional capacity: when they joined, respective follower and post counts, which professional tasks they used each platform for, the usefulness of each platform for those purposes relative to 2021, how they first heard about Bluesky, and so forth.

The authors acknowledge that they are looking at a very specific demographic among social media users in general and that there is an inevitable self-selection effect. However, “You want to use the sample and the method that’s appropriate to the phenomenon that you’re looking at,” said Wester. “For us, it wasn’t just the experience of people using these platforms, but the phenomenon of migration. Why are people deciding to stay or move? How are they deciding to use both of these platforms? For that, I think we did get a pretty decent sample for looking at the dynamic tensions, the push and pull between staying on one platform or opting for another.”

They ended up with a final sample size of 813 people. Over 90 percent of respondents said they had used Twitter for learning about new developments in their field; 85.5 percent for professional networking; and 77.3 percent for public outreach. Roughly three-quarters of respondents said that the platform had become significantly less useful for each of those professional uses since Musk took over. Nearly half still have Twitter accounts but use the platform much less frequently or not at all, while about 40 percent have deleted their accounts entirely in favor of Bluesky.

Making the switch

User complaints about Twitter included a noticeable increase in spam, porn, bots, and promoted posts from users who paid for a verification badge, many spreading extremist content. “I very quickly saw material that I did not want my posts to be posted next to or associated with,” one respondent commented. There were also complaints about the rise in misinformation and a significant decline in both the quantity and quality of engagement, with respondents describing their experiences as “unpleasant,” “negative,” or “hostile.”

The survey responses also revealed a clear push/pull dynamic when it came to the choice to abandon Twitter for Bluesky. That is, people felt they were being pushed away from Twitter at the same time they were being drawn toward a viable alternative. As one respondent put it, “Twitter started to suck and all the cool people were moving to Bluesky.”

Bluesky was user-friendly with no algorithm, a familiar format, and helpful tools like starter packs of who to follow in specific fields, which made the switch a bit easier for many newcomers daunted by the prospect of rebuilding their online audience. Bluesky users also appreciated the moderation on the platform and having the ability to block or mute people as a means of disengaging from more aggressive, unpleasant conversations. That said, “If Twitter was still great, then I don’t think there’s any combination of features that would’ve made this many people so excited about switching,” said Shiffman.

Per Shiffman and Wester, an “overwhelming majority” of respondents said that Bluesky has a “vibrant and healthy online science community,” while Twitter no longer does. And many Bluesky users reported getting more bang for their buck, so to speak, on the newer platform. They might have a lower follower count there, but those followers are far more engaged: Someone with 50,000 Twitter/X followers, for example, might get five likes on a given post, while on Bluesky the same person may have only 5,000 followers yet get 100 likes.
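
In percentage terms, and this is simple arithmetic on the example above rather than a statistic reported in the paper, that works out to an engagement rate of roughly 0.01 percent per post on Twitter/X versus about 2 percent on Bluesky, a 200-fold difference per follower.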

According to Shiffman, Twitter always used to be in the top three in terms of referral traffic for posts on Southern Fried Science. Then came the “Muskification,” and suddenly Twitter referrals weren’t even cracking the top 10. By contrast, in 2025 thus far, Bluesky has driven “a hundred times as many page views” to Southern Fried Science as Twitter. Ironically, “the blog post that’s gotten the most page views from Twitter is the one about this paper,” said Shiffman.

Ars social media manager Connor McInerney confirmed that Ars Technica has also seen a steady dip in Twitter referral traffic thus far in 2025. Furthermore, “I can say anecdotally that over the summer we’ve seen our Bluesky traffic start to surpass our Twitter traffic for the first time,” McInerney said, attributing the growth to a combination of factors. “We’ve been posting to the platform more often and our audience there has grown significantly. By my estimate our audience has grown by 63 percent since January. The platform in general has grown a lot too—they had 10 million users in September of last year, and this month the latest numbers indicate they’re at 38 million users. Conversely, our Twitter audience has remained fairly static across the same period of time.”

Bubble, schmubble

As for scientists looking to share scholarly papers online, Shiffman pulled the Altmetrics stats for his and Wester’s new paper. “It’s already one of the 10 most shared papers in the history of that journal on social media,” he said, with 14 shares on Twitter/X vs over a thousand shares on Bluesky (as of 4 pm ET on August 20). “If the goal is showing there’s a more active academic scholarly conversation on Bluesky—I mean, damn,” he said.


And while there has been a steady drumbeat of op-eds of late in certain legacy media outlets accusing Bluesky of being trapped in its own liberal bubble, Shiffman, for one, has few concerns about that. “I don’t care about this, because I don’t use social media to argue with strangers about politics,” he wrote in his accompanying blog post. “I use social media to talk about fish. When I talk about fish on Bluesky, people ask me questions about fish. When I talk about fish on Twitter, people threaten to murder my family because we’re Jewish.” He likened the current incarnation of Twitter to 4Chan or TruthSocial in terms of the percentage of “conspiracy-prone extremists” in the audience. “Even if you want to stay, the algorithm is working against you,” he wrote.

“There have been a lot of opinion pieces about why Bluesky is not useful because the people there tend to be relatively left-leaning,” Shiffman told Ars. “I haven’t seen any of those same people say that Twitter is bad because it’s relatively right-leaning. Twitter is not a representative sample of the public either.” And given his focus on ocean conservation and science-based, data-driven environmental advocacy, he is likely to find a more engaged and persuadable audience at Bluesky.

The survey results show that at this point, Bluesky seems to have hit a critical mass for the online scientific community. That said, Shiffman, for one, laments that the powerful Black Science Twitter contingent, for example, has thus far not switched to Bluesky in significant numbers. He would like to conduct a follow-up study to look into how many still use Twitter vs those who may have left social media altogether, as well as Bluesky’s demographic diversity—paving the way for possible solutions should that data reveal an unwelcoming environment for non-white scientists.

There are certainly limitations to the present survey. “Because this is such a dynamic system and it’s changing every day, I think if we did this study now versus when we did it six months ago, we’d get slightly different answers and dynamics,” said Wester. “It’s still relevant because you can look at the factors that make people decide to stay or not on Bluesky, to switch to something else, to leave social media altogether. That can tell us something about what makes a healthy, vibrant conversation online. We’re capturing one of the responses: ‘I’ll see you on Bluesky.’ But that’s not the only response. Public science communication is as important now as it’s ever been, so looking at how scientists have pivoted is really important.”

We recently reported on research indicating that social media as a system might well be doomed, since its very structure gives rise to the toxic dynamics that plague so much of social media: filter bubbles, algorithms that amplify the most extreme views to boost engagement, and a small number of influencers hogging the lion’s share of attention. That paper concluded that any intervention strategies were likely to fail. Both Shiffman and Wester, while acknowledging the reality of those dynamics, are less pessimistic about social media’s future.

“I think the problem is not with how social media works, it’s with how any group of people work,” said Shiffman. “Humans evolved in tiny social groupings where we helped each other and looked out for each other’s interests. Now I have to have a fight with someone 10,000 miles away who has no common interest with me about whether or not vaccines are bad. We were not built for that. Social media definitely makes it a lot easier for people who are anti-social by nature and want to stir conflict to find those conflicts. Something that took me way too long to learn is that you don’t have to participate in every fight you’re invited to. There are people who are looking for a fight and you can simply say, ‘No, thank you. Not today, Satan.'”

“The contrast that people are seeing between Bluesky and present-day Twitter highlights that these are social spaces, which means that you’re going to get all of the good and bad of humanity entering into that space,” said Wester. “But we have had new social spaces evolve over our whole history. Sometimes when there’s something really new, we have to figure out the rules for that space. We’re still figuring out the rules for these social media spaces. The contrast in moderation policies and the use (or not) of algorithms between those two platforms that are otherwise very similar in structure really highlights that you can shape those social spaces by creating rules and tools for how people interact with each other.”

DOI: Integrative and Comparative Biology, 2025. 10.1093/icb/icaf127  (About DOIs).


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



Lawmaker: Trump’s Golden Dome will end the madness, and that’s not a good thing

“The underlying issue here is whether US missile defense should remain focused on the threat from rogue states and… accidental launches, and explicitly refrain from countering missile threats from China or Russia,” DesJarlais said. He called the policy of Mutually Assured Destruction “outdated.”

President Donald Trump speaks alongside Secretary of Defense Pete Hegseth in the Oval Office at the White House on May 20, 2025, in Washington, DC. President Trump announced his plans for the Golden Dome, a national ballistic and cruise missile defense system. Credit: Chip Somodevilla/Getty Images

Moulton’s amendment on nuclear deterrence failed to pass the committee in a voice vote, as did another Moulton proposal that would have tapped the brakes on developing space-based interceptors.

But one of Moulton’s amendments did make it through the committee. This amendment, if reconciled with the Senate, would prohibit the Pentagon from developing a privatized or subscription-based missile defense intercept capability. The amendment stipulates that any such system be owned and operated by the US military.

Ultimately, the House Armed Services Committee voted 55–2 to send the NDAA to a vote on the House floor. Lawmakers must then hash out the differences between the House version of the NDAA and the bill written in the Senate before sending the final text to the White House for President Trump to sign into law.

More questions than answers

The White House says the missile shield will cost $175 billion over the next three years. But that’s just to start. A network of space-based missile sensors and interceptors, as prescribed in Trump’s executive order, will eventually number thousands of satellites in low-Earth orbit. The Congressional Budget Office reported in May that the Golden Dome program may ultimately cost up to $542 billion over 20 years.

The problem with all of the Golden Dome cost estimates is that the Pentagon has not settled on an architecture. We know the system will consist of a global network of satellites with sensors to detect and track missile launches, plus numerous interceptors in orbit to take out targets in space and during their “boost phase” when they’re moving relatively slowly through the atmosphere.

The Pentagon will order more sea- and ground-based interceptors to destroy missiles, drones, and aircraft as they near their targets within the United States. All of these weapons must be interconnected with a sophisticated command and control network that doesn’t yet exist.

Will Golden Dome’s space-based interceptors use kinetic kill vehicles to physically destroy missiles targeting the United States? Or will the interceptors rely on directed energy weapons like lasers or microwave signals to disable their targets? How many interceptors are actually needed?

These are all questions without answers. Despite the lack of detail, congressional Republicans approved $25 billion for the Pentagon to get started on the Golden Dome program as part of the Trump-backed One Big Beautiful Bill Act. The bill passed Congress with a party-line vote last month.

Israel’s Iron Dome aerial defense system intercepts a rocket launched from the Gaza Strip on May 11, 2021. Credit: Jack Guez/AFP via Getty Images

Moulton earned a bachelor’s degree in physics and master’s degrees in business and public administration from Harvard University. He served as a Marine Corps platoon leader in Iraq and was part of the first company of Marines to reach Baghdad during the US invasion of 2003. Moulton ran for the Democratic presidential nomination in 2020 but withdrew from the race before the first primary contest.

The text of our interview with Moulton is published below. It is lightly edited for length and clarity.

Ars: One of your amendments that passed committee would prevent the DoD from using a subscription or pay-for-service model for the Golden Dome. What prompted you to write that amendment?

Moulton: There were some rumors we heard that this is a model that the administration was pursuing, and there was reporting in mid-April suggesting that SpaceX was partnering with Anduril and Palantir to offer this kind of subscription service where, basically, the government would pay to access the technology rather than own the system. This isn’t an attack on any of these companies or anything. It’s a reassertion of the fundamental belief that these are responsibilities of our government. The decision to engage an intercontinental ballistic missile is a decision that the government must make, not some contractors working at one of these companies.

Ars: Basically, the argument you’re making is that war-fighting should be done by the government and the armed forces, not by contractors or private companies, right?

Moulton: That’s right, and it’s a fundamental belief that I’ve had for a long time. I was completely against contractors in Iraq when I was serving there as a younger Marine, but I can’t think of a place where this is more important than when you’re talking about nuclear weapons.

Ars: One of the amendments that you proposed, but didn’t pass, was intended to reaffirm the nation’s strategy of nuclear deterrence. What was the purpose of this amendment?

Moulton: Let’s just start by saying this is fundamentally why we have to have a theory that forms a foundation for spending hundreds of billions of taxpayer dollars. Golden Dome has no clear design, no real cost estimate, and no one has explained how this protects or enhances strategic stability. And there’s a lot of evidence that it would make strategic stability worse because our adversaries would no longer have confidence in Mutual Assured Destruction, and that makes them potentially much more likely to initiate a strike or overreact quickly to some sort of confrontation that has the potential to go nuclear.

In the case of the Russians, it means they could activate their nuclear weapon in space and just take out our Golden Dome interceptors if they think we might get into a nuclear exchange. I mean, all these things are horrific consequences.

Like I said in our hearing, there are two explanations for Golden Dome. The first is that every nuclear theorist for the last 75 years was wrong, and thank God, Donald Trump came around and set us right because in his first administration and every Democratic and Republican administration, we’ve all been wrong—and really the future of nuclear deterrence is nuclear defeat through defense and not Mutually Assured Destruction.

The other explanation, of course, is that Donald Trump decided he wants the golden version of something his friend has. You can tell me which one’s more likely, but literally no one has been able to explain the theory of the case. It’s dangerous, it’s wasteful… It might be incredibly dangerous. I’m happy to be convinced that Golden Dome is the right solution. I’m happy to have people explain why this makes sense and it’s a worthwhile investment, but literally nobody has been able to do that. If the Russians attack us… we know that this system is not going to be 100 percent effective. To me, that doesn’t make a lot of sense. I don’t want to gamble on… which major city or two we lose in a scenario like that. I want to prevent a nuclear war from happening.

Several Chinese DF-5B intercontinental ballistic missiles, each capable of delivering up to 10 independently targetable nuclear warheads, are seen during a parade in Beijing on September 3, 2015. Credit: Xinhua/Pan Xu via Getty Images

Ars: What would be the way that an administration should propose something like the Golden Dome? Not through an executive order? What process would you like to see?

Moulton: As a result of a strategic review and backed up by a lot of serious theory and analysis. The administration proposes a new solution and has hearings about it in front of Congress, where they are unafraid of answering tough questions. This administration is a bunch of cowards who refuse to answer tough questions in Congress because they know they can’t back up their president’s proposals.

Ars: I’m actually a little surprised we haven’t seen any sort of architecture yet. It’s been six months, and the administration has already missed a few of Trump’s deadlines for selecting an architecture.

Moulton: It’s hard to develop an architecture for something that doesn’t make sense.

Ars: I’ve heard from several retired military officials who think something like the Golden Dome is a good idea, but they are disappointed in the way the Trump administration has approached it. They say the White House hasn’t stated the case for it, and that risks politicizing something they view as important for national security.

Moulton: One idea I’ve had is that the advent of directed energy weapons (such as lasers and microwave weapons) could flip the cost curve and actually make defense cheaper than offense, whereas in the past, it’s always been cheaper to develop more offensive capabilities rather than the defensive means to shoot at them.

And this is why the Anti-Ballistic Missile Treaty in the early 1970s was so effective, because there was this massive arms race where we were constantly just creating a new offensive weapon to get around whatever defenses our adversary proposed. The reason why everyone would just quickly produce a new offensive weapon before that treaty was put into place is because it was easy to do.

My point is that I’ve even thrown them this bone, and I’m saying, “Here, maybe that’s your reason, right?” And they just look at me dumbfounded because obviously none of them are thinking about this. They’re just trying to be lackeys for the president, and they don’t recognize how dangerous that is.

Ars: I’ve heard a chorus of retired and even current active-duty military leaders say the same thing about directed energy weapons. You essentially can use one platform in space to take numerous laser shots at a missile instead of expending multiple interceptors for one kill.

Moulton: Yes, that’s basically the theory of the case. Now, my hunch is that if you actually did the serious analysis, you would determine that it still decreases strategic stability. So in terms of the overall safety and security of the United States, whether it’s directed energy weapons or kinetic interceptors, it’s still a very bad plan.

But I’m even throwing that out there to try to help them out here. “Maybe this is how you want to make your case.” And they just look at me like deer in the headlights because, obviously, they’re not thinking about the national security of the United States.

Ars: I also wanted to ask about the Space Force’s push to develop weapons to use against other satellites in orbit. They call these counter-space capabilities, which could take many different forms: directed energy, jamming, robotic arms, or anti-satellite missiles. The Space Force, for the first time, is talking more openly about these issues. Are these kinds of weapons necessary, in your view, or are they too destabilizing?

Moulton: I certainly wish we could go back to a time when the Russians and Chinese were not developing space weapons—or were not weaponizing space, I should say, because that was the international agreement. But the reality of the world we live in today is that our adversaries are violating that agreement. We have to be prepared to defend the United States.

Ars: Are there any other space policy issues on your radar or things you have concerns about?

Moulton: There’s a lot. There’s so much going on with space, and that’s the reason I chose this subcommittee, even though people would expect me to serve on the subcommittee dealing with the Marine Corps, because I just think space is incredibly important. We’re dealing with everything from promotion policy in the Space Force to acquisition reform to rules of engagement, and anything in between. There’s an awful lot going on there, but I do think that one of the most important things to talk about right now is how dangerous the Golden Dome could be.



Scientists unlock secret to thick, stable beer foams

For many beer lovers, a nice thick head of foam is one of life’s pure pleasures, and the longer that foam lasts, the better the beer-drinking experience. A team of Swiss researchers spent seven years studying why some beer foams last longer than others and found that the degree of fermentation—i.e., whether a given beer has been singly, doubly, or triply fermented—is crucial, according to a new paper published in the journal Physics of Fluids.

As previously reported, foams are ubiquitous in everyday life, found in foods (whipped cream), beverages (beer, cappuccino), shaving cream and hair-styling mousse, packing peanuts, building insulation, flame-retardant materials, and so forth. All foams are the result of air being beaten into a liquid that contains some kind of surfactant (surface-active agent), usually fats or proteins in edible foams, or chemical additives in non-edible products. That surfactant strengthens the liquid film walls of the bubbles to keep them from collapsing.

Individual bubbles typically form a sphere because that’s the shape with the minimum surface area for any volume and hence is the most energy-efficient. One reason for the minimizing principle when it comes to a bubble’s shape is that many bubbles can then tightly pack together to form a foam. But bubbles “coarsen” over time, the result of gravity pulling down on the liquid and thinning out the walls. Eventually, they start to look more like soccer balls (polyhedrons). In a coarsening foam, smaller bubbles are gradually absorbed by larger ones. There is less and less liquid to separate the individual bubbles, so they press together to fill the space.
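
One standard way to see why the small bubbles lose out (a textbook relation, not a claim from the new paper) is the Laplace pressure across a thin-film bubble:

ΔP = 4γ / r

where γ is the surface tension of the liquid and r is the bubble’s radius. Smaller bubbles have higher internal gas pressure, so gas slowly diffuses through the films from small bubbles into their larger, lower-pressure neighbors, driving the coarsening.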

This “jamming” is why foams are typically far more rigid than their gas (95 percent) and liquid (5 percent) components. The more tightly the bubbles jam together, the less they can move around and the greater the pressure inside them becomes, giving them properties of a solid.

Various factors can affect foam stability. For instance, in 2019, Japanese researchers investigated a phenomenon known as “collective bubble collapse,” or CBC, in which breaking one bubble at the edge of a foam results in a cascading effect as the breakage spreads to other bubbles in the foam. They identified two distinct mechanisms for the resulting CBCs: a so-called “propagating mode,” in which a broken bubble is absorbed into the liquid film, and a “penetrating mode,” in which the breakage of a bubble causes droplets to shoot off and hit other bubbles, causing them to break in turn.



Google’s AI model just nailed the forecast for the strongest Atlantic storm this year

In early June, shortly after the beginning of the Atlantic hurricane season, Google unveiled a new model designed specifically to forecast the tracks and intensity of tropical cyclones.

Part of the Google DeepMind suite of AI-based weather research models, the “Weather Lab” model for cyclones was a bit of an unknown for meteorologists at its launch. In a blog post at the time, Google said its new model, trained on a vast dataset that reconstructed past weather and a specialized database containing key information about hurricane tracks, intensity, and size, had performed well during pre-launch testing.

“Internal testing shows that our model’s predictions for cyclone track and intensity are as accurate as, and often more accurate than, current physics-based methods,” the company said.

Google said it would partner with the National Hurricane Center, an arm of the National Oceanic and Atmospheric Administration that has provided credible forecasts for decades, to assess the performance of its Weather Lab model in the Atlantic and East Pacific basins.

All eyes on Erin

It had been a relatively quiet Atlantic hurricane season until a few weeks ago, with overall activity running below normal levels. So there were no high-profile tests of the new model. But about 10 days ago, Hurricane Erin rapidly intensified in the open Atlantic Ocean, becoming a Category 5 hurricane as it tracked westward.

From a forecast standpoint, it was pretty clear that Erin was not going to directly strike the United States, but meteorologists sweat the details. And because Erin was such a large storm, we had concerns about how close Erin would get to the East Coast of the United States (close enough, it turns out, to cause some serious beach erosion) and its impacts on the small island of Bermuda in the Atlantic.



Trump admin issues stop-work order for offshore wind project

In a statement to Politico’s E&E News days after the order was lifted in May, the White House claimed that Hochul “caved” and struck an agreement to allow “two natural gas pipelines to advance” through New York.

Hochul denied that any such deal was made.

Trump has made no effort to conceal his disdain for wind power and other renewable energies, and his administration has actively sought to stymie growth in the industry while providing what critics have described as “giveaways” to fossil fuels.

In a Truth Social post on Wednesday, Trump called wind and solar energy the “SCAM OF THE CENTURY,” criticizing states that have built and rely on them for power.

“We will not approve wind or farmer destroying Solar,” Trump wrote. “The days of stupidity are over in the USA!!!”

On Trump’s first day in office, the president issued a memorandum halting approvals, permits, leases, and loans for both offshore and onshore wind projects.

The GOP also targeted wind energy in the One Big Beautiful Bill Act, accelerating the phaseout of tax credits for wind and solar projects while mandating lease sales for fossil fuels and making millions of acres of federal land available for mining.

The administration’s subsequent consideration of rules to further restrict access to tax credits for wind and solar projects alarmed even some Republicans, prompting Iowa Sen. Chuck Grassley and Utah Sen. John Curtis to place holds on Treasury nominees as they awaited the department’s formal guidance.

Those moves have rattled the wind industry and created uncertainty about the viability of ongoing and future projects.

“The unfortunate message to investors is clear: the US is no longer a reliable place for long-term energy investments,” said the American Clean Power Association, a trade association, in a statement on Friday.

To Kathleen Meil, local clean energy deployment director at the League of Conservation Voters, that represents a loss not only for the environment but also for the US economy.

“It’s really easy to think about the visible—the 4,200 jobs across all phases of development that you see… They’ve hit more than 2 million union work hours on Revolution Wind,” Meil said.

“But what’s also really transformational is that it’s already triggered $1.3 billion in investment through the supply chain. So it’s not just coastal communities that are benefiting from these jobs,” she said.

“This hurts so many people. And why? There’s just no justification.”

This article originally appeared on Inside Climate News, a nonprofit, non-partisan news organization that covers climate, energy, and the environment. Sign up for their newsletter here.



How the cavefish lost its eyes—again and again


Mexican tetras in pitch-black caverns had no use for the energetically costly organs.


Photographs of Astyanax mexicanus, surface form with eyes (top) and cave form without eyes (bottom). Credit: Daniel Castranova, NICHD/NIH

Time and again, whenever a population was swept into a cave and survived long enough for natural selection to have its way, the eyes disappeared. “But it’s not that everything has been lost in cavefish,” says geneticist Jaya Krishnan of the Oklahoma Medical Research Foundation. “Many enhancements have also happened.”

Though the demise of their eyes continues to fascinate biologists, in recent years, attention has shifted to other intriguing aspects of cavefish biology. It has become increasingly clear that they haven’t just lost sight but also gained many adaptations that help them to thrive in their cave environment, including some that may hold clues to treatments for obesity and diabetes in people.

Casting off expensive eyes

It has long been debated why the eyes were lost. Some biologists used to argue that they just withered away over generations because cave-dwelling animals with faulty eyes experienced no disadvantage. But another explanation is now considered more likely, says evolutionary physiologist Nicolas Rohner of the University of Münster in Germany: “Eyes are very expensive in terms of resources and energy. Most people now agree that there must be some advantage to losing them if you don’t need them.”

Scientists have observed that mutations in different genes involved in eye formation have led to eye loss. In other words, says Krishnan, “different cavefish populations have lost their eyes in different ways.”

Meanwhile, the fishes’ other senses tend to have been enhanced. Studies have found that cave-dwelling fish can detect lower levels of amino acids than surface fish can. They also have more tastebuds and a higher density of sensitive cells along their bodies that let them sense water pressure and flow.

Regions of the brain that process other senses are also expanded, says developmental biologist Misty Riddle of the University of Nevada, Reno, who coauthored a 2023 article on Mexican tetra research in the Annual Review of Cell and Developmental Biology. “I think what happened is that you have to, sort of, kill the eye program in order to expand the other areas.”

Killing the processes that support the formation of the eye is quite literally what happens. Just like non-cave-dwelling members of the species, all cavefish embryos start making eyes. But after a few hours, cells in the developing eye start dying, until the entire structure has disappeared. Riddle thinks this apparent inefficiency may be unavoidable. “The early development of the brain and the eye are completely intertwined—they happen together,” she says. That means the least disruptive way for eyelessness to evolve may be to start making an eye and then get rid of it.

In what Krishnan and Rohner have called “one of the most striking experiments performed in the field of vertebrate evolution,” a study published in 2000 showed that the fate of the cavefish eye is heavily influenced by its lens. Scientists showed this by transplanting the lens of a surface fish embryo to a cavefish embryo, and vice versa. When they did this, the eye of the cavefish grew a retina, rod cells, and other important parts, while the eye of the surface fish stayed small and underdeveloped.

Starving and bingeing

It’s easy to see why cavefish would be at a disadvantage if they were to maintain expensive tissues they aren’t using. Since relatively little lives or grows in their caves, the fish are likely surviving on a meager diet of mostly bat feces and organic waste that washes in during the rainy season. Researchers keeping cavefish in labs have discovered that, genetically, the creatures are exquisitely adapted to absorbing and storing nutrients. “They’re constantly hungry, eating as much as they can,” Krishnan says.

Intriguingly, the fish have at least two mutations that are associated with diabetes and obesity in humans. In the cavefish, though, they may be the basis of some traits that are very helpful to a fish that occasionally has a lot of food but often has none. When scientists compare cavefish and surface fish kept in the lab under the same conditions, cavefish fed regular amounts of standard fish food “get fat. They get high blood sugar,” Rohner says. “But remarkably, they do not develop obvious signs of disease.”

Fats can be toxic for tissues, Rohner explains, so they are stored in fat cells. “But when these cells get too big, they can burst, which is why we often see chronic inflammation in humans and other animals that have stored a lot of fat in their tissues.” Yet a 2020 study by Rohner, Krishnan, and their colleagues revealed that even very well-fed cavefish had fewer signs of inflammation in their fat tissues than surface fish do.

Even in their sparse cave conditions, wild cavefish can sometimes get very fat, says Riddle. This is presumably because, whenever food ends up in the cave, the fish eat as much of it as possible, since there may be nothing else for a long time to come. Intriguingly, Riddle says, their fat is usually bright yellow because of high levels of carotenoids, the pigments in carrots that your grandmother used to tell you were good for your… eyes.

“The first thing that came to our mind, of course, was that they were accumulating these because they don’t have eyes,” says Riddle. In this species, such ideas can be tested: Scientists can cross surface fish (with eyes) and cavefish (without eyes) and look at what their offspring are like. When that’s done, Riddle says, researchers see no link between eye presence or size and the accumulation of carotenoids. Some eyeless cavefish had fat that was practically white, indicating lower carotenoid levels.

Instead, Riddle thinks these carotenoids may be another adaptation to suppress inflammation, which might be important in the wild, as cavefish are likely overeating whenever food arrives.

Studies by Krishnan, Rohner, and colleagues published in 2020 and 2022 have found other adaptations that seem to help tamp down inflammation. Cavefish cells produce lower levels of certain molecules called cytokines that promote inflammation, as well as lower levels of reactive oxygen species — tissue-damaging byproducts of the body’s metabolism that are often elevated in people with obesity or diabetes.

Krishnan is investigating this further, hoping to understand how the well-fed cavefish remain healthy. Rohner, meanwhile, is increasingly interested in how cavefish survive not just overeating, but long periods of starvation, too.

No waste

On a more fundamental level, researchers still hope to figure out why the Mexican tetra evolved into cave forms while any number of other Mexican river fish that also regularly end up in caves did not. (Globally, there are more than 200 cave-adapted fish species, but species that also still have populations on the surface are quite rare.) “Presumably, there is something about the tetras’ genetic makeup that makes it easier for them to adapt,” says Riddle.

Though cavefish are now well-established lab animals used in research and are easy to purchase for that purpose, preserving them in the wild will be important to safeguard the lessons they still hold for us. “There are hundreds of millions of the surface fish,” says Rohner, but cavefish populations are smaller and more vulnerable to pressures like pollution and people drawing water from caves during droughts.

One of Riddle’s students, David Perez Guerra, is now involved in a committee to support cavefish conservation. And researchers themselves are increasingly careful, too. “The tissues of the fish collected during our lab’s last field trip benefited nine different labs,” Riddle says. “We wasted nothing.”

This article originally appeared in Knowable Magazine, a nonprofit publication dedicated to making scientific knowledge accessible to all. Sign up for Knowable Magazine’s newsletter.


Knowable Magazine explores the real-world significance of scholarly work through a journalistic lens.



SpaceX’s latest Dragon mission will breathe more fire at the space station

“Our capsule’s engines are not pointed in the right direction for optimum boost,” said Sarah Walker, SpaceX’s director of Dragon mission management. “So, this trunk module has engines pointed in the right direction to maximize efficiency of propellant usage.”

When NASA says it’s the right time, SpaceX controllers will command the Draco thrusters to ignite and gently accelerate the massive 450-ton complex. All told, the reboost kit can add about 20 mph, or 9 meters per second, to the space station’s already-dizzying speed, according to Walker.
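
For a rough sense of scale (this is back-of-the-envelope arithmetic, not a figure from SpaceX or NASA), changing the speed of the roughly 450-ton complex (taken here as metric tons) by 9 meters per second requires an impulse of about

Δp = m × Δv ≈ 450,000 kg × 9 m/s ≈ 4 × 10⁶ newton-seconds

spread over the series of gentle burns planned for this fall rather than delivered all at once.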

Spetch said that’s roughly equivalent to the total reboost impulse provided by one-and-a-half Russian Progress cargo vehicles. That’s about one-third to one-fourth of the total orbit maintenance the ISS needs in a year.

“The boost kit will help sustain the orbiting lab’s altitude, starting in September, with a series of burns planned periodically throughout the fall of 2025,” Spetch said.

After a few months docked at the ISS, the Dragon cargo capsule will depart and head for a parachute-assisted splashdown in the Pacific Ocean off the coast of California. SpaceX will recover the pressurized capsule to fly again, while the trunk containing the reboost kit will jettison and burn up in the atmosphere.

SpaceX’s Dragon spacecraft approaches the International Space Station for docking at 7:05 am EDT (11:05 UTC) on Monday. Credit: NASA TV/Ars Technica

While this mission is SpaceX’s 33rd cargo flight to the ISS under the auspices of NASA’s multibillion-dollar Commercial Resupply Services contract, it’s also SpaceX’s 50th overall Dragon mission to the outpost. This tally includes 17 flights of the human-rated Crew Dragon.

“With CRS-33, we’ll mark our 50th voyage to ISS,” Walker said. “Just incredible. Together, these missions have (carried) well over 300,000 pounds of cargo and supplies to the orbiting lab and well over 1,000 science and research projects that are not only helping us to understand how to live and work effectively in space… but also directly contributing to critical research that serves our lives here on Earth.”

Future Dragon trunks will be able to accommodate a reboost kit or unpressurized science payloads, depending on NASA’s needs at the space station.

The design of the Dragon reboost kit is a smaller-scale version of what SpaceX will build for a much larger Dragon trunk under an $843 million contract signed with NASA last year for the US Deorbit Vehicle. This souped-up Dragon will dock with the ISS and steer it back into the atmosphere after the lab’s decommissioning in the early 2030s. The deorbit vehicle will have 46 Draco thrusters—16 to control the craft’s orientation and 30 in the trunk to provide the impulse needed to drop the station out of orbit.



Time is running out for SpaceX to make a splash with second-gen Starship


SpaceX is gearing up for another Starship launch after three straight disappointing test flights.

SpaceX’s 10th Starship rocket awaits liftoff. Credit: Stephen Clark/Ars Technica

STARBASE, Texas—A beehive of aerospace technicians, construction workers, and spaceflight fans descended on South Texas this weekend in advance of the next test flight of SpaceX’s gigantic Starship rocket, the largest vehicle of its kind ever built.

Towering 404 feet (123.1 meters) tall, the rocket was supposed to lift off during a one-hour launch window beginning at 6:30 pm CDT (7:30 pm EDT; 23:30 UTC) Sunday. But SpaceX called off the launch attempt about an hour before liftoff to investigate a ground system issue at Starbase, located a few miles north of the US-Mexico border.

SpaceX didn’t immediately confirm when it might try again to launch Starship, but it could happen as soon as Monday evening at the same time.

It will take about 66 minutes for the rocket to travel from the launch pad in Texas to a splashdown zone in the Indian Ocean northwest of Australia. You can watch the test flight live on SpaceX’s official website. We’ve also embedded a livestream from Spaceflight Now and LabPadre below.

This will be the 10th full-scale test flight of Starship and its Super Heavy booster stage. It’s the fourth flight of an upgraded version of Starship conceived as a stepping stone to a more reliable, heavier-duty version of the rocket designed to carry up to 150 metric tons, or some 330,000 pounds, of cargo to pretty much anywhere in the inner part of our Solar System.

But this iteration of Starship, known as Block 2 or Version 2, has been anything but reliable. After reeling off a series of increasingly successful flights last year with the first-generation Starship and Super Heavy booster, SpaceX has encountered repeated setbacks since debuting Starship Version 2 in January.

Now, there are just two Starship Version 2s left to fly, including the vehicle poised for launch this week. Then, SpaceX will move on to Version 3, the design intended to go all the way to low-Earth orbit, where it can be refueled for longer expeditions into deep space.

A closer look at the top of SpaceX’s Starship rocket, tail number Ship 37, showing some of the different configurations of heat shield tiles SpaceX wants to test on this flight. Credit: Stephen Clark/Ars Technica

Starship’s promised cargo capacity is unparalleled in the history of rocketry. The privately developed rocket’s enormous size, coupled with SpaceX’s plan to make it fully reusable, could enable cargo and human missions to the Moon and Mars. SpaceX’s most conspicuous contract for Starship is with NASA, which plans to use a version of the ship as a human-rated Moon lander for the agency’s Artemis program. With this contract, Starship is central to the US government’s plans to try to beat China back to the Moon.

Closer to home, SpaceX intends to use Starship to haul massive loads of more powerful Starlink Internet satellites into low-Earth orbit. The US military is interested in using Starship for a range of national security missions, some of which could scarcely be imagined just a few years ago. SpaceX wants its factory to churn out a Starship rocket every day, approximately the same rate Boeing builds its workhorse 737 passenger jets.

Starship, of course, is immeasurably more complex than an airliner, and it sees temperature extremes, aerodynamic loads, and vibrations that would destroy a commercial airplane.

For any of this to become reality, SpaceX needs to begin ticking off a lengthy to-do list of technical milestones. The interim objectives include things like catching and reusing Starships and in-orbit ship-to-ship refueling, with a final goal of long-duration spaceflight to reach the Moon and stay there for weeks, months, or years. For a time late last year, it appeared as if SpaceX might be on track to reach at least the first two of these milestones by now.

The 404-foot-tall (123-meter) Starship rocket and Super Heavy booster stand on SpaceX’s launch pad. In the foreground, there are empty loading docks where tanker trucks deliver propellants and other gases to the launch site. Credit: Stephen Clark/Ars Technica

Instead, SpaceX’s schedule for catching and reusing Starships, and refueling ships in orbit, has slipped well into next year. A Moon landing is probably at least several years away. And a touchdown on Mars? Maybe in the 2030s. Before Starship can sniff those milestones, engineers must get the rocket to survive from liftoff through splashdown. This would confirm that recent changes made to the ship’s heat shield work as expected.

Three test flights attempting to do just this ended prematurely in January, March, and May. These failures prevented SpaceX from gathering data on several different tile designs, including insulators made of ceramic and metallic materials, and a tile with “active cooling” to fortify the craft as it reenters the atmosphere.

The heat shield is supposed to protect the rocket’s stainless steel skin from temperatures reaching 2,600° Fahrenheit (1,430° Celsius). During last year’s test flights, it worked well enough for Starship to guide itself to an on-target controlled splashdown in the Indian Ocean, halfway around the world from SpaceX’s launch site in Starbase, Texas.

But the ship lost some of its tiles during each flight last year, causing damage to the ship’s underlying structure. While this wasn’t bad enough to prevent the vehicle from reaching the ocean intact, it would cause difficulties in refurbishing the rocket for another flight. Eventually, SpaceX wants to catch Starships returning from space with giant robotic arms back at the launch pad. The vision, according to SpaceX founder and CEO Elon Musk, is to recover the ship, quickly mount it on another booster, refuel it, and launch it again.

For SpaceX to accomplish this, the ship must return from space with its heat shield in pristine condition. The evidence from last year’s test flights showed engineers had a long way to go for that to happen.

Visitors survey the landscape at Starbase, Texas, where industry and nature collide. Credit: Stephen Clark/Ars Technica

The Starship setbacks this year have been caused by problems in the ship’s propulsion and fuel systems. Another Starship exploded on a test stand in June at SpaceX’s sprawling rocket development facility in South Texas. SpaceX engineers identified different causes for each of the failures. You can read about them in our previous story.

Apart from testing the heat shield, the goals for this week’s Starship flight include testing an engine-out capability on the Super Heavy booster. Engineers will intentionally disable one of the booster’s Raptor engines used to slow down for landing, and instead use another Raptor engine from the rocket’s middle ring. At liftoff, 33 methane-fueled Raptor engines will power the Super Heavy booster off the pad.

SpaceX won’t try to catch the booster back at the launch pad this time, as it did on three occasions late last year and earlier this year. The booster catches have been one of the bright spots for the Starship program as progress on the rocket’s upper stage floundered. SpaceX reused a previously flown Super Heavy booster for the first time on the most recent Starship launch in May.

The booster landing experiment on this week’s flight will happen a few minutes after launch over the Gulf of Mexico east of the Texas coastline. Meanwhile, six Raptor engines will fire until approximately T+9 minutes to accelerate the ship, or upper stage, into space.

The ship is programmed to release eight Starlink satellite simulators from its payload bay in a test of the craft’s payload deployment mechanism. That will be followed by a brief restart of one of the ship’s Raptor engines to adjust its trajectory for reentry, set to begin around 47 minutes into the mission.

If Starship makes it that far, that will be when engineers finally get a taste of the heat shield data they were hungry for at the start of the year.

This story was updated at 8:30 pm EDT after SpaceX scrubbed Sunday’s launch attempt.

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

Time is running out for SpaceX to make a splash with second-gen Starship Read More »

why-wind-farms-attract-so-much-misinformation-and-conspiracy theory

Why wind farms attract so much misinformation and conspiracy theory

The recent resistance

Academic work on the question of anti-wind farm activism is revealing a pattern: Conspiracy thinking is a stronger predictor of opposition than age, gender, education, or political leaning.

In Germany, the academic Kevin Winter and colleagues found that belief in conspiracies had many times more influence on wind opposition than any demographic factor. Worryingly, presenting opponents with facts was not particularly successful.

In a more recent article, based on surveys in the US, UK, and Australia that looked at people’s propensity to give credence to conspiracy theories, Winter and colleagues argued that opposition is “rooted in people’s worldviews.”

If you think climate change is a hoax or a beat-up by hysterical eco-doomers, you’re going to be easily persuaded that wind turbines are poisoning groundwater, causing blackouts, or, in Trump’s words, “driving [the whales] loco.”

Wind farms are fertile ground for such theories. They are highly visible symbols of climate policy, and complex enough to be mysterious to non-specialists. A row of wind turbines can become a target for fears about modernity, energy security, or government control.

This, say Winter and colleagues, “poses a challenge for communicators and institutions committed to accelerating the energy transition.” It’s harder to take on an entire worldview than to correct a few made-up talking points.

What is it all about?

Beneath the misinformation, often driven by money or political power, there’s a deeper issue. Some people—perhaps Trump among them—don’t want to deal with the fact that fossil technologies, which brought prosperity and a sense of control, are also causing environmental crises. And these are problems that aren’t solved with the addition of more technology. It offends their sense of invulnerability, of dominance. This “anti-reflexivity,” as some academics call it, is a refusal to reflect on the costs of past successes.

It is also bound up with identity. In some corners of the online “manosphere,” concerns over climate change are being painted as effeminate.

Many boomers, especially white heterosexual men like Trump, have felt disoriented as their world has shifted and changed around them. The clean energy transition symbolizes part of this change. Perhaps this is a good way to understand why Trump is lashing out at “windmills.”

Marc Hudson, Visiting Fellow, SPRU, University of Sussex Business School, University of Sussex. This article is republished from The Conversation under a Creative Commons license. Read the original article.

Why wind farms attract so much misinformation and conspiracy theory Read More »

an-inner-speech-decoder-reveals-some-mental-privacy-issues

An inner-speech decoder reveals some mental privacy issues

But it struggled with more complex phrases.

Pushing the frontier

Once the mental privacy safeguard was in place, the team started testing their inner speech system with cued words first. The patients sat in front of a screen that displayed a short sentence and had to imagine saying it. Performance varied, reaching 86 percent accuracy for the best-performing patient on a limited vocabulary of 50 words, but dropping to 74 percent when the vocabulary was expanded to 125,000 words.

But when the team moved on to testing if the prosthesis could decode unstructured inner speech, the limitations of the BCI became quite apparent.

The first unstructured inner speech test involved watching arrows pointing up, right, or left in a sequence on a screen. The task was to repeat that sequence after a short delay using a joystick. The expectation was that the patients would repeat sequences like “up, right, up” in their heads to memorize them—the goal was to see if the prosthesis would catch it. It kind of did, but the performance was just above chance level.

Finally, Krasa and his colleagues tried decoding more complex phrases without explicit cues. They asked the participants to think of the name of their favorite food or recall their favorite quote from a movie. “This didn’t work,” Krasa says. “What came out of the decoder was kind of gibberish.”

In its current state, Krasa thinks, the inner speech neural prosthesis is a proof of concept. “We didn’t think this would be possible, but we did it and that’s exciting! The error rates were too high, though, for someone to use it regularly,” Krasa says. He suggested the key limitation might be in hardware—the number of electrodes implanted in the brain and the precision with which we can record signals from the neurons. Inner speech representations might also be stronger in other brain regions than they are in the motor cortex.

Krasa’s team is currently involved in two projects that stemmed from the inner speech neural prosthesis. “The first is asking the question [of] how much faster an inner speech BCI would be compared to an attempted speech alternative,” Krasa says. The second one is looking at people with a condition called aphasia, where people have motor control of their mouths but are unable to produce words. “We want to assess if inner speech decoding would help them,” Krasa adds.

Cell, 2025. DOI: 10.1016/j.cell.2025.06.015

An inner-speech decoder reveals some mental privacy issues Read More »

google-says-it-dropped-the-energy-cost-of-ai-queries-by-33x-in-one-year

Google says it dropped the energy cost of AI queries by 33x in one year

To come up with typical numbers, the team that did the analysis tracked requests and the hardware that served them over a 24-hour period, as well as the idle time for that hardware. This gives them an energy-per-request estimate, which differs based on the model being used. For each day, they identify the median prompt and use that to calculate the environmental impact.
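
That accounting can be sketched with back-of-the-envelope arithmetic. The snippet below is illustrative only: the input numbers are invented, and it takes a simple average rather than the per-model median the team uses, but it shows how idle hardware folds into the per-prompt figure.

```python
# Back-of-the-envelope sketch of the accounting described above.
# All inputs are made-up placeholder values, not Google's data, and this
# uses a simple average rather than the team's per-model median.
active_kwh = 12_000.0        # energy used while serving prompts (assumed)
idle_kwh = 1_500.0           # energy burned by idle/overhead hardware (assumed)
prompts_served = 50_000_000  # requests handled in the same 24 hours (assumed)
grid_gco2_per_kwh = 125.0    # carbon intensity of the electricity (assumed)

wh_per_prompt = (active_kwh + idle_kwh) * 1000 / prompts_served
gco2_per_prompt = wh_per_prompt / 1000 * grid_gco2_per_kwh

print(f"{wh_per_prompt:.2f} Wh and {gco2_per_prompt:.3f} gCO2e per prompt")
# 0.27 Wh and 0.034 gCO2e per prompt with these placeholder inputs
```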

Going down

Using those estimates, they find that the impact of an individual text request is pretty small. “We estimate the median Gemini Apps text prompt uses 0.24 watt-hours of energy, emits 0.03 grams of carbon dioxide equivalent (gCO2e), and consumes 0.26 milliliters (or about five drops) of water,” they conclude. To put that in context, they estimate that the energy use is similar to about nine seconds of TV viewing.
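
The TV comparison is simple arithmetic: 0.24 watt-hours delivered over nine seconds works out to a device drawing roughly 100 watts, a plausible figure for a modern set. A quick check:

```python
# Sanity check of the "nine seconds of TV" comparison.
energy_wh = 0.24
seconds = 9
watts = energy_wh / (seconds / 3600)  # convert seconds to hours
print(f"{watts:.0f} W")  # ~96 W, roughly a television's power draw
```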

The bad news is that the volume of requests is undoubtedly very high. The company has chosen to execute an AI operation with every single search request, a compute demand that simply didn’t exist a couple of years ago. So, while the individual impact is small, the cumulative cost is likely to be considerable.

The good news? Just a year ago, it would have been far, far worse.

Some of this is just down to circumstances. With the boom in solar power in the US and elsewhere, it has gotten easier for Google to arrange for renewable power. As a result, the carbon emissions per unit of energy consumed saw a 1.4x reduction over the past year. But the biggest wins have been on the software side, where different approaches have led to a 33x reduction in energy consumed per prompt.

A color bar showing the percentage of energy used by different hardware. AI accelerators are the largest use, followed by CPU and RAM. Idle machines and overhead account for about 10 percent each.

Most of the energy use in serving AI requests comes from time spent in the custom accelerator chips. Credit: Elsworth et al.

The Google team describes a number of optimizations the company has made that contribute to this. One is an approach termed Mixture-of-Experts, which activates only the portion of an AI model needed to handle a given request and can drop computational needs by a factor of 10 to 100. They’ve developed a number of compact versions of their main model, which also reduce the computational load. Data center management also plays a role, as the company can make sure that any active hardware is fully utilized, while allowing the rest to stay in a low-power state.
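
To make the Mixture-of-Experts idea concrete, here is a minimal routing sketch. It is not Google’s implementation, and every name and size in it is invented: a small router scores a set of expert networks, and only the top few actually run for a given input, so most of the model’s parameters sit idle on any single request.

```python
# Minimal, illustrative Mixture-of-Experts routing (not Google's code).
import numpy as np

rng = np.random.default_rng(0)
NUM_EXPERTS, TOP_K, DIM = 8, 2, 16

# Each "expert" is a tiny linear layer; the router is a linear scorer.
experts = [rng.normal(size=(DIM, DIM)) for _ in range(NUM_EXPERTS)]
router = rng.normal(size=(DIM, NUM_EXPERTS))

def moe_forward(x: np.ndarray) -> np.ndarray:
    scores = x @ router                        # one score per expert
    top = np.argsort(scores)[-TOP_K:]          # indices of the best-scoring experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                   # softmax over the chosen experts only
    # Only TOP_K of NUM_EXPERTS experts do any work for this input.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

y = moe_forward(rng.normal(size=DIM))
print(y.shape)  # (16,)
```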

Google says it dropped the energy cost of AI queries by 33x in one year Read More »