Science


Scientists built a badminton-playing robot with AI-powered skills

It also learned fall avoidance and determined how much risk was reasonable to take given its limited speed. The robot did not attempt impossible plays that would create the potential for serious damage—it was committed, but not suicidal.

But when it finally played humans, it turned out ANYmal, as a badminton player, was amateur at best.

The major leagues

The first problem was its reaction time. An average human reacts to visual stimuli in around 0.2–0.25 seconds. Elite badminton players with trained reflexes, anticipation, and muscle memory can cut this time down to 0.12–0.15 seconds. ANYmal needed roughly 0.35 seconds after the opponent hit the shuttlecock to register trajectories and figure out what to do.

Part of the problem was poor eyesight. “I think perception is still a big issue,” Ma said. “The robot localized the shuttlecock with the stereo camera and there could be a positioning error introduced at each timeframe.” The camera also had a limited field of view, which meant the robot could see the shuttlecock for only a limited time before it had to act. “Overall, it was suited for more friendly matches—when the human player starts to smash, the success rate goes way down for the robot,” Ma acknowledged.

But his team already has some ideas on how to make ANYmal better. Reaction time can be improved by predicting the shuttlecock trajectory based on the opponent’s body position rather than waiting to see the shuttlecock itself—a technique commonly used by elite badminton or tennis players. To improve ANYmal’s perception, the team wants to fit it with more advanced hardware, like event cameras—vision sensors that register movement with ultra-low latencies in the microseconds range. Other improvements might include faster, more capable actuators.

“I think the training framework we propose would be useful in any application where you need to balance perception and control—picking objects up, even catching and throwing stuff,” Ma suggested. Sadly, one thing that’s almost certainly off the table is taking ANYmal to major leagues in badminton or tennis. “Would I set up a company selling badminton-playing robots? Well, maybe not,” Ma said.

Science Robotics, 2025. DOI: 10.1126/scirobotics.adu3922


5 things in Trump’s budget that won’t make NASA great again

If signed into law as written, the White House’s proposal to slash nearly 25 percent from NASA’s budget would have some dire consequences.

It would cut the agency’s budget from $24.8 billion to $18.8 billion. Adjusted for inflation, this would be the smallest NASA budget since 1961, when the first American launched into space.

The proposed funding plan would halve NASA’s funding for robotic science missions and technology development next year, scale back research on the International Space Station, turn off spacecraft already exploring the Solar System, and cancel NASA’s Space Launch System rocket and Orion spacecraft after two more missions in favor of procuring lower-cost commercial transportation to the Moon and Mars.

The SLS rocket and Orion spacecraft have been targets for proponents of commercial spaceflight for several years. They are single-use, and their costs are exorbitant, with Moon missions on SLS and Orion projected to cost more than $4 billion per flight. That price raises questions about whether these vehicles will ever be able to support a lunar space station or Moon base where astronauts can routinely rotate in and out on long-term expeditions, like researchers do in Antarctica today.

Reusable rockets and spaceships offer a better long-term solution, but they won’t be ready to ferry people to the Moon for a while longer. The Trump administration proposes flying SLS and Orion two more times on NASA’s Artemis II and Artemis III missions, then retiring the vehicles. Artemis II’s rocket is currently being assembled at Kennedy Space Center in Florida for liftoff next year, carrying a crew of four around the far side of the Moon. Artemis III would follow with the first attempt to land humans on the Moon since 1972.

The cuts are far from law

Every part of Trump’s budget proposal for fiscal year 2026 remains tentative. Lawmakers in each house of Congress will write their own budget bills, which must go to the White House for Trump’s signature. A Senate bill released last week includes language that would claw back funding for SLS and Orion to support the Artemis IV and Artemis V missions.


Ocean acidification crosses “planetary boundaries”

A critical measure of the ocean’s health suggests that the world’s marine systems are in greater peril than scientists had previously realized and that parts of the ocean have already reached dangerous tipping points.

A study, published Monday in the journal Global Change Biology, found that ocean acidification—the process in which the world’s oceans absorb excess carbon dioxide from the atmosphere, becoming more acidic—crossed a “planetary boundary” five years ago.

“A lot of people think it’s not so bad,” said Nina Bednaršek, one of the study’s authors and a senior researcher at Oregon State University. “But what we’re showing is that all of the changes that were projected, and even more so, are already happening—in all corners of the world, from the most pristine to the little corner you care about. We have not changed just one bay, we have changed the whole ocean on a global level.”

The new study, also authored by researchers at the UK’s Plymouth Marine Laboratory and the National Oceanic and Atmospheric Administration (NOAA), finds that by 2020 the world’s oceans were already very close to the “danger zone” for ocean acidity, and in some regions had already crossed into it.

Scientists had determined that ocean acidification enters this danger zone, or crosses this planetary boundary, when the amount of calcium carbonate—which allows marine organisms to develop shells—falls 20 percent below pre-industrial levels. The new report puts the global decline at about 17 percent.

“Ocean acidification isn’t just an environmental crisis, it’s a ticking time bomb for marine ecosystems and coastal economies,” said Steve Widdicombe, director of science at the Plymouth lab, in a press release. “As our seas increase in acidity, we’re witnessing the loss of critical habitats that countless marine species depend on and this, in turn, has major societal and economic implications.”

Scientists have determined that there are nine planetary boundaries that, once breached, threaten humans' ability to live and thrive. One of these is climate change itself, which scientists have said is already beyond humanity's "safe operating space" because of the continued emissions of heat-trapping gases. Another is ocean acidification, also caused by burning fossil fuels.


IBM is now detailing what its first quantum compute system will look like


Company is moving past focus on qubits, shifting to functional compute units.

A rendering of what IBM expects will be needed to house a Starling quantum computer. Credit: IBM

On Tuesday, IBM released its plans for building a system that should push quantum computing into entirely new territory: a system that can perform useful calculations while catching and fixing errors, and that would be utterly impossible to model using classical computing methods. The hardware, which will be called Starling, is expected to be able to perform 100 million operations without error on a collection of 200 logical qubits. And the company expects to have it available for use in 2029.

Perhaps just as significant, IBM is also committing to a detailed description of the intermediate steps to Starling. These include a number of processors that will be configured to host a collection of error-corrected qubits, essentially forming a functional compute unit. This marks a major transition for the company, as it involves moving away from talking about collections of individual hardware qubits and focusing instead on units of functional computational hardware. If all goes well, it should be possible to build Starling by chaining a sufficient number of these compute units together.

“We’re updating [our roadmap] now with a series of deliverables that are very precise,” IBM VP Jay Gambetta told Ars, “because we feel that we’ve now answered basically all the science questions associated with error correction and it’s becoming more of a path towards an engineering problem.”

New architectures

Error correction on quantum hardware involves entangling a group of qubits in a way that distributes one or more quantum bit values among them and includes additional qubits that can be used to check the state of the system. It can be helpful to think of these as data and measurement qubits. Performing weak quantum measurements on the measurement qubits produces what's called "syndrome data," which can be interpreted to determine whether anything about the data qubits has changed (indicating an error) and how to correct it.

There are lots of potential ways to arrange different combinations of data and measurement qubits for this to work, each referred to as a code. But, as a general rule, the more hardware qubits committed to the code, the more robust it will be to errors, and the more logical qubits that can be distributed among its hardware qubits.
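To make the data/measurement split and the syndrome idea concrete, here is a minimal, purely classical sketch using a three-bit repetition code. It illustrates the general concept only (it is not IBM's code), and all the function names are invented for this example.

```python
# A minimal, purely classical illustration of the idea described above: one
# logical bit is spread across three data bits, and two parity checks play
# the role of the measurement qubits that produce syndrome data.

import random

def encode(logical_bit):
    """Spread one logical bit across three data bits."""
    return [logical_bit] * 3

def apply_noise(data, flip_prob=0.05):
    """Flip each data bit independently with some probability (an 'error')."""
    return [bit ^ 1 if random.random() < flip_prob else bit for bit in data]

def measure_syndrome(data):
    """Parity checks between neighboring bits: the 'syndrome data.'
    They reveal whether something changed without reading the logical value."""
    return (data[0] ^ data[1], data[1] ^ data[2])

def correct(data, syndrome):
    """Interpret the syndrome and flip the single most likely culprit bit."""
    culprit = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome)
    if culprit is not None:
        data[culprit] ^= 1
    return data

data = apply_noise(encode(1))
data = correct(data, measure_syndrome(data))
print("decoded logical bit:", max(set(data), key=data.count))
```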

Some quantum hardware, like that based on trapped ions or neutral atoms, is relatively flexible when it comes to hosting error-correction codes. The hardware qubits can be moved around so that any two can be entangled, making it possible to adopt a huge range of configurations, albeit at the cost of the time spent moving atoms around. IBM's technology is quite different. It relies on qubits made of superconducting electronics laid out on a chip, with entanglement mediated by wiring that runs between qubits. The layout of this wiring is set during the chip's manufacture, and so the design of the chip commits it to a limited number of potential error-correction codes.

Unfortunately, this wiring can also enable cross-talk between neighboring qubits, causing them to lose their state. To avoid this, existing IBM processors have their qubits wired in what they term a “heavy hex” configuration, named for its hexagonal arrangements of connections among its qubits. This has worked well to keep the error rate of its hardware down, but it also poses a challenge, since IBM has decided to go with an error-correction code that’s incompatible with the heavy hex geometry.

A couple of years back, an IBM team described a compact error-correction code called a low-density parity check (LDPC) code. This requires a square grid of nearest-neighbor connections among its qubits, as well as wiring to connect qubits that are relatively distant on the chip. To get its chips and error-correction scheme in sync, IBM has made two key advances. The first is in its chip packaging, which now uses several layers of wiring sitting above the hardware qubits to enable all of the connections needed for the LDPC code.

We’ll see that first in a processor called Loon that’s on the company’s developmental roadmap. “We’ve already demonstrated these three things: high connectivity, long-range couplers, and couplers that break the plane [of the chip] and connect to other qubits,” Gambetta said. “We have to combine them all as a single demonstration showing that all these parts of packaging can be done, and that’s what I want to achieve with Loon.” Loon will be made public later this year.


On the left, the simple layout of the connections in a current-generation Heron processor. At right, the complicated web of connections that will be present in Loon. Credit: IBM

The second advance IBM has made is to eliminate the crosstalk that the heavy hex geometry was used to minimize, so heavy hex will be going away. “We are releasing this year a bird for near term experiments that is a square array that has almost zero crosstalk,” Gambetta said, “and that is Nighthawk.” The more densely connected qubits cut the overhead needed to perform calculations by a factor of 15, Gambetta told Ars.

Nighthawk is a 2025 release on a parallel roadmap that you can think of as user-facing. Iterations on its basic design will be released annually through 2028, each enabling more operations without error (going from 5,000 gate operations this year to 15,000 in 2028). Each individual Nighthawk processor will host 120 hardware qubits, but 2026 will see three of them chained together and operating as a unit, providing 360 hardware qubits. That will be followed in 2027 by a machine with nine linked Nighthawk processors, boosting the hardware qubit number over 1,000.

Riding the bicycle

The real future of IBM’s hardware, however, will be happening over on the developmental line of processors, where talk about hardware qubit counts will become increasingly irrelevant. In a technical document released today, IBM is describing the specific LDPC code it will be using, termed a bivariate bicycle code due to some cylindrical symmetries in its details that vaguely resemble bicycle wheels. The details of the connections matter less than the overall picture of what it takes to use this error code in practice.

IBM describes two implementations of this form of LDPC code. In the first, 144 hardware qubits are arranged so that they play host to 12 logical qubits and all of the measurement qubits needed to perform error checks. The standard measure of a code’s ability to catch and correct errors is called its distance, and in this case, the distance is 12. As an alternative, they also describe a code that uses 288 hardware qubits to host the same 12 logical qubits but boost the distance to 18, meaning it’s more resistant to errors. IBM will make one of these collections of logical qubits available as a Kookaburra processor in 2026, which will use them to enable stable quantum memory.
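As a rough way to read those numbers, the sketch below uses the standard [[n, k, d]] shorthand for quantum codes (n hardware qubits hosting k logical qubits at distance d). The arithmetic here is generic; the real trade-offs of IBM's bivariate bicycle code go well beyond it.

```python
# A rough comparison of the two code variants described above, in the
# standard [[n, k, d]] notation. A distance-d code can correct up to
# floor((d - 1) / 2) arbitrary single-qubit errors per block.

codes = {
    "compact variant": {"n": 144, "k": 12, "d": 12},
    "robust variant":  {"n": 288, "k": 12, "d": 18},
}

for name, c in codes.items():
    rate = c["k"] / c["n"]            # logical qubits per hardware qubit
    correctable = (c["d"] - 1) // 2   # errors correctable per block
    print(f"{name}: [[{c['n']}, {c['k']}, {c['d']}]], "
          f"rate = {rate:.3f}, corrects up to {correctable} errors")
```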

The follow-on will bundle these with a handful of additional qubits that can produce quantum states that are needed for some operations. Those, plus hardware needed for the quantum memory, form a single, functional computation unit, built on a single chip, that is capable of performing all the operations needed to implement any quantum algorithm.

That will appear with the Cockatoo chip, which will also enable multiple processing units to be linked on a single bus, allowing the logical qubit count to grow beyond 12. (The company says that one of the dozen logical qubits in each unit will be used to mediate entanglement with other units and so won’t be available for computation.) That will be followed by the first test versions of Starling, which will allow universal computations on a limited number of logical qubits spread across multiple chips.

Separately, IBM is releasing a document that describes a key component of the system that will run on classical computing hardware. Full error correction requires evaluating the syndrome data derived from the state of all the measurement qubits in order to determine the state of the logical qubits and whether any corrections need to be made. As the complexity of the logical qubits grows, the computational burden of evaluating that data grows with it. If this evaluation can't be executed in real time, it becomes impossible to perform error-corrected calculations.

To address this, IBM has developed a message-passing decoder that can perform parallel evaluations of the syndrome data. The system explores more of the solution space through a combination of randomizing the weight given to the memory of past solutions and handing any seemingly non-optimal solutions on to new instances for additional evaluation. Critically, IBM estimates that this decoder can run in real time on FPGAs, ensuring that error correction can keep pace with the hardware.
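The general pattern of many randomized decoder instances working in parallel, with unconverged problems handed off to fresh instances, can be sketched in a few lines. The toy below only illustrates that hand-off strategy; the tiny parity-check matrix, the brute-force "decoder," and all of the names are invented for this example and bear no relation to IBM's actual algorithm.

```python
# Toy illustration of an ensemble of randomized decoder instances. Each
# instance samples candidate error patterns against a tiny parity-check
# matrix; if it fails to converge, the problem is handed to a fresh instance.

import random

H = [  # toy parity-check matrix: rows are checks, columns are bits
    [1, 1, 0, 0, 1],
    [0, 1, 1, 1, 0],
    [1, 0, 0, 1, 1],
]

def syndrome(error):
    """Parity of each check for a candidate error pattern."""
    return tuple(sum(h * e for h, e in zip(row, error)) % 2 for row in H)

def decoder_instance(target, bias, budget=200):
    """One randomized instance; 'bias' stands in for the randomized weighting
    described in the article."""
    n = len(H[0])
    best = None
    for _ in range(budget):
        weight = min(n, max(1, int(random.expovariate(bias)) + 1))
        candidate = [0] * n
        for i in random.sample(range(n), weight):
            candidate[i] = 1
        if syndrome(candidate) == target and (best is None or sum(candidate) < sum(best)):
            best = candidate  # keep the lowest-weight match found so far
    return best

def ensemble_decode(target, max_instances=20):
    """Hand the problem to fresh, differently randomized instances until one converges."""
    for _ in range(max_instances):
        result = decoder_instance(target, bias=random.uniform(0.5, 2.0))
        if result is not None:
            return result
    return None

# The decoder returns a low-weight error pattern consistent with the syndrome,
# which is what a real decoder aims to do.
true_error = [0, 1, 0, 0, 1]
print("consistent error pattern:", ensemble_decode(syndrome(true_error)))
```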

A quantum architecture

There are a lot more details beyond those, as well. Gambetta described the linkage between each computational unit—IBM is calling it a Universal Bridge—which requires one microwave cable for each code distance of the logical qubits being linked. (In other words, a distance 12 code would need 12 microwave-carrying cables to connect each chip.) He also said that IBM is developing control hardware that can operate inside the refrigeration hardware, based on what they’re calling “cold CMOS,” which is capable of functioning at 4 Kelvin.

The company is also releasing renderings of what it expects Starling to look like: a series of dilution refrigerators, all connected by a single pipe that contains the Universal Bridge. “It’s an architecture now,” Gambetta said. “I have never put details in the roadmap that I didn’t feel we could hit, and now we’re putting a lot more details.”

The striking thing to me about this is that it marks a shift away from a focus on individual qubits, their connectivity, and their error rates. The hardware error rates are now good enough (about 4 × 10⁻⁴) for this to work, although Gambetta felt that a few more improvements should be expected. And connectivity will now be directed exclusively toward creating a functional computational unit.

That said, there’s still a lot of space beyond Starling on IBM’s roadmap. The 200 logical qubits it promises will be enough to handle some problems, but not enough to perform the complex algorithms needed to do things like break encryption. That will need to wait for something closer to Blue Jay, a 2033 system that IBM expects will have 2,000 logical qubits. And, as of right now, it’s the only thing listed beyond Starling.


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.


Cambridge mapping project solves a medieval murder


“A tale of shakedowns, sex, and vengeance that expose[s] tensions between the church and England’s elite.”

Location of the murder of John Forde, taken from the Medieval Murder Maps. Credit: Medieval Murder Maps. University of Cambridge: Institute of Criminology

In 2019, we told you about a new interactive digital “murder map” of London compiled by University of Cambridge criminologist Manuel Eisner. Drawing on data catalogued in the city coroners’ rolls, the map showed the approximate location of 142 homicide cases in late medieval London. The Medieval Murder Maps project has since expanded to include maps of York and Oxford homicides, as well as podcast episodes focusing on individual cases.

It’s easy to lose oneself down the rabbit hole of medieval murder for hours, filtering the killings by year, choice of weapon, and location. Think of it as a kind of 14th-century version of Clue: It was the noblewoman’s hired assassins armed with daggers in the streets of Cheapside near St. Paul’s Cathedral. And that’s just the juiciest of the various cases described in a new paper published in the journal Criminal Law Forum.

The noblewoman was Ela Fitzpayne, wife of a knight named Sir Robert Fitzpayne, lord of Stogursey. The victim was a priest and her erstwhile lover, John Forde, who was stabbed to death in the streets of Cheapside on May 3, 1337. “We are looking at a murder commissioned by a leading figure of the English aristocracy,” said University of Cambridge criminologist Manuel Eisner, who heads the Medieval Murder Maps project. “It is planned and cold-blooded, with a family member and close associates carrying it out, all of which suggests a revenge motive.”

Members of the mapping project geocoded all the cases after determining approximate locations for the crime scenes. Written in Latin, the coroners’ rolls are records of sudden or suspicious deaths as investigated by a jury of local men, called together by the coroner to establish facts and reach a verdict. Those records contain such relevant information as where the body was found and by whom; the nature of the wounds; the jury’s verdict on cause of death; the weapon used and how much it was worth; the time, location, and witness accounts; whether the perpetrator was arrested, escaped, or sought sanctuary; and any legal measures taken.

A brazen killing

The murder of Forde was one of several premeditated revenge killings recorded in the area of Westcheap. Forde was walking on the street when another priest, Hascup Neville, caught up to him, ostensibly for a casual chat, just after Vespers but before sunset. As they approached Foster Lane, Neville's four co-conspirators attacked: Ela Fitzpayne's brother, Hugh Lovell; two of her former servants, Hugh of Colne and John Strong; and a man called John of Tindale. One of them cut Forde's throat with a 12-inch dagger, while two others stabbed him in the stomach with long fighting knives.

At the inquest, the jury identified the assassins, but that didn’t result in justice. “Despite naming the killers and clear knowledge of the instigator, when it comes to pursuing the perpetrators, the jury turn a blind eye,” said Eisner. “A household of the highest nobility, and apparently no one knows where they are to bring them to trial. They claim Ela’s brother has no belongings to confiscate. All implausible. This was typical of the class-based justice of the day.”

Colne, the former servant, was eventually charged and imprisoned for the crime some five years later in 1342, but the other perpetrators essentially got away with it.

Eisner et al. uncovered additional historical records that shed more light on the complicated history and ensuing feud between the Fitzpaynes and Forde. One was an indictment in the Calendar of Patent Rolls of Edward III, detailing how Ela and her husband, along with Forde and several other accomplices, raided a Benedictine priory in 1321. Among other crimes, the intruders "broke [the prior's] houses, chests and gates, took away a horse, a colt and a boar… felled his trees, dug in his quarry, and carried away the stone and trees." The gang also stole 18 oxen, 30 pigs, and about 200 sheep and lambs.

There were also letters that the Archbishop of Canterbury wrote to the Bishop of Winchester. Translations of the letters are published for the first time on the project’s website. The archbishop called out Ela by name for her many sins, including adultery “with knights and others, single and married, and even with clerics and holy orders,” and devised a punishment. This included not wearing any gold, pearls, or precious stones and giving money to the poor and to monasteries, plus a dash of public humiliation. Ela was ordered to perform a “walk of shame”—a tamer version than Cersei’s walk in Game of Thrones—every fall for seven years, carrying a four-pound wax candle to the altar of Salisbury Cathedral.


The London Archives. Inquest number 15 on 1336-7 City of London Coroner’s Rolls. Credit: The London Archives

Ela outright refused to do any of that, instead flaunting “her usual insolence.” Naturally, the archbishop had no choice but to excommunicate her. But Eisner speculates that this may have festered within Ela over the ensuing years, thereby sparking her desire for vengeance on Forde—who may have confessed to his affair with Ela to avoid being prosecuted for the 1321 raid. The archbishop died in 1333, four years before Forde’s murder, so Ela was clearly a formidable person with the patience and discipline to serve her revenge dish cold. Her marriage to Robert (her second husband) endured despite her seemingly constant infidelity, and she inherited his property when he died in 1354.

“Attempts to publicly humiliate Ela Fitzpayne may have been part of a political game, as the church used morality to stamp its authority on the nobility, with John Forde caught between masters,” said Eisner. “Taken together, these records suggest a tale of shakedowns, sex, and vengeance that expose tensions between the church and England’s elites, culminating in a mafia-style assassination of a fallen man of god by a gang of medieval hitmen.”

I, for one, am here for the Netflix true crime documentary on Ela Fitzpayne, “a woman in 14th century England who raided priories, openly defied the Archbishop of Canterbury, and planned the assassination of a priest,” per Eisner.

The role of public spaces

The ultimate objective of the Medieval Murder Maps project is to learn more about how public spaces shaped urban violence historically, the authors said. There were some interesting initial revelations back in 2019. For instance, the murders usually occurred in public streets or squares, and Eisner identified a couple of “hot spots” with higher concentrations than other parts of London. One was that particular stretch of Cheapside running from St Mary-le-Bow church to St. Paul’s Cathedral, where John Forde met his grisly end. The other was a triangular area spanning Gracechurch, Lombard, and Cornhill, radiating out from Leadenhall Market.

The perpetrators were mostly men (in only four cases were women the only suspects). As for weapons, knives and swords of varying types were the ones most frequently used, accounting for 68 percent of all the murders. The greatest risk of violent death in London was on weekends (especially Sundays), between early evening and the first few hours after curfew.

Eisner et al. have now extended their spatial analysis to include homicides committed in York and Oxford in the 14th century, with similar conclusions. Murders most often took place in markets, squares, and thoroughfares—all key nodes of medieval urban life—in the evenings or on weekends. Oxford had significantly higher murder rates than York or London and also more organized group violence, "suggestive of high levels of social disorganization and impunity." London, meanwhile, showed distinct clusters of homicides, "which reflect differences in economic and social functions," the authors wrote. "In all three cities, some homicides were committed in spaces of high visibility and symbolic significance."

Criminal Law Forum, 2025. DOI: 10.1007/s10609-025-09512-7  (About DOIs).


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.


Startup puts a logical qubit in a single piece of hardware

This time around, the company is showing that it can get an actual logical qubit into a variant of the same hardware. In the earlier version of its equipment, the resonator cavity had a single post and supported a single frequency. In the newer iteration, there were two posts and two frequencies. Each of those frequencies creates its own quantum resonator in the same cavity, with its own set of modes. “It’s this ensemble of photons inside this cavity that creates the logical qubit,” Lemyre told Ars.

The additional quantum information that can now be stored in the system enables it to identify more complex errors than the loss of a photon.

Catching, but not fixing errors

The company did two experiments with this new hardware. First, it ran multiple rounds of error detection on data stored in the logical qubit, essentially testing its ability to act like a quantum memory and retain the information stored there. The system corrected for photon loss, but left other errors uncorrected. These occurred at a rate of a bit over two percent each round, so by the time the system reached the 25th measurement, many instances had already encountered an error.

The second time through, the company repeated the process, discarding any instances in which an error occurred. In almost every instance, that meant the results were discarded long before they got through two dozen rounds of measurement. But at these later stages, none of the remaining instances were in an erroneous state. That indicates that correcting the detected errors—something the team didn't attempt—would have left the system free of the problems it identified.

“When we do this, we don’t have any errors left,” Lemyre said. “And this builds confidence into this approach, meaning that if we build the next generation of codes that is now able to correct these errors that were detected in this two-mode approach, we should have that flat line of no errors occurring over a long period of time.”


The nine-armed octopus and the oddities of the cephalopod nervous system


A mix of autonomous and top-down control manages the octopus's limbs.

With their quick-change camouflage and high level of intelligence, it’s not surprising that the public and scientific experts alike are fascinated by octopuses. Their abilities to recognize faces, solve puzzles, and learn behaviors from other octopuses make these animals a captivating study.

To perform these processes and others, like crawling or exploring, octopuses rely on their complex nervous system, one that has become a focus for neuroscientists. With about 500 million neurons—around the same number as dogs—octopuses' nervous systems are the most complex of any invertebrate. But unlike those of vertebrates, the octopus's nervous system is also decentralized, with around 350 million neurons, or 66 percent of the total, located in its eight arms.

“This means each arm is capable of independently processing sensory input, initiating movement, and even executing complex behaviors—without direct instructions from the brain,” explains Galit Pelled, a professor of Mechanical Engineering, Radiology, and Neuroscience at Michigan State University who studies octopus neuroscience. “In essence, the arms have their own ‘mini-brains.’”

A decentralized nervous system is one factor that helps octopuses adapt to changes, such as injury or predation, as seen in the case of an Octopus vulgaris, or common octopus, that was observed with nine arms by researchers at the ECOBAR lab at the Institute of Marine Research in Spain between 2021 and 2022.

By studying outliers like this cephalopod, researchers can gain insight into how the animal’s detailed scaffolding of nerves changes and regrows over time, uncovering more about how octopuses have evolved over millennia in our oceans.

Brains, brains, and more brains

Because each arm of an octopus contains its own bundle of neurons, the limbs can operate semi-independently from the central brain, enabling faster responses since signals don’t always need to travel back and forth between the brain and the arms. In fact, Pelled and her team recently discovered that “neural signals recorded in the octopus arm can predict movement type within 100 milliseconds of stimulation, without central brain involvement.” She notes that “that level of localized autonomy is unprecedented in vertebrate systems.”

Though each limb moves on its own, the movements of the octopus's body are smooth and conducted with a coordinated elegance that allows the animal to exhibit some of the broadest ranges of behavior, adapting on the fly to changes in its surroundings.

“That means the octopus can react quickly to its environment, especially when exploring, hunting, or defending itself,” Pelled says. “For example, one arm can grab food while another is feeling around a rock, without needing permission from the brain. This setup also makes the octopus more resilient. If one arm is injured, the others still work just fine. And because so much decision-making happens at the arms, the central brain is freed up to focus on the bigger picture—like navigating or learning new tasks.”

As if each limb weren’t already buzzing with neural activity, things get even more intricate when researchers zoom in further—to the nerves within each individual sucker, a ring of muscular tissue, which octopuses use to sense and taste their surroundings.

“There is a sucker ganglion, or nerve center, located in the stalk of every sucker. For some species of octopuses, that’s over a thousand ganglia,” says Cassady Olson, a graduate student at the University of Chicago who works with Cliff Ragsdale, a leading expert in octopus neuroscience.

Given that each sucker has its own nerve centers—connected by a long axial nerve cord running down the limb—and each arm has hundreds of suckers, things get complicated very quickly. Researchers have historically struggled to study this peripheral nervous system, as it's called, within the octopus's body.

“The large size of the brain makes it both really exciting to study and really challenging,” says Z. Yan Wang, an assistant professor of biology and psychology at the University of Washington. “Many of the tools available for neuroscience have to be adjusted or customized specifically for octopuses and other cephalopods because of their unique body plans.”

While each limb acts independently, signals are transmitted back to the octopus's central nervous system. The octopus's brain sits between its eyes at the front of its mantle, or head, couched between its two optic lobes, large bean-shaped neural organs that help octopuses see the world around them. These optic lobes are just two of the over 30 lobes experts study within the animal's centralized brain, as each lobe helps the octopus process its environment.

This elaborate neural architecture is critical given the octopus’s dual role in the ecosystem as both predator and prey. Without natural defenses like a hard shell, octopuses have evolved a highly adaptable nervous system that allows them to rapidly process information and adjust as needed, helping their chances of survival.

Some similarities remain

While the octopus’s decentralized nervous system makes it a unique evolutionary example, it does have some structures similar to or analogous to the human nervous system.

“The octopus has a central brain mass located between its eyes, and an axial nerve cord running down each arm (similar to a spinal cord),” says Wang. “The octopus has many sensory systems that we are familiar with, such as vision, touch (somatosensation), chemosensation, and gravity sensing.”

Neuroscientists have homed in on these similarities to understand how these structures may have evolved across the different branches in the tree of life. As the most recent common ancestor for humans and octopuses lived around 750 million years ago, experts believe that many similarities, from similar camera-like eyes to maps of neural activities, evolved separately in a process known as convergent evolution.

While these similarities shed light on evolution’s independent paths, they also offer valuable insights for fields like soft robotics and regenerative medicine.

Occasionally, unique individuals—like an octopus with an unexpected number of limbs—can provide even deeper clues into how this remarkable nervous system functions and adapts.

Nine arms, no problem

In 2021, researchers from the Institute of Marine Research in Spain used an underwater camera to follow a male Octopus vulgaris, or common octopus. On its left side, three arms were intact, while the others were reduced to uneven, stumpy lengths, sharply bitten off at varying points. Although the researchers didn’t witness the injury itself, they observed that the front right arm—known as R1—was regenerating unusually, splitting into two separate limbs and giving the octopus a total of nine arms.

“In this individual, we believe this condition was a result of abnormal regeneration [a genetic mutation] after an encounter with a predator,” explains Sam Soule, one of the researchers and the first author on the corresponding paper recently published in Animals.

The researchers named the octopus Salvador due to its bifurcated arm coiling up on itself like the two upturned ends of Salvador Dali’s moustache. For two years, the team studied the cephalopod’s behavior and found that it used its bifurcated arm less when doing “riskier” movements such as exploring or grabbing food, which would force the animal to stretch its arm out and expose it to further injury.

“One of the conclusions of our research is that the octopus likely retains a long-term memory of the original injury, as it tends to use the bifurcated arms for less risky tasks compared to the others,” elaborates Jorge Hernández Urcera, a lead author of the study. “This idea of lasting memory brought to mind Dalí’s famous painting The Persistence of Memory, which ultimately became the title of the paper we published on monitoring this particular octopus.”

While the octopus acted more protective of its extra limb, its nervous system adapted to the extra appendage: after some time recovering from its injuries, the octopus was observed using its ninth arm to probe its environment.

“That nine-armed octopus is a perfect example of just how adaptable these animals are,” Pelled adds. “Most animals would struggle with an unusual body part, but not the octopus. In this case, the octopus had a bifurcated (split) arm and still used it effectively, just like any other arm. That tells us the nervous system didn’t treat it as a mistake—it figured out how to make it work.”

Kenna Hughes-Castleberry is the science communicator at JILA (a joint physics research institute between the National Institute of Standards and Technology and the University of Colorado Boulder) and a freelance science journalist. Her main writing focuses are quantum physics, quantum technology, deep technology, social media, and the diversity of people in these fields, particularly women and people from minority ethnic and racial groups. Follow her on LinkedIn or visit her website.


Simulations find ghostly whirls of dark matter trailing galaxy arms

“Basically what you do is you set up a bunch of particles that represent things like stars, gas, and dark matter, and you let them evolve for millions of years,” Bernet says. “Human lives are much too short to witness this happening in real time. We need simulations to help us see more than the present, which is like a single snapshot of the Universe.”
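As a very rough sketch of the kind of simulation Bernet is describing, the toy below evolves a few hundred self-gravitating particles with a simple leapfrog integrator. Research codes track millions of particles representing stars, gas, and dark matter, with far more physics; the particle counts, units, and parameters here are invented purely for illustration.

```python
# A toy version of the kind of particle simulation described above: a few
# hundred particles interacting only through gravity, stepped forward in time.

import numpy as np

G = 1.0           # gravitational constant in arbitrary code units
SOFTENING = 0.05  # avoids infinite forces when particles pass very close

def accelerations(positions, masses):
    """Pairwise (softened) gravitational acceleration on every particle."""
    acc = np.zeros_like(positions)
    for i in range(len(positions)):
        diff = positions - positions[i]
        dist3 = (np.sum(diff ** 2, axis=1) + SOFTENING ** 2) ** 1.5
        dist3[i] = np.inf  # no self-force
        acc[i] = G * np.sum(masses[:, None] * diff / dist3[:, None], axis=0)
    return acc

rng = np.random.default_rng(42)
n = 200
positions = rng.normal(scale=1.0, size=(n, 3))
velocities = rng.normal(scale=0.3, size=(n, 3))
masses = np.full(n, 1.0 / n)

dt = 0.01
acc = accelerations(positions, masses)
for step in range(1000):          # "let them evolve"
    velocities += 0.5 * dt * acc  # leapfrog: half kick
    positions += dt * velocities  # drift
    acc = accelerations(positions, masses)
    velocities += 0.5 * dt * acc  # half kick

print("final center of mass:", positions.mean(axis=0))
```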

Several other groups already had galaxy simulations they were using to do other science, so the team asked one of them for access to its data. When they found the dark matter imprint they were looking for, they checked for it in another group's simulation. They found it again, and then in a third simulation as well.

The dark matter spirals are much less pronounced than their stellar counterparts, but the team noted a distinct imprint on the motions of dark matter particles in the simulations. The dark spiral arms lag behind the stellar arms, forming a sort of unseen shadow.

These findings add a new layer of complexity to our understanding of how galaxies evolve, suggesting that dark matter is more than a passive, invisible scaffolding holding galaxies together. Instead, it appears to react to the gravity from stars in galaxies' spiral arms in a way that may even influence star formation or galactic rotation over cosmic timescales. It could also explain the recently discovered excess mass along a nearby spiral arm in the Milky Way.

The fact that they saw the same effect in differently structured simulations suggests that these dark matter spirals may be common in galaxies like the Milky Way. But tracking them down in the real Universe may be tricky.

Bernet says scientists could measure dark matter in the Milky Way’s disk. “We can currently measure the density of dark matter close to us with a huge precision,” he says. “If we can extend these measurements to the entire disk with enough precision, spiral patterns should emerge if they exist.”

“I think these results are very important because it changes our expectations for where to search for dark matter signals in galaxies,” Brooks says. “I could imagine that this result might influence our expectation for how dense dark matter is near the solar neighborhood and could influence expectations for lab experiments that are trying to directly detect dark matter.” That’s a goal scientists have been chasing for nearly 100 years.

Ashley writes about space for a contractor for NASA’s Goddard Space Flight Center by day and freelances in her free time. She holds master’s degrees in space studies from the University of North Dakota and science writing from Johns Hopkins University. She writes most of her articles with a baby on her lap.


A Japanese lander crashed on the Moon after losing track of its location


“It’s not impossible, so how do we overcome our hurdles?”

Takeshi Hakamada, founder and CEO of ispace, attends a press conference in Tokyo on June 6, 2025, to announce the outcome of his company’s second lunar landing attempt. Credit: Kazuhiro Nogi/AFP via Getty Images

A robotic lander developed by a Japanese company named ispace plummeted to the Moon’s surface Thursday, destroying a small rover and several experiments intended to demonstrate how future missions could mine and harvest lunar resources.

Ground teams at ispace’s mission control center in Tokyo lost contact with the Resilience lunar lander moments before it was supposed to touch down in a region called Mare Frigoris, or the Sea of Cold, a basaltic plain in the Moon’s northern hemisphere.

A few hours later, ispace officials confirmed what many observers suspected. The mission was lost. It’s the second time ispace has failed to land on the Moon in as many tries.

“We wanted to make Mission 2 a success, but unfortunately we haven’t been able to land,” said Takeshi Hakamada, the company’s founder and CEO.

Ryo Ujiie, ispace’s chief technology officer, said the final data received from the Resilience lander—assuming it was correct—showed it at an altitude of approximately 630 feet (192 meters) and descending too fast for a safe landing. “The deceleration was not enough. That was a fact,” Ujiie told reporters in a press conference. “We failed to land, and we have to analyze the reasons.”

The company said in a press release that a laser rangefinder used to measure the lander’s altitude “experienced delays in obtaining valid measurement values.” The downward-facing laser fires light pulses toward the Moon during descent, and clocks the time it takes to receive a reflection. This time delay at light speed tells the lander’s guidance system how far it is above the lunar surface. But something went wrong in the altitude measurement system on Thursday.
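The underlying arithmetic is simple time-of-flight ranging, sketched below. The numbers and function names are illustrative only; the actual flight software also has to account for the lander's orientation and fuse the result with other navigation data.

```python
# A simplified sketch of the time-of-flight calculation described above: the
# laser pulse travels to the surface and back, so the one-way distance is half
# the round-trip time multiplied by the speed of light.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def altitude_from_round_trip(round_trip_seconds):
    """Convert a laser pulse's round-trip time into an altitude estimate."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after ~1.28 microseconds corresponds to roughly the
# 192-meter altitude reported in the lander's final telemetry.
print(f"{altitude_from_round_trip(1.28e-6):.1f} m")
```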

“As a result, the lander was unable to decelerate sufficiently to reach the required speed for the planned lunar landing,” ispace said. “Based on these circumstances, it is currently assumed that the lander likely performed a hard landing on the lunar surface.”

Controllers sent a command to reboot the lander in hopes of reestablishing communication, but the Resilience spacecraft remained silent.

“Given that there is currently no prospect of a successful lunar landing, our top priority is to swiftly analyze the telemetry data we have obtained thus far and work diligently to identify the cause,” Hakamada said in a statement. “We will strive to restore trust by providing a report of the findings to our shareholders, payload customers, Hakuto-R partners, government officials, and all supporters of ispace.”

Overcoming obstacles

The Hakuto name harkens back to ispace’s origin in 2010 as a contender for the Google Lunar X-Prize, a sweepstakes that offered a $20 million grand prize to the first privately funded team to put a lander on the Moon. Hakamada’s group was called Hakuto, which means “white rabbit” in Japanese. The prize shut down in 2018 without a winner, leading some of the teams to dissolve or find new purpose. Hakamada stayed the course, raised more funding, and rebooted the program under the name Hakuto-R.

It's a story of resilience, hence the name of ispace's second lunar lander. The mission made it closer to the Moon than ispace's first landing attempt in 2023, but Thursday's failure is a blow to Hakamada's project.

“As a fact, we tried twice and we haven’t been able to land on the Moon,” Hakamada said through an interpreter. “So we have to say it’s hard to land on the Moon, technically. We know it’s not easy. It’s not something that everyone can do. We know it’s hard, but the important point is it’s not impossible. The US private companies have succeeded in landing, and also JAXA in Japan has succeeded in landing, so it’s not impossible. So how do we overcome our hurdles?”

The Resilience lander and Tenacious rover, seen mounted near the top of the spacecraft, inside a test facility at the Tsukuba Space Center in Tsukuba, Ibaraki Prefecture, on Thursday, Sept. 12, 2024. Credit: Toru Hanai/Bloomberg via Getty Images

In April 2023, ispace’s first lander crashed on the Moon due to a similar altitude measurement problem. The spacecraft thought it was on the surface of the Moon, but was actually firing its engine to hover at an altitude of 3 miles (5 kilometers). The spacecraft ran out of fuel and went into a free fall before impacting the Moon.

Engineers blamed software as the most likely reason for the altitude-measurement problem. During descent, ispace’s lander passed over a 10,000-foot-tall (3,000-meter) cliff, and the spacecraft’s computer interpreted the sudden altitude change as erroneous.

Ujiie, who leads ispace’s technical teams, said the failure mode Thursday was “similar” to that of the first mission two years ago. But at least in ispace’s preliminary data reviews, engineers saw different behavior from the Resilience lander, which flew with a new type of laser rangefinder after ispace’s previous supplier stopped producing the device.

“From Mission 1 to Mission 2, we improved the software,” Ujiie said. “Also, we improved how to approach the landing site… We see different phenomena from Mission 1, so we have to do more analysis to give you any concrete answers.”

If ispace had landed smoothly on Thursday, the Resilience spacecraft would have deployed a small rover developed by ispace's European subsidiary. The rover was partially funded by the Luxembourg Space Agency with support from the European Space Agency. It carried a shovel to scoop up a small amount of lunar soil and a camera to take a photo of the sample. NASA had a contract with ispace to purchase the lunar soil in a symbolic proof of concept to show how the government might acquire material from commercial mining companies in the future.

The lander also carried a water electrolyzer experiment to demonstrate technologies that could split water molecules into hydrogen and oxygen, critical resources for a future Moon base. Other payloads aboard the Resilience spacecraft included cameras, a food production experiment, a radiation monitor, and a Swedish art project called “MoonHouse.”

The spacecraft chassis used for ispace's first two landing attempts was about the size of a compact car, with a mass of about 1 metric ton (2,200 pounds) when fully fueled. The company's third landing attempt is scheduled for 2027 with a larger lander. Next time, ispace will fly to the Moon through a partnership between the company's US subsidiary and Draper Laboratory, which has a contract with NASA to deliver experiments to the lunar surface.

Track record

The Resilience lander launched in January on top of a SpaceX Falcon 9 rocket, riding to space in tandem with a commercial Moon lander named Blue Ghost from Firefly Aerospace. Firefly’s lander took a more direct journey to the Moon and achieved a soft landing on March 2. Blue Ghost operated on the lunar surface for two weeks and completed all of its objectives.

The trajectory of ispace’s lander was slower, following a lower-energy, more fuel-efficient path to the Moon before entering lunar orbit last month. Once in orbit, the lander made a few more course corrections to line up with its landing site, then commenced its final descent on Thursday.

Thursday’s landing attempt was the seventh time a privately developed Moon lander tried to conduct a controlled touchdown on the lunar surface.

Two Texas-based companies have had the most success. One of them, Houston-based Intuitive Machines, landed its Odysseus spacecraft on the Moon in February 2024, marking the first time a commercial lander reached the lunar surface intact. But the lander tipped over after touchdown, cutting its mission short after achieving some limited objectives. A second Intuitive Machines lander reached the Moon in one piece in March of this year, but it also fell over and didn’t last as long as the company’s first mission.

Firefly’s Blue Ghost operated for two weeks after reaching the lunar surface, accomplishing all of its objectives and becoming the first fully successful privately owned spacecraft to land and operate on the Moon.

Intuitive Machines, Firefly, and a third company—Astrobotic Technology—have launched their lunar missions under contract with a NASA program aimed at fostering a commercial marketplace for transportation to the Moon. Astrobotic’s first lander failed soon after its departure from Earth. The first two missions launched by ispace were almost fully private ventures, with limited participation from the Japanese space agency, Luxembourg, and NASA.

The Earth looms over the Moon’s horizon in this image from lunar orbit captured on May 27, 2025, by ispace’s Resilience lander. Credit: ispace

Commercial travel to the Moon only began in 2019, so there's not much of a track record to judge the industry's prospects. When NASA started signing contracts for commercial lunar missions, the agency's science chief at the time, Thomas Zurbuchen, estimated the initial landing attempts would have a 50-50 chance of success. On the whole, NASA's experience with Intuitive Machines, Firefly, and Astrobotic isn't too far off from Zurbuchen's estimate, with one full success and a couple of partial successes.

The commercial track record worsens if you include private missions from ispace and Israel’s Beresheet lander.

But ispace and Hakamada haven’t given up on the dream. The company’s third mission will launch under the umbrella of the same NASA program that contracted with Intuitive Machines, Firefly, and Astrobotic. Hakamada cited the achievements of Firefly and Intuitive Machines as evidence that the commercial model for lunar missions is a valid one.

“The ones that have the landers, there are two companies I mentioned. Also, Blue Origin maybe coming up. Also, ispace is a possibility,” Hakamada said. “So, very few companies. We would like to catch up as soon as possible.”

It’s too early to know how the failure on Thursday might impact ispace’s next mission with Draper and NASA.

“I have to admit that we are behind,” said Jumpei Nozaki, director and chief financial officer at ispace. “But we do not really think we are behind from the leading group yet. It’s too early to decide that. The players in the world that can send landers to the Moon are very few, so we still have some competitive edge.”

“Honestly, there were some times I almost cried, but I need to lead this company, and I need to have a strong will to move forward, so it’s not time for me to cry,” Hakamada said.


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.


US science is being wrecked, and its leadership is fighting the last war


Facing an extreme budget, the National Academies hosted an event that ignored it.

WASHINGTON, DC—The general outline of the Trump administration’s proposed 2026 budget was released a few weeks back, and it included massive cuts for most agencies, including every one that funds scientific research. Late last week, those agencies began releasing details of what the cuts would mean for the actual projects and people they support. And the results are as bad as the initial budget had suggested: one-of-a-kind scientific experiment facilities and hardware retired, massive cuts in supported scientists, and entire areas of research halted.

And this comes in an environment where previously funded grants are being terminated, funding is being held up for ideological screening, and universities have been subjected to arbitrary funding freezes. Collectively, things are heading for damage to US science that will take decades to recover from. It’s a radical break from the trajectory science had been on.

That’s the environment that the US’s National Academies of Science found itself in yesterday while hosting the State of the Science event in Washington, DC. It was an obvious opportunity for the nation’s leading scientific organization to warn the nation of the consequences of the path that the current administration has been traveling. Instead, the event largely ignored the present to worry about a future that may never exist.

The proposed cuts

The top-line budget numbers proposed earlier indicated things would be bad: nearly 40 percent taken off the National Institutes of Health’s budget, the National Science Foundation down by over half. But now, many of the details of what those cuts mean are becoming apparent.

NASA’s budget includes sharp cuts for planetary science, which would be cut in half and then stay flat for the rest of the decade, with the Mars Sample Return mission canceled. All other science budgets, including Earth Science and Astrophysics, take similar hits; one astronomer posted a graphic showing how many present and future missions that would mean. Active missions that have returned unprecedented data, like Juno and New Horizons, would go, as would two Mars orbiters. As described by Science magazine’s news team, “The plans would also kill off nearly every major science mission the agency has not yet begun to build.”


A chart prepared by astronomer Laura Lopez showing just how many astrophysics missions will be cancelled. Credit: Laura Lopez

The National Science Foundation, which funds much of the US’s fundamental research, is also set for brutal cuts. Biology, engineering, and education will all be slashed by over 70 percent; computer science, math and physical science, and social and behavioral science will all see cuts of over 60 percent. International programs will take an 80 percent cut. The funding rate of grant proposals is expected to drop from 26 percent to just 7 percent, meaning the vast majority of grants submitted to the NSF will be a waste of time. The number of people involved in NSF-funded activities will drop from over 300,000 to just 90,000. Almost every program to broaden participation in science will be eliminated.

As for specifics, they’re equally grim. The fleet of research ships will essentially become someone else’s problem: “The FY 2026 Budget Request will enable partial support of some ships.” We’ve been able to better pin down the nature and location of gravitational wave events as detectors in Japan and Italy joined the original two LIGO detectors; the NSF will reverse that progress by shutting one of the LIGOs. The NSF’s contributions to detectors at the Large Hadron Collider will be cut by over half, and one of the two very large telescopes it was helping fund will be cancelled (say goodbye to the Thirty Meter Telescope). “Access to the telescopes at Kitt Peak and Cerro Tololo will be phased out,” and the NSF will transfer the facilities to other organizations.

The Department of Health and Human Services has been less detailed about the specific cuts its divisions will see, largely focusing on the overall numbers, which are down considerably. The NIH, which is facing a cut of over 40 percent, will be reorganized, with its 19 institutes pared down to just eight. This will result in some odd pairings, such as the dental and eye institutes ending up in the same place; genomics and biomedical imaging will likewise end up under the same roof. Other groups like the Centers for Disease Control and Prevention and the Food and Drug Administration will also face major cuts.

The problems extend well beyond the core science agencies. In the Department of Energy, funding for wind, solar, and renewable grid integration has been zeroed out, essentially ending all programs in this area. Hydrogen and fuel cells face a similar fate. Collectively, these programs received over $600 million in 2024’s budget. Other areas of science at the DOE, such as high-energy physics, fusion, and biology, receive relatively minor cuts that are largely in line with the ones faced by administration priorities like fossil and nuclear energy.

Will this happen?

It goes without saying that this would amount to an abandonment of US scientific leadership at a time when most estimates of China’s research spending show it approaching US-like levels of support. Not only would it eliminate many key facilities, instruments, and institutions that have helped make the US a scientific powerhouse, but it would also block the development of newer and additional ones. The harms are so widespread that even topics that the administration claims are priorities would see severe cuts.

And the damage is likely to last for generations, as support is cut at every stage of the educational pipeline that prepares people for STEM careers. That includes careers in high-tech industries, which may end up relocating work overseas due to a combination of staffing shortfalls and heightened immigration controls.

That said, we’ve been here before in the first Trump administration, when budgets were proposed with potentially catastrophic implications for US science. But Congress limited the damage and maintained reasonably consistent budgets for most agencies.

Can we expect that to happen again? So far, the signs are not especially promising. The House has largely adopted the Trump administration’s budget priorities, despite the fact that the budget it is advancing turns its back on decades of professed concern about deficit spending. While the Senate has yet to take up the budget, it has also been very pliant during the second Trump administration, approving grossly unqualified cabinet picks such as Robert F. Kennedy Jr.

All of which would seem to call for the leadership of US science organizations to press the case for the importance of science funding to the US and highlight the damage that these cuts would cause. But, if yesterday’s National Academies event is anything to judge by, the leadership is not especially interested.

Altered states

As the nation’s premier science organization, and one that performs lots of analyses for the government, the National Academies would seem to be in a position to have its concerns taken seriously by members of Congress. And, given that the present and future of science in the US is being set by policy choices, a meeting entitled the State of the Science would seem like the obvious place to address those concerns.

If so, it was not obvious to Marcia McNutt, the president of the NAS, who gave the presentation. She made some oblique references to current problems, saying, “We are embarking on a radical new experiment in what conditions promote science leadership, with the US being the treatment group, and China as the control,” and acknowledged that “uncertainties over the science budgets for next year, coupled with cancellations of billions of dollars of already hard-won research grants, is causing an exodus of researchers.”

But her primary focus was on the trends that shaped science funding and policy in the years leading up to, but not including, the second Trump administration. McNutt suggested this was needed to look beyond the next four years. However, that ignores the obvious fact that US science will be fundamentally different if the Trump administration follows through on its plans and policies; the trends of the last two decades will be rendered irrelevant.

She was also remarkably selective about her avoidance of discussing Trump administration priorities. After noting that faculty surveys have suggested they spend roughly 40 percent of their time handling regulatory requirements, she twice mentioned that the administration’s anti-regulatory stance could be a net positive here (once calling it “an opportunity to help”). Yet she neglected to note that many of the abandoned regulations represent a retreat from science-driven policy.

McNutt also acknowledged the problem of science losing the bipartisan support it has enjoyed, as trust in scientists among US conservatives has been on a downward trend. But she suggested it was scientists’ responsibility to fix the problem, even though it’s largely the product of one party deciding it can gain partisan advantage by raising doubts about scientific findings in fields like climate change and vaccine safety.

The panel discussion that came after largely followed McNutt’s lead in avoiding any mention of the current threats to science. The lone exception was Heather Wilson, president of the University of Texas at El Paso and a former Republican member of the House of Representatives and secretary of the Air Force during the first Trump administration. Wilson took direct aim at Trump’s cuts to funding for underrepresented groups, arguing, “Talent is evenly distributed, but opportunity is not.” After arguing that “the moral authority of science depends on the pursuit of truth,” she highlighted the cancellation of grants that had been used to study diseases that are more prevalent in some ethnic groups, saying “that’s not woke science—that’s genetics.”

Wilson was clearly the exception, however, as the rest of the panel largely avoided direct mention of either the damage already done to US science funding or the impending catastrophe on the horizon. We’ve asked the National Academies’ leadership a number of questions about how it perceives its role at a time when US science is clearly under threat. As of this article’s publication, however, we have not received a response.

At yesterday’s event, only one person showed a clear sense of what that role should be: Wilson again, whose strongest words were directed at the National Academies themselves, which she said should “do what you’ve done since Lincoln was president,” and stand up for the truth.

Photo of John Timmer

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

US science is being wrecked, and its leadership is fighting the last war Read More »

science-phds-face-a-challenging-and-uncertain-future

Science PhDs face a challenging and uncertain future


Smaller post-grad classes are likely due to research budget cuts.

Credit: Thomas Barwick/Stone via Getty Images

Since the National Science Foundation first started collecting postgraduation data nearly 70 years ago, the number of PhDs awarded in the United States has consistently risen. Last year, more than 45,000 students earned doctorates in science and engineering, about an eight-fold increase compared to 1958.

But this level of production of science and engineering PhD students is now in question. Facing significant cuts to federal science funding, some universities have reduced or paused their PhD admissions for the upcoming academic year. In response, experts are beginning to wonder about the short- and long-term effects those shifts will have on the number of doctorates awarded, and the consequent impact on science if PhD production does drop.

Such questions touch on longstanding debates about academic labor. PhD training is a crucial part of nurturing scientific expertise. At the same time, some analysts have worried about an oversupply of PhDs in some fields, while students have suggested that universities are exploiting them as low-cost labor.

Many budding scientists go into graduate school with the goal of staying in academia and ultimately establishing their own labs. For at least 30 years, there has been talk of a mismatch between the number of doctorates and the limited academic job openings. According to an analysis conducted in 2013, only 3,000 faculty positions in science and engineering are added each year—even though more than 35,000 PhDs are produced in these fields annually.

Decades of this asymmetrical dynamic have created a hypercompetitive and high-pressure environment in the academic world, said Siddhartha Roy, an environmental engineer at Rutgers University who co-authored a recent study on tenure-track positions in engineering. “If we look strictly at academic positions, we have a huge oversupply, and it’s not sustainable,” he said.

But while the academic job market remains challenging, experts point out that PhD training also prepares individuals for career paths in industry, government, and other science and technology fields. If fewer doctorates are awarded and funding continues to be cut, some argue, American science will weaken.

“The immediate impact is there’s going to be less science,” said Donna Ginther, a social researcher who studies scientific labor markets at the University of Kansas. In the long run, that could mean scientific innovations, such as new drugs or technological advances, will stall, she said: “We’re leaving that scientific discovery on the table.”

Historically, one of the main goals of training PhD students has been to retain those scientists as future researchers in their respective fields. “Academia has a tendency to want to produce itself, reproduce itself,” said Ginther. “Our training is geared towards creating lots of mini-mes.”

But it is no secret in the academic world that tenure-track faculty positions are scarce, and the road to obtaining tenure is difficult. Although it varies across different STEM fields, the number of doctorates granted each year consistently surpasses the number of tenure-track positions available. A survey gathering data from the 2022-2023 academic year, conducted by the Computing Research Association, found that around 11 percent of PhD graduates in computational science (for which employment data was reported) moved on to tenure-track faculty positions.

Roy found a similar figure for engineering: Around one out of every eight individuals who obtain their doctorate—12.5 percent—will eventually land a tenure-track faculty position, a trend that remained stable between 2014 and 2021, the last year for which his team analyzed data. The bottleneck in faculty positions, according to one recent study, leads about 40 percent of postdoctoral researchers to leave academia.

However, in recent years, researchers who advise graduate students have begun to acknowledge careers beyond academia, including positions in industry, nonprofits, government, consulting, science communication, and policy. “We, as academics, need to take a broader perspective on what and how we prepare our students,” said Ginther.

As opposed to faculty positions, some of these labor markets can be more robust and provide plenty of opportunities for those with a doctorate, said Daniel Larremore, a computer scientist at the University of Colorado Boulder who studies academic labor markets, among other topics. Whether there is a mismatch between the number of PhDs and employment opportunities will depend on the subject of study and which fields are growing or shrinking, he added. For example, he pointed out that there is currently a boom in machine learning and artificial intelligence, so there is a lot of demand from industry for computer science graduates. In fact, commitments to industry jobs after graduation seem to be at a 30-year high.

But not all newly minted PhDs immediately find work. According to the latest NSF data, students in biological and biomedical sciences experienced a decline in job offers in the past 20 years, with 68 percent having definite commitments after graduating in 2023, compared to 72 percent in 2003. “The dynamics in the labor market for PhDs depends very much on what subject the PhD is in,” said Larremore.

Still, employment rates reflect that postgraduates benefit from greater opportunities compared to the general population. In 2024, the unemployment rate for college graduates with a doctoral degree in the US was 1.2 percent, less than half the national average at the time, according to the Bureau of Labor Statistics. In NSF’s recent survey, 74 percent of graduating science and engineering doctorates had definite commitments for employment or postdoctoral study or training positions, three points higher than in 2003.

“Overproducing for the number of academic jobs available? Absolutely,” said Larremore. “But overproducing for the economy in general? I don’t think so.”

The experts who spoke with Undark described science PhDs as a benefit for society: Ultimately, scientists with PhDs contribute to the economy of a nation, be it through academia or alternative careers. Many are now concerned about the impact that cuts to scientific research may have on that contribution. Already, there are reports of universities scaling back graduate student admissions in light of funding uncertainties, worried that they might not be able to cover students’ education and training costs. Those changes could result in smaller graduating classes in future years.

Smaller classes of PhD students might not be a bad thing for academia, given the limited faculty positions, said Roy. And for most non-academic jobs, Roy said, a master’s degree is more than sufficient. However, people with doctorates do contribute to other sectors like industry, government labs, and entrepreneurship, he added.

In Ginther’s view, fewer scientists with doctoral training could deal a devastating blow to the broader scientific enterprise. “Science is a long game, and the discoveries now take a decade or two to really hit the market, so it’s going to impinge on future economic growth.”

These long-term impacts of reductions in funding might be hard to reverse and could lead to the withering of the scientific endeavor in the United States, Larremore said: “If you have a thriving ecosystem and you suddenly halve the sunlight coming into it, it simply cannot thrive in the way that it was.”

This article was originally published on Undark. Read the original article.

Science PhDs face a challenging and uncertain future Read More »

some-parts-of-trump’s-proposed-budget-for-nasa-are-literally-draconian

Some parts of Trump’s proposed budget for NASA are literally draconian


“That’s exactly the kind of thing that NASA should be concentrating its resources on.”

Artist’s illustration of the DRACO nuclear rocket engine in space. Credit: Lockheed Martin

New details of the Trump administration’s plans for NASA, released Friday, revealed the White House’s desire to end the development of an experimental nuclear thermal rocket engine that could have shown a new way of exploring the Solar System.

Trump’s NASA budget request is rife with spending cuts. Overall, the White House proposes reducing NASA’s budget by about 24 percent, from $24.8 billion this year to $18.8 billion in fiscal year 2026. In previous stories, Ars has covered many of the programs impacted by the proposed cuts, which would cancel the Space Launch System rocket and Orion spacecraft and terminate numerous robotic science missions, including the Mars Sample Return, probes to Venus, and future space telescopes.

Instead, the leftover funding for NASA’s human exploration program would go toward supporting commercial projects to land on the Moon and Mars.

NASA’s initiatives to pioneer next-generation space technologies are also hit hard in the White House’s budget proposal. If the Trump administration gets its way, NASA’s Space Technology Mission Directorate, or STMD, will see its budget cut nearly in half, from $1.1 billion to $568 million.

Trump’s budget request isn’t final. Both Republican-controlled houses of Congress will write their own versions of the NASA budget, which must be reconciled before going to the White House for President Trump’s signature.

“The budget reduces Space Technology by approximately half, including eliminating failing space propulsion projects,” the White House wrote in an initial overview of the NASA budget request released May 2. “The reductions also scale back or eliminate technology projects that are not needed by NASA or are better suited to private sector research and development.”

Breathing fire

Last week, the White House and NASA put a finer point on these “failing space propulsion projects.”

“This budget provides no funding for Nuclear Thermal Propulsion and Nuclear Electric Propulsion projects,” officials wrote in a technical supplement released Friday detailing Trump’s NASA budget proposal. “These efforts are costly investments, would take many years to develop, and have not been identified as the propulsion mode for deep space missions. The nuclear propulsion projects are terminated to achieve cost savings and because there are other nearer-term propulsion alternatives for Mars transit.”

Foremost among these cuts, the White House proposes to end NASA’s participation in the Demonstration Rocket for Agile Cislunar Operations (DRACO) project. NASA said this proposal “reflects the decision by our partner to cancel” the DRACO mission, which would have demonstrated a nuclear thermal rocket engine in space for the first time.

NASA’s partner on the DRACO mission was the Defense Advanced Research Projects Agency, or DARPA, the Pentagon’s research and development arm. A DARPA spokesperson confirmed the agency was closing out the project.

“DARPA has completed the agency’s involvement in the Demonstration Rocket for Agile Cislunar Orbit (DRACO) program and is transitioning its knowledge to our DRACO mission partner, the National Aeronautics and Space Administration (NASA), and to other potential DOD programs,” the spokesperson said in a response to written questions.

A nuclear rocket engine, which was to be part of NASA’s aborted NERVA program, is tested at Jackass Flats, Nevada, in 1967. Credit: Corbis via Getty Images

Less than two years ago, NASA and DARPA announced plans to move forward with the roughly $500 million DRACO project, targeting a launch into Earth orbit aboard a traditional chemical rocket in 2027. “With the help of this new technology, astronauts could journey to and from deep space faster than ever, a major capability to prepare for crewed missions to Mars,” former NASA administrator Bill Nelson said at the time.

The DRACO mission would have consisted of several elements, including a nuclear reactor to rapidly heat super-cold liquid hydrogen fuel stored in an insulated tank onboard the spacecraft. Temperatures inside the engine would reach nearly 5,000° Fahrenheit, flash-heating the liquid hydrogen into a gas and driving it through a nozzle to generate thrust. From the outside, the spacecraft’s design looks a lot like the upper stage of a traditional rocket. In theory, however, a nuclear thermal rocket engine like DRACO’s would offer twice the efficiency of the highest-performing conventional rocket engines. That translates to significantly less propellant that a mission to Mars would have to carry across the Solar System.

Essentially, a nuclear thermal rocket engine combines the high-thrust capability of a chemical engine with some of the fuel efficiency benefits of low-thrust solar-electric engines. With DRACO, engineers sought hard data to verify their understanding of nuclear propulsion and wanted to make sure the nuclear engine’s challenging design actually worked. DRACO would have used high-assay low-enriched uranium to power its nuclear reactor.
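To make that efficiency claim concrete, here is a minimal back-of-the-envelope sketch using the Tsiolkovsky rocket equation. The numbers are illustrative assumptions, not DRACO specifications: roughly 450 seconds of specific impulse for a top-tier chemical engine, roughly 900 seconds for a nuclear thermal engine, and a 4 km/s burn for departing low-Earth orbit toward Mars.

```python
# Back-of-the-envelope sketch: how doubling specific impulse (Isp) shrinks the
# propellant load for a fixed delta-v, via the Tsiolkovsky rocket equation.
# The Isp and delta-v values below are assumed, round numbers for illustration.
import math

G0 = 9.80665  # standard gravity, m/s^2


def propellant_fraction(delta_v: float, isp: float) -> float:
    """Fraction of the departing vehicle's mass that must be propellant."""
    return 1.0 - math.exp(-delta_v / (isp * G0))


DELTA_V = 4000.0  # m/s, ballpark for a trans-Mars injection burn from low-Earth orbit

for label, isp in [("chemical (~450 s)", 450.0), ("nuclear thermal (~900 s)", 900.0)]:
    print(f"{label}: {propellant_fraction(DELTA_V, isp):.0%} of departure mass is propellant")
```

Under those assumptions, the propellant share of the departing vehicle’s mass drops from roughly 60 percent to roughly 36 percent, which is the kind of saving that makes nuclear thermal propulsion attractive for Mars transits.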

Nuclear electric propulsion uses an onboard nuclear reactor to power plasma thrusters that create thrust by accelerating an ionized gas, like xenon, through a magnetic field. Nuclear electric propulsion would provide another leap in engine efficiency beyond the capabilities of a system like DRACO and may ultimately offer the most attractive option for enduring deep space transportation.

NASA led the development of DRACO’s nuclear rocket engine, while DARPA was responsible for the overall spacecraft design, operations, and the thorny problem of securing regulatory approval to launch a nuclear reactor into orbit. The reactor on DRACO would have launched in “cold” mode before activating in space, reducing the risk to people on the ground in the event of a launch accident. The Space Force agreed to pay for DRACO’s launch on a United Launch Alliance Vulcan rocket.

DARPA and NASA selected Lockheed Martin as the lead contractor for the DRACO spacecraft in 2023. BWX Technologies, a leader in the US nuclear industry, won the contract to develop the mission’s reactor.

“We received the notice from DARPA that it ended the DRACO program,” a Lockheed Martin spokesperson said. “While we’re disappointed with the decision, it doesn’t change our vision of how nuclear power influences how we will explore and operate in the vastness of space.”

Mired in the lab

More than 60 years have passed since a US-built nuclear reactor launched into orbit. Aviation Week reported in January that one problem facing DRACO engineers involved questions about how to safely test the nuclear thermal engine on the ground while adhering to nuclear safety protocols.

“We’re bringing two things together—space mission assurance and nuclear safety—and there’s a fair amount of complexity,” said Matthew Sambora, a DRACO program manager at DARPA, in an interview with Aviation Week. At the time, DARPA and NASA had already given up on a 2027 launch to concentrate on developing a prototype engine using helium as a propellant before moving on to an operational engine with more energetic liquid hydrogen fuel, Aviation Week reported.

Greg Meholic, an engineer at the Aerospace Corporation, highlighted the shortfall in ground testing capability in a presentation last year. Nuclear thermal propulsion testing “requires that engine exhaust be scrubbed of radiologics before being released,” he wrote. This requirement “could result in substantially large, prohibitively expensive facilities that take years to build and qualify.”

These safety protocols weren’t as stringent when NASA and the Air Force first pursued nuclear propulsion in the 1960s. Now, the first serious 21st-century effort to fly a nuclear rocket engine in space is grinding to a halt.

“Given that our near-term human exploration and science needs do not require nuclear propulsion, current demonstration projects will end,” wrote Janet Petro, NASA’s acting administrator, in a letter accompanying the Trump administration’s budget release last week.

This figure illustrates the major elements of a typical nuclear thermal rocket engine. Credit: NASA/Glenn Research Center

NASA’s 2024 budget allocated $117 million for nuclear propulsion work, an increase from $91 million the previous year. Congress added more funding for NASA’s nuclear propulsion programs over the Biden administration’s proposed budget in recent years, signaling support on Capitol Hill that may save at least some nuclear propulsion initiatives next year.

It’s true that nuclear propulsion isn’t required for any NASA missions currently on the books. Today’s rockets are good at hurling cargo and people off planet Earth, but once a spacecraft arrives in orbit, there are several ways to propel it toward more distant destinations.

NASA’s existing architecture for sending astronauts to the Moon uses the SLS rocket and Orion spacecraft, both of which are proposed for cancellation and look a lot like the vehicles NASA used to fly astronauts to the Moon more than 50 years ago. SpaceX’s reusable Starship, designed with an eye toward settling Mars, uses conventional chemical propulsion, with methane and liquid oxygen propellants that SpaceX one day hopes to generate on the surface of the Red Planet.

So NASA, SpaceX, and other companies don’t need nuclear propulsion to beat China back to the Moon or put the first human footprints on Mars. But there’s a broad consensus that in the long run, nuclear rockets offer a better way of moving around the Solar System.

The military’s motive for funding nuclear thermal propulsion was its potential for becoming a more efficient means of maneuvering around the Earth. Many of the military’s most important spacecraft are limited by fuel, and the Space Force is investigating orbital refueling and novel propulsion methods to extend the lifespan of satellites.

NASA’s nuclear power program is not finished. The Trump administration’s budget proposal calls for continued funding for the agency’s fission surface power program, with the goal of fielding a nuclear reactor that could power a base on the surface of the Moon or Mars. Lockheed and BWXT, the contractors involved in the DRACO mission, are part of the fission surface power program.

There is some funding in the White House’s budget request for tech demos using other methods of in-space propulsion. NASA would continue funding experiments in long-term storage and transfer of cryogenic propellants like liquid methane, liquid hydrogen, and liquid oxygen. These joint projects between NASA and industry could pave the way for orbital refueling and orbiting propellant depots, aligning with the direction of companies like SpaceX, Blue Origin, and United Launch Alliance.

But many scientists and engineers believe nuclear propulsion offers the only realistic path for a sustainable campaign ferrying people between the Earth and Mars. A report commissioned by NASA and the National Academies concluded in 2021 that an aggressive tech-development program could advance nuclear thermal propulsion enough for a human expedition to Mars in 2039. The prospects for nuclear electric propulsion were murkier.

This would have required NASA to substantially increase its budget for nuclear propulsion immediately, likely by an order of magnitude beyond the agency’s baseline funding level, or to an amount exceeding $1 billion per year, said Bobby Braun, co-chair of the National Academies report, in a 2021 interview with Ars. That didn’t happen.

Going nuclear

The interplanetary transportation architectures envisioned by NASA and SpaceX will, at least initially, primarily use chemical propulsion for the cruise between Earth and Mars.

Kurt Polzin, chief engineer of NASA’s space nuclear propulsion projects, said significant technical hurdles stand in the way of any propulsion system selected to power heavy cargo and humans to Mars.

“Anybody who says that they’ve solved the problem, you don’t know that because you don’t have enough data,” Polzin said last week at the Humans to the Moon and Mars Summit in Washington.

“We know that to do a Mars mission with a Starship, you need lots of refuelings at Earth, you need lots of refuelings at Mars, which you have to send in advance,” Polzin said. “You either need to send that propellant in advance or send a bunch of material and hardware to the surface to be set up and robotically make your propellant in situ while you’re there.”

Elon Musk’s SpaceX is betting on chemical propulsion for round-trip flights to Mars with its Starship rocket. This will require assembly of propellant-generation plants on the Martian surface. Credit: SpaceX

Last week, SpaceX founder Elon Musk outlined how the company plans to land its first Starships on Mars. His roadmap includes more than 100 cargo flights to deliver equipment to produce methane and liquid oxygen propellants on the surface of Mars. This is necessary for any Starship to launch off the Red Planet and return to Earth.

“You can start to see that this starts to become a Rube Goldberg way to do Mars,” Polzin said. “Will I say it can’t work? No, but will I say that it’s really, really difficult and challenging. Are there a lot of miracles to make it work? Absolutely. So the notion that SpaceX has solved Mars or is going to do Mars with Starship, I would challenge that on its face. I don’t think the analysis and the data bear that out.”

Engineers know how methane-fueled rocket engines perform in space. Scientists have created liquid oxygen and liquid methane since the late 1800s. Scaling up a propellant plant on Mars to produce thousands of tons of cryogenic liquids is another matter. In the long run, this might be a suitable solution for Musk’s vision of creating a city on Mars, but it comes with immense startup costs and risks. Still, nuclear propulsion is an entirely untested technology as well.

“The thing with nuclear is there are challenges to making it work, too,” Polzin said. “However, all of my challenges get solved here at Earth and in low-Earth orbit before I leave. Nuclear is nice. It has a higher specific impulse, especially when we’re talking about nuclear thermal propulsion. It has high thrust, which means it will get our astronauts there and back quickly, but I can carry all the fuel I need to get back with me, so I don’t need to do any complicated refueling at Mars. I can return without having to make propellant or send any pre-positioned propellant to get back.”

The tug of war over nuclear propulsion is nothing new. The Air Force started a program to develop reactors for nuclear thermal rockets at the height of the Cold War. NASA took over the Air Force’s role a few years later, and the project proceeded into the next phase, called the Nuclear Engine for Rocket Vehicle Application (NERVA). President Richard Nixon ultimately canceled the NERVA project in 1973 after the government had spent $1.4 billion on it, equivalent to about $10 billion in today’s dollars. Despite nearly two decades of work, NERVA never flew in space.

Doing the hard things

The Pentagon and NASA studied several more nuclear thermal and nuclear electric propulsion initiatives before DRACO. Today, there’s a nascent commercial business case for compact nuclear reactors beyond just the government. But there’s scant commercial interest in mounting a full-scale nuclear propulsion demonstration solely with private funding.

Fred Kennedy, co-founder and CEO of a space nuclear power company called Dark Fission, said most venture capital investors lack the appetite to wait the 15 or 20 years it might take for nuclear propulsion to deliver financial returns.

“It’s a truism: Space is hard,” said Kennedy, a former DARPA program manager. “Nuclear turns out to be hard for reasons we can all understand. So space-nuclear is hard-squared, folks. As a result, you give this to your average associate at a VC firm and they get scared quick. They see the moles all over your face, and they run away screaming.”

But commercial launch costs are coming down. With sustained government investment and streamlined regulations, “this is the best chance we’ve had in a long time” to get a nuclear propulsion system into space, Kennedy said.

Technicians prepare a nozzle for a prototype nuclear thermal rocket engine in 1964. Credit: NASA

“I think, right now, we’re in this transitional period where companies like mine are going to have to rely on some government largesse, as well as hopefully both commercial partnerships and honest private investment,” Kennedy said. “Three years ago, I would have told you I thought I could have done the whole thing with private investment, but three years have turned my hair white.”

Those who share Kennedy’s view thought they were getting an ally in the Trump administration. Jared Isaacman, the billionaire commercial astronaut Trump nominated to become the next NASA administrator, promised to prioritize nuclear propulsion in his tenure as head of the nation’s space agency.

During his Senate confirmation hearing in April, Isaacman said NASA should turn over management of heavy-lift rockets, human-rated spacecraft, and other projects to commercial industry. This change, he said, would allow NASA to focus on the “near-impossible challenges that no company, organization, or agency anywhere in the world would be able to undertake.”

The example Isaacman gave in his confirmation hearing was nuclear propulsion. “That’s something that no company would ever embark upon,” he told lawmakers. “There is no obvious economic return. There are regulatory challenges. That’s exactly the kind of thing that NASA should be concentrating its resources on.”

But the White House suddenly announced on Saturday that it was withdrawing Isaacman’s nomination days before the Senate was expected to confirm him for the NASA post. While there’s no indication that Trump’s withdrawal of Isaacman had anything to do with any specific part of the White House’s funding plan, his removal leaves NASA without an advocate for nuclear propulsion and a number of other projects falling under the White House’s budget ax.

Photo of Stephen Clark

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

Some parts of Trump’s proposed budget for NASA are literally draconian Read More »