

These VA Tech scientists are building a better fog harp

Standard fog-harvesting technologies rely on chemical coatings; Boreyko’s team is taking a different approach. “We’re trying to use clever geometric designs in place of chemistry,” Boreyko told Ars. “When I first came into this field, virtually everyone was using nets, but they were just trying to make more and more clever chemical coatings to put on the nets to try to reduce the clogging. We found that simply going from a net to a harp, with no chemicals or coatings whatsoever—just the change in geometry solved the clogging problem much better.”


Jimmy Kaindu inspects a new collecting prototype beside the original fog harp. Credit: Alex Parrish for Virginia Tech

For their scale prototypes in the lab, Boreyko’s team 3D printed their harp “strings” out of a weakly hydrophobic plastic. “But in general, the harp works fantastic with uncoated stainless steel wires and definitely doesn’t require any kind of fancy coating,” said Boreyko. And the hybrid harp can be scaled up with relative ease, just like classic nets. It just means stringing together a bunch of harps of smaller heights, meter by meter, to get the desired size. “There is no limit to how big this thing could be,” he said.

Scaling up the model is the next obvious step, along with testing larger prototypes outdoors. Boreyko would also like to test an electric version of the hybrid fog harp. “If you apply a voltage, it turns out you can catch even more water,” he said. “Because our hybrid’s non-clogging, you can have the best of both worlds: using an electric field to boost the harvesting amount in real-life systems and at the same time preventing clogging.”

While the hybrid fog harp is well-suited for harvesting water in any coastal region that receives a lot of fog, Boreyko also envisions other, less obvious potential applications for high-efficiency fog harvesters, such as clearing hazardous fog from roadways, highways, or airport landing strips. “There’s even industrial chemical supply manufacturers creating things like pressurized nitrogen gas,” he said. “The process cools the surrounding air into an ice fog that can drift across the street and wreak havoc on city blocks.”

Journal of Materials Chemistry A, 2025. DOI: 10.1039/d5ta02686e  (About DOIs).



Rocket Report: New delay for Europe’s reusable rocket; SpaceX moves in at SLC-37


Canada is the only G7 nation without a launch program. Quebec wants to do something about that.

This graphic illustrates the elliptical shape of a geosynchronous transfer orbit in green, and the circular shape of a geosynchronous orbit in blue. In a first, SpaceX recently de-orbited a Falcon 9 upper stage from GTO after deploying a communications satellite. Credit: European Space Agency

Welcome to Edition 7.48 of the Rocket Report! The shock of last week’s public spat between President Donald Trump and SpaceX founder Elon Musk has worn off, and Musk expressed regret for some of his comments going after Trump on social media. Musk also backtracked from his threat to begin decommissioning the Dragon spacecraft, currently the only way for the US government to send people to the International Space Station. Nevertheless, there are many people who think Musk’s attachment to Trump could end up putting the US space program at risk, and I’m not convinced that danger has passed.

As always, we welcome reader submissions. If you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets, as well as a quick look ahead at the next three launches on the calendar.

Quebec invests in small launch company. The government of Quebec will invest CA$10 million ($7.3 million) into a Montreal-area company that is developing a system to launch small satellites into space, The Canadian Press reports. Quebec Premier François Legault announced the investment into Reaction Dynamics at the company’s facility in Longueuil, a Montreal suburb. The province’s economy minister, Christine Fréchette, said the investment will allow the company to begin launching microsatellites into orbit from Canada as early as 2027.

Joining its peers … Canada is the only G7 nation without a domestic satellite launch capability, whether it’s through an independent national or commercial program or through membership in the European Space Agency, which funds its own rockets. The Canadian Space Agency has long eschewed any significant spending on developing a Canadian satellite launcher, and a handful of commercial launch startups in Canada haven’t gotten very far. Reaction Dynamics was founded in 2017 by Bachar Elzein, formerly a researcher in multiphase and reactive flows at École Polytechnique de Montréal, where he specialized in propulsion and combustion dynamics. Reaction Dynamics plans to launch its first suborbital rocket later this year, before attempting an orbital flight with its Aurora rocket as soon as 2027. (submitted by Joey S-IVB)


Another year, another delay for Themis. The European Space Agency’s Themis program has suffered another setback, with the inaugural flight of its reusable booster demonstrator now all but certain to slip to 2026, European Spaceflight reports. It has been nearly six years since the European Space Agency kicked off the Themis program to develop and mature key technologies for future reusable rocket stages. Themis is analogous to SpaceX’s Grasshopper reusable rocket prototype tested more than a decade ago, with progressively higher hop tests to demonstrate vertical takeoff and vertical landing techniques. When the program started, an initial hop test of the first Themis demonstrator was expected to take place in 2022.

Tethered to terra firma … ArianeGroup, which manufactures Europe’s Ariane rockets, is leading the Themis program under contract to ESA, which recently committed an additional 230 million euros ($266 million) to the effort. This money is slated to go toward development of a single-engine variant of the Themis vehicle, continued development of the rocket’s methane-fueled engine, and upgrades to a test stand at ArianeGroup’s propulsion facility in Vernon, France. Two months ago, an official update on the Themis program suggested the first Themis launch campaign would begin before the end of the year. Citing sources close to the program, European Spaceflight reports the first Themis integration tests at the Esrange Space Center in Sweden are now almost certain to slip from late 2025 to 2026.

French startup tests a novel rocket engine. While Europe’s large government-backed rocket initiatives face delays, the continent’s space industry startups are moving forward on their own. One of these companies, a French startup named Alpha Impulsion, recently completed a short test-firing of an autophage rocket engine, European Spaceflight reports. These aren’t your normal rocket engines that burn conventional kerosene, methane, or hydrogen fuel. An autophage engine literally consumes itself as it burns, using heat from the combustion process to melt its plastic fuselage and feed the molten plastic into the combustion chamber in a controlled manner. Alpha Impulsion called the May 27 ground firing a successful test of the “largest autophage rocket engine in the world.”

So, why hasn’t this been done before? … The concept of a self-consuming rocket engine sounds like an idea that’s so crazy it just might work. But the idea remained conceptual from when it was first patented in 1938 until an autophage engine was fired in a controlled manner for the first time in 2018. The autophage design offers several advantages, including its relative simplicity compared to the complex plumbing of liquid and hybrid rockets. But there are serious challenges associated with autophage engines, including how to feed molten fuel into the combustion chamber and how to scale it up to be large enough to fly on a viable rocket. (submitted by trimeta and EllPeaTea)

Rocket trouble delays launch of private crew mission. A propellant leak in a Falcon 9 booster delayed the launch of a fourth Axiom Space private astronaut mission to the International Space Station this week, Space News reports. SpaceX announced the delay Tuesday, saying it needed more time to fix a liquid oxygen leak found in the Falcon 9 booster during inspections following a static-fire test Sunday. “Once complete–and pending Range availability–we will share a new launch date,” the company stated. The Ax-4 mission will ferry four commercial astronauts, led by retired NASA astronaut Peggy Whitson, aboard a Dragon spacecraft to the ISS for an approximately 14-day stay. Whitson will be joined by crewmates from India, Poland, and Hungary.

Another problem, too … While SpaceX engineers worked on resolving the propellant leak on the ground, a leak of another kind in orbit forced officials to order a longer delay to the Ax-4 mission. In a statement Thursday, NASA said it is working with the Russian space agency to understand a “new pressure signature” in the space station’s Russian service module. For several years, ground teams have monitored a slow air leak in the aft part of the service module, and NASA officials have identified it as a safety risk. NASA’s statement on the matter was vague, only saying that cosmonauts on the station recently inspected the module’s interior surfaces and sealed additional “areas of interest.” The segment is now holding pressure, according to NASA. (submitted by EllPeaTea)

SpaceX tries something new with Falcon 9. With nearly 500 launches under its belt, SpaceX’s Falcon 9 rocket isn’t often up to new tricks. But the company tried something new following a launch June 7 with a radio broadcasting satellite for SiriusXM. The Falcon 9’s upper stage placed the SXM-10 satellite into an elongated, high-altitude transfer orbit, as is typical for payloads destined to operate in geosynchronous orbit more than 22,000 miles (nearly 36,000 kilometers) over the equator. When a rocket releases a satellite in this type of high-energy orbit, the upper stage has usually burned almost all of its propellant, leaving little fuel left over to steer itself back into Earth’s atmosphere for a destructive reentry. This means these upper stages often remain in space for decades, becoming a piece of space junk transiting across the orbits of many other satellites.
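
For a rough sense of what "high-energy" means here, the textbook vis-viva equation is enough for a back-of-the-envelope sketch (the 200 km perigee below is an assumed typical value, not a figure SpaceX has published):

```python
# Vis-viva: v = sqrt(mu * (2/r - 1/a)) is the orbital speed at radius r
# for an orbit with semi-major axis a. Illustrative numbers only.
import math

MU = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_378_000.0    # Earth's equatorial radius, m

def speed(r, a):
    return math.sqrt(MU * (2 / r - 1 / a))

# Assumed GTO: 200 km perigee, 35,786 km (geosynchronous-altitude) apogee
r_p, r_a = R_EARTH + 200e3, R_EARTH + 35_786e3
a_gto = (r_p + r_a) / 2

print(f"GTO perigee speed:   {speed(r_p, a_gto) / 1000:.2f} km/s")  # ~10.24
print(f"GTO apogee speed:    {speed(r_a, a_gto) / 1000:.2f} km/s")  # ~1.60
print(f"200 km circular LEO: {speed(r_p, r_p) / 1000:.2f} km/s")    # ~7.78

# The stage passes perigee about 2.5 km/s faster than it would in LEO, and a
# deorbit burn must be planned around an orbital period of roughly 10.5 hours.
period_h = 2 * math.pi * math.sqrt(a_gto ** 3 / MU) / 3600
print(f"GTO period: {period_h:.1f} hours")
```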

Now, a solution … SpaceX usually deorbits rockets after they deploy payloads like Starlink satellites into low-Earth orbit, but deorbiting a rocket from a much higher geosynchronous transfer orbit is a different matter. “Last week, SpaceX successfully completed a controlled deorbit of the SiriusXM-10 upper stage after GTO payload deployment,” wrote Jon Edwards, SpaceX’s vice president of Falcon and Dragon programs. “While we routinely do controlled deorbits for LEO stages (e.g., Starlink), deorbiting from GTO is extremely difficult due to the high energy needed to alter the orbit, making this a rare and remarkable first for us. This was only made possible due to the hard work and brilliance of the Falcon GNC (guidance, navigation, and control) team and exemplifies SpaceX’s commitment to leading in both space exploration and public safety.”

New Glenn gets a tentative launch date. Five months have passed since Blue Origin’s New Glenn rocket made its mostly successful debut in January. At one point the company targeted “late spring” for the second launch of the rocket. However, on Monday, Blue Origin’s CEO, Dave Limp, acknowledged on social media that the rocket’s next flight will now no longer take place until at least August 15, Ars reports. Although he did not say so, this may well be the only other New Glenn launch this year. The mission, with an undesignated payload, will be named “Never Tell Me the Odds,” due to the attempt to land the booster. “One of our key mission objectives will be to land and recover the booster,” Limp wrote. “This will take a little bit of luck and a lot of excellent execution. We’re on track to produce eight GS2s [second stages] this year, and the one we’ll fly on this second mission was hot-fired in April.”

Falling short … Before 2025 began, Limp set expectations alongside Blue Origin founder Jeff Bezos: New Glenn would launch eight times this year. That’s not going to happen. It’s common for launch companies to take a while ramping up the flight rate for a new rocket, but Bezos told Ars in January that his priority for Blue Origin this year was to hit a higher cadence with New Glenn. Elon Musk’s rift with President Donald Trump could open a pathway for Blue Origin to capture more government business if the New Glenn rocket is able to establish a reliable track record. Meanwhile, Limp told Blue Origin employees last month that Jarrett Jones, the manager running the New Glenn program, is taking a sabbatical. Although it appears Jones’ leave may have been planned, the timing is curious.

Making way for Starship at Cape Canaveral. The US Air Force is moving closer to authorizing SpaceX to move into one of the largest launch pads at Cape Canaveral Space Force Station in Florida, with plans to use the facility for up to 76 launches of the company’s Starship rocket each year, Ars reports. A draft Environmental Impact Statement (EIS) released by the Department of the Air Force, which includes the Space Force, found SpaceX’s planned use of Space Launch Complex 37 (SLC-37) at Cape Canaveral would have no significant negative impacts on local environmental, historical, social, and cultural interests. The Air Force also found SpaceX’s plans at SLC-37 will have no significant impact on the company’s competitors in the launch industry.

Bringing the rumble … SLC-37 was the previous home to United Launch Alliance’s Delta IV rocket, which last flew from the site in April 2024, a couple of months after the military announced SpaceX was interested in using the launch pad. While it doesn’t have a lease for full use of the launch site, SpaceX has secured a “right of limited entry” from the Space Force to begin preparatory work. This included the explosive demolition of the launch pad’s Delta IV-era service towers and lightning masts Thursday, clearing the way for eventual construction of two Starship launch towers inside the perimeter of SLC-37. The new Starship launch towers at SLC-37 will join other properties in SpaceX’s Starship empire, including nearby Launch Complex 39A at NASA’s Kennedy Space Center, and SpaceX’s privately owned facility at Starbase, Texas.

Preps continue for Starship Flight 10. Meanwhile, at Starbase, SpaceX is moving forward with preparations for the next Starship test flight, which could happen as soon as next month following three consecutive flights that fell short of expectations. This next launch will be the 10th full-scale test flight of Starship. Last Friday, June 6, SpaceX test-fired the massive Super Heavy booster designated to launch on Flight 10. All 33 of its Raptor engines ignited on the launch pad in South Texas. This is a new Super Heavy booster. On Flight 9 last month, SpaceX flew a reused Super Heavy booster that launched and was recovered on a flight in January.

FAA signs off on SpaceX investigation … The Federal Aviation Administration said Thursday it has closed the investigation into Starship Flight 8 in March, which spun out of control minutes after liftoff, showering debris along a corridor of ocean near the Bahamas and the Turks and Caicos Islands. “The FAA oversaw and accepted the findings of the SpaceX-led investigation,” an agency spokesperson said. “The final mishap report cites the probable root cause for the loss of the Starship vehicle as a hardware failure in one of the Raptor engines that resulted in inadvertent propellant mixing and ignition. SpaceX identified eight corrective actions to prevent a reoccurrence of the event.” SpaceX implemented the corrective actions prior to Flight 9 last month, when Starship progressed further into its mission before starting to tumble in space. It eventually reentered the atmosphere over the Indian Ocean. The FAA has mandated a fresh investigation into Flight 9, and that inquiry remains open.

Next three launches

June 13: Falcon 9 | Starlink 12-26 | Cape Canaveral Space Force Station, Florida | 15:21 UTC

June 14: Long March 2D | Unknown Payload | Jiuquan Satellite Launch Center, China | 07:55 UTC

June 16: Atlas V | Project Kuiper KA-02 | Cape Canaveral Space Force Station, Florida | 17:25 UTC


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.



Experimental retina implants give mice infrared vision

Finally, the tellurium meshes, and especially the infrared vision they made possible, were tested on healthy macaques, an animal model that’s much closer to humans than mice. It turned out the implanted macaques could perceive infrared light, and their normal vision remained unchanged.

However, there are still a few roadblocks before we go all Cyberpunk with eye implants.

Sensitivity issues

Tellurium meshes, as the Fudan team admits in their paper, are far less sensitive to light than natural photoreceptors, and it’s hard to say if they really are a good candidate for retinal prostheses. The problem with using animal models in vision science is that it’s hard to ask a mouse or a macaque what they actually see with the implants and figure out how the electrical signals from their tellurium meshes are converted into perception in the brain.

Based on the Fudan experiments, we know the implanted animals reacted to light, albeit a bit less effectively than those with healthy vision. We also know they needed an adaptation period; the implanted mice didn’t score their impressive results on their first try. They needed to learn what the sudden signals coming from their eyes meant, just like humans who had used electrode arrays in the past. Finally, shapes in the shape recognition tests were projected with lasers, which makes it difficult to tell how the implant would perform in normal daylight.

There are also risks that come with the implantation procedure itself. The surgery involves creating a local retinal detachment, followed by a small retinal incision to insert the implant. According to Eduardo Fernández, a Spanish bioengineer who published a commentary on the Fudan team’s work in Science, doing this in fragile, diseased retinas poses a risk of fibrosis and scarring. Still, Fernández found the Chinese implants “promising.” The Fudan team is currently working on long-term safety assessments of their implants in non-human primates and on improving the coupling between the retina and the implant.

The Fudan team’s work on tellurium retinal implants is published in Science.

Science, 2025. DOI: 10.1126/science.ady4439



Scientists built a badminton-playing robot with AI-powered skills

It also learned fall avoidance and determined how much risk was reasonable to take given its limited speed. The robot did not attempt impossible plays that would create the potential for serious damage—it was committed, but not suicidal.

But when it finally played humans, it turned out ANYmal, as a badminton player, was amateur at best.

The major leagues

The first problem was its reaction time. An average human reacts to visual stimuli in around 0.2–0.25 seconds. Elite badminton players with trained reflexes, anticipation, and muscle memory can cut this time down to 0.12–0.15 seconds. ANYmal needed roughly 0.35 seconds after the opponent hit the shuttlecock to register trajectories and figure out what to do.
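
A quick worked comparison puts those reaction windows in perspective; the 20 m/s incoming shuttle speed below is an assumed, illustrative figure (real shuttle speeds vary enormously over a rally), while the 13.4 m court length is standard:

```python
# Distance the shuttlecock covers during each player's reaction window.
reaction_times = {
    "average human": 0.225,  # midpoint of the 0.2-0.25 s range
    "elite player":  0.135,  # midpoint of the 0.12-0.15 s range
    "ANYmal robot":  0.35,
}
SHUTTLE_SPEED = 20.0  # m/s, assumed average incoming speed
COURT_LENGTH = 13.4   # m, full badminton court

for player, t in reaction_times.items():
    d = SHUTTLE_SPEED * t
    print(f"{player:>13}: {t:.3f} s -> {d:4.1f} m "
          f"({100 * d / COURT_LENGTH:.0f}% of the court)")
```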

Part of the problem was poor eyesight. “I think perception is still a big issue,” Ma said. “The robot localized the shuttlecock with the stereo camera and there could be a positioning error introduced at each timeframe.” The camera also had a limited field of view, which meant the robot could see the shuttlecock for only a limited time before it had to act. “Overall, it was suited for more friendly matches—when the human player starts to smash, the success rate goes way down for the robot,” Ma acknowledged.

But his team already has some ideas on how to make ANYmal better. Reaction time can be improved by predicting the shuttlecock trajectory based on the opponent’s body position rather than waiting to see the shuttlecock itself—a technique commonly used by elite badminton or tennis players. To improve ANYmal’s perception, the team wants to fit it with more advanced hardware, like event cameras—vision sensors that register movement with ultra-low latencies in the microsecond range. Other improvements might include faster, more capable actuators.

“I think the training framework we propose would be useful in any application where you need to balance perception and control—picking objects up, even catching and throwing stuff,” Ma suggested. Sadly, one thing that’s almost certainly off the table is taking ANYmal to major leagues in badminton or tennis. “Would I set up a company selling badminton-playing robots? Well, maybe not,” Ma said.

Science Robotics, 2025. DOI: 10.1126/scirobotics.adu3922



5 things in Trump’s budget that won’t make NASA great again

If signed into law as written, the White House’s proposal to slash nearly 25 percent from NASA’s budget would have some dire consequences.

It would cut the agency’s budget from $24.8 billion to $18.8 billion. Adjusted for inflation, this would be the smallest NASA budget since 1961, when the first American launched into space.

The proposed funding plan would halve NASA’s funding for robotic science missions and technology development next year, scale back research on the International Space Station, turn off spacecraft already exploring the Solar System, and cancel NASA’s Space Launch System rocket and Orion spacecraft after two more missions in favor of procuring lower-cost commercial transportation to the Moon and Mars.

The SLS rocket and Orion spacecraft have been targets for proponents of commercial spaceflight for several years. They are single-use, and their costs are exorbitant, with Moon missions on SLS and Orion projected to cost more than $4 billion per flight. That price raises questions about whether these vehicles will ever be able to support a lunar space station or Moon base where astronauts can routinely rotate in and out on long-term expeditions, like researchers do in Antarctica today.

Reusable rockets and spaceships offer a better long-term solution, but they won’t be ready to ferry people to the Moon for a while longer. The Trump administration proposes flying SLS and Orion two more times on NASA’s Artemis II and Artemis III missions, then retiring the vehicles. Artemis II’s rocket is currently being assembled at Kennedy Space Center in Florida for liftoff next year, carrying a crew of four around the far side of the Moon. Artemis III would follow with the first attempt to land humans on the Moon since 1972.

The cuts are far from law

Every part of Trump’s budget proposal for fiscal year 2026 remains tentative. Lawmakers in each house of Congress will write their own budget bills, which must go to the White House for Trump’s signature. A Senate bill released last week includes language that would restore funding for SLS and Orion to support the Artemis IV and Artemis V missions.



Ocean acidification crosses “planetary boundaries”

A critical measure of the ocean’s health suggests that the world’s marine systems are in greater peril than scientists had previously realized and that parts of the ocean have already reached dangerous tipping points.

A study, published Monday in the journal Global Change Biology, found that ocean acidification—the process in which the world’s oceans absorb excess carbon dioxide from the atmosphere, becoming more acidic—crossed a “planetary boundary” five years ago.

“A lot of people think it’s not so bad,” said Nina Bednaršek, one of the study’s authors and a senior researcher at Oregon State University. “But what we’re showing is that all of the changes that were projected, and even more so, are already happening—in all corners of the world, from the most pristine to the little corner you care about. We have not changed just one bay, we have changed the whole ocean on a global level.”

The new study, also authored by researchers at the UK’s Plymouth Marine Laboratory and the National Oceanic and Atmospheric Administration (NOAA), finds that by 2020 the world’s oceans were already very close to the “danger zone” for ocean acidity, and in some regions had already crossed into it.

Scientists had determined that ocean acidification enters this danger zone, crossing the planetary boundary, when the amount of calcium carbonate—which allows marine organisms to develop shells—falls more than 20 percent below pre-industrial levels. The new report puts the decline at about 17 percent.

“Ocean acidification isn’t just an environmental crisis, it’s a ticking time bomb for marine ecosystems and coastal economies,” said Steve Widdicombe, director of science at the Plymouth lab, in a press release. “As our seas increase in acidity, we’re witnessing the loss of critical habitats that countless marine species depend on and this, in turn, has major societal and economic implications.”

Scientists have determined that there are nine planetary boundaries that, once breached, threaten humanity’s ability to live and thrive. One of these is climate change itself, which scientists have said is already beyond humanity’s “safe operating space” because of the continued emissions of heat-trapping gases. Another is ocean acidification, also caused by burning fossil fuels.



IBM is now detailing what its first quantum compute system will look like


Company is moving past focus on qubits, shifting to functional compute units.

A rendering of what IBM expects will be needed to house a Starling quantum computer. Credit: IBM

On Tuesday, IBM released its plans for building a system that should push quantum computing into entirely new territory: a system that can perform useful calculations while catching and fixing errors, yet is utterly impossible to model using classical computing methods. The hardware, which will be called Starling, is expected to be able to perform 100 million operations without error on a collection of 200 logical qubits. And the company expects to have it available for use in 2029.

Perhaps just as significant, IBM is also committing to a detailed description of the intermediate steps to Starling. These include a number of processors that will be configured to host a collection of error-corrected qubits, essentially forming a functional compute unit. This marks a major transition for the company, as it involves moving away from talking about collections of individual hardware qubits and focusing instead on units of functional computational hardware. If all goes well, it should be possible to build Starling by chaining a sufficient number of these compute units together.

“We’re updating [our roadmap] now with a series of deliverables that are very precise,” IBM VP Jay Gambetta told Ars, “because we feel that we’ve now answered basically all the science questions associated with error correction and it’s becoming more of a path towards an engineering problem.”

New architectures

Error correction on quantum hardware involves entangling a group of qubits in a way that distributes one or more quantum bit values among them and includes additional qubits that can be used to check the state of the system. It can be helpful to think of these as data and measurement qubits. Performing weak quantum measurements on the measurement qubits produces what’s called “syndrome data,” which can be interpreted to determine whether anything about the data qubits has changed (indicating an error) and how to correct it.
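
As a loose classical analogy, here's a toy bit-flip repetition code (not the code IBM uses) showing how parity checks can flag and localize an error without reading out the encoded value directly:

```python
# Three data bits hold one logical bit; two parity checks form the "syndrome."
def syndrome(bits):
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    # Each nonzero syndrome pattern implicates exactly one data bit.
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(bits))
    if flip is not None:
        bits[flip] ^= 1
    return bits

encoded = [1, 1, 1]       # logical "1" spread across three bits
encoded[2] ^= 1           # inject a single bit-flip error
print(syndrome(encoded))  # (0, 1): the error is on bit 2
print(correct(encoded))   # [1, 1, 1]: restored without reading the logical value
```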

There are lots of potential ways to arrange different combinations of data and measurement qubits for this to work, each referred to as a code. But, as a general rule, the more hardware qubits committed to the code, the more robust it will be to errors, and the more logical qubits that can be distributed among its hardware qubits.

Some quantum hardware, like that based on trapped ions or neutral atoms, is relatively flexible when it comes to hosting error-correction codes. The hardware qubits can be moved around so that any two can be entangled, so it’s possible to adopt a huge range of configurations, albeit at the cost of the time spent moving atoms around. IBM’s technology is quite different. It relies on qubits made of superconducting electronics laid out on a chip, with entanglement mediated by wiring that runs between qubits. The layout of this wiring is set during the chip’s manufacture, and so the design of the chip commits it to a limited number of potential error-correction codes.

Unfortunately, this wiring can also enable cross-talk between neighboring qubits, causing them to lose their state. To avoid this, existing IBM processors have their qubits wired in what they term a “heavy hex” configuration, named for its hexagonal arrangements of connections among its qubits. This has worked well to keep the error rate of its hardware down, but it also poses a challenge, since IBM has decided to go with an error-correction code that’s incompatible with the heavy hex geometry.

A couple of years back, an IBM team described a compact error correction code called a low-density parity check (LDPC). This requires a square grid of nearest-neighbor connections among its qubits, as well as wiring to connect qubits that are relatively distant on the chip. To get its chips and error-correction scheme in sync, IBM has made two key advances. The first is in its chip packaging, which now uses several layers of wiring sitting above the hardware qubits to enable all of the connections needed for the LDPC code.

We’ll see that first in a processor called Loon that’s on the company’s developmental roadmap. “We’ve already demonstrated these three things: high connectivity, long-range couplers, and couplers that break the plane [of the chip] and connect to other qubits,” Gambetta said. “We have to combine them all as a single demonstration showing that all these parts of packaging can be done, and that’s what I want to achieve with Loon.” Loon will be made public later this year.


On the left, the simple layout of the connections in a current-generation Heron processor. At right, the complicated web of connections that will be present in Loon. Credit: IBM

The second advance IBM has made is to eliminate the crosstalk that the heavy hex geometry was used to minimize, so heavy hex will be going away. “We are releasing this year a bird for near term experiments that is a square array that has almost zero crosstalk,” Gambetta said, “and that is Nighthawk.” The more densely connected qubits cut the overhead needed to perform calculations by a factor of 15, Gambetta told Ars.

Nighthawk is a 2025 release on a parallel roadmap that you can think of as user-facing. Iterations on its basic design will be released annually through 2028, each enabling more operations without error (going from 5,000 gate operations this year to 15,000 in 2028). Each individual Nighthawk processor will host 120 hardware qubits, but 2026 will see three of them chained together and operating as a unit, providing 360 hardware qubits. That will be followed in 2027 by a machine with nine linked Nighthawk processors, boosting the hardware qubit number over 1,000.

Riding the bicycle

The real future of IBM’s hardware, however, will be happening over on the developmental line of processors, where talk about hardware qubit counts will become increasingly irrelevant. In a technical document released today, IBM is describing the specific LDPC code it will be using, termed a bivariate bicycle code due to some cylindrical symmetries in its details that vaguely resemble bicycle wheels. The details of the connections matter less than the overall picture of what it takes to use this error code in practice.

IBM describes two implementations of this form of LDPC code. In the first, 144 hardware qubits are arranged so that they play host to 12 logical qubits and all of the measurement qubits needed to perform error checks. The standard measure of a code’s ability to catch and correct errors is called its distance, and in this case, the distance is 12. As an alternative, they also describe a code that uses 288 hardware qubits to host the same 12 logical qubits but boost the distance to 18, meaning it’s more resistant to errors. IBM will make one of these collections of logical qubits available as a Kookaburra processor in 2026, which will use them to enable stable quantum memory.
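
In the standard [[n, k, d]] shorthand for such codes (n hardware qubits hosting k logical qubits at distance d), the trade-off between the two variants looks like this; recall that a distance-d code can correct up to floor((d-1)/2) arbitrary errors:

```python
# The two bivariate bicycle codes described above, using the article's numbers.
codes = [(144, 12, 12), (288, 12, 18)]

for n, k, d in codes:
    print(f"[[{n}, {k}, {d}]]: {n // k} hardware qubits per logical qubit, "
          f"corrects up to {(d - 1) // 2} arbitrary errors")
# [[144, 12, 12]]: 12 hardware qubits per logical qubit, corrects up to 5 errors
# [[288, 12, 18]]: 24 hardware qubits per logical qubit, corrects up to 8 errors
```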

The follow-on will bundle these with a handful of additional qubits that can produce quantum states that are needed for some operations. Those, plus hardware needed for the quantum memory, form a single, functional computation unit, built on a single chip, that is capable of performing all the operations needed to implement any quantum algorithm.

That will appear with the Cockatoo chip, which will also enable multiple processing units to be linked on a single bus, allowing the logical qubit count to grow beyond 12. (The company says that one of the dozen logical qubits in each unit will be used to mediate entanglement with other units and so won’t be available for computation.) That will be followed by the first test versions of Starling, which will allow universal computations on a limited number of logical qubits spread across multiple chips.

Separately, IBM is releasing a document that describes a key component of the system that will run on classical computing hardware. Full error correction requires evaluating the syndrome data derived from the state of all the measurement qubits in order to determine the state of the logical qubits and whether any corrections need to be made. As the complexity of the logical qubits grows, the computational burden of evaluating that data grows with it. If this evaluation can’t be executed in real time, then it becomes impossible to perform error-corrected calculations.

To address this, IBM has developed a message-passing decoder that can perform parallel evaluations of the syndrome data. The system explores more of the solution space by a combination of randomizing the weight given to the memory of past solutions and by handing any seemingly non-optimal solutions on to new instances for additional evaluation. The key thing is that IBM estimates that this can be run in real time using FPGAs, ensuring that the system works.
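
Schematically, that strategy might look something like the sketch below. This is a generic illustration of parallel, randomized message passing with relays, not IBM's actual decoder; message_passing_attempt() is a toy stand-in for a real belief-propagation run over the code's Tanner graph:

```python
import random

def message_passing_attempt(syndrome, memory_weight):
    """Toy stand-in: succeeds with a probability tied to the randomized weight."""
    if random.random() < 0.3 + 0.4 * memory_weight:
        return ("correction", memory_weight)
    return None

def decode(syndrome, n_parallel=8, n_relays=3):
    for _ in range(n_relays):
        # Randomize how strongly each instance remembers its past solutions.
        weights = [random.random() for _ in range(n_parallel)]
        for w in weights:
            result = message_passing_attempt(syndrome, memory_weight=w)
            if result is not None:  # a valid correction was found in time
                return result
        # Otherwise, relay the unsolved problem to a fresh set of instances.
    return None  # decoding failed within the real-time budget

print(decode(syndrome=[0, 1, 1, 0]))
```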

A quantum architecture

There are a lot more details beyond those, as well. Gambetta described the linkage between each computational unit—IBM is calling it a Universal Bridge—which requires one microwave cable for each code distance of the logical qubits being linked. (In other words, a distance 12 code would need 12 microwave-carrying cables to connect each chip.) He also said that IBM is developing control hardware that can operate inside the refrigeration hardware, based on what they’re calling “cold CMOS,” which is capable of functioning at 4 Kelvin.

The company is also releasing renderings of what it expects Starling to look like: a series of dilution refrigerators, all connected by a single pipe that contains the Universal Bridge. “It’s an architecture now,” Gambetta said. “I have never put details in the roadmap that I didn’t feel we could hit, and now we’re putting a lot more details.”

The striking thing to me about this is that it marks a shift away from a focus on individual qubits, their connectivity, and their error rates. The hardware error rates are now good enough (4 × 10⁻⁴) for this to work, although Gambetta felt that a few more improvements should be expected. And connectivity will now be directed exclusively toward creating a functional computational unit.

That said, there’s still a lot of space beyond Starling on IBM’s roadmap. The 200 logical qubits it promises will be enough to handle some problems, but not enough to perform the complex algorithms needed to do things like break encryption. That will need to wait for something closer to Blue Jay, a 2033 system that IBM expects will have 2,000 logical qubits. And, as of right now, it’s the only thing listed beyond Starling.


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



Cambridge mapping project solves a medieval murder


“A tale of shakedowns, sex, and vengeance that expose[s] tensions between the church and England’s elite.”

Location of the murder of John Forde, taken from the Medieval Murder Maps. Credit: Medieval Murder Maps. University of Cambridge: Institute of Criminology

In 2019, we told you about a new interactive digital “murder map” of London compiled by University of Cambridge criminologist Manuel Eisner. Drawing on data catalogued in the city coroners’ rolls, the map showed the approximate location of 142 homicide cases in late medieval London. The Medieval Murder Maps project has since expanded to include maps of York and Oxford homicides, as well as podcast episodes focusing on individual cases.

It’s easy to lose oneself down the rabbit hole of medieval murder for hours, filtering the killings by year, choice of weapon, and location. Think of it as a kind of 14th-century version of Clue: It was the noblewoman’s hired assassins armed with daggers in the streets of Cheapside near St. Paul’s Cathedral. And that’s just the juiciest of the various cases described in a new paper published in the journal Criminal Law Forum.

The noblewoman was Ela Fitzpayne, wife of a knight named Sir Robert Fitzpayne, lord of Stogursey. The victim was a priest and her erstwhile lover, John Forde, who was stabbed to death in the streets of Cheapside on May 3, 1337. “We are looking at a murder commissioned by a leading figure of the English aristocracy,” said University of Cambridge criminologist Manuel Eisner, who heads the Medieval Murder Maps project. “It is planned and cold-blooded, with a family member and close associates carrying it out, all of which suggests a revenge motive.”

Members of the mapping project geocoded all the cases after determining approximate locations for the crime scenes. Written in Latin, the coroners’ rolls are records of sudden or suspicious deaths as investigated by a jury of local men, called together by the coroner to establish facts and reach a verdict. Those records contain such relevant information as where the body was found and by whom; the nature of the wounds; the jury’s verdict on cause of death; the weapon used and how much it was worth; the time, location, and witness accounts; whether the perpetrator was arrested, escaped, or sought sanctuary; and any legal measures taken.

A brazen killing

The murder of Forde was one of several premeditated revenge killings recorded in the area of Westcheap. Forde was walking on the street when another priest, Hascup Neville, caught up to him, ostensibly for a casual chat, just after Vespers but before sunset. As they approached Foster Lane, Neville’s four co-conspirators attacked: Ela Fitzpayne’s brother, Hugh Lovell; two of her former servants, Hugh of Colne and John Strong; and a man called John of Tindale. One of them cut Forde’s throat with a 12-inch dagger, while two others stabbed him in the stomach with long fighting knives.

At the inquest, the jury identified the assassins, but that didn’t result in justice. “Despite naming the killers and clear knowledge of the instigator, when it comes to pursuing the perpetrators, the jury turn a blind eye,” said Eisner. “A household of the highest nobility, and apparently no one knows where they are to bring them to trial. They claim Ela’s brother has no belongings to confiscate. All implausible. This was typical of the class-based justice of the day.”

Colne, the former servant, was eventually charged and imprisoned for the crime some five years later in 1342, but the other perpetrators essentially got away with it.

Eisner et al. uncovered additional historical records that shed more light on the complicated history and ensuing feud between the Fitzpaynes and Forde. One was an indictment in the Calendar of Patent Rolls of Edward III, detailing how Ela and her husband, Forde, and several other accomplices raided a Benedictine priory in 1321. Among other crimes, the intruders “broke [the prior’s] houses, chests and gates, took away a horse, a colt and a boar… felled his trees, dug in his quarry, and carried away the stone and trees.” The gang also stole 18 oxen, 30 pigs, and about 200 sheep and lambs.

There were also letters that the Archbishop of Canterbury wrote to the Bishop of Winchester. Translations of the letters are published for the first time on the project’s website. The archbishop called out Ela by name for her many sins, including adultery “with knights and others, single and married, and even with clerics and holy orders,” and devised a punishment. This included not wearing any gold, pearls, or precious stones and giving money to the poor and to monasteries, plus a dash of public humiliation. Ela was ordered to perform a “walk of shame”—a tamer version than Cersei’s walk in Game of Thrones—every fall for seven years, carrying a four-pound wax candle to the altar of Salisbury Cathedral.


The London Archives. Inquest number 15 on 1336-7 City of London Coroner’s Rolls. Credit: The London Archives

Ela outright refused to do any of that, instead flaunting “her usual insolence.” Naturally, the archbishop had no choice but to excommunicate her. But Eisner speculates that this may have festered within Ela over the ensuing years, thereby sparking her desire for vengeance on Forde—who may have confessed to his affair with Ela to avoid being prosecuted for the 1321 raid. The archbishop died in 1333, four years before Forde’s murder, so Ela was clearly a formidable person with the patience and discipline to serve her revenge dish cold. Her marriage to Robert (her second husband) endured despite her seemingly constant infidelity, and she inherited his property when he died in 1354.

“Attempts to publicly humiliate Ela Fitzpayne may have been part of a political game, as the church used morality to stamp its authority on the nobility, with John Forde caught between masters,” said Eisner. “Taken together, these records suggest a tale of shakedowns, sex, and vengeance that expose tensions between the church and England’s elites, culminating in a mafia-style assassination of a fallen man of god by a gang of medieval hitmen.”

I, for one, am here for the Netflix true crime documentary on Ela Fitzpayne, “a woman in 14th century England who raided priories, openly defied the Archbishop of Canterbury, and planned the assassination of a priest,” per Eisner.

The role of public spaces

The ultimate objective of the Medieval Murder Maps project is to learn more about how public spaces shaped urban violence historically, the authors said. There were some interesting initial revelations back in 2019. For instance, the murders usually occurred in public streets or squares, and Eisner identified a couple of “hot spots” with higher concentrations than other parts of London. One was that particular stretch of Cheapside running from St Mary-le-Bow church to St. Paul’s Cathedral, where John Forde met his grisly end. The other was a triangular area spanning Gracechurch, Lombard, and Cornhill, radiating out from Leadenhall Market.

The perpetrators were mostly men (in only four cases were women the only suspects). As for weapons, knives and swords of varying types were the ones most frequently used, accounting for 68 percent of all the murders. The greatest risk of violent death in London was on weekends (especially Sundays), between early evening and the first few hours after curfew.

Eisner et al. have now extended their spatial analysis to include homicides committed in York and Oxford in the 14th century with similar conclusions. Murders most often took place in markets, squares, and thoroughfares—all key nodes of medieval urban life—in the evenings or on weekends. Oxford had significantly higher murder rates than York or London and also more organized group violence, “suggestive of high levels of social disorganization and impunity.” London, meanwhile, showed distinct clusters of homicides, “which reflect differences in economic and social functions,” the authors wrote. “In all three cities, some homicides were committed in spaces of high visibility and symbolic significance.”

Criminal Law Forum, 2025. DOI: 10.1007/s10609-025-09512-7  (About DOIs).


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



Startup puts a logical qubit in a single piece of hardware

This time around, the company is showing that it can get an actual logical qubit into a variant of the same hardware. In the earlier version of its equipment, the resonator cavity had a single post and supported a single frequency. In the newer iteration, there were two posts and two frequencies. Each of those frequencies creates its own quantum resonator in the same cavity, with its own set of modes. “It’s this ensemble of photons inside this cavity that creates the logical qubit,” Lemyre told Ars.

The additional quantum information that can now be stored in the system enables it to identify more complex errors than the loss of a photon.

Catching, but not fixing errors

The company did two experiments with this new hardware. First, it ran multiple rounds of error detection on data stored in the logical qubit, essentially testing its ability to act like a quantum memory and retain the information stored there. The system corrected for photon loss, but left other errors uncorrected. These occurred at a rate of a bit over two percent each round, so by the time the system reached the 25th measurement, many instances had already encountered an error.

The second time through, the company repeated the process, discarding any instances in which an error occurred. In almost every instance, that meant the results were discarded long before they got through two dozen rounds of measurement. But at these later stages, none of the remaining instances were in an erroneous state. That indicates that a correction step (something the team didn’t attempt) would have been able to fix all of the detected problems.
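
A toy Monte Carlo makes the post-selection arithmetic concrete (the flat 2.2 percent detected-error rate per round is our assumed reading of "a bit over two percent"):

```python
import random

P_ERR, ROUNDS, TRIALS = 0.022, 25, 100_000
survived = sum(
    all(random.random() > P_ERR for _ in range(ROUNDS))
    for _ in range(TRIALS)
)
print(f"Runs surviving all {ROUNDS} rounds: {survived / TRIALS:.1%}")
# Expect about (1 - 0.022)^25 ~ 57%: many runs get discarded along the way,
# but every run that survives post-selection never saw a detected error.
```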

“When we do this, we don’t have any errors left,” Lemyre said. “And this builds confidence into this approach, meaning that if we build the next generation of codes that is now able to correct these errors that were detected in this two-mode approach, we should have that flat line of no errors occurring over a long period of time.”



The nine-armed octopus and the oddities of the cephalopod nervous system


A mix of autonomous and top-down control manages the octopus’s limbs.

With their quick-change camouflage and high level of intelligence, it’s not surprising that the public and scientific experts alike are fascinated by octopuses. Their abilities to recognize faces, solve puzzles, and learn behaviors from other octopuses make these animals a captivating study.

To perform these processes and others, like crawling or exploring, octopuses rely on their complex nervous system, one that has become a focus for neuroscientists. With about 500 million neurons—around the same number as dogs—octopuses’ nervous systems are the most complex of any invertebrate. But, unlike vertebrate organisms, the octopus’s nervous system is also decentralized, with around 350 million neurons, roughly two-thirds of the total, located in its eight arms.

“This means each arm is capable of independently processing sensory input, initiating movement, and even executing complex behaviors—without direct instructions from the brain,” explains Galit Pelled, a professor of Mechanical Engineering, Radiology, and Neuroscience at Michigan State University who studies octopus neuroscience. “In essence, the arms have their own ‘mini-brains.’”

A decentralized nervous system is one factor that helps octopuses adapt to changes, such as injury or predation, as seen in the case of an Octopus vulgaris, or common octopus, that was observed with nine arms by researchers at the ECOBAR lab at the Institute of Marine Research in Spain between 2021 and 2022.

By studying outliers like this cephalopod, researchers can gain insight into how the animal’s detailed scaffolding of nerves changes and regrows over time, uncovering more about how octopuses have evolved over millennia in our oceans.

Brains, brains, and more brains

Because each arm of an octopus contains its own bundle of neurons, the limbs can operate semi-independently from the central brain, enabling faster responses since signals don’t always need to travel back and forth between the brain and the arms. In fact, Pelled and her team recently discovered that “neural signals recorded in the octopus arm can predict movement type within 100 milliseconds of stimulation, without central brain involvement.” She notes that “that level of localized autonomy is unprecedented in vertebrate systems.”

Though each limb moves on its own, the movements of the octopus’s body are smooth and conducted with a coordinated elegance that allows the animal to exhibit an exceptionally broad range of behaviors, adapting on the fly to changes in its surroundings.

“That means the octopus can react quickly to its environment, especially when exploring, hunting, or defending itself,” Pelled says. “For example, one arm can grab food while another is feeling around a rock, without needing permission from the brain. This setup also makes the octopus more resilient. If one arm is injured, the others still work just fine. And because so much decision-making happens at the arms, the central brain is freed up to focus on the bigger picture—like navigating or learning new tasks.”

As if each limb weren’t already buzzing with neural activity, things get even more intricate when researchers zoom in further—to the nerves within each individual sucker, a ring of muscular tissue, which octopuses use to sense and taste their surroundings.

“There is a sucker ganglion, or nerve center, located in the stalk of every sucker. For some species of octopuses, that’s over a thousand ganglia,” says Cassady Olson, a graduate student at the University of Chicago who works with Cliff Ragsdale, a leading expert in octopus neuroscience.

Given that each sucker has its own nerve centers—connected by a long axial nerve cord running down the limb—and each arm has hundreds of suckers, things get complicated very quickly. Researchers have historically struggled to study this peripheral nervous system, as it’s called, within the octopus’s body.

“The large size of the brain makes it both really exciting to study and really challenging,” says Z. Yan Wang, an assistant professor of biology and psychology at the University of Washington. “Many of the tools available for neuroscience have to be adjusted or customized specifically for octopuses and other cephalopods because of their unique body plans.”

While each limb acts independently, signals are transmitted back to the octopus’s central nervous system. The octopus’s brain sits between its eyes at the front of its mantle, or head, couched between its two optic lobes, large bean-shaped neural organs that help octopuses see the world around them. These optic lobes are just two of the over 30 lobes experts study within the animal’s centralized brain, as each lobe helps the octopus process its environment.

This elaborate neural architecture is critical given the octopus’s dual role in the ecosystem as both predator and prey. Without natural defenses like a hard shell, octopuses have evolved a highly adaptable nervous system that allows them to rapidly process information and adjust as needed, helping their chances of survival.

Some similarities remain

While the octopus’s decentralized nervous system makes it a unique evolutionary example, it does have some structures similar to or analogous to the human nervous system.

“The octopus has a central brain mass located between its eyes, and an axial nerve cord running down each arm (similar to a spinal cord),” says Wang. “The octopus has many sensory systems that we are familiar with, such as vision, touch (somatosensation), chemosensation, and gravity sensing.”

Neuroscientists have homed in on these similarities to understand how these structures may have evolved across the different branches in the tree of life. As the most recent common ancestor for humans and octopuses lived around 750 million years ago, experts believe that many similarities, from similar camera-like eyes to maps of neural activities, evolved separately in a process known as convergent evolution.

While these similarities shed light on evolution’s independent paths, they also offer valuable insights for fields like soft robotics and regenerative medicine.

Occasionally, unique individuals—like an octopus with an unexpected number of limbs—can provide even deeper clues into how this remarkable nervous system functions and adapts.

Nine arms, no problem

In 2021, researchers from the Institute of Marine Research in Spain used an underwater camera to follow a male Octopus vulgaris, or common octopus. On its left side, three arms were intact, while the others were reduced to uneven, stumpy lengths, sharply bitten off at varying points. Although the researchers didn’t witness the injury itself, they observed that the front right arm—known as R1—was regenerating unusually, splitting into two separate limbs and giving the octopus a total of nine arms.

“In this individual, we believe this condition was a result of abnormal regeneration [a genetic mutation] after an encounter with a predator,” explains Sam Soule, one of the researchers and the first author on the corresponding paper recently published in Animals.

The researchers named the octopus Salvador due to its bifurcated arm coiling up on itself like the two upturned ends of Salvador Dalí’s moustache. For two years, the team studied the cephalopod’s behavior and found that it used its bifurcated arm less when doing “riskier” movements such as exploring or grabbing food, which would force the animal to stretch its arm out and expose it to further injury.

“One of the conclusions of our research is that the octopus likely retains a long-term memory of the original injury, as it tends to use the bifurcated arms for less risky tasks compared to the others,” elaborates Jorge Hernández Urcera, a lead author of the study. “This idea of lasting memory brought to mind Dalí’s famous painting The Persistence of Memory, which ultimately became the title of the paper we published on monitoring this particular octopus.”

While the octopus acted more protectively of its extra limb, its nervous system adapted to the additional appendage: after some time recovering from its injuries, the octopus was observed probing its environment with its ninth arm.

“That nine-armed octopus is a perfect example of just how adaptable these animals are,” Pelled adds. “Most animals would struggle with an unusual body part, but not the octopus. In this case, the octopus had a bifurcated (split) arm and still used it effectively, just like any other arm. That tells us the nervous system didn’t treat it as a mistake—it figured out how to make it work.”

Kenna Hughes-Castleberry is the science communicator at JILA (a joint physics research institute between the National Institute of Standards and Technology and the University of Colorado Boulder) and a freelance science journalist. She writes mainly about quantum physics, quantum technology, deep technology, social media, and the diversity of people in these fields, particularly women and people from minority ethnic and racial groups. Follow her on LinkedIn or visit her website.

The nine-armed octopus and the oddities of the cephalopod nervous system Read More »

simulations-find-ghostly-whirls-of-dark-matter-trailing-galaxy-arms

Simulations find ghostly whirls of dark matter trailing galaxy arms

“Basically what you do is you set up a bunch of particles that represent things like stars, gas, and dark matter, and you let them evolve for millions of years,” Bernet says. “Human lives are much too short to witness this happening in real time. We need simulations to help us see more than the present, which is like a single snapshot of the Universe.”
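To make Bernet’s description concrete, here is a minimal, purely illustrative N-body sketch in Python: particles representing stars and dark matter attract one another gravitationally, and a leapfrog integrator steps them forward in time. The actual research simulations are vastly more sophisticated (tree or particle-mesh gravity solvers, gas physics, millions of particles), so every name and number below is an assumption for illustration only.

```python
import numpy as np

G = 1.0           # gravitational constant in arbitrary code units
SOFTENING = 0.05  # softening length; avoids singular forces at tiny separations

def accelerations(pos, mass):
    # Direct-summation pairwise gravity: O(N^2), fine for a toy example
    diff = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]  # (N, N, 3)
    dist2 = (diff ** 2).sum(axis=2) + SOFTENING ** 2      # softened distances
    inv_d3 = dist2 ** -1.5
    np.fill_diagonal(inv_d3, 0.0)                         # no self-force
    return G * (diff * inv_d3[:, :, np.newaxis]
                * mass[np.newaxis, :, np.newaxis]).sum(axis=1)

def evolve(pos, vel, mass, dt, steps):
    # Kick-drift-kick leapfrog: a standard, stable gravitational integrator
    acc = accelerations(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc
    return pos, vel

# Toy "galaxy": 500 particles with random positions and small random motions
rng = np.random.default_rng(0)
pos = rng.normal(size=(500, 3))
vel = rng.normal(scale=0.1, size=(500, 3))
mass = np.full(500, 1.0 / 500)
pos, vel = evolve(pos, vel, mass, dt=0.01, steps=100)
```

Production codes replace the O(N^2) force sum with tree or grid methods so that millions of particles can be evolved for the equivalent of billions of years, which is what let the teams below check for the same dark matter imprint across independent simulations.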

Several other groups already had galaxy simulations they were using for other science, so the team asked one of them to share its data. When they found the dark matter imprint they were looking for, they checked for it in a second group’s simulation. They found it again, and then in a third simulation as well.

The dark matter spirals are much less pronounced than their stellar counterparts, but the team noted a distinct imprint on the motions of dark matter particles in the simulations. The dark spiral arms lag behind the stellar arms, forming a sort of unseen shadow.

These findings add a new layer of complexity to our understanding of how galaxies evolve, suggesting that dark matter is more than a passive, invisible scaffolding holding galaxies together. Instead, it appears to react to the gravity of the stars in galaxies’ spiral arms in a way that may even influence star formation or galactic rotation over cosmic timescales. It could also explain the recently discovered excess of mass along a nearby spiral arm of the Milky Way.

The fact that they saw the same effect in differently structured simulations suggests that these dark matter spirals may be common in galaxies like the Milky Way. But tracking them down in the real Universe may be tricky.

Bernet says scientists could measure dark matter in the Milky Way’s disk. “We can currently measure the density of dark matter close to us with a huge precision,” he says. “If we can extend these measurements to the entire disk with enough precision, spiral patterns should emerge if they exist.”

“I think these results are very important because it changes our expectations for where to search for dark matter signals in galaxies,” Brooks says. “I could imagine that this result might influence our expectation for how dense dark matter is near the solar neighborhood and could influence expectations for lab experiments that are trying to directly detect dark matter.” That’s a goal scientists have been chasing for nearly 100 years.

Ashley writes about space for a NASA Goddard Space Flight Center contractor by day and freelances in her free time. She holds master’s degrees in space studies from the University of North Dakota and science writing from Johns Hopkins University. She writes most of her articles with a baby on her lap.

Simulations find ghostly whirls of dark matter trailing galaxy arms Read More »

a-japanese-lander-crashed-on-the-moon-after-losing-track-of-its-location

A Japanese lander crashed on the Moon after losing track of its location


“It’s not impossible, so how do we overcome our hurdles?”

Takeshi Hakamada, founder and CEO of ispace, attends a press conference in Tokyo on June 6, 2025, to announce the outcome of his company’s second lunar landing attempt. Credit: Kazuhiro Nogi/AFP via Getty Images

A robotic lander developed by a Japanese company named ispace plummeted to the Moon’s surface Thursday, destroying a small rover and several experiments intended to demonstrate how future missions could mine and harvest lunar resources.

Ground teams at ispace’s mission control center in Tokyo lost contact with the Resilience lunar lander moments before it was supposed to touch down in a region called Mare Frigoris, or the Sea of Cold, a basaltic plain in the Moon’s northern hemisphere.

A few hours later, ispace officials confirmed what many observers suspected. The mission was lost. It’s the second time ispace has failed to land on the Moon in as many tries.

“We wanted to make Mission 2 a success, but unfortunately we haven’t been able to land,” said Takeshi Hakamada, the company’s founder and CEO.

Ryo Ujiie, ispace’s chief technology officer, said the final data received from the Resilience lander—assuming it was correct—showed it at an altitude of approximately 630 feet (192 meters) and descending too fast for a safe landing. “The deceleration was not enough. That was a fact,” Ujiie told reporters in a press conference. “We failed to land, and we have to analyze the reasons.”

The company said in a press release that a laser rangefinder used to measure the lander’s altitude “experienced delays in obtaining valid measurement values.” The downward-facing laser fires light pulses toward the Moon during descent and clocks the time it takes to receive a reflection. Because the pulse travels at the speed of light, that delay tells the lander’s guidance system how far it is above the lunar surface. But something went wrong in the altitude measurement system on Thursday.
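The relation itself is simple time-of-flight arithmetic. Here is a minimal sketch; the numbers are illustrative and not taken from ispace’s telemetry.

```python
C = 299_792_458.0  # speed of light in meters per second

def altitude_from_pulse(round_trip_seconds: float) -> float:
    # The pulse travels down to the surface and back, so the one-way
    # distance is half the round-trip light travel time times c.
    return C * round_trip_seconds / 2.0

# Illustrative check: a round-trip delay of about 1.28 microseconds
# corresponds to roughly 192 meters, the altitude in the lander's
# final reported telemetry.
print(altitude_from_pulse(1.28e-6))  # ~191.9 m
```

At these altitudes the round trip takes only about a microsecond, so “delays in obtaining valid measurement values” would likely have left the guidance loop working with stale altitude data during the most time-critical part of the descent.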

“As a result, the lander was unable to decelerate sufficiently to reach the required speed for the planned lunar landing,” ispace said. “Based on these circumstances, it is currently assumed that the lander likely performed a hard landing on the lunar surface.”

Controllers sent a command to reboot the lander in hopes of reestablishing communication, but the Resilience spacecraft remained silent.

“Given that there is currently no prospect of a successful lunar landing, our top priority is to swiftly analyze the telemetry data we have obtained thus far and work diligently to identify the cause,” Hakamada said in a statement. “We will strive to restore trust by providing a report of the findings to our shareholders, payload customers, Hakuto-R partners, government officials, and all supporters of ispace.”

Overcoming obstacles

The Hakuto name harkens back to ispace’s origin in 2010 as a contender for the Google Lunar X-Prize, a competition that offered a $20 million grand prize to the first privately funded team to put a lander on the Moon. Hakamada’s group was called Hakuto, which means “white rabbit” in Japanese. The prize shut down in 2018 without a winner, leading some of the teams to dissolve or find a new purpose. Hakamada stayed the course, raised more funding, and rebooted the program under the name Hakuto-R.

It’s a story of resilience, hence the name of ispace’s second lunar lander. The mission made it closer to the Moon than ispace’s first landing attempt in 2023, but Thursday’s failure is a blow to Hakamada’s project.

“As a fact, we tried twice and we haven’t been able to land on the Moon,” Hakamada said through an interpreter. “So we have to say it’s hard to land on the Moon, technically. We know it’s not easy. It’s not something that everyone can do. We know it’s hard, but the important point is it’s not impossible. The US private companies have succeeded in landing, and also JAXA in Japan has succeeded in landing, so it’s not impossible. So how do we overcome our hurdles?”

The Resilience lander and Tenacious rover, seen mounted near the top of the spacecraft, inside a test facility at the Tsukuba Space Center in Tsukuba, Ibaraki Prefecture, on Thursday, Sept. 12, 2024. Credit: Toru Hanai/Bloomberg via Getty Images

In April 2023, ispace’s first lander crashed on the Moon due to a similar altitude measurement problem. The spacecraft thought it was on the surface of the Moon, but was actually firing its engine to hover at an altitude of 3 miles (5 kilometers). The spacecraft ran out of fuel and went into a free fall before impacting the Moon.

Engineers blamed software as the most likely reason for the altitude-measurement problem. During descent, ispace’s lander passed over a 10,000-foot-tall (3,000-meter) cliff, and the spacecraft’s computer interpreted the sudden altitude change as erroneous.
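ispace has not published its flight software, so the following is only a hypothetical sketch of the general failure mode: a plausibility filter that rejects implausibly large jumps in a sensor reading will, by design, also reject a genuine jump caused by terrain such as a cliff, leaving the guidance system holding a stale altitude.

```python
# Hypothetical plausibility filter; illustrative only, not ispace's code.
MAX_PLAUSIBLE_JUMP_M = 100.0  # assumed per-reading threshold

def filter_altitude(last_accepted_m: float, new_reading_m: float) -> float:
    # Readings that jump more than the threshold are treated as sensor
    # glitches, and the last accepted value is held instead.
    if abs(new_reading_m - last_accepted_m) > MAX_PLAUSIBLE_JUMP_M:
        return last_accepted_m
    return new_reading_m

# Passing over a ~3,000 m cliff: the altitude above the terrain genuinely
# leaps, but the filter discards the reading, so the lander keeps
# navigating on a stale, much lower altitude estimate.
print(filter_altitude(2000.0, 5000.0))  # -> 2000.0
```

In Mission 1, that stale estimate eventually reached zero while the spacecraft was still kilometers above the surface, which is why it hovered and burned through its fuel before falling.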

Ujiie, who leads ispace’s technical teams, said the failure mode Thursday was “similar” to that of the first mission two years ago. But at least in ispace’s preliminary data reviews, engineers saw different behavior from the Resilience lander, which flew with a new type of laser rangefinder after ispace’s previous supplier stopped producing the device.

“From Mission 1 to Mission 2, we improved the software,” Ujiie said. “Also, we improved how to approach the landing site… We see different phenomena from Mission 1, so we have to do more analysis to give you any concrete answers.”

Had ispace landed smoothly on Thursday, the Resilience spacecraft would have deployed a small rover developed by ispace’s European subsidiary. The rover was partially funded by the Luxembourg Space Agency with support from the European Space Agency. It carried a shovel to scoop up a small amount of lunar soil and a camera to take a photo of the sample. NASA had a contract with ispace to purchase the lunar soil in a symbolic proof of concept to show how the government might acquire material from commercial mining companies in the future.

The lander also carried a water electrolyzer experiment to demonstrate technologies that could split water molecules into hydrogen and oxygen, critical resources for a future Moon base. Other payloads aboard the Resilience spacecraft included cameras, a food production experiment, a radiation monitor, and a Swedish art project called “MoonHouse.”

The spacecraft chassis used for ispace’s first two landing attempts was about the size of a compact car, with a mass of about 1 metric ton (2,200 pounds) when fully fueled. The company’s third landing attempt is scheduled for 2027 with a larger lander. Next time, ispace will fly to the Moon through a partnership between the company’s US subsidiary and Draper Laboratory, which has a contract with NASA to deliver experiments to the lunar surface.

Track record

The Resilience lander launched in January on top of a SpaceX Falcon 9 rocket, riding to space in tandem with a commercial Moon lander named Blue Ghost from Firefly Aerospace. Firefly’s lander took a more direct journey to the Moon and achieved a soft landing on March 2. Blue Ghost operated on the lunar surface for two weeks and completed all of its objectives.

The trajectory of ispace’s lander was slower, following a lower-energy, more fuel-efficient path to the Moon before entering lunar orbit last month. Once in orbit, the lander made a few more course corrections to line up with its landing site, then commenced its final descent on Thursday.

Thursday’s landing attempt was the seventh time a privately developed Moon lander tried to conduct a controlled touchdown on the lunar surface.

Two Texas-based companies have had the most success. One of them, Houston-based Intuitive Machines, landed its Odysseus spacecraft on the Moon in February 2024, marking the first time a commercial lander reached the lunar surface intact. But the lander tipped over after touchdown, cutting its mission short after achieving some limited objectives. A second Intuitive Machines lander reached the Moon in one piece in March of this year, but it also fell over and didn’t last as long as the company’s first mission.

Firefly’s Blue Ghost, meanwhile, accomplished all of its objectives during its two weeks on the surface, becoming the first fully successful privately owned spacecraft to land and operate on the Moon.

Intuitive Machines, Firefly, and a third company—Astrobotic Technology—have launched their lunar missions under contract with a NASA program aimed at fostering a commercial marketplace for transportation to the Moon. Astrobotic’s first lander failed soon after its departure from Earth. The first two missions launched by ispace were almost fully private ventures, with limited participation from the Japanese space agency, Luxembourg, and NASA.

The Earth looms over the Moon’s horizon in this image from lunar orbit captured on May 27, 2025, by ispace’s Resilience lander. Credit: ispace

Commercial travel to the Moon only began in 2019, so there’s not much of a track record to judge the industry’s prospects. When NASA started signing contracts for commercial lunar missions, the agency’s science chief at the time, Thomas Zurbuchen, estimated the initial landing attempts would have a 50-50 chance of success. On the whole, NASA’s experience with Intuitive Machines, Firefly, and Astrobotic isn’t too far off from Zurbuchen’s estimate, with one full success and a couple of partial successes.

The commercial track record worsens if you include private missions from ispace and Israel’s Beresheet lander.

But ispace and Hakamada haven’t given up on the dream. The company’s third mission will launch under the umbrella of the same NASA program that contracted with Intuitive Machines, Firefly, and Astrobotic. Hakamada cited the achievements of Firefly and Intuitive Machines as evidence that the commercial model for lunar missions is a valid one.

“The ones that have the landers, there are two companies I mentioned. Also, Blue Origin maybe coming up. Also, ispace is a possibility,” Hakamada said. “So, very few companies. We would like to catch up as soon as possible.”

It’s too early to know how the failure on Thursday might impact ispace’s next mission with Draper and NASA.

“I have to admit that we are behind,” said Jumpei Nozaki, director and chief financial officer at ispace. “But we do not really think we are behind from the leading group yet. It’s too early to decide that. The players in the world that can send landers to the Moon are very few, so we still have some competitive edge.”

“Honestly, there were some times I almost cried, but I need to lead this company, and I need to have a strong will to move forward, so it’s not time for me to cry,” Hakamada said.


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

A Japanese lander crashed on the Moon after losing track of its location Read More »