Science

We’re about to find many more interstellar interlopers—here’s how to visit one


“You don’t have to claim that they’re aliens to make these exciting.”

The Hubble Space Telescope captured this image of the interstellar comet 3I/ATLAS on July 21, when the comet was 277 million miles from Earth. Hubble shows that the comet has a teardrop-shaped cocoon of dust coming off its solid, icy nucleus. Credit: NASA, ESA, David Jewitt (UCLA); Image Processing: Joseph DePasquale (STScI)

A few days ago, an inscrutable interstellar interloper made its closest approach to Mars, where a fleet of international spacecraft seeks to unravel the red planet’s ancient mysteries.

Several of the probes encircling Mars took a break from their usual activities and turned their cameras toward space to catch a glimpse of an object named 3I/ATLAS, a rogue comet that arrived in our Solar System from interstellar space and is now barreling toward perihelion—its closest approach to the Sun—at the end of this month.

This is the third interstellar object astronomers have detected within our Solar System, following 1I/ʻOumuamua and 2I/Borisov, discovered in 2017 and 2019, respectively. Scientists think interstellar objects routinely transit among the planets, but telescopes have only recently gained the ability to find one. For example, the telescope that discovered ʻOumuamua only came online in 2010.

Detectable but still unreachable

Astronomers first reported observations of 3I/ATLAS on July 1, just four months before the object reaches its deepest penetration into the Solar System. Unfortunately for astronomers, the particulars of this object’s trajectory will bring it to perihelion when the Earth is on the opposite side of the Sun. The nearest 3I/ATLAS will come to Earth is about 170 million miles (270 million kilometers) in December, eliminating any chance for high-resolution imaging. The viewing geometry also means the Sun’s glare will block all direct views of the comet from Earth until next month.

The James Webb Space Telescope observed interstellar comet 3I/ATLAS on August 6 with its Near-Infrared Spectrograph instrument. Credit: NASA/James Webb Space Telescope

Because of that, the closest any active spacecraft will get to 3I/ATLAS happened Friday, when it passed less than 20 million miles (30 million kilometers) from Mars. NASA’s Perseverance rover and Mars Reconnaissance Orbiter were expected to make observations of 3I/ATLAS, along with Europe’s Mars Express and ExoMars Trace Gas Orbiter missions.

The best views of the object so far have been captured by the James Webb Space Telescope and the Hubble Space Telescope, positioned much closer to Earth. Those observations helped astronomers narrow down the object’s size, but the estimates remain imprecise. Based on Hubble’s images, the icy core of 3I/ATLAS is somewhere between the size of the Empire State Building and something a little larger than Central Park.

That may be the most we’ll ever know about the dimensions of 3I/ATLAS. The spacecraft at Mars lack the exquisite imaging sensitivity of Webb and Hubble, so don’t expect spectacular views from Friday’s observations. But scientists hope to get a better handle on the cloud of gas and dust surrounding the object, giving it the appearance of a comet. Spectroscopic observations have shown the coma around 3I/ATLAS contains water vapor and an unusually strong signature of carbon dioxide extending out nearly a half-million miles.

On Tuesday, the European Space Agency released the first grainy images of 3I/ATLAS captured at Mars. The best views will come from a small telescope called HiRISE on NASA’s Mars Reconnaissance Orbiter. The images from NASA won’t be released until after the end of the ongoing federal government shutdown, according to a member of the HiRISE team.

Europe’s ExoMars Trace Gas Orbiter turned its eyes toward interstellar comet 3I/ATLAS as it passed close to Mars on Friday, October 3. The comet’s coma is visible as a fuzzy blob surrounding its nucleus, which was not resolved by the spacecraft’s camera. Credit: ESA/TGO/CaSSIS

Studies of 3I/ATLAS suggest it was probably kicked out of another star system, perhaps by an encounter with a giant planet. Comets in our Solar System sometimes get ejected into the Milky Way galaxy when they come too close to Jupiter. 3I/ATLAS likely roamed the galaxy for billions of years before arriving in the Sun’s galactic neighborhood.

The rogue comet is now gaining speed as gravity pulls it toward perihelion, when it will max out at a relative velocity of 152,000 mph (68 kilometers per second), much too fast to be bound into a closed orbit around the Sun. Instead, the comet will catapult back into the galaxy, never to be seen again.
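That claim is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below is illustrative only and is not drawn from the reporting; it assumes a perihelion distance of roughly 1.4 astronomical units for 3I/ATLAS and compares the reported 68 km/s with the Sun’s local escape speed at that distance.

```python
# Rough sanity check (not from the article) that a 68 km/s perihelion speed
# is unbound. The ~1.4 AU perihelion distance is an assumed, approximate value.
import math

MU_SUN = 1.32712440018e20   # solar gravitational parameter, m^3/s^2
AU = 1.495978707e11         # astronomical unit, m

r_perihelion = 1.4 * AU                           # assumed perihelion distance
v_escape = math.sqrt(2 * MU_SUN / r_perihelion)   # local solar escape speed

v_comet = 68e3                                    # reported perihelion speed, m/s
print(f"escape speed at 1.4 AU: {v_escape / 1e3:.1f} km/s")   # about 36 km/s
print(f"comet speed:            {v_comet / 1e3:.1f} km/s")
print(f"unbound (hyperbolic)?   {v_comet > v_escape}")        # True
print(f"68 km/s in mph:         {v_comet * 2.23694:,.0f}")    # about 152,000 mph
```

Anything moving faster than the local escape speed is on a hyperbolic path, which is why the comet will leave the Solar System rather than loop back around.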

We need to talk about aliens

Anyone who studies planetary formation would relish the opportunity to get a close-up look at an interstellar object. Sending a mission to one would undoubtedly yield a scientific payoff. There’s a good chance that many of these interlopers have been around longer than our own 4.5 billion-year-old Solar System.

One study from the University of Oxford suggests that 3I/ATLAS came from the “thick disk” of the Milky Way, which is home to a dense population of ancient stars. This origin story would mean the comet is probably more than 7 billion years old, holding clues about cosmic history that are simply inaccessible among the planets, comets, and asteroids that formed with the birth of the Sun.

This is enough reason to mount a mission to explore one of these objects, scientists said. It doesn’t need justification from unfounded theories that 3I/ATLAS might be an artifact of alien technology, as proposed by Harvard University astrophysicist Avi Loeb. The scientific consensus is that the object is of natural origin.

Loeb shared a similar theory about the first interstellar object found wandering through our Solar System. His statements have sparked questions in popular media about why the world’s space agencies don’t send a probe to actually visit one. Loeb himself proposed redirecting NASA’s Juno spacecraft in orbit around Jupiter on a mission to fly by 3I/ATLAS, and his writings prompted at least one member of Congress to write a letter to NASA to “rejuvenate” the Juno mission by breaking out of Jupiter’s orbit and taking aim at 3I/ATLAS for a close-up inspection.

The problem is that Juno simply doesn’t have enough fuel to reach the comet, and its main engine is broken. In fact, the total boost required to send Juno from Jupiter to 3I/ATLAS (roughly 5,800 mph or 2.6 kilometers per second) would surpass the fuel capacity of most interplanetary probes.

Ars asked Scott Bolton, lead scientist on the Juno mission, and he confirmed that the spacecraft lacks the oomph required for the kind of maneuvers proposed by Loeb. “We had no role in that paper,” Bolton told Ars. “He assumed propellant that we don’t really have.”
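The Tsiolkovsky rocket equation gives a rough sense of why a 2.6-kilometer-per-second burn is so demanding for a spacecraft that has already spent most of its propellant. The sketch below is a hypothetical illustration: the specific impulse is an assumed, typical value for a storable bipropellant engine, not Juno’s actual figure.

```python
# Illustrative use of the ideal rocket equation (not Juno's real numbers):
# how much of a spacecraft's mass must be propellant to deliver ~2.6 km/s.
import math

delta_v = 2600.0   # required velocity change, m/s (figure cited in the article)
isp = 320.0        # assumed specific impulse of a storable bipropellant engine, s
g0 = 9.80665       # standard gravity, m/s^2

mass_ratio = math.exp(delta_v / (isp * g0))     # initial mass / final mass
propellant_fraction = 1.0 - 1.0 / mass_ratio    # share of total mass that is propellant

print(f"required mass ratio:       {mass_ratio:.2f}")           # about 2.3
print(f"propellant share of total: {propellant_fraction:.0%}")  # roughly 56%
```

Under those assumptions, more than half of the spacecraft’s total mass would have to be propellant at the start of the burn, far beyond what a probe like Juno carries this late in its mission.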

Avi Loeb, a Harvard University astrophysicist. Credit: Anibal Martel/Anadolu Agency via Getty Images

So Loeb’s exercise was moot, but his talk of aliens has garnered public attention. Loeb appeared on the conservative network Newsmax last week to discuss his theory of 3I/ATLAS alongside Rep. Tim Burchett (R-Tenn.). Predictably, conspiracy theories abounded. But as of Tuesday, the segment has 1.2 million views on YouTube. Maybe it’s a good thing that people who approve government budgets, especially those without a preexisting interest in NASA, are eager to learn more about the Universe. We will leave it to the reader to draw their own conclusions on that matter.

Loeb’s calculations also help illustrate the difficulty of pulling off a mission to an interstellar object. So far, we’ve only known about an incoming interstellar intruder a few months before it comes closest to Earth. That’s not to mention the enormous speeds at which these objects move through the Solar System. It’s just not feasible to build a spacecraft and launch it on such short notice.

Now, some scientists are working on ways to overcome these limitations.

So you’re saying there’s a chance?

One of these people is Colin Snodgrass, an astronomer and planetary scientist at the University of Edinburgh. A few years ago, he helped propose to the European Space Agency a mission concept that would have very likely been laughed out of the room a generation ago. Snodgrass and his team wanted a commitment from ESA of up to $175 million (150 million euros) to launch a mission with no idea of where it would go.

ESA officials called Snodgrass in 2019 to say the agency would fund his mission, named Comet Interceptor, for launch in the late 2020s. The goal of the mission is to perform the first detailed observations of a long-period comet. So far, spacecraft have only visited short-period comets that routinely dip into the inner part of the Solar System.

A long-period comet is an icy visitor from the farthest reaches of the Solar System that has spent little time getting blasted by the Sun’s heat and radiation, freezing its physical and chemical properties much as they were billions of years ago.

Long-period comets are typically discovered a year or two before coming near the Sun, still not enough time to develop a mission from scratch. With Comet Interceptor, ESA will launch a probe to loiter in space a million miles from Earth, wait for the right comet to come along, then fire its engines to pursue it.

Odds are good that the right comet will come from within the Solar System. “That is the point of the mission,” Snodgrass told Ars.

ESA’s Comet Interceptor will be the first mission to visit a comet coming directly from the outer reaches of the Sun’s realm, carrying material untouched since the dawn of the Solar System. Credit: European Space Agency

But if astronomers detect an interstellar object coming toward us on the right trajectory, there’s a chance Comet Interceptor could reach it.

“I think that the entire science team would agree, if we get really lucky and there’s an interstellar object that we could reach, then to hell with the normal plan, let’s go and do this,” Snodgrass said. “It’s an opportunity you couldn’t just leave sitting there.”

But, he added, it’s “very unlikely” that an interstellar object will be in the right place at the right time. “Although everyone’s always very excited about the possibility, and we’re excited about the possibility, we kind of try and keep the expectations to a realistic level.”

For example, if Comet Interceptor were in space today, there’s no way it could reach 3I/ATLAS. “It’s an unfortunate one,” Snodgrass said. “Its closest point to the Sun, it reaches that on the other side of the Sun from where the Earth is. Just bad timing.” If an interceptor were parked somewhere else in the Solar System, it might be able to get itself in position for an encounter with 3I/ATLAS. “There’s only so much fuel aboard,” Snodgrass said. “There’s only so fast we can go.”

It’s even harder to send a spacecraft to encounter an interstellar object than it is to visit one of the Solar System’s homegrown long-period comets. The calculation of whether Comet Interceptor could reach one of these galactic visitors boils down to where it’s heading and when astronomers discover it.

Snodgrass is part of a team using big telescopes to observe 3I/ATLAS from a distance. “As it’s getting closer to the Sun, it is getting brighter,” he said in an interview.

“You don’t have to claim that they’re aliens to make these exciting,” Snodgrass said. “They’re interesting because they are a bit of another solar system that you can actually feasibly get an up-close view of, even the sort of telescopic views we’re getting now.”

Colin Snodgrass, a professor at the University of Edinburgh, leads the Comet Interceptor science team. Credit: University of Edinburgh

Comets and asteroids are the linchpins for understanding the formation of the Solar System. These modest worlds are the leftover building blocks from the debris that coalesced into the planets. Today, direct observations have only allowed scientists to study the history of one planetary system. An interstellar comet would grow the sample size to two.

Still, Snodgrass said his team prefers to keep their energy focused on reaching a comet originating from the frontier of our own Solar System. “We’re not going to let a very lovely Solar System comet go by, waiting to see ‘what if there’s an interstellar thing?'” he said.

Snodgrass sees Comet Interceptor as a proof of concept for scientists to propose a future mission specially designed to travel to an interstellar object. “You need to figure out how do you build the souped-up version that could really get to an interstellar object? I think that’s five or 10 years away, but [it’s] entirely realistic.”

An American answer

Scientists in the United States are working on just such a proposal. A team from the Southwest Research Institute completed a concept study showing how a mission could fly by one of these interstellar visitors. What’s more, the US scientists say their proposed mission could have actually reached 3I/ATLAS had it already been in space.

The American concept is similar to Europe’s Comet Interceptor in that it will park a spacecraft somewhere in deep space and wait for the right target to come along. The study was led by Alan Stern, the chief scientist on NASA’s New Horizons mission that flew by Pluto a decade ago. “These new kinds of objects offer humankind the first feasible opportunity to closely explore bodies formed in other star systems,” he said.

An animation of the trajectory of 3I/ATLAS through the inner Solar System. Credit: NASA/JPL

It’s impossible with current technology to send a spacecraft to match orbits and rendezvous with a high-speed interstellar comet. “We don’t have to catch it,” Stern recently told Ars. “We just have to cross its orbit. So it does carry a fair amount of fuel in order to get out of Earth’s orbit and onto the comet’s path to cross that path.”

Stern said his team developed a cost estimate for such a mission, and while he didn’t disclose the exact number, he said it would fall under NASA’s cost cap for a Discovery-class mission. The Discovery program is a line of planetary science missions that NASA selects through periodic competitions within the science community. The cost cap for NASA’s next Discovery competition is expected to be $800 million, not including the launch vehicle.

A mission to encounter an interstellar comet requires no new technologies, Stern said. Hopes for such a mission are bolstered by the activation of the US-funded Vera Rubin Observatory, a state-of-the-art facility high in the mountains of Chile set to begin deep surveys of the entire southern sky later this year. Stern predicts Rubin will discover “one or two” interstellar objects per year. The new observatory should be able to detect the faint light from incoming interstellar bodies sooner, providing missions with more advance warning.

“If we put a spacecraft like this in space for a few years, while it’s waiting, there should be five or 10 to choose from,” he said.

Alan Stern speaks onstage during Day 1 of TechCrunch Disrupt SF 2018 in San Francisco. Credit: Photo by Kimberly White/Getty Images for TechCrunch

Winning NASA funding for a mission like Stern’s concept will not be easy. It must compete with dozens of other proposals, and NASA’s next Discovery competition is probably at least two or three years away. The timing of the competition is more uncertain than usual due to swirling questions about NASA’s budget after the Trump administration announced it wants to cut the agency’s science funding in half.

Comet Interceptor, on the other hand, is already funded in Europe. ESA has become a pioneer in comet exploration. The Giotto probe flew by Halley’s Comet in 1986, becoming the first spacecraft to make close-up observations of a comet. ESA’s Rosetta mission became the first spacecraft to orbit a comet in 2014, and later that year, it deployed a German-built lander to return the first data from the surface of a comet. Both of those missions explored short-period comets.

“Each time that ESA has done a comet mission, it’s done something very ambitious and very new,” Snodgrass said. “The Giotto mission was the first time ESA really tried to do anything interplanetary… And then, Rosetta, putting this thing in orbit and landing on a comet was a crazy difficult thing to attempt to do.”

“They really do push the envelope a bit, which is good because ESA can be quite risk averse, I think it’s fair to say, with what they do with missions,” he said. “But the comet missions, they are things where they’ve really gone for that next step, and Comet Interceptor is the same. The whole idea of trying to design a space mission before you know where you’re going is a slightly crazy way of doing things. But it’s the only way to do this mission. And it’s great that we’re trying it.”

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

Chemistry Nobel prize awarded for building ordered polymers with metal

Unlike traditional polymers, this structure allows MOFs to have open internal spaces with a well-defined size, which can allow some molecules to pass through while filtering out others. In addition, the presence of metals provides for interesting chemistry. The metals can serve as catalysts or preferentially bind to one molecule within a mixture.

Knowing what we know now, it all seems kind of obvious that this would work. But when Robson started his work at the University of Melbourne, the few people who thought about the issue at all expected that the molecules he was building would be unstable and collapse.

The first MOF Robson built used copper as its metal of choice. It was linked to an organic molecule that retained its rigid structure through the presence of a benzene ring, which doesn’t bend. Both the organic molecule and the copper could form four different bonds, allowing the structure to grow by doing the rough equivalent of stacking a bunch of three-sided pyramids—a conscious choice by Robson.

The world’s first MOF, synthesized by Robson and his colleagues. Credit: Johan Jarnestad/The Royal Swedish Academy of Sciences

In this case, however, the internal cavities remained filled by the solvent in which the MOF was formed. But the solvent could move freely through the material. Still, based on this example, Robson predicted many of the properties that have since been engineered into different MOFs: the ability to retain their structure even after solvents are removed, the presence of catalytic sites, and the ability of MOFs to act as filters.

Expanding the concept

All of that might seem a very optimistic take for someone’s first effort. But the measure of Robson’s success is that he convinced other chemists of the potential. One was Susumu Kitagawa of Kyoto University. Kitagawa and his collaborators built a MOF that had large internal channels that extended the entire length of the material. Made in a watery solution, the MOF could be dried out and have gas flow through it, with the structure retaining molecules like oxygen, nitrogen, and methane.

2025 Nobel Prize in Physics awarded for macroscale quantum tunneling


John Clarke, Michel H. Devoret, and John Martinis built an electrical circuit-based oscillator on a microchip.

A device consisting of four transmon qubits, four quantum buses, and four readout resonators fabricated by IBM in 2017. Credit: Jay M. Gambetta, Jerry M. Chow & Matthias Steffen/CC BY 4.0

The 2025 Nobel Prize in Physics has been awarded to John Clarke, Michel H. Devoret, and John M. Martinis “for the discovery of macroscopic quantum tunneling and energy quantization in an electrical circuit.” The Nobel committee said during a media briefing that the laureates’ work provides opportunities to develop “the next generation of quantum technology, including quantum cryptography, quantum computers, and quantum sensors.” The three men will split the $1.1 million (11 million Swedish kronor) prize money. The presentation ceremony will take place in Stockholm on December 10, 2025.

“To put it mildly, it was the surprise of my life,” Clarke told reporters by phone during this morning’s press conference. “Our discovery in some ways is the basis of quantum computing. Exactly at this moment where this fits in is not entirely clear to me. One of the underlying reasons that cellphones work is because of all this work.”

When physicists began delving into the strange new realm of subatomic particles in the early 20th century, they found that the old, deterministic laws of classical physics no longer apply. Instead, uncertainty reigns supreme. It is a world governed not by absolutes but by probabilities, where events that would seem impossible on the macroscale occur on a regular basis.

For instance, subatomic particles can “tunnel” through seemingly impenetrable energy barriers. Imagine that an electron is a water wave trying to surmount a tall barrier. Unlike water, even if the electron’s wave is lower than the barrier, there is still a small probability that it will seep through to the other side.
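For the textbook case of a particle with energy E meeting a rectangular barrier of height V_0 and width L, the standard estimate of the tunneling probability (a generic quantum mechanics result, not one spelled out in the article) decays exponentially with the barrier’s width and height:

```latex
% Standard estimate for tunneling through a wide rectangular barrier,
% valid when E < V_0; m is the particle's mass and \hbar the reduced Planck constant.
T \;\approx\; \exp\!\left(-\frac{2L}{\hbar}\sqrt{2m\,(V_0 - E)}\right)
```

The exponential makes clear why tunneling is routine for electrons but vanishingly unlikely for everyday objects, and why demonstrating it for a macroscopic variable was such a surprise.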

This neat little trick has been experimentally verified many times. In the 1950s, physicists devised a system in which the flow of electrons would hit an energy barrier and stop because they lacked sufficient energy to surmount that obstacle. But some electrons didn’t follow the established rules of behavior. They simply tunneled right through the energy barrier.

(l-r): John Clarke, Michel H. Devoret, and John M. Martinis. Credit: Niklas Elmehed/Nobel Prize Outreach

From subatomic to the macroscale

Clarke, Devoret, and Martinis were the first to demonstrate that quantum effects, such as quantum tunneling and energy quantization, can operate on macroscopic scales, not just one particle at a time.

After earning his PhD from the University of Cambridge, Clarke came to the University of California, Berkeley, as a postdoc, eventually joining the faculty in 1969. By the mid-1980s, Devoret and Martinis had joined Clarke’s lab as a postdoc and graduate student, respectively. The trio decided to look for evidence of macroscopic quantum tunneling using a specialized circuit called a Josephson junction—a macroscopic device that takes advantage of a tunneling effect that is now widely used in quantum computing, quantum sensing, and cryptography.

A Josephson junction—named after British physicist Brian Josephson, who won the 1973 Nobel Prize in physics—is basically two pieces of superconducting metal separated by a thin insulating barrier. Despite this small gap between the two conductors, electrons can still tunnel through the insulator and create a current. That occurs at sufficiently low temperatures, when the junction becomes superconducting as electrons form so-called “Cooper pairs.”
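The junction’s behavior is usually summarized by two standard relations (textbook results, not something the article works through): the supercurrent is set by the quantum phase difference across the barrier, and a voltage appears only when that phase changes in time.

```latex
% The two Josephson relations, with I_c the junction's critical current and
% \varphi the phase difference between the two superconductors.
I = I_c \sin\varphi , \qquad V = \frac{\hbar}{2e}\,\frac{d\varphi}{dt}
```

It is this phase variable, a collective property of billions of Cooper pairs, that Clarke, Devoret, and Martinis watched tunnel like a single quantum particle.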

The team built an electrical circuit-based oscillator on a microchip measuring about one centimeter in size—essentially a quantum version of the classic pendulum. Their biggest challenge was figuring out how to reduce the noise in their experimental apparatus. For their experiments, they first fed a weak current into the junction and measured the voltage—initially zero. Then they increased the current and measured how long it took for the system to tunnel out of its trapped, zero-voltage state and produce a voltage.

Credit: Johan Jarnestad/The Royal Swedish Academy of Sciences

They took many measurements and found that the average current increased as the device’s temperature fell, as expected. But at some point, the temperature got so low that thermal effects could no longer explain the escapes, and the average current became independent of the device’s temperature—a telltale signature of macroscopic quantum tunneling.

The team also demonstrated that the Josephson junction exhibited quantized energy levels—meaning the energy of the system was limited to only certain allowed values, just like subatomic particles can gain or lose energy only in fixed, discrete amounts—confirming the quantum nature of the system. Their discovery effectively revolutionized quantum science, since other scientists could now test precise quantum physics on silicon chips, among other applications.

Lasers, superconductors, and superfluid liquids exhibit quantum mechanical effects at the macroscale, but these arise by combining the behavior of microscopic components. Clarke, Devoret, and Martinis were able to create a macroscopic effect—a measurable voltage—from a single macroscopic quantum state. Their system contained billions of Cooper pairs filling the entire superconductor on the chip, yet all of them were described by a single wave function. The whole system behaved like a large-scale artificial atom.

In fact, their circuit was basically a rudimentary qubit. Martinis showed in a subsequent experiment that such a circuit could be an information-bearing unit, with the lowest energy state and the first step upward functioning as a 0 and a 1, respectively. This paved the way for such advances as the transmon in 2007: a superconducting charge qubit with reduced sensitivity to noise.

“That quantization of the energy levels is the source of all qubits,” said Irfan Siddiqi, chair of UC Berkeley’s Department of Physics and one of Devoret’s former postdocs. “This was the grandfather of qubits. Modern qubit circuits have more knobs and wires and things, but that’s just how to tune the levels, how to couple or entangle them. The basic idea that Josephson circuits could be quantized and were quantum was really shown in this experiment. The fact that you can see the quantum world in an electrical circuit in this very direct way was really the source of the prize.”

So perhaps it is not surprising that Martinis left academia in 2014 to join Google’s quantum computing efforts, helping to build a quantum computer the company claimed had achieved “quantum supremacy” in 2019. Martinis left in 2020 and co-founded a quantum computing startup, Qolab, in 2022. His fellow Nobel laureate, Devoret, now leads Google’s quantum computing division and is also a faculty member at the University of California, Santa Barbara. As for Clarke, he is now a professor emeritus at UC Berkeley.

“These systems bridge the gap between microscopic quantum behavior and macroscopic devices that form the basis for quantum engineering,” Gregory Quiroz, an expert in quantum information science and quantum algorithms at Johns Hopkins University, said in a statement. “The rapid progress in this field over the past few decades—in part fueled by their critical results—has allowed superconducting qubits to go from small-scale laboratory experiments to large-scale, multi-qubit devices capable of realizing quantum computation. While we are still on the hunt for undeniable quantum advantage, we would not be where we are today without many of their key contributions to the field.”

As is often the case with fundamental research, none of the three physicists realized at the time how significant their discovery would be in terms of its impact on quantum computing and other applications.

“This prize really demonstrates what the American system of science has done best,” Jonathan Bagger, CEO of the American Physical Society, told the New York Times. “It really showed the importance of the investment in research for which we do not yet have an application, because we know that sooner or later, there will be an application.”

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

Natural disasters are a rising burden for the National Guard


New Pentagon data show climate impacts shaping reservists’ mission.

National Guard soldiers search for people stranded by flooding in the aftermath of Hurricane Helene on September 27, 2024, in Steinhatchee, Florida. Credit: Sean Rayford/Getty Images

This article originally appeared on Inside Climate News, a nonprofit, non-partisan news organization that covers climate, energy, and the environment. Sign up for their newsletter here.

The National Guard logged more than 400,000 member service days per year over the past decade responding to hurricanes, wildfires, and other natural disasters, the Pentagon has revealed in a report to Congress.

The numbers mean that on any given day, 1,100 National Guard troops on average have been deployed on disaster response in the United States.

Congressional investigators believe this is the first public accounting by the Pentagon of the cumulative burden of natural disaster response on the nation’s military reservists.

The data reflect greater strain on the National Guard and show the potential stakes of the escalating conflict between states and President Donald Trump over use of the troops. Trump’s drive to deploy the National Guard in cities as an auxiliary law enforcement force—an effort curbed by a federal judge over the weekend—is playing out at a time when governors increasingly rely on reservists for disaster response.

In the legal battle over Trump’s efforts to deploy the National Guard in Portland, Oregon, that state’s attorney general, Dan Rayfield, argued in part that Democratic Gov. Tina Kotek needed to maintain control of the Guard in case they were needed to respond to wildfire—including a complex of fires now burning along the Rogue River in southwest Oregon.

The Trump administration, meanwhile, rejects the science showing that climate change is worsening natural disasters and has ceased Pentagon efforts to plan for such impacts or reduce its own carbon footprint.

The Department of Defense recently provided the natural disaster figures to four Democratic senators as part of a response to their query in March to Defense Secretary Pete Hegseth regarding planned cuts to the military’s climate programs. Sen. Elizabeth Warren of Massachusetts, who led the query on behalf of herself and three other members of the Senate Committee on Armed Services, shared the response with Inside Climate News.

“The effects of climate change are destroying the military’s infrastructure—Secretary Hegseth should take that threat seriously,” Warren told ICN in an email. “This data shows just how costly this threat already is for the National Guard to respond to natural disasters. Failing to act will only make these costs skyrocket.”

Neither the Department of Defense nor the White House immediately responded to a request for comment.

Last week, Hegseth doubled down on his vow to erase climate change from the military’s agenda. “No more climate change worship,” Hegseth exhorted, before an audience of senior officials he summoned to Marine Corps Base Quantico in Virginia on October 1. “No more division, distraction, or gender delusions. No more debris,” he said. Departing from the prepared text released by the Pentagon, he added, “As I’ve said before, and will say again, we are done with that shit.”

But the data released by the Pentagon suggest that the impacts of climate change are shaping the military’s duties, even if the department ceases acknowledging the science or planning for a warming future. In 2024, National Guard paid duty days on disaster response—445,306—had nearly tripled compared to nine years earlier, with significant fluctuations in between. (The Pentagon provided the figures in terms of “mandays,” or paid duty days over and above reservists’ required annual training days.)

Demand for reservist deployment on disaster assistance over those years peaked at 1.25 million duty days in 2017, when Hurricanes Harvey, Irma, and Maria unleashed havoc in Texas, Florida, and Puerto Rico.

The greatest deployment of National Guard members in response to wildfire over the past decade came in 2023, when wind-driven wildfires tore across Maui, leaving more than 100 people dead. Called into action by Gov. Josh Green, the Hawaii National Guard performed aerial water drops in CH-47 Chinook helicopters. On the ground, they helped escort fleeing residents, aided in search and recovery, distributed potable water, and performed other tasks.

Sen. Mazie Hirono of Hawaii, Sen. Richard Blumenthal of Connecticut, and Sen. Tammy Duckworth of Illinois joined Warren in seeking numbers on National Guard natural disaster deployment from the Pentagon.

It was not immediately possible to compare National Guard disaster deployment over the last decade to prior decades, since the Pentagon has not published a similar accounting for years prior to 2015.

But last year, a study by the Rand Corporation, a research firm, on stressors for the National Guard said that service leaders believed that natural disaster response missions were growing in scale and intensity.

“Seasons for these events are lasting longer, the extent of areas that are seeing these events is bigger, and the storms that occur are seemingly more intense and therefore more destructive,” noted the Rand study, produced for the Pentagon. “Because of the population density changes that have occurred, the devastation that can result and the population that can be affected are bigger as well.”

A history of the National Guard published by the Pentagon in 2001 describes the 1990s as a turning point for the service, marked by increasing domestic missions due in part to “a nearly continuous string” of natural disasters.

One of those disasters was Hurricane Andrew, which ripped across southern Florida on August 23, 1992, causing more property damage than any storm in US history to that point. The crisis led to conflict between President George H.W. Bush’s administration and Florida’s Democratic governor, Lawton Chiles, over control of the National Guard and who should bear the blame for a lackluster initial response.

The National Guard, with 430,000 civilian soldiers, is a unique military branch that serves under both state and federal command. In Iraq and Afghanistan, for example, the president called on reservists to serve alongside the active-duty military. But state governors typically are commanders-in-chief for Guard units, calling on them in domestic crises, including natural disasters. The president only has limited legal authority to deploy the National Guard domestically, and such powers nearly always have been used in coordination with state governors.

But Trump has broken that norm and tested the boundaries of the law. In June, he deployed the National Guard for law and immigration enforcement in Los Angeles in defiance of Democratic Gov. Gavin Newsom. (Trump also deployed the Guard in Washington, DC, where members already are under the president’s command.) Over the weekend, Trump’s plans to deploy the Guard in Portland, Oregon, were put on hold by US District Judge Karin J. Immergut, a Trump appointee. She issued a second, broader stay on Sunday to block Trump from an attempt to deploy California National Guard members to Oregon. Nevertheless, the White House moved forward with an effort to deploy the Guard to Chicago in defiance of Illinois Gov. J.B. Pritzker, a Democrat. In that case, Trump is calling on Guard members from a politically friendly state, Texas, and a federal judge has rejected a bid by both the city of Chicago and the state of Illinois to block the move.

The conflicts could escalate should a natural disaster occur in a state where Trump has called the Guard into service on law enforcement, one expert noted.

“At the end of the day, it’s a political problem,” said Mark Nevitt, a professor at Emory University School of Law and a Navy veteran who specializes in the national security implications of climate change. “If, God forbid, there’s a massive wildfire in Oregon and there’s 2,000 National Guard men and women who are federalized, the governor would have to go hat-in-hand to President Trump” to get permission to redeploy the service members for disaster response, he said.

“The state and the federal government, most times it works—they are aligned,” Nevitt said. “But you can imagine a world where the president essentially refuses to give up the National Guard because he feels as though the crime-fighting mission has primacy over whatever other mission the governor wants.”

That scenario may already be unfolding in Oregon. On September 27, the same day that Trump announced his intent to send the National Guard into Portland, Kotek was mobilizing state resources to fight the Moon Complex Fire on the Rogue River, which had tripled in size due to dry winds. That fire is now 20,000 acres and only 10 percent contained. Pointing to that fire, Oregon Attorney General Rayfield told the court the Guard should remain ready to respond if needed, noting the role reservists played in responding to major Oregon fires in 2017 and 2020.

“Wildfire response is one of the most significant functions the Oregon National Guard performs in the State,” Rayfield argued in a court filing Sunday.

Although Oregon won a temporary stay, the Trump administration is appealing that order. And given the increasing role of the National Guard in natural disaster response, according to the Pentagon’s figures, the legal battle will have implications far beyond Portland. It will determine whether governors like Kotek will be forced to negotiate with Trump for control of the National Guard amid a crisis that his administration is seeking to downplay.

The neurons that let us see what isn’t there

Earlier work had hinted at such cells, but Shin and colleagues show systematically that they’re not rare oddballs—they’re a well-defined, functionally important subpopulation. “What we didn’t know is that these neurons drive local pattern completion within primary visual cortex,” says Shin. “We showed that those cells are causally involved in this pattern completion process that we speculate is likely involved in the perceptual process of illusory contours,” adds Adesnik.

Behavioral tests still to come

That doesn’t mean the mice “saw” the illusory contours when the neurons were artificially activated. “We didn’t actually measure behavior in this study,” says Adesnik. “It was about the neural representation.” All we can say at this point is that the IC-encoders could induce neural activity patterns that matched what imaging shows during normal perception of illusory contours.

“It’s possible that the mice weren’t seeing them,” admits Shin, “because the technique has involved a relatively small number of neurons, for technical limitations. But in the future, one could expand the number of neurons and also introduce behavioral tests.”

That’s the next frontier, Adesnik says: “What we would do is photo-stimulate these neurons and see if we can generate an animal’s behavioral response even without any stimulus on the screen.” Right now, optogenetics can only drive a small number of neurons, and IC-encoders are relatively rare and scattered. “For now, we have only stimulated a small number of these detectors, mainly because of technical limitations. IC-encoders are a rare population, probably distributed through the layers [of the visual system], but we could imagine an experiment where we recruit three, four, five, maybe even 10 times as many neurons,” he says. “In this case, I think we might be able to start getting behavioral responses. We’d definitely very much like to do this test.”

Nature Neuroscience, 2025. DOI: 10.1038/s41593-025-02055-5

Federica Sgorbissa is a science journalist; she writes about neuroscience and cognitive science for Italian and international outlets.

Here’s the real reason Endurance sank


The ship wasn’t designed to withstand the powerful ice compression forces—and Shackleton knew it.

The Endurance, frozen and keeled over in the ice of the Weddell Sea. Credit: BF/Frank Hurley

In 1915, intrepid British explorer Sir Ernest Shackleton and his crew were stranded for months in the Antarctic after their ship, Endurance, was trapped by pack ice, eventually sinking into the freezing depths of the Weddell Sea. Miraculously, the entire crew survived. The prevailing popular narrative surrounding the famous voyage features two key assumptions: that Endurance was the strongest polar ship of its time, and that the ship ultimately sank after ice tore away the rudder.

However, a fresh analysis reveals that Endurance would have sunk even with an intact rudder; it was crushed by the cumulative compressive forces of the Antarctic ice with no single cause for the sinking. Furthermore, the ship wasn’t designed to withstand those forces, and Shackleton was likely well aware of that fact, according to a new paper published in the journal Polar Record. Yet he chose to embark on the risky voyage anyway.

Author Jukka Tuhkuri of Aalto University is a polar explorer and one of the leading researchers on ice worldwide. He was among the scientists on the Endurance22 mission that discovered the Endurance shipwreck in 2022, chronicled in a 2024 National Geographic documentary. The ship was in pristine condition partly because of the lack of wood-eating microbes in those waters. In fact, the Endurance22 expedition’s exploration director, Mensun Bound, told The New York Times at the time that the shipwreck was the finest example he’d ever seen; Endurance was “in a brilliant state of preservation.”

As previously reported, Endurance set sail from Plymouth on August 6, 1914, with Shackleton joining his crew in Buenos Aires, Argentina. By the time they reached the Weddell Sea in January 1915, accumulating pack ice and strong gales slowed progress to a crawl. Endurance became completely icebound on January 24, and by mid-February, Shackleton ordered the boilers to be shut off so that the ship would drift with the ice until the weather warmed sufficiently for the pack to break up. It would be a long wait. For 10 months, the crew endured the freezing conditions. In August, ice floes pressed into the ship with such force that the ship’s decks buckled.

The ship’s structure nonetheless remained intact, but by October 25, Shackleton realized Endurance was doomed. He and his men opted to camp out on the ice some two miles (3.2 km) away, taking as many supplies as they could with them. Compacted ice and snow continued to fill the ship until a pressure wave hit on November 13, crushing the bow and splitting the main mast—all of which was captured on camera by crew photographer Frank Hurley. Another pressure wave hit in the late afternoon on November 21, lifting the ship’s stern. The ice floes parted just long enough for Endurance to finally sink into the ocean before closing again to erase any trace of the wreckage.

Once the wreck had been found, the team recorded as much as they could with high-resolution cameras and other instruments. Vasarhelyi, one of the documentary’s directors, particularly noted the technical challenge of deploying a remote digital 4K camera with lighting at 9,800 feet underwater, and the first deployment at that depth of photogrammetric and laser technology. This resulted in a millimeter-scale digital reconstruction of the entire shipwreck to enable close study of the finer details.

Challenging the narrative

The ice and wave tank at Aalto University. Credit: Aalto University

It was shortly after the Endurance22 mission found the shipwreck that Tuhkuri realized that there had never been a thorough structural analysis conducted of the vessel to confirm the popular narrative. Was Endurance truly the strongest polar ship of that time, and was a broken rudder the actual cause of the sinking? He set about conducting his own investigation to find out, analyzing Shackleton’s diaries and personal correspondence, as well as the diaries and correspondence of several Endurance crew members.

Tuhkuri also conducted a naval architectural analysis of the vessel under the conditions of compressive ice, which had never been done before. He then compared those results with the underwater images of the Endurance shipwreck. He also looked at comparable wooden polar expedition ships and steel icebreakers built in the late 1800s and early 1900s.

Endurance was originally named Polaris; Shackleton renamed it when he purchased the ship in 1914 for his doomed expedition. Per Tuhkuri, the ship had a lower (tween) deck, a main deck, and a short bridge deck above them that stopped at the machine room in order to make space for the steam engine and boiler. There were no beams in the machine room area, nor any reinforcing diagonal beams, which weakened this significant part of the ship’s hull.

This is because Endurance was originally built for polar tourism and for hunting polar bears and walruses in the Arctic; at the ice edge, ships only needed sufficiently strong planking and frames to withstand the occasional collision from ice floes. However, “In pack ice conditions, where compression from the ice needs to be taken into account, deck beams become of key importance,” Tuhkuri wrote. “It is the deck beams that keep the two ship sides apart and maintain the shape of a ship. Without strong enough deck beams, a vessel gets crushed by compressive ice, more or less irrespective of the thickness of planking and frames.”

The Endurance was nonetheless sturdy enough to withstand five serious ice compression events before her final sinking. On April 4, 1915, one of the scientists on board reported hearing loud rumbling noises from a 3-meter-high ice ridge that formed near the ship, causing the ship to vibrate. Tuhkuri believes this was due to a “compressive failure process” as ice crushed against the hull. On July 14, a violent snowstorm hit, and crew members could hear the ice breaking beneath the ship. The ice ridges that formed over the next few days were sufficiently concerning that Shackleton instituted four-hour watches on deck and insisted on having everything packed in case they had to abandon ship.

Crushed by the ice

Idealized cross sections of early Antarctic ships. Endurance was type (a); Deutschland was type (b). Credit: J. Tuhkuri, 2025

On August 1, an ice floe fractured and grinding noises were heard beneath the ship as the floe piled underneath it, lifting Endurance and causing her to heel first to starboard and then to port, as several deck beams began to buckle. Similar compression events kept happening until there was a sudden escalation on September 30. The hull began vibrating hard enough to shake the whole rigging as even more ice crushed against it. Even the linoleum on the floors buckled; Harry McNish wrote in his diary that it looked like Endurance “was going to pieces.”

Yet another ice compression event occurred on October 17, pushing the vessel one meter into the air as the iron plates on the engine room’s floor buckled and slid over each other. Ship scientist Reginald James wrote that “for a time things were not good as the pressure was mostly along the region of the engine room where there are no beams of any strength,” while Captain Worsley described the engine room as “the weakest part of the ship.”

By the afternoon, Endurance was heeled almost 30 degrees to port, so much so that the keel was visible from the starboard side, per Tuhkuri, although the ice started to fracture in the evening so that the ship could shift upright again. The crew finally abandoned ship on October 27 after an even more severe compression event hit a few days before. Endurance finally sank below the ice on November 21.

Tuhkuri’s analysis of the structural damage to Endurance revealed that the rudder and the stern post were indeed torn off, confirmed by crew correspondence and diaries and by the underwater images taken of the wreck. The keel was also ripped off, with McNish noting in his diary that the ship broke into two halves as a result. The underwater images are less clear on this point, but Tuhkuri writes that there is something “some distance forward from the rudder, on the port side” that “could be the end of a displaced part of the keel sticking up from under the ship.”

All the diaries mentioned the buckling and breaking of deck beams, and there was much structural damage to the ship’s sides; for instance, Worsley writes of “great spikes of ice… forcing their way through the ship’s sides.” There are no visible holes in the wreck’s sides in the underwater images, but Tuhkuri posits that the damage is likely buried in the mud on the sea bed, given that by late October, Endurance “was heavily listed and the bottom was exposed.”

Jukka Tuhkuri on the ice. Credit: Aalto University

Based on his analysis, Tuhkuri concluded that the rudder wasn’t the sole or primary reason for the ship’s sinking. “Endurance would have sunk even if it did not have a rudder at all,” Tuhkuri wrote; it was crushed by the ice, with no single reason for its eventual sinking. Shackleton himself described the process as ice floes “simply annihilating the ship.”

Perhaps the most surprising finding is that Shackleton knew of Endurance‘s structural shortcomings even before undertaking the voyage. Per Tuhkuri, the devastating effects of compressive ice on ships were known to shipbuilders in the early 1900s. An early Swedish expedition was forced to abandon its ship Antarctic in February 1903 when it became trapped in the ice. Things progressed much like Endurance: the ice lifted Antarctic up so that the ship heeled over, with ice-crushed sides, buckling beams, broken planking, and a damaged rudder and stern post. The final sinking occurred when an advancing ice floe ripped off the keel.

Shackleton knew of Antarctic‘s fate and had even been involved in the rescue operation. He also helped Wilhelm Filchner make final preparations for Filchner’s 1911–1913 polar expedition with a ship named Deutschland; he even advised his colleague to strengthen the ship’s hull by adding diagonal beams, the better to withstand the Weddell Sea ice. Filchner did so, and as a result, Deutschland survived eight months of being trapped in compressive ice until the ship was finally able to break free and sail home. (It took a torpedo attack in 1917 to sink the good ship Deutschland.)

The same shipyard that modified Deutschland had also just signed a contract to build Endurance (then called Polaris). So both Shackleton and the shipbuilders knew how destructive compressive ice could be and how to bolster a ship against it. Yet Endurance was not outfitted with diagonal beams to strengthen its hull. And knowing this, Shackleton bought Endurance anyway for his 1914–1915 voyage. In a 1914 letter to his wife, he even compared the strength of its construction unfavorably with that of the Nimrod, the ship he used for his 1907–1909 expedition. So Shackleton had to know he was taking a big risk.

“Even simple structural analysis shows that the ship was not designed for the compressive pack ice conditions that eventually sank it,” said Tuhkuri. “The danger of moving ice and compressive loads—and how to design a ship for such conditions—was well understood before the ship sailed south. So we really have to wonder why Shackleton chose a vessel that was not strengthened for compressive ice. We can speculate about financial pressures or time constraints, but the truth is, we may never know. At least we now have more concrete findings to flesh out the stories.”

Polar Record, 2025. DOI: 10.1017/S0032247425100090 (About DOIs).

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

Pentagon contract figures show ULA’s Vulcan rocket is getting more expensive

A SpaceX Falcon Heavy rocket with NASA’s Psyche spacecraft launches from NASA’s Kennedy Space Center in Florida on October 13, 2023. Credit: Chandan Khanna/AFP via Getty Images

The launch orders announced Friday comprise the second batch of NSSL Phase 3 missions the Space Force has awarded to SpaceX and ULA.

It’s important to remember that these prices aren’t what ULA or SpaceX would charge a commercial satellite customer. The US government pays a premium for access to space. The Space Force, the National Reconnaissance Office, and NASA don’t insure their launches like a commercial customer would do. Instead, government agencies have more insight into their launch contractors, including inspections, flight data reviews, risk assessments, and security checks. Government missions also typically get priority on ULA and SpaceX’s launch schedules. All of this adds up to more money.

A heavy burden

Four of the five launches awarded to SpaceX Friday will use the company’s larger Falcon Heavy rocket, according to Lt. Col. Kristina Stewart at Space Systems Command. One will fly on SpaceX’s workhorse Falcon 9. This is the first time a majority of the Space Force’s annual launch orders has required the lift capability of a Falcon Heavy, with three Falcon 9 booster cores combining to heave larger payloads into space.

All versions of ULA’s Vulcan rocket use a single core booster, with varying numbers of strap-on solid-fueled rocket motors to provide extra thrust off the launch pad.

Here’s a breakdown of the seven new missions assigned to SpaceX and ULA:

USSF-149: Classified payload on a SpaceX Falcon 9 from Florida

USSF-63: Classified payload on a SpaceX Falcon Heavy from Florida

USSF-155: Classified payload on a SpaceX Falcon Heavy from Florida

USSF-205: WGS-12 communications satellite on a SpaceX Falcon Heavy from Florida

NROL-86: Classified payload on a SpaceX Falcon Heavy from Florida

USSF-88: GPS IIIF-4 navigation satellite on a ULA Vulcan VC2S (two solid rocket boosters) from Florida

NROL-88: Classified payload on a ULA Vulcan VC4S (four solid rocket boosters) from Florida

How different mushrooms learned the same psychedelic trick

Magic mushrooms have been used in traditional ceremonies and for recreational purposes for thousands of years. However, a new study has found that mushrooms evolved the ability to make the same psychoactive substance twice. The discovery has important implications for both our understanding of these mushrooms’ role in nature and their medical potential.

Magic mushrooms produce psilocybin, which your body converts into its active form, psilocin, when you ingest it. Psilocybin rose in popularity in the 1960s and was eventually classed as a Schedule 1 drug in the US in 1970, and as a Class A drug in 1971 in the UK, the designations given to drugs that have high potential for abuse and no accepted medical use. This put a stop to research on the medical use of psilocybin for decades.

But recent clinical trials have shown that psilocybin can reduce depression severity, suicidal thoughts, and chronic anxiety. Given its potential for medical treatments, there is renewed interest in understanding how psilocybin is made in nature and how we can produce it sustainably.

The new study, led by pharmaceutical microbiology researcher Dirk Hoffmeister, from Friedrich Schiller University Jena, discovered that mushrooms can make psilocybin in two different ways, using different types of enzymes. This also helped the researchers discover a new way to make psilocybin in a lab.

Based on the work led by Hoffmeister, the enzymes from the two unrelated types of mushrooms under study appear to have evolved independently of each other and take different routes to create the exact same compound.

This is a process known as convergent evolution, in which unrelated organisms independently evolve distinct ways to produce the same trait. Caffeine is a classic example: different plants, including coffee, tea, cacao, and guaraná, have each independently evolved the ability to produce the stimulant.

This is the first time that convergent evolution has been observed in two organisms that belong to the fungal kingdom. Interestingly, the two mushrooms in question have very different lifestyles. Inocybe corydalina, also known as the greenflush fibrecap and the object of Hoffmeister’s study, grows in association with the roots of different kinds of trees. Psilocybe mushrooms, on the other hand, traditionally known as magic mushrooms, live on nutrients that they acquire by decomposing dead organic matter, such as decaying wood, grass, roots, or dung.


blue-origin-aims-to-land-next-new-glenn-booster,-then-reuse-it-for-moon-mission

Blue Origin aims to land next New Glenn booster, then reuse it for Moon mission


“We fully intend to recover the New Glenn first stage on this next launch.”

New Glenn lifts off on its debut flight on January 16, 2025. Credit: Blue Origin

There’s a good bit riding on the second launch of Blue Origin’s New Glenn rocket.

Most directly, the fate of a NASA science mission to study Mars’ upper atmosphere hinges on a successful launch. The second flight of Blue Origin’s heavy-lifter will send two NASA-funded satellites toward the red planet to study the processes that drove Mars’ evolution from a warmer, wetter world to the cold, dry planet of today.

A successful launch would also nudge Blue Origin closer to winning certification from the Space Force to begin launching national security satellites.

But there’s more on the line. If Blue Origin is to launch its first robotic Moon lander early next year—as currently envisioned—the company needs to recover the New Glenn rocket’s first stage booster. Crews will again dispatch Blue Origin’s landing platform into the Atlantic Ocean, just as they did for the first New Glenn flight in January.

The debut launch of New Glenn successfully reached orbit, a difficult feat for the inaugural flight of any rocket. But the booster fell into the Atlantic Ocean after three of the rocket’s engines failed to reignite to slow down for landing. Engineers identified seven changes to resolve the problem, focusing on what Blue Origin calls “propellant management and engine bleed control improvements.”

Relying on reuse

Pat Remias, Blue Origin’s vice president of space systems development, said Thursday that the company is confident in nailing the landing on the second flight of New Glenn. That launch, with NASA’s next set of Mars probes, is likely to occur no earlier than November from Cape Canaveral Space Force Station, Florida.

“We fully intend to recover the New Glenn first stage on this next launch,” Remias said in a presentation at the International Astronautical Congress in Sydney. “Fully intend to do it.”

Blue Origin, owned by billionaire Jeff Bezos, nicknamed the booster stage for the next flight “Never Tell Me The Odds.” It’s not quite fair to say the company’s leadership has gone all-in with its bet that the next launch will result in a successful booster landing. But the difference between a smooth touchdown and another crash landing will have a significant effect on Bezos’ Moon program.

That’s because the third New Glenn launch, penciled in for no earlier than January of next year, will reuse the same booster flown on the upcoming second flight. The payload on that launch will be Blue Origin’s first Blue Moon lander, aiming to become the largest spacecraft to reach the lunar surface. Ars has published a lengthy feature on the Blue Moon lander’s role in NASA’s effort to return astronauts to the Moon.

“We will use that first stage on the next New Glenn launch,” Remias said. “That is the intent. We’re pretty confident this time. We knew it was going to be a long shot [to land the booster] on the first launch.”

A long shot, indeed. It took SpaceX 20 launches of its Falcon 9 rocket over five years before pulling off the first landing of a booster. It was another 15 months before SpaceX launched a previously flown Falcon 9 booster for the first time.

With New Glenn, Blue’s engineers hope to drastically shorten the learning curve. Going into the second launch, the company’s managers anticipate refurbishing the first recovered New Glenn booster to launch again within 90 days. That would be a remarkable accomplishment.

Dave Limp, Blue Origin’s CEO, wrote earlier this year on social media that recovering the booster on the second New Glenn flight will “take a little bit of luck and a lot of excellent execution.”

On September 26, Blue Origin shared this photo of the second New Glenn booster on social media.

Blue Origin’s production of second stages for the New Glenn rocket has far outpaced manufacturing of booster stages. The second stage for the second flight was test-fired in April, and Blue completed a similar static-fire test in August for the second stage assigned to the third flight. Meanwhile, according to a social media post written by Limp last week, the body of the second New Glenn booster is assembled, and installation of its seven BE-4 engines is “well underway” at the company’s rocket factory in Florida.

The lagging production of New Glenn boosters, known as GS1s (Glenn Stage 1s), is partly by design. Blue Origin’s strategy with New Glenn has been to build a small number of GS1s, each of which is more expensive and labor-intensive to build than a SpaceX Falcon 9 booster. This approach counts on routine recoveries and rapid refurbishment of boosters between missions.

However, this strategy comes with risks, as it puts the booster landings in the critical path for ramping up New Glenn’s launch rate. At one time, Blue aimed to launch eight New Glenn flights this year; it will probably end the year with two.

Laura Maginnis, Blue Origin’s vice president of New Glenn mission management, said last month that the company was building a fleet of “several boosters” and had eight upper stages in storage. That would bode well for a quick ramp-up in launch cadence next year.

However, Blue’s engineers haven’t had a chance to inspect or test a recovered New Glenn booster. Even if the next launch concludes with a successful landing, the rocket could come back to Earth with some surprises. SpaceX’s initial development of Falcon 9 and Starship was richer in hardware, with many boosters in production to decouple successful landings from forward progress.

Blue Moon

All of this means a lot is riding on an on-target landing of the New Glenn booster on the next flight. Separate from Blue Origin’s ambitions to fly many more New Glenn rockets next year, a good recovery would also mean an earlier demonstration of the company’s first lunar lander.

The lander set to launch on the third New Glenn mission is known as Blue Moon Mark 1, an unpiloted vehicle designed to robotically deliver up to 3 metric tons (about 6,600 pounds) of cargo to the lunar surface. The spacecraft will have a height of about 26 feet (8 meters), taller than the lunar lander used for NASA’s Apollo astronaut missions.

The first Blue Moon Mark 1 is funded from Blue Origin’s coffers. It is now fully assembled and will soon ship to NASA’s Johnson Space Center in Houston for vacuum chamber testing. Then, it will travel to Florida’s Space Coast for final launch preparations.

“We are building a series, not a singular lander, but multiple types and sizes and scales of landers to go to the Moon,” Remias said.

The second Mark 1 lander will carry NASA’s VIPER rover to prospect for water ice at the Moon’s south pole in late 2027. Around the same time, Blue will use a Mark 1 lander to deploy two small satellites to orbit the Moon, flying as low as a few miles above the surface to scout for resources like water, precious metals, rare earth elements, and helium-3 that could be extracted and exploited by future explorers.

A larger lander, Blue Moon Mark 2, is in an earlier stage of development. It will be human-rated to land astronauts on the Moon for NASA’s Artemis program.

Blue Origin’s Blue Moon MK1 lander, seen in the center, is taller than NASA’s Apollo lunar lander, currently the largest spacecraft to have landed on the Moon. Blue Moon MK2 is even larger, but all three landers are dwarfed in size by SpaceX’s Starship. Credit: Blue Origin

NASA’s other crew-rated lander will be derived from SpaceX’s Starship rocket. But Starship and Blue Moon Mark 2 are years away from being ready to accommodate a human crew, and both require orbital cryogenic refueling—something never before attempted in space—to transit out to the Moon.

This has led to a bit of a dilemma at NASA. China is also working on a lunar program, eyeing a crew landing on the Moon by 2030. Many experts say that, as of today, China is on pace to land astronauts on the Moon before the United States.

Of course, 12 US astronauts walked on the Moon in the Apollo program. But no one has gone back since 1972, and NASA and China are each planning to return to the Moon to stay.

One way to speed up a US landing on the Moon might be to use a modified version of Blue Origin’s Mark 1 lander, Ars reported Thursday.

If this is the path NASA takes, the stakes for the next New Glenn launch and landing will soar even higher.

Photo of Stephen Clark

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.


a-biological-0-day?-threat-screening-tools-may-miss-ai-designed-proteins.

A biological 0-day? Threat-screening tools may miss AI-designed proteins.


Ordering DNA for AI-designed toxins doesn’t always raise red flags.

Designing variations of the complex, three-dimensional structures of proteins has been made a lot easier by AI tools. Credit: Historical / Contributor

On Thursday, a team of researchers led by Microsoft announced that they had discovered, and possibly patched, what they’re terming a biological zero-day—an unrecognized security hole in a system that protects us from biological threats. The system at risk screens purchases of DNA sequences to determine when someone’s ordering DNA that encodes a toxin or dangerous virus. But, the researchers argue, it has become increasingly vulnerable to missing a new threat: AI-designed toxins.

How big of a threat is this? To understand, you have to know a bit more about both existing biosurveillance programs and the capabilities of AI-designed proteins.

Catching the bad ones

Biological threats come in a variety of forms. Some are pathogens, such as viruses and bacteria. Others are protein-based toxins, like the ricin that was sent to the White House in 2003. Still others are chemical toxins that are produced through enzymatic reactions, like the molecules associated with red tide. All of them get their start through the same fundamental biological process: DNA is transcribed into RNA, which is then used to make proteins.

For several decades now, starting the process has been as easy as ordering the needed DNA sequence online from any of a number of companies, which will synthesize a requested sequence and ship it out. Recognizing the potential threat here, governments and industry have worked together to add a screening step to every order: the DNA sequence is scanned for its ability to encode parts of proteins or viruses considered threats. Any positives are then flagged for human intervention to evaluate whether they or the people ordering them truly represent a danger.

Both the list of proteins and the sophistication of the scanning have been continually updated in response to research progress over the years. For example, initial screening was done based on similarity to target DNA sequences. But there are many DNA sequences that can encode the same protein, so the screening algorithms have been adjusted accordingly, recognizing all the DNA variants that pose an identical threat.
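
To make the idea concrete, here is a minimal Python sketch of why screening at the protein level catches synonymous DNA orders that a pure nucleotide comparison can miss. The tiny codon table, the example sequences, and the flagged-peptide set are all invented for illustration; none of this is the actual screening software.

```python
# Illustrative sketch only: the genetic code is degenerate, so different DNA
# sequences can encode the same protein. Comparing at the protein level catches
# synonymous variants that DNA-level similarity checks can miss.
CODON_TABLE = {  # deliberately tiny subset of the standard genetic code
    "ATG": "M", "AAA": "K", "AAG": "K",
    "ACT": "T", "ACA": "T", "ACC": "T", "ACG": "T",
}

def translate(dna: str) -> str:
    """Translate a DNA string (length divisible by 3) into a peptide string."""
    return "".join(CODON_TABLE[dna[i:i + 3]] for i in range(0, len(dna), 3))

FLAGGED_PEPTIDES = {"MKT"}  # stand-in for a database of threat sequences

order_a = "ATGAAAACT"  # ATG-AAA-ACT -> M K T
order_b = "ATGAAGACA"  # ATG-AAG-ACA -> M K T (different DNA, same peptide)

for order in (order_a, order_b):
    peptide = translate(order)
    if peptide in FLAGGED_PEPTIDES:
        print(f"{order} encodes {peptide}: flag for human review")
```

Both orders translate to the same short peptide, so a protein-level check flags them identically even though their DNA sequences differ.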

The new work can be thought of as an extension of that threat. Not only can multiple DNA sequences encode the same protein; multiple proteins can perform the same function. To form a toxin, for example, typically requires the protein to adopt the correct three-dimensional structure, which brings a handful of critical amino acids within the protein into close proximity. Outside of those critical amino acids, however, things can often be quite flexible. Some amino acids may not matter at all; other locations in the protein could work with any positively charged amino acid, or any hydrophobic one.
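
As a rough illustration of that flexibility, the toy check below asks whether a variant sequence keeps an allowed amino acid at a few “critical” positions while ignoring everything else. The positions, residue classes, and example sequence are made up for this sketch; real functional constraints are far more involved than a lookup table.

```python
# Toy tolerance check, invented for illustration: a variant "works" here if every
# critical position holds an amino acid from the allowed class for that position.
POSITIVE = set("KRH")          # positively charged side chains
HYDROPHOBIC = set("AVLIMFWY")  # hydrophobic side chains

# position index -> amino acids assumed (hypothetically) to preserve function
CRITICAL = {4: {"C"}, 11: POSITIVE, 23: HYDROPHOBIC}

def preserves_critical_residues(variant: str) -> bool:
    """True if every critical position holds an allowed amino acid."""
    return all(variant[pos] in allowed for pos, allowed in CRITICAL.items())

example = "MAQSCTTLKDERAVGNPLQWTSDLIKEAGH"  # 30-residue stand-in sequence
print(preserves_critical_residues(example))  # True: C at 4, R at 11, L at 23
```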

In the past, it could be extremely difficult (meaning time-consuming and expensive) to do the experiments that would tell you what sorts of changes a string of amino acids could tolerate while remaining functional. But the team behind the new analysis recognized that AI protein design tools have now gotten quite sophisticated and can predict when distantly related sequences can fold up into the same shape and catalyze the same reactions. The process is still error-prone, and you often have to test a dozen or more proposed proteins to get a working one, but it has produced some impressive successes.

So, the team developed a hypothesis to test: AI can take an existing toxin and design a protein with the same function that’s distantly related enough that the screening programs do not detect orders for the DNA that encodes it.

The zero-day treatment

The team started with a basic test: use AI tools to design variants of the toxin ricin, then test them against the software that is used to screen DNA orders. The results of the test suggested there was a risk of dangerous protein variants slipping past existing screening software, so the situation was treated like the equivalent of a zero-day vulnerability.

“Taking inspiration from established cybersecurity processes for addressing such situations, we contacted the relevant bodies regarding the potential vulnerability, including the International Gene Synthesis Consortium and trusted colleagues in the protein design community as well as leads in biosecurity at the US Office of Science and Technology Policy, US National Institute of Standards and Technologies, US Department of Homeland Security, and US Office of Pandemic Preparedness and Response,” the authors report. “Outside of those bodies, details were kept confidential until a more comprehensive study could be performed in pursuit of potential mitigations and for ‘patches’… to be developed and deployed.”

Details of that original test are being made available today as part of a much larger analysis that extends the approach to a large range of toxic proteins. Starting with 72 toxins, the researchers used three open source AI packages to generate a total of about 75,000 potential protein variants.

And this is where things get a little complicated. Many of the AI-designed protein variants are going to end up being non-functional, either subtly or catastrophically failing to fold up into the correct configuration to create an active toxin. The only way to know which ones work is to make the proteins and test them biologically; most AI protein design efforts will make actual proteins from dozens to hundreds of the most promising-looking potential designs to find a handful that are active. But doing that for 75,000 designs is completely unrealistic.

Instead, the researchers used two software-based tools to evaluate each of the 75,000 designs. One of these focuses on the similarity between the overall predicted physical structure of the proteins, and another looks at the predicted differences between the positions of individual amino acids. Either way, they’re a rough approximation of just how similar the proteins formed by two strings of amino acids should be. But they’re definitely not a clear indicator of whether those two proteins would be equally functional.
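
The paper’s exact metrics aren’t named here, but a standard way to quantify “predicted differences between the positions of individual amino acids” is the root-mean-square deviation between corresponding alpha-carbon coordinates after optimal superposition (the Kabsch algorithm). The sketch below is a generic version of that calculation, not the researchers’ code.

```python
import numpy as np

def kabsch_rmsd(P: np.ndarray, Q: np.ndarray) -> float:
    """RMSD between two (N, 3) coordinate sets after optimal rigid superposition."""
    P = P - P.mean(axis=0)                    # center both point clouds
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                               # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against an improper rotation
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # optimal rotation mapping P onto Q
    return float(np.sqrt(((P @ R.T - Q) ** 2).sum(axis=1).mean()))

# Sanity check: a rotated, translated copy of the same fake "structure" gives ~0.
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))                  # 50 made-up C-alpha positions
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
Q = P @ Rz.T + np.array([5.0, -2.0, 1.0])
print(round(kabsch_rmsd(P, Q), 6))            # ~0.0
```

Two designs predicted to fold into nearly the same shape yield a low RMSD; designs that land far from the original structure yield a high one.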

In any case, DNA sequences encoding all 75,000 designs were fed into the software that screens DNA orders for potential threats. One thing that was very clear is that there were huge variations in the ability of the four screening programs to flag these variant designs as threatening. Two of them seemed to do a pretty good job, one was mixed, and another let most of them through. Three of the software packages were updated in response to this performance, which significantly improved their ability to pick out variants.

There was also a clear trend in all four screening packages: The closer the variant was to the original structurally, the more likely the package (both before and after the patches) was to be able to flag it as a threat. In all cases, there was also a cluster of variant designs that were unlikely to fold into a similar structure, and these generally weren’t flagged as threats.

What does this mean?

Again, it’s important to emphasize that this evaluation is based on predicted structures; “unlikely” to fold into a similar structure to the original toxin doesn’t mean these proteins will be inactive as toxins. Functional proteins are probably going to be very rare among this group, but there may be a handful in there. That handful is also probably rare enough that you would have to order up and test far too many designs to find one that works, making this an impractical threat vector.

At the same time, there are also a handful of proteins that are very similar to the toxin structurally and not flagged by the software. For the three patched versions of the software, the designs that slip through the screening represent about 1 to 3 percent of the total in the “very similar” category. That’s not great, but it’s probably good enough: any group that tried to order up a toxin by this method would need to place more than 50 orders just to have a good chance that one slipped through, and that volume of ordering would raise all sorts of red flags.
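
For a rough sense of where that “over 50” figure comes from, treat each “very similar” design as independently having a 1 to 3 percent chance of escaping a patched screener. The slip rates come from the result above, but the independence assumption and the arithmetic are a back-of-the-envelope sketch, not the study’s analysis.

```python
# Back-of-the-envelope only: expected orders until one design escapes screening,
# and the chance that at least one of 50 orders escapes, at assumed slip rates.
for p in (0.01, 0.02, 0.03):
    expected_orders = 1 / p              # mean of a geometric distribution
    p_any_of_50 = 1 - (1 - p) ** 50      # P(at least one of 50 slips through)
    print(f"slip rate {p:.0%}: ~{expected_orders:.0f} orders on average, "
          f"P(>=1 of 50 escapes) = {p_any_of_50:.2f}")
```

At a 2 percent slip rate, it takes about 50 orders on average before one design gets through, which is the scale of ordering activity that would itself draw scrutiny.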

One other notable result is that the designs that weren’t flagged were mostly variants of just a handful of toxin proteins. So this is less of a general problem with the screening software and might be more of a small set of focused problems. Of note, one of the proteins that produced a lot of unflagged variants isn’t toxic itself; instead, it’s a co-factor necessary for the actual toxin to do its thing. As such, some of the screening software packages didn’t even flag the original protein as dangerous, much less any of its variants. (For these reasons, the company that makes one of the better-performing software packages decided the threat here wasn’t significant enough to merit a security patch.)

So, on its own, this work doesn’t seem to have identified something that’s a major threat at the moment. But it’s probably useful, in that it’s a good thing to get the people who engineer the screening software to start thinking about emerging threats.

That’s because, as the people behind this work note, AI protein design is still in its early stages, and we’re likely to see considerable improvements. And there’s likely to be a limit to the sorts of things we can screen for. We’re already at the point where AI protein design tools can be used to create proteins that have entirely novel functions and do so without starting with variants of existing proteins. In other words, we can design proteins that are impossible to screen for based on similarity to known threats, because they don’t look at all like anything we know is dangerous.

Entirely novel protein-based toxins would be very difficult to design, because they have to both cross the cell membrane and do something dangerous once inside. While AI tools are probably unable to design something that sophisticated at the moment, I would be hesitant to rule out their eventually reaching that level of sophistication.

Science, 2025. DOI: 10.1126/science.adu8578  (About DOIs).

Photo of John Timmer

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.


scientists-revive-old-bulgarian-recipe-to-make-yogurt-with-ants

Scientists revive old Bulgarian recipe to make yogurt with ants

Fermenting milk to make yogurt, cheeses, or kefir is an ancient practice, and different cultures have their own traditional methods, often preserved in oral histories. The forests of Bulgaria and Turkey have an abundance of red wood ants, for instance, so a time-honored Bulgarian yogurt-making practice involves dropping a few live ants (or crushed-up ant eggs) into the milk to jump-start fermentation. Scientists have now figured out why the ants are so effective in making edible yogurt, according to a paper published in the journal iScience. The authors even collaborated with chefs to create modern recipes using ant yogurt.

“Today’s yogurts are typically made with just two bacterial strains,” said co-author Leonie Jahn from the Technical University of Denmark. “If you look at traditional yogurt, you have much bigger biodiversity, varying based on location, households, and season. That brings more flavors, textures, and personality.”

If you want to study traditional culinary methods, it helps to go where those traditions emerged, since the locals likely still retain memories and oral histories of said culinary methods—in this case, Nova Mahala, Bulgaria, where co-author Sevgi Mutlu Sirakova’s family still lives. To recreate the region’s ant yogurt, the team followed instructions from Sirakova’s uncle. They used fresh raw cow milk, warmed until scalding, “such that it could ‘bite your pinkie finger,’” per the authors. Four live red wood ants were then collected from a local colony and added to the milk.

The authors secured the milk with cheesecloth and wrapped the glass container in fabric for insulation before burying it inside the ant colony, covering the container completely with the mound material. “The nest itself is known to produce heat and thus act as an incubator for yogurt fermentation,” they wrote. They retrieved the container 26 hours later to taste it and check the pH, stirring it to observe the coagulation. The milk had definitely begun to thicken and sour, producing the early stage of yogurt. Tasters described it as “slightly tangy, herbaceous,” with notes of “grass-fed fat.”


trump-offers-universities-a-choice:-comply-for-preferential-funding

Trump offers universities a choice: Comply for preferential funding

On Wednesday, The Wall Street Journal reported that the Trump administration had offered nine schools a deal: manage your universities in a way that aligns with administration priorities and get “substantial and meaningful federal grants,” along with other benefits. Failure to accept the bargain would result in a withdrawal of federal programs that would likely cripple most universities. The offer, sent to a mixture of state and private universities, would see the government dictate everything from hiring and admissions standards to grading, and it includes provisions that appear intended to make conservative ideas more welcome on campus.

The document was sent to the University of Arizona, Brown University, Dartmouth College, Massachusetts Institute of Technology, the University of Pennsylvania, the University of Southern California, the University of Texas, Vanderbilt University, and the University of Virginia. However, independent reporting indicates that the administration will ultimately extend the deal to all colleges and universities.

Ars has obtained a copy of the proposed “Compact for Academic Excellence in Higher Education,” which makes the scope of the bargain clear in its introduction. “Institutions of higher education are free to develop models and values other than those below, if the institution elects to forego federal benefits,” it suggests, while mentioning that those benefits include access to fundamental needs, like student loans, federal contracts, research funding, tax benefits, and immigration visas for students and faculty.

It is difficult to imagine how it would be possible to run a major university without access to those programs, making this less a compact and more of an ultimatum.

Poorly thought through

The Compact itself would see universities agree to cede admissions standards to the federal government. In this case, the government is demanding that schools base admissions decisions only on “objective” criteria such as GPA and standardized test scores, and that they publish those criteria on their websites. They would also have to publish anonymized data comparing how admitted and rejected students fared relative to these criteria.
