Science

Lunar Gateway’s skeleton is complete—its next stop may be Trump’s chopping block

Officials blame changing requirements for much of the delay and cost growth. NASA managers dramatically changed their plans for the Gateway program in 2020, when they decided to launch the PPE and HALO on the same rocket, prompting major changes to their designs.

Jared Isaacman, Trump’s nominee for NASA administrator, declined to commit to the Gateway program during a confirmation hearing before the Senate Commerce Committee on April 9. Sen. Ted Cruz (R-Texas), the committee’s chairman, pressed Isaacman on the Lunar Gateway. Cruz is one of the Gateway program’s biggest backers in Congress since it is managed by Johnson Space Center in Texas. If it goes ahead, Gateway would guarantee numerous jobs at NASA’s mission control in Houston throughout its 15-year lifetime.

“That’s an area that, if I’m confirmed, I would love to roll up my sleeves and further understand what’s working, right?” Isaacman replied to Cruz. “What are the opportunities the Gateway presents to us? And where are some of the challenges, because I think the Gateway is a component of many programs that are over budget and behind schedule.”

The pressure shell for the Habitation and Logistics Outpost (HALO) module arrived in Gilbert, Arizona, last week for internal outfitting. Credit: NASA/Josh Valcarcel

Checking in with Gateway

Nevertheless, the Gateway program achieved a milestone one week before Isaacman’s confirmation hearing. The metallic pressure shell for the HALO module was shipped from its factory in Italy to Arizona. The HALO module is only partially complete, and it lacks life support systems and other hardware it needs to operate in space.

Over the next couple of years, Northrop Grumman will outfit the habitat with those components and connect it with the Power and Propulsion Element under construction at Maxar Technologies in Silicon Valley. This stage of spacecraft assembly, along with prelaunch testing, often uncovers problems that can drive up costs and trigger more delays.

Ars recently spoke with Jon Olansen, a bio-mechanical engineer and veteran space shuttle flight controller who now manages the Gateway program at Johnson Space Center. A transcript of our conversation with Olansen is below. It is lightly edited for clarity and brevity.

Ars: The HALO module has arrived in Arizona from Italy. What’s next?

Olansen: This HALO module went through significant effort from the primary and secondary structure perspective out at Thales Alenia Space in Italy. That was most of their focus in getting the vehicle ready to ship to Arizona. Now that it’s in Arizona, Northrop is setting it up in their facility there in Gilbert to be able to do all of the outfitting of the systems we need to actually execute the missions we want to do, keep the crew safe, and enable the science that we’re looking to do. So, if you consider your standard spacecraft, you’re going to have all of your command-and-control capabilities, your avionics systems, your computers, your network management, all of the things you need to control the vehicle. You’re going to have your power distribution capabilities. HALO attaches to the Power and Propulsion Element, and it provides the primary power distribution capability for the entire station. So that’ll all be part of HALO. You’ll have your standard thermal systems for active cooling. You’ll have the vehicle environmental control systems that will need to be installed, [along with] some of the other crew systems that you can think of, from lighting, restraint, mobility aids, all the different types of crew systems. Then, of course, all of our science aspects. So we have payload lockers, both internally, as well as payload sites external that we’ll have available, so pretty much all the different systems that you would need for a human-rated spacecraft.

Ars: What’s the latest status of the Power and Propulsion Element?

Olansen: PPE is fairly well along in their assembly and integration activities. The central cylinder has been integrated with the propulsion tanks… Their propulsion module is in good shape. They’re working on the avionics shelves associated with that spacecraft. So, with both vehicles, we’re really trying to get the assembly done in the next year or so, so we can get into integrated spacecraft testing at that point in time.

Ars: What’s in the critical path in getting to the launch pad?

Olansen: The assembly and integration activity is really the key for us. It’s to get to the full vehicle level test. All the different activities that we’re working on across the vehicles are making substantive progress. So, it’s a matter of bringing them all in and doing the assembly and integration in the appropriate sequences, so that we get the vehicles put together the way we need them and get to the point where we can actually power up the vehicles and do all the testing we need to do. Obviously, software is a key part of that development activity, once we power on the vehicles, making sure we can do all the control work that we need to do for those vehicles.

[There are] a couple of key pieces I will mention along those lines. On the PPE side, we have the electrical propulsion system. The thrusters associated with that system are being delivered. Those will go through acceptance testing at the Glenn Research Center [in Ohio] and then be integrated on the spacecraft out at Maxar; so that work is ongoing as we speak. Out at ESA, ESA is providing the HALO lunar communication system. That’ll be delivered later this year. That’ll be installed on HALO as part of its integrated test and checkout and then launch on HALO. That provides the full communication capability down to the lunar surface for us, where PPE provides the communication capability back to Earth. So, those are key components that we’re looking to get delivered later this year.

Jon Olansen, manager of NASA’s Gateway program at Johnson Space Center in Houston. Credit: NASA/Andrew Carlsen

Ars: What’s the status of the electric propulsion thrusters for the PPE?

Olansen: The first one has actually been delivered already, so we’ll have the opportunity to go through, like I said, the acceptance testing for those. The other flight units are right on the heels of the first one that was delivered. They’ll make it through their acceptance testing, then get delivered to Maxar, like I said, for integration into PPE. So, that work is already in progress. [The Power and Propulsion Element will have three xenon-fueled 12-kilowatt Hall thrusters produced by Aerojet Rocketdyne, and four smaller 6-kilowatt thrusters.]

Ars: The Government Accountability Office (GAO) outlined concerns last year about keeping the mass of Gateway within the capability of its rocket. Has there been any progress on that issue? Will you need to remove components from the HALO module and launch them on a future mission? Will you narrow your launch windows to only launch on the most fuel-efficient trajectories?

Olansen: We’re working the plan. Now that we’re launching the two vehicles together, we’re working mass management. Mass management is always an issue with spacecraft development, so it’s no different for us. All of the things you described are all knobs that are in the trade space as we proceed, but fundamentally, we’re working to design the optimal spacecraft that we can, first. So, that’s the key part. As we get all the components delivered, we can measure mass across all of those components, understand what our integrated mass looks like, and we have several different options to make sure that we’re able to execute the mission we need to execute. All of those will be balanced over time based on the impacts that are there. There’s not a need for a lot of those decisions to happen today. Those that are needed from a design perspective, we’ve already made. Those that are needed from enabling future decisions, we’ve already made all of those. So, really, what we’re working through is being able to, at the appropriate time, make decisions necessary to fly the vehicle the way we need to, to get out to NRHO [Near Rectilinear Halo Orbit, an elliptical orbit around the Moon], and then be able to execute the Artemis missions in the future.

Ars: The GAO also discussed a problem with Gateway’s controllability with something as massive as Starship docked to it. What’s the latest status of that problem?

Olansen: There are a number of different risks that we work through as a program, as you’d expect. We continue to look at all possibilities and work through them with due diligence. That’s our job, to be able to do that on a daily basis. With the stack controllability [issue], where that came from for GAO, we were early in the assessments of what the potential impacts could be from visiting vehicles, not just any one [vehicle] but any visiting vehicle. We’re a smaller space station than ISS, so making sure we understand the implications of thruster firings as vehicles approach the station, and the implications associated with those, is where that stack controllability conversation came from.

The bus that Maxar typically designs doesn’t have to generally deal with docking. Part of what we’ve been doing is working through ways that we can use the capabilities that are already built into that spacecraft differently to provide us the control authority we need when we have visiting vehicles, as well as working with the visiting vehicles and their design to make sure that they’re minimizing the impact on the station. So, the combination of those two has largely, over the past year since that report came out, improved where we are from a stack controllability perspective. We still have forward work to close out all of the different potential cases that are there. We’ll continue to work through those. That’s standard forward work, but we’ve been able to make some updates, some software updates, some management updates, and logic updates that really allow us to control the stack effectively and have the right amount of control authority for the dockings and undockings that we will need to execute for the missions.

Scientists made a stretchable lithium battery you can bend, cut, or stab

The Li-ion batteries that power everything from smartphones to electric cars are usually packed in rigid, sealed enclosures that prevent stresses from damaging their components and keep air from coming into contact with their flammable and toxic electrolytes. It’s hard to use batteries like this in soft robots or wearables, so a team of scientists at the University of California, Berkeley, built a flexible, non-toxic, jelly-like battery that could survive bending, twisting, and even cutting with a razor.

While flexible batteries using hydrogel electrolytes have been achieved before, they came with significant drawbacks. “All such batteries could [only] operate [for] a short time, sometimes a few hours, sometimes a few days,” says Liwei Lin, a mechanical engineering professor at UC Berkeley and senior author of the study. The battery built by his team endured 500 complete charge cycles—about as many as the batteries in most smartphones are designed for.

Power in water

“Current-day batteries require a rigid package because the electrolyte they use is explosive, and one of the things we wanted to make was a battery that would be safe to operate without this rigid package,” Lin told Ars. Unfortunately, flexible packaging made of polymers or other stretchable materials can be easily penetrated by air or water, which will react with standard electrolytes, generating lots of heat, potentially resulting in fires and explosions. This is why, in 2017, scientists started to experiment with quasi-solid-state hydrogel electrolytes.

These hydrogels were made of a polymer net that gave them their shape, crosslinkers like borax or hydrogen bonds that held this net together, a liquid phase made of water, and salt or other electrolyte additives providing ions that moved through the watery gel as the battery charged or discharged.

But hydrogels like that had their own fair share of issues. The first was a fairly narrow electrochemical stability window—a safe zone of voltage the battery can be exposed to. “This really limits how much voltage your battery can output,” says Peisheng He, a researcher at UC Berkeley Sensor and Actuator Center and lead author of the study. “Nowadays, batteries usually operate at 3.3 volts, so their stability window must be higher than that, probably four volts, something like that.” Water, which was the basis of these hydrogel electrolytes, typically broke down into hydrogen and oxygen when exposed to around 1.2 volts. That problem was solved by using highly concentrated salt water loaded with highly fluorinated lithium salts, which made it less likely to break down. But this led the researchers straight into safety issues, as fluorinated lithium salts are highly toxic to humans.

Trump White House budget proposal eviscerates science funding at NASA

This week, as part of the process to develop a budget for fiscal-year 2026, the Trump White House shared the draft version of its budget request for NASA with the space agency.

This initial version of the administration’s budget request calls for an approximately 20 percent overall cut to the agency’s budget across the board, effectively $5 billion from an overall topline of about $25 billion. However, the majority of the cuts are concentrated within the agency’s Science Mission Directorate, which oversees all planetary science, Earth science, astrophysics research, and more.

According to the “passback” documents given to NASA officials on Thursday, the space agency’s science programs would receive nearly a 50 percent cut in funding. After the agency received $7.5 billion for science in fiscal-year 2025, the Trump administration has proposed a science topline budget of just $3.9 billion for the coming fiscal year.

Detailing the cuts

Among the proposals were: a two-thirds cut to astrophysics, down to $487 million; a greater than two-thirds cut to heliophysics, down to $455 million; a greater than 50 percent cut to Earth science, down to $1.033 billion; and a 30 percent cut to planetary science, down to $1.929 billion.

Although the budget would continue support for ongoing missions such as the Hubble Space Telescope and the James Webb Space Telescope, it would kill the much-anticipated Nancy Grace Roman Space Telescope, an observatory seen as on par with those two world-class instruments that is already fully assembled and on budget for a launch in two years.

“Passback supports continued operation of the Hubble and James Webb Space Telescopes and assumes no funding is provided for other telescopes,” the document states.

Rocket Report: “No man’s land” in rocket wars; Isaacman lukewarm on SLS


China’s approach to space junk is worrisome as it begins launching its own megaconstellations.

A United Launch Alliance Atlas V rocket rolls to its launch pad in Florida in preparation for liftoff with 27 satellites for Amazon’s Kuiper broadband network. Credit: United Launch Alliance

Welcome to Edition 7.39 of the Rocket Report! Not getting your launch fix? Buckle up. We’re on the cusp of a boom in rocket launches as three new megaconstellations have either just begun or will soon begin deploying thousands of satellites to enable broadband connectivity from space. If the megaconstellations come to fruition, this will require more than a thousand launches in the next few years, on top of SpaceX’s blistering Starlink launch cadence. We discuss the topic of megaconstellations in this week’s Rocket Report.

As always, we welcome reader submissions. If you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets as well as a quick look ahead at the next three launches on the calendar.

So, what is SpinLaunch doing now? Ars Technica has mentioned SpinLaunch, the company that literally wants to yeet satellites into space, in previous Rocket Report newsletters. This company enjoyed some success in raising money for its so-crazy-it-just-might-work idea of catapulting rockets and satellites into the sky, a concept SpinLaunch calls “kinetic launch.” But SpinLaunch is now making a hard pivot to small satellites, a move that, on its face, seems puzzling after going all-in on kinetic launch and even performing several impressive hardware tests, throwing a projectile to altitudes of up to 30,000 feet. Ars got the scoop, with the company’s CEO detailing why and how it plans to build a low-Earth orbit telecommunications constellation with 280 satellites.

Traditional versus kinetic … The planned constellation, named Meridian, is an opportunity for SpinLaunch to diversify away from being solely a launch company, according to David Wrenn, the company’s CEO. We’ve observed this in a number of companies that started out as rocket developers before branching out to satellite manufacturing or space services. Wrenn said SpinLaunch could loft all of the Meridian satellites on a single large conventional rocket, or perhaps two medium-lift rockets, and then maintain the constellation with its own kinetic launch system. A satellite communications network presents a better opportunity for profit, Wrenn said. “The launch market is relatively small compared to the economic potential of satellite communication,” he said. “Launch has generally been more of a cost center than a profit center. Satcom will be a much larger piece of the overall industry.”

Peter Beck suggests Electron is here to stay. The conventional wisdom is that the small launch vehicle business isn’t a big moneymaker. There is really only one company, Rocket Lab, that has gained traction in selling dedicated rides to orbit for small satellites. Rocket Lab’s launcher, Electron, can place payloads of up to a few hundred pounds into orbit. As soon as Rocket Lab had some success, SpaceX began launching rideshare missions on its much larger Falcon 9 rocket, cobbling together dozens of satellites on a single vehicle to spread the cost of the mission among many customers. This offers customers a lower price point than buying a dedicated launch on Electron. But Peter Beck, Rocket Lab’s founder and CEO, says his company has found a successful market providing dedicated launches for small satellites, despite price pressure from SpaceX, Space News reports. “Dedicated small launch is a real market, and it should not be confused with rideshare,” he argued. “It’s totally different.”

No man’s land … Some small satellite companies that can afford the extra cost of a dedicated launch realize the value of controlling their schedule and orbit, traits that a dedicated launch offers over a rideshare, Beck said. It’s easy to blame SpaceX for undercutting the prices of Rocket Lab and other players in this segment of the launch business, but Beck said companies that have failed or withdrawn from the small launch market didn’t have a good business plan, a good product, or good engineering. He added that the capacity of the Electron vehicle is well-suited for dedicated launch, whereas slightly larger rockets in the one-ton-to-orbit class—a category that includes Firefly Aerospace’s Alpha and Isar Aerospace’s Spectrum rockets—are an ill fit. The one-ton performance range is “no man’s land” in the market, Beck said. “It’s too small to be a useful rideshare mission, and it’s too big to be a useful dedicated rocket” for smallsats. (submitted by EllPeaTea)

ULA scrubs first full-on Kuiper launch. A band of offshore thunderstorms near Florida’s Space Coast on Wednesday night forced United Launch Alliance to scrub a launch attempt of the first of dozens of missions on behalf of its largest commercial customer, Amazon, Spaceflight Now reports. The mission will use an Atlas V rocket to deploy 27 satellites for Amazon’s Project Kuiper network. It’s the first launch of what will eventually be more than 3,200 operational Kuiper satellites beaming broadband connectivity from space, a market currently dominated by SpaceX’s Starlink. As of Thursday, ULA hadn’t confirmed a new launch date, but airspace warning notices released by the FAA suggest the next attempt might occur Monday, April 14.

What’s a few more days? … This mission has been a long time coming. Amazon announced the Kuiper megaconstellation in 2019, and the company says it’s investing at least $10 billion in the project (the real number may be double that). Problems in manufacturing the Kuiper satellites, which Amazon is building in-house, delayed the program’s first full-on launch by a couple of years. Amazon launched a pair of prototype satellites in 2023, but the operational versions are different, and this mission fills the capacity of ULA’s Atlas V rocket. Amazon has booked more than 80 launches with ULA, Arianespace, Blue Origin, and SpaceX to populate the Kuiper network. (submitted by EllPeaTea)

Space Force swaps ULA for SpaceX. For the second time in six months, SpaceX will deploy a US military satellite that was sitting in storage, waiting for a slot on United Launch Alliance’s launch schedule, Ars reports. Space Systems Command, which oversees the military’s launch program, announced Monday that it is reassigning the launch of a Global Positioning System satellite from ULA’s Vulcan rocket to SpaceX’s Falcon 9. This satellite, designated GPS III SV-08 (Space Vehicle-08), will join the Space Force’s fleet of navigation satellites beaming positioning and timing signals for military and civilian users around the world. The move allows the GPS satellite to launch as soon as the end of May, the Space Force said. The military executed a similar rocket swap for a GPS mission that launched on a Falcon 9 in December.

Making ULA whole … The Space Force formally certified ULA’s Vulcan rocket for national security missions last month, so Vulcan may finally be on the cusp of delivering for the military. But there are several military payloads in the queue to launch on Vulcan before GPS III SV-08, which was already completed and in storage at its Lockheed Martin factory in Colorado. Meanwhile, SpaceX is regularly launching Falcon 9 rockets with ample capacity to add the GPS mission to the manifest. In exchange for losing the contract to launch this particular GPS satellite, the Space Force swapped a future GPS mission that was assigned to SpaceX to fly on ULA’s Vulcan instead.

Russia launches a former Navy SEAL to space. Jonny Kim, a former Navy SEAL, Harvard Medical School graduate, and now a NASA astronaut, blasted off with two cosmonaut crewmates aboard a Russian Soyuz rocket early Tuesday, CBS News reports. Three hours later, Kim and his Russian crewmates—Sergey Ryzhikov and Alexey Zubritsky—chased down the International Space Station and moved in for a picture-perfect docking aboard their Soyuz MS-27 spacecraft. “It was the trip of a lifetime and an honor to be here,” Kim told flight controllers during a traditional post-docking video conference.

Rotating back to Earth … Ryzhikov, Zubritsky, and Kim joined a crew of seven living aboard the International Space Station, temporarily raising the lab’s crew complement to 10 people. The new station residents are replacing an outgoing Soyuz crew—Alexey Ovchinin, Ivan Wagner, and Don Pettit—who launched to the ISS last September and who plan to return to Earth aboard their own spacecraft April 19 to wrap up a 219-day stay in space. This flight continues the practice of launching US astronauts on Russian Soyuz missions, part of a barter agreement between NASA and the Russian space agency that also reserves a seat on SpaceX Dragon missions for Russian cosmonauts.

China is littering in LEO. China’s construction of a pair of communications megaconstellations could cloud low Earth orbit with large spent rocket stages for decades or beyond, Space News reports. Launches for the government’s Guowang and Shanghai-backed but more commercially oriented Qianfan (Thousand Sails) constellation began in the second half of 2024, with each planned to consist of over 10,000 satellites, demanding more than a thousand launches in the coming years. Placing this number of satellites is enough to cause concern about space debris because China hasn’t disclosed its plans for removing the spacecraft from orbit at the end of their missions. It turns out there’s another big worry: upper stages.

An orbital time bomb … While Western launch providers typically deorbit their upper stages after dropping off megaconstellation satellites in space, China does not. This means China is leaving rockets in orbits high enough to persist in space for more than a century, according to Jim Shell, a space domain awareness and orbital debris expert at Novarum Tech. Space News reported on Shell’s commentary in a social media post, where he wrote that orbital debris mass in low-Earth orbit “will be dominated by PRC [People’s Republic of China] upper stages in short order unless something changes (sigh).” So far, China has launched five dedicated missions to deliver 90 Qianfan satellites into orbit. Four of these missions used China’s Long March 6A rocket, with an upper stage that has a history of breaking up in orbit, exacerbating the space debris problem. (submitted by EllPeaTea)

SpaceX wins another lunar lander launch deal. Intuitive Machines has selected a SpaceX Falcon 9 rocket to launch a lunar delivery mission scheduled for 2027, the Houston Chronicle reports. The upcoming IM-4 mission will carry six NASA payloads, including a European Space Agency-led drill suite designed to search for water at the lunar south pole. It will also include the launch of two lunar data relay satellites that support NASA’s so-called Near Space Network Services program. This will be the fourth lunar lander mission for Houston-based Intuitive Machines under the auspices of NASA’s Commercial Lunar Payload Services program.

Falcon 9 has the inside track … SpaceX almost certainly offered Intuitive Machines the best deal for this launch. The flight-proven Falcon 9 rocket is reliable and inexpensive compared to competitors and has already launched two Intuitive Machines missions, with a third one set to fly late this year. However, there’s another factor that made SpaceX a shoo-in for this contract. SpaceX has outfitted one of its launch pads in Florida with a unique cryogenic loading system to pump liquid methane and liquid oxygen propellants into the Intuitive Machines lunar lander as it sits on top of its rocket just before liftoff. The lander from Intuitive Machines uses these super-cold propellants to feed its main engine, and SpaceX’s infrastructure for loading it makes the Falcon 9 rocket the clear choice for launching it.

Time may finally be running out for SLS. Jared Isaacman, President Trump’s nominee for NASA administrator, said Wednesday in a Senate confirmation hearing that he wants the space agency to pursue human missions to the Moon and Mars at the same time, an effort that will undoubtedly require major changes to how NASA spends its money. My colleague Eric Berger was in Washington for the hearing and reported on it for Ars. Senators repeatedly sought Isaacman’s opinion on the Space Launch System, the NASA heavy-lifter designed to send astronauts to the Moon. The next SLS mission, Artemis II, is slated to launch a crew of four astronauts around the far side of the Moon next year. NASA’s official plans call for the Artemis III mission to launch on an SLS rocket later this decade and attempt a landing at the Moon’s south pole.

Limited runway … Isaacman sounded as if he were on board with flying the Artemis II mission as envisioned—no surprise, then, that the four Artemis II astronauts were in the audience—and said he wanted to get a crew of Artemis III to the lunar surface as quickly as possible. But he questioned why it has taken NASA so long, and at such great expense, to get its deep space human exploration plans moving. In one notable exchange, Isaacman said NASA’s current architecture for the Artemis lunar plans, based on the SLS rocket and Orion spacecraft, is probably not the ideal “long-term” solution to NASA’s deep space transportation plans. The smart reading of this is that Isaacman may be willing to fly the Artemis II and Artemis III missions as conceived, given that much of the hardware is already built. But everything that comes after this, including SLS rocket upgrades and the Lunar Gateway, could be on the chopping block.

Welcome to the club, Blue Origin. Finally, the Space Force has signaled it’s ready to trust Jeff Bezos’ space company, Blue Origin, for launching the military’s most precious satellites, Ars reports. Blue Origin received a contract on April 4 to launch seven national security missions for the Space Force between 2027 and 2032, an opening that could pave the way for more launch deals in the future. These missions will launch on Blue Origin’s heavy-lift New Glenn rocket, which had a successful debut test flight in January. The Space Force hasn’t certified New Glenn for national security launches, but military officials expect to do so sometime next year. Blue Origin joins SpaceX and United Launch Alliance in the Space Force’s mix of most-trusted launch providers.

A different class … The contract Blue Origin received last week covers launch services for the Space Force’s most critical space missions, requiring rocket certification and a heavy dose of military oversight to ensure reliability. Blue Origin was already eligible to launch a separate batch of missions the Space Force set aside to fly on newer rockets. The military is more tolerant of risk on these lower-priority missions, which include launches of “cookie-cutter” satellites for the Pentagon’s large fleet of missile-tracking satellites and a range of experimental payloads.

Why is SpaceX winning so many Space Force contracts? In less than a week, the US Space Force awarded SpaceX a $5.9 billion deal to make Elon Musk’s space company the Pentagon’s leading launch provider, replacing United Launch Alliance in the top position. Then, the Space Force assigned most of this year’s most lucrative launch contracts to SpaceX. As we mentioned earlier in the Rocket Report, the military also swapped a ULA rocket for a SpaceX launch vehicle for an upcoming GPS mission. So, is SpaceX’s main competitor worried Elon Musk is tipping the playing field for lucrative government contracts by cozying up to President Trump?

It’s all good, man … Tory Bruno, ULA’s chief executive, doesn’t seem too worried in his public statements, Ars reports. In a roundtable with reporters this week at the annual Space Symposium conference in Colorado, Bruno was asked about Musk’s ties with Trump. “We have not been impacted by our competitor’s position advising the president, certainly not yet,” Bruno said. “I expect that the government will follow all the rules and be fair and follow all the laws, and so we’re behaving that way.” The reason Bruno can say Musk’s involvement in the Trump administration so far hasn’t affected ULA is simple. SpaceX is cheaper and has a ready-made line of Falcon 9 and Falcon Heavy rockets available to launch the Pentagon’s satellites. ULA’s Vulcan rocket is now certified to launch military payloads, but it reached this important milestone years behind schedule.

Two Texas lawmakers are still fighting the last war. NASA has a lot to figure out in the next couple of years. Moon or Mars? Should, or when should, the Space Launch System be canceled? Can the agency absorb a potential 50 percent cut to its science budget? If Senators John Cornyn and Ted Cruz get their way, NASA can add moving a space shuttle to its list. The Lone Star State’s two Republican senators introduced the “Bring the Space Shuttle Home Act” on Thursday, CollectSpace reports. If passed by Congress and signed into law, the bill would direct NASA to take the space shuttle Discovery from the national collection at the Smithsonian National Air and Space Museum and transport it to Space Center Houston, a museum and visitor attraction next to Johnson Space Center, home to mission control and NASA’s astronaut training base. Discovery has been on display at the Smithsonian since 2012. NASA awarded museums in California, Florida, and New York the other three surviving shuttle orbiters.

Dollars and nonsense … Moving a space shuttle from Virginia to Texas would be a logistical nightmare, cost an untold amount of money, and would create a distraction for NASA when its focus should be on future space exploration. In a statement, Cruz said Houston deserves one of NASA’s space shuttles because of the city’s “unique relationship” with the program. Cornyn alleged in a statement that the Obama administration blocked Houston from receiving a space shuttle for political reasons. NASA’s inspector general found no evidence of this. On the contrary, transferring a space shuttle to Texas now would be an unequivocal example of political influence. The Boeing 747s that NASA used to move space shuttles across the country are no longer flightworthy, and NASA scrapped the handling equipment needed to prepare a shuttle for transport. Moving the shuttle by land or sea would come with its own challenges. “I can easily see this costing a billion dollars,” Dennis Jenkins, a former shuttle engineer who directed NASA’s shuttle transition and retirement program more than a decade ago, told CollectSpace in an interview. On a personal note, the presentation of Discovery at the Smithsonian is remarkable to see in person, with aerospace icons like the Concorde and the SR-71 spy plane under the same roof. Space Center Houston can’t match that.

Next three launches

April 12: Falcon 9 | Starlink 12-17 | Kennedy Space Center, Florida | 01:15 UTC

April 12: Falcon 9 | NROL-192 | Vandenberg Space Force Base, California | 12:17 UTC

April 14: Falcon 9 | Starlink 6-73 | Cape Canaveral Space Force Station, Florida | 01:59 UTC

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

A guide to the “platonic ideal” of a Negroni and other handy tips


Perfumer by day, mixologist by night, Kevin Peterson specializes in crafting scent-paired cocktails.

Kevin Peterson is a “nose” for his own perfume company, Sfumato Fragrances, by day. By night, Sfumato’s retail store in Detroit transforms into Peterson’s craft cocktail bar, Castalia, where he is chief mixologist and designs drinks that pair with carefully selected aromas. He’s also the author of Cocktail Theory: A Sensory Approach to Transcendent Drinks, which grew out of his many (many!) mixology experiments and popular YouTube series, Objective Proof: The Science of Cocktails.

It’s fair to say that Peterson has had an unusual career trajectory. He worked as a line cook and an auto mechanic, and he worked on the production line of a butter factory, among other gigs, before attending culinary school in hopes of becoming a chef. However, he soon realized it wasn’t really what he wanted out of life and went to college, earning an undergraduate degree in physics from Carleton College and a PhD in mechanical engineering from the University of Michigan.

After 10 years as an engineer, he switched focus again and became more serious about his side hobby, perfumery. “Not being in kitchens anymore, I thought—this is a way to keep that little flavor part of my brain engaged,” Peterson told Ars. “I was doing problem sets all day. It was my escape to the sensory realm. ‘OK, my brain is melting—I need a completely different thing to do. Let me go smell smells, escape to my little scent desk.'” He and his wife, Jane Larson, founded Sfumato, which led to opening Castalia, and Peterson finally found his true calling.

Peterson spent years conducting mixology experiments to gather empirical data about the interplay between scent and flavor, correct ratios of ingredients, temperature, and dilution for all the classic cocktails—seeking a “Platonic ideal” for each, if you will. He supplemented this with customer feedback data from the drinks served at Castalia. All that culminated in Cocktail Theory, which delves into the chemistry of scent and taste, introducing readers to flavor profiles, textures, visual presentation, and other factors that contribute to one’s enjoyment (or lack thereof) of a cocktail. And yes, there are practical tips for building your own home bar, as well as recipes for many of Castalia’s signature drinks.

In essence, Peterson’s work adds scientific rigor to what is frequently called the “Mr. Potato Head” theory of cocktails, a phrase coined by the folks at Death & Company, who operate several craft cocktail bars in key cities. “Let’s say you’ve got some classic cocktail, a daiquiri, that has this many parts of rum, this many parts of lime, this many parts of sugar,” said Peterson, who admits to having a Mr. Potato Head doll sitting on Castalia’s back bar in honor of the sobriquet. “You can think about each ingredient in a more general way: instead of rum, this is the spirit; instead of lime, this is the citrus; sugars are sweetener. Now you can start to replace those things with other things in the same categories.”
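Peterson’s “Mr. Potato Head” framing maps naturally onto a simple role-and-ratio structure. The short Python sketch below is purely illustrative—the template ratios and ingredient lists are assumptions for demonstration, not recipes from Cocktail Theory or Castalia’s menu.

```python
# Illustrative sketch of the "Mr. Potato Head" idea: a classic cocktail as a
# set of ratios over roles, with interchangeable members of each role.
# Ratios and ingredient lists here are hypothetical examples.
DAIQUIRI_TEMPLATE = {   # parts by volume for each role
    "spirit": 2.0,
    "citrus": 0.75,
    "sweetener": 0.5,
}

SUBSTITUTIONS = {
    "spirit": ["white rum", "mezcal", "gin"],
    "citrus": ["lime juice", "lemon juice", "grapefruit juice"],
    "sweetener": ["simple syrup", "honey syrup", "demerara syrup"],
}

def build_variant(template, choices, allowed=SUBSTITUTIONS):
    """Swap a concrete ingredient into each role while keeping the ratios fixed."""
    assert all(choices[role] in allowed[role] for role in template), "unknown swap"
    return {choices[role]: parts for role, parts in template.items()}

# Same structure, different parts: a mezcal/grapefruit/honey riff on the daiquiri.
print(build_variant(DAIQUIRI_TEMPLATE, {
    "spirit": "mezcal",
    "citrus": "grapefruit juice",
    "sweetener": "honey syrup",
}))
```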

We caught up with Peterson to learn more.

Ars Technica: How did you start thinking about the interplay between perfumery and cocktail design and the role that aroma plays in each?

Kevin Peterson: The first step was from food over to perfumery, where I think about building a flavor for a soup, for a sauce, for a curry, in a certain way. “Oh, there’s a gap here that needs to be filled in by some herbs, some spice.” It’s almost an intuitive kind of thing. When I was making scents, I had those same ideas: “OK, the shape of this isn’t quite right. I need this to roughen it up or to smooth out this edge.”

Then I did the same thing for cocktails and realized that those two worlds didn’t really talk to each other. You’ve got two groups of people that study all the sensory elements and how to create the most intriguing sensory impression, but they use different language; they use different toolkits. They’re going for almost the same thing, but there was very little overlap between the two. So I made that my niche: What can perfumery teach bartenders? What can the cocktail world teach perfumery?

Ars Technica: In perfumery you talk about a top, a middle, and a base note. There must be an equivalent in cocktail theory?

Kevin Peterson: In perfumery, that is mostly talking about the time element: top notes perceived first, then middle notes, then base notes as you wear it over the course of a few hours. In the cocktail realm, there is that time element as well. You get some impression when you bring the glass to your nose, something when you sip, something in the aftertaste. But there can also be a spatial element. Some things you feel right at the tip of your tongue, some things you feel in different parts of your face and head, whether that’s a literal impression or you just kind of feel it somewhere where there’s not a literal nerve ending. It’s about filling up that space, or not filling it up, depending on what impression you’re going for—building out the full sensory space.

Ars Technica: You also talk about motifs and supportive effects or ornamental flourishes: themes that you can build on in cocktails.

Kevin Peterson: Something I see in the cocktail world occasionally is that people just put a bunch of ingredients together and figure, “This tastes fine.” But what were you going for here? There are 17 things in here, and it just kind of tastes like you were finger painting: “Hey, I made brown.” Brown is nice. But the motifs that I think about—maybe there’s just one particular element that I want to highlight. Say I’ve got this really great jasmine essence. Everything else in the blend is just there to highlight the jasmine.

If you’re dealing with a really nice mezcal or bourbon or some unique herb or spice, that’s going to be the centerpiece. You’re not trying to get overpowered by some smoky scotch, by some other more intense ingredient. The motif could just be a harmonious combination of elements. I think the perfect old-fashioned is where everything is present and nothing’s dominating. It’s not like the bitters or the whiskey totally took over. There’s the bitters, there’s a little bit of sugar, there’s the spirit. Everything’s playing nicely.

Another motif, I call it a jazz note. A Sazerac is almost the same as an old-fashioned, but it’s got a little bit of absinthe in it. You get all the harmony of the old-fashioned, but then you’re like, “Wait, what’s this weird thing pulling me off to the side? Oh, this absinthe note is kind of separate from everything else that’s going on in the drink.” It’s almost like that tension in a musical composition: “Well, these notes sound nice, but then there’s one that’s just weird.” But that’s what makes it interesting, that weird note. For me, formalizing some of those motifs helps me make it clearer. Even if I don’t tell that to the guest during the composition stage, I know this is the effect I’m going for. It helps me build more intentionally when I’ve got a motif in mind.

Ars Technica: I tend to think about cocktails more in terms of chemistry, but there are many elements to taste and perception and flavor. You talk about ingredient matching, molecular matching, and impression matching, i.e., how certain elements will overlap in the brain. What role do each of those play?

Kevin Peterson: A lot of those ideas relate to how we pair scents with cocktails. At my perfume company, we make eight fragrances as our main line. Each scent then gets a paired drink on the cocktail menu. For example, this scent has coriander, cardamom, and nutmeg. What does it mean that the drink is paired with that? Does it need to literally have coriander, cardamom, and nutmeg in it? Does it need to have every ingredient? If the scent has 15 things, do I need to hit every note?

Peterson made over 100 daiquiris to find the “Platonic ideal” of the classic cocktail. Credit: Kevin Peterson

The literal matching is the most obvious. “This has cardamom, that has cardamom.” I can see how that pairs. The molecular matching is essentially just one more step removed: Rosemary has alpha-pinene in it, and juniper berries have alpha-pinene in them. So if the scent has rosemary and the cocktail has gin, they’re both sharing that same molecule, so it’s still exciting that same scent receptor. What I’m thinking about is kind of resonant effects. You’re approaching the same receptor or the same neural structure in two different ways, and you’re creating a bigger peak with that.
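As a rough illustration of that molecular-matching step, you can picture it as a set intersection over each ingredient’s aroma compounds. The Python sketch below uses a tiny, heavily abridged compound table assumed for demonstration; it is not Peterson’s data or method, just the shared-molecule idea in code.

```python
# Illustrative sketch of molecular matching: pair a scent ingredient with a
# cocktail ingredient when they share aroma molecules. The compound table is a
# tiny hypothetical subset, for illustration only.
AROMA_COMPOUNDS = {
    "rosemary":      {"alpha-pinene", "camphor", "1,8-cineole"},
    "juniper berry": {"alpha-pinene", "myrcene", "limonene"},
    "cardamom":      {"1,8-cineole", "alpha-terpinyl acetate"},
    "lime peel":     {"limonene", "citral"},
}

def shared_molecules(a, b, table=AROMA_COMPOUNDS):
    """Return the aroma compounds two ingredients have in common."""
    return table[a] & table[b]

# Rosemary (in the scent) and juniper (in the gin) overlap on alpha-pinene,
# so both hit the same scent receptor from two directions.
print(shared_molecules("rosemary", "juniper berry"))  # {'alpha-pinene'}
```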

The most hand-wavy one to me is the impression matching. Rosemary smells cold, and Fernet-Branca tastes cold even when it’s room temperature. If the scent has rosemary, is Fernet now a good match for that? Some of the neuroscience stuff that I’ve read has indicated that these more abstract ideas are represented by the same sort of neural-firing patterns. Initially, I was hesitant; cold and cold, it doesn’t feel as fulfilling to me. But then I did some more reading and realized there’s some science behind it and have been more intrigued by that lately.

Ars Technica: You do come up with some surprising flavor combinations, like a drink that combined blueberry and horseradish, which frankly sounds horrifying. 

Kevin Peterson: It was a hit on the menu. I would often give people a little taste of the blueberry and then a little taste of the horseradish tincture, and they’d say, “Yeah, I don’t like this.” And then I’d serve them the cocktail, and they’d be like, “Oh my gosh, it actually worked. I can’t believe it.”  Part of the beauty is you take a bunch of things that are at least not good and maybe downright terrible on their own, and then you stir them all together and somehow it’s lovely. That’s basically alchemy right there.

Ars Technica: Harmony between scent and the cocktail is one thing, but you also talk about constructive interference to get a surprising, unexpected, and yet still pleasurable result.

Kevin Peterson: The opposite is destructive interference, where there’s just too much going on. When I’m coming up with a drink, sometimes that’ll happen, where I’m adding more, but the flavor impression is going down. It’s sort of a weird non-linearity of flavor, where sometimes two plus two equals four, sometimes it equals three, sometimes it equals 17. I now have intuition about that, having been in this world for a lot of years, but I still get surprised sometimes when I put a couple things together.

Often with my end-of-the-shift drink, I’ll think, “Oh, we got this new bottle in. I’m going to try that in a Negroni variation.” Then I lose track and finish mopping, and then I sip, and I’m like, “What? Oh my gosh, I did not see this coming at all.” That little spark, or whatever combo creates that, will then often be the first step on some new cocktail development journey.

Pairing scents with cocktails involves experimenting with many different ingredients. Credit: EE Berger

Ars Technica: Smoked cocktails are a huge trend right now. What’s the best way to get a consistently good smoky element?

Kevin Peterson: Smoke is tricky to make repeatable. How many parts per million of smoke are you getting in the cocktail? You could standardize the amount of time that it’s in the box [filled with smoke]. Or you could always burn, say, exactly three grams of hickory or whatever. One thing that I found, because I was writing the book while still running the bar: People have a lot of expectations around how the drink is going to be served. Big ice cubes are not ideal for serving drinks, but people want a big ice cube in their old-fashioned. So we’re still using big ice cubes. There might be a Platonic ideal in terms of temperature, dilution, etc., but maybe it’s not the ideal in terms of visuals or tactile feel, and that is a part of the experience.

With the smoker, you open the doors, smoke billows out, your drink emerges from the smoke, and people say, “Wow, this is great.” So whether you get 100 PPM one time and 220 PPM the next, maybe that gets outweighed by the awesomeness of the presentation. If I’m trying to be very dialed in about it, I’ll either use a commercial smoky spirit—Laphroaig scotch, a smoky mezcal—where I decide that a quarter ounce is the amount of smokiness that I want in the drink. I can just pour the smoke instead of having to burn and time it.

Or I might even make my own smoke: light something on fire and then hold it under a bottle, tip it back up, put some vodka or something in there, shake it up. Now I’ve got smoke particles in my vodka. Maybe I can say, “OK, it’s always going to be one milliliter,” but then you miss out on the presentation—the showmanship, the human interaction, the garnish. I rarely garnish my own drinks, but I rarely send a drink out to a guest ungarnished, even if it’s just a simple orange peel.

Ars Technica: There’s always going to be an element of subjectivity, particularly when it comes to our sensory perceptions. Sometimes you run into a person who just can’t appreciate a certain note.

Kevin Peterson: That was something I grappled with. On the one hand, we’re all kind of living in our own flavor world. Some people are more sensitive to bitter. Different scent receptors are present in different people. It’s tempting to just say, “Well, everything’s so unique. Maybe we just can’t say anything about it at all.” But that’s not helpful either. Somehow, we keep having delicious food and drink and scents that come our way.

A sample page from Cocktail Theory discussing temperature and dilution. Credit: EE Berger

I’ve been taking a lot of survey data in my bar more recently, and definitely the individuality of preference has shown through in the surveys. But another thing that has shown through is that there are some universal trends. There are certain categories. There’s the spirit-forward, bittersweet drinkers, there’s the bubbly citrus folks, there’s the texture folks who like vodka soda. What is the taste? What is the aroma? It’s very minimal, but it’s a very intense texture. Having some awareness of that is critical when you’re making drinks.

One of the things I was going for in my book was to find, for example, the platonically ideal gin and tonic. What are the ratios? What is the temperature? How much dilution to how much spirit is the perfect amount? But if you don’t like gin and tonics, it doesn’t matter if it’s a platonically ideal gin and tonic. So that’s my next project. It’s not just getting the drink right. How do you match that to the right person? What questions do I have to ask you, or do I have to give you taste tests? How do I draw that information out of the customer to determine the perfect drink for them?

We offer a tasting menu, so our full menu is eight drinks, and you get a mini version of each drink. I started giving people surveys when they would do the tasting menu, asking, “Which drink do you think you like the most? Which drink do you think you like the least?” I would have them rate it. Less than half of people predicted their most liked and least liked, meaning if you were just going to order one drink off the menu, your odds are less than a coin flip that you would get the right drink.

Ars Technica: How does all this tie into your “cocktails as storytelling” philosophy? 

Kevin Peterson: So much of flavor impression is non-verbal. Scent is very hard to describe. You can maybe describe taste, but we only have five-ish words, things like bitter, sour, salty, sweet. There’s not a whole lot to say about that: “Oh, it was perfectly balanced.” So at my bar, when we design menus, we’ll put the drinks together, but then we’ll always give the menu a theme. The last menu that we did was the scientist menu, where every drink was made in honor of some scientist who didn’t get the credit they were due in the time they were alive.

Having that narrative element, I think, helps people remember the drink better. It helps them in the moment to latch onto something that they can more firmly think about. There’s a conceptual element. If I’m just doing chores around the house, I drink a beer, it doesn’t need to have a conceptual element. If I’m going out and spending money and it’s my night and I want this to be a more elevated experience, having that conceptual tie-in is an important part of that.

My personal favorite drink, Corpse Reviver No. 2, has just a hint of absinthe. Credit: Sean Carroll

Ars Technica: Do you have any simple tips for people who are interested in taking their cocktail game to the next level?

Kevin Peterson:  Old-fashioneds are the most fragile cocktail. You have to get all the ratios exactly right. Everything has to be perfect for an old-fashioned to work. Anecdotally, I’ve gotten a lot of old-fashioneds that were terrible out on the town. In contrast, the Negroni is the most robust drink. You can miss the ratios. It’s got a very wide temperature and dilution window where it’s still totally fine. I kind of thought of them in the same way prior to doing the test. Then I found that this band of acceptability is much bigger for the Negroni. So now I think of old-fashioneds as something that either I make myself or I order when I either trust the bartender or I’m testing someone who wants to come work for me.

My other general piece of advice: It can be a very daunting world to try to get into. You may say, “Oh, there’s all these classics that I’m going to have to memorize, and I’ve got to buy all these weird bottles.” My advice is to pick a drink you like and take baby steps away from that drink. Say you like Negronis. That’s three bottles: vermouth, Campari, and gin. Start with that. When you finish that bottle of gin, buy a different type of gin. When you finish the Campari, try a different bittersweet liqueur. See if that’s going to work. You don’t have to drop hundreds of dollars, thousands of dollars, to build out a back bar. You can do it with baby steps.

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

Quantum hardware may be a good match for AI

Quantum computers don’t have that sort of separation. While they could include some quantum memory, the data is generally housed directly in the qubits, while computation involves performing operations, called gates, directly on the qubits themselves. In fact, there has been a demonstration that, for supervised machine learning, where a system can learn to classify items after training on pre-classified data, a quantum system can outperform classical ones, even when the data being processed is housed on classical hardware.

This form of machine learning relies on what are called variational quantum circuits. These are built from two-qubit gate operations that take an additional factor, which can be held on the classical side of the hardware and imparted to the qubits via the control signals that trigger the gate operation. You can think of this as analogous to the communications involved in a neural network, with the two-qubit gate operation equivalent to the passing of information between two artificial neurons and the factor analogous to the weight given to the signal.
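As a rough, minimal illustration of that analogy—not the Honda/Blue Qubit implementation—the NumPy sketch below builds a single parameterized two-qubit gate whose angle lives on the classical side and gets tuned like a network weight. The circuit, the observable, and the brute-force tuning loop are all illustrative assumptions.

```python
# Minimal sketch of a variational two-qubit gate: the angle theta is a classical
# parameter fed into the gate, analogous to a neural-network weight.
import numpy as np

def rzz(theta):
    """Two-qubit ZZ rotation exp(-i * theta/2 * Z⊗Z), diagonal in the computational basis."""
    return np.diag(np.exp(-0.5j * theta * np.array([1, -1, -1, 1])))

def expectation(theta):
    """Prepare |++>, apply rzz(theta), and return <X⊗I> (analytically cos(theta))."""
    h = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    state = np.kron(h, h) @ np.array([1, 0, 0, 0], dtype=complex)  # |++>
    state = rzz(theta) @ state
    obs = np.kron(np.array([[0, 1], [1, 0]]), np.eye(2))           # X on the first qubit
    return float(np.real(state.conj() @ obs @ state))

# The "weight" is stored and updated classically; here a crude parameter sweep
# stands in for the optimizer a real variational training loop would use.
thetas = np.linspace(0, 2 * np.pi, 64)
best = max(thetas, key=expectation)
print(f"best theta = {best:.3f}, <X on first qubit> = {expectation(best):.3f}")
```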

That’s exactly the system that a team from the Honda Research Institute worked on in collaboration with a quantum software company called Blue Qubit.

Pixels to qubits

The focus of the new work was mostly on how to get data from the classical world into the quantum system for characterization. But the researchers ended up testing the results on two different quantum processors.

The problem they were testing is one of image classification. The raw material was from the Honda Scenes dataset, which has images taken from roughly 80 hours of driving in Northern California; the images are tagged with information about what’s in the scene. And the question the researchers wanted the machine learning to handle was a simple one: Is it snowing in the scene?

New simulation of Titanic’s sinking confirms historical testimony


NatGeo documentary follows a cutting-edge undersea scanning project to make a high-resolution 3D digital twin of the ship.

The bow of the Titanic Digital Twin, seen from above at forward starboard side. Credit: Magellan Limited/Atlantic Productions

In 2023, we reported on the unveiling of the first full-size 3D digital scan of the remains of the RMS Titanic—a “digital twin” that captured the wreckage in unprecedented detail. Magellan Ltd, a deep-sea mapping company, and Atlantic Productions conducted the scans over a six-week expedition. That project is the subject of the new National Geographic documentary Titanic: The Digital Resurrection, detailing several fascinating initial findings from experts’ ongoing analysis of that full-size scan.

Titanic met its doom just four days into the Atlantic crossing, roughly 375 miles (600 kilometers) south of Newfoundland. At 11:40 pm ship’s time on April 14, 1912, Titanic hit that infamous iceberg and began taking on water, flooding five of its 16 watertight compartments, thereby sealing its fate. More than 1,500 passengers and crew perished; only around 710 of those on board survived.

Titanic remained undiscovered at the bottom of the Atlantic Ocean until an expedition led by Jean-Louis Michel and Robert Ballard reached the wreck on September 1, 1985. The ship split apart as it sank, with the bow and stern sections lying roughly one-third of a mile apart. The bow proved to be surprisingly intact, while the stern showed severe structural damage, likely flattened from the impact as it hit the ocean floor. There is a debris field spanning a 5×3-mile area, filled with furniture fragments, dinnerware, shoes and boots, and other personal items.

The joint mission by Magellan and Atlantic Productions deployed two submersibles nicknamed Romeo and Juliet to map every millimeter of the wreck, including the debris field spanning some three miles. The result was a whopping 16 terabytes of data, along with over 715,000 still images and 4K video footage. That raw data was then processed to create the 3D digital twin. The resolution is so good, one can make out part of the serial number on one of the propellers.

“I’ve seen the wreck in person from a submersible, and I’ve also studied the products of multiple expeditions—everything from the original black-and-white imagery from the 1985 expedition to the most modern, high-def 3D imagery,” deep ocean explorer Parks Stephenson told Ars. “This still managed to blow me away with its immense scale and detail.”

The Juliet ROV scans the bow railing of the Titanic wreck site. Magellan Limited/Atlantic Productions

The NatGeo series focuses on some of the fresh insights gained from analyzing the digital scan, enabling Titanic researchers like Stephenson to test key details from eyewitness accounts. For instance, some passengers reported ice coming into their cabins after the collision. The scan shows there is a broken porthole that could account for those reports.

One of the clearest portions of the scan is Titanic‘s enormous boiler rooms, located at the rear of the bow section where the ship snapped in half. Eyewitness accounts reported that the ship’s lights were still on right up until the sinking, thanks to the tireless efforts of Joseph Bell and his team of engineers, all of whom perished. The boilers show up as concave on the digital replica of Titanic, and one of the valves is in an open position, supporting those accounts.

The documentary spends a significant chunk of time on a new simulation of the actual sinking, taking into account the ship’s original blueprints, as well as information on speed, direction, and position. Researchers at University College London were also able to extrapolate how the flooding progressed. Furthermore, a substantial portion of the bow hit the ocean floor with so much force that much of it remains buried under mud. Romeo’s scans of the debris field scattered across the ocean floor enabled researchers to reconstruct the damage to the buried portion.

Titanic was famously designed to stay afloat if up to four of its watertight compartments flooded. But the ship struck the iceberg from the side, causing a series of punctures along the hull across 18 feet, affecting six of the compartments. Some of those holes were quite small, about the size of a piece of paper, but water could nonetheless seep in and eventually flood the compartments. So the analysis confirmed the testimony of naval architect Edward Wilding—who helped design Titanic—as to how a ship touted as unsinkable could have met such a fate. And as Wilding hypothesized, the simulations showed that had Titanic hit the iceberg head-on, she would have stayed afloat.

These are the kinds of insights that can be gleaned from the 3D digital model, according to Atlantic Productions CEO Anthony Geffen, who produced the NatGeo series. “It’s not really a replica. It is a digital twin, down to the last rivet,” he told Ars. “That’s the only way that you can start real research. The detail here is what we’ve never had. It’s like a crime scene. If you can see what the evidence is, in the context of where it is, you can actually piece together what happened. You can extrapolate what you can’t see as well. Maybe we can’t physically go through the sand or the silt, but we can simulate anything because we’ve actually got the real thing.”

Ars caught up with Stephenson and Geffen to learn more.

A CGI illustration of the bow of the Titanic as it sinks into the ocean. National Geographic

Ars Technica: What is so unique and memorable about experiencing the full-size 3D scan of Titanic, especially for those lucky enough to have seen the actual wreckage first-hand via submersible?

Parks Stephenson: When you’re in the submersible, you are restricted to a 7-inch viewport and as far as your light can travel, which is less than 100 meters or so. If you have a camera attached to the exterior of the submersible, you can only get what comes into the frame of the camera. In order to get the context, you have to stitch it all together somehow, and, even then, you still have human bias that tends to make the wreck look more like the original Titanic of 1912 than it actually does today. So in addition to seeing it full-scale and well-lit wherever you looked, able to wander around the wreck site, you’re also seeing it for the first time as a purely data-driven product that has no human bias. As an analyst, this is an analytical dream come true.

Ars Technica: One of the most visually arresting images from James Cameron’s blockbuster film Titanic was the ship’s stern sticking straight up out of the water after breaking apart from the bow. That detail was drawn from eyewitness accounts, but a 2023 computer simulation called it into question. What might account for this discrepancy? 

Parks Stephenson: One thing that’s not included in most pictures of Titanic sinking is the port heel that she had as she’s going under. Most of them show her sinking on an even keel. So when she broke with about a 10–12-degree port heel that we’ve reconstructed from eyewitness testimony, that stern would tend to then roll over on her side and go under that way. The eyewitness testimony talks about the stern sticking up as a finger pointing to the sky. If you even take a shallow angle and look at it from different directions—if you put it in a 3D environment and put lifeboats around it and see the perspective of each lifeboat—there is a perspective where it does look like she’s sticking up like a finger in the sky.

Titanic analyst Parks Stephenson, metallurgist Jennifer Hooper, and master mariner Captain Chris Hearn find evidence exonerating First Officer William Murdoch, long accused of abandoning his post.

This points to a larger thing: the Titanic narrative as we know it today can be challenged. I would go as far as to say that most of what we know about Titanic now is wrong. With all of the human eyewitnesses having passed away, the wreck is our only remaining witness to the disaster. This photogrammetry scan is providing all kinds of new evidence that will help us reconstruct that timeline and get closer to the truth.

Ars Technica: What more are you hoping to learn about Titanic‘s sinking going forward? And how might those lessons apply more broadly?

Parks Stephenson: The data gathered in this 2022 expedition yielded more new information that could be put into this program. There’s enough material already to have a second show. There are new indicators about the condition of the wreck and how long she’s going to be with us and what happens to these wrecks in the deep ocean environment. I’ve already had a direct application of this. My dives to Titanic led me to another shipwreck, which led me to my current position as executive director of a museum ship in Louisiana, the USS Kidd.

She’s now in dry dock, and there’s a lot that I’m understanding about some of the corrosion issues that we experienced with that ship based on corrosion experiments that have been conducted at the Titanic wreck sites—specifically how metal acts underwater over time if it’s been stressed on the surface. It corrodes differently than just metal that’s been submerged. There’s all kinds of applications for this information. This is a new ecosystem that has taken root in Titanic. I would say between my dives in 2005 and 2019, I saw an explosion of life over that 14-year period. It’s its own ecosystem now. It belongs more to the creatures down there than it does to us anymore.

The bow of the Titanic Digital Twin. Magellan Limited/Atlantic Productions

As far as Titanic itself is concerned, this is key to establishing the wreck site, which is one of the world’s largest archeological sites, as an archeological site that follows archeological rigor and standards. This underwater technology—that Titanic has accelerated because of its popularity—is the way of the future for deep-ocean exploration. And the deep ocean is where our future is. It’s where green technology is going to continue to get its raw elements and minerals from. If we don’t do it responsibly, we could screw up the ocean bottom in ways that would destroy our atmosphere faster than all the cars on Earth could do. So it’s not just for the Titanic story, it’s for the future of deep-ocean exploration.

Anthony Geffen: This is the beginning of the work on the digital scan. It’s a world first. Nothing’s ever been done like this under the ocean before. This film looks at the first set of things [we’ve learned], and they’re very substantial. But what’s exciting about the digital twin is, we’ll be able to take it to location-based experiences where the public will be able to engage with the digital twin themselves, walk on the ocean floor. Headset technology will allow the audience to do what Parks did. I think that’s really important for citizen science. I also think the next generation is going to engage with the story differently. New tech and new platforms are going to be the way the next generation understands the Titanic. Any kid, anywhere on the planet, will be able to walk in and engage with the story. I think that’s really powerful.

Titanic: The Digital Resurrection premieres on April 11, 2025, on National Geographic. It will be available for streaming on Disney+ and Hulu on April 12, 2025.


New simulation of Titanic’s sinking confirms historical testimony Read More »

trump-administration’s-attack-on-university-research-accelerates

Trump administration’s attack on university research accelerates

Since shortly after the inauguration, the Trump administration has made no secret that it isn’t especially interested in funding research. Before January’s end, major science agencies had instituted pauses on research funding, and grant funding has not been restored to previous levels since. Many individual grants have been targeted on ideological grounds, and agencies like the National Science Foundation are expected to see significant cuts. Since then, individual universities have been targeted, starting with an ongoing fight with Columbia University over $400 million in research funding.

This week, however, it appears that the targeting of university research has entered overdrive, with multiple announcements of funding freezes targeting several universities. Should these last for any considerable amount of time, they will likely cripple research at the targeted universities.

On Wednesday, Science learned that the National Institutes of Health has frozen all of its research funding to Columbia, despite the university agreeing to steps previously demanded by the administration and the resignation of its acting president. In 2024, Columbia had received nearly $700 million in grants from the NIH, with the money largely going to the university’s prestigious medical and public health schools.

But the attack goes well beyond a single university. On Tuesday, the Trump administration announced a hold on all research funding to Northwestern University (nearly $800 million) and Cornell University ($1 billion). These involved money granted by multiple government agencies, including a significant amount from the Department of Defense in Cornell’s case. Ostensibly, all of these actions were taken because of the university administrators’ approach to protests about the conflict in Gaza, which the administration has characterized as allowing antisemitism.

Trump administration’s attack on university research accelerates Read More »

fewer-beans-=-great-coffee-if-you-get-the-pour-height-right

Fewer beans = great coffee if you get the pour height right

Based on their findings, the authors recommend pouring hot water over your coffee grounds slowly to give the beans more time immersed in the water. But pour the water too slowly and the resulting jet will stick to the spout (the “teapot effect”) and there won’t be sufficient mixing of the grounds; they’ll just settle to the bottom instead, decreasing extraction yield. “If you have a thin jet, then it tends to break up into droplets,” said co-author Margot Young. “That’s what you want to avoid in these pour-overs, because that means the jet cannot mix the coffee grounds effectively.”

Smaller jet diameter impact on dynamics. Credit: E. Park et al., 2025

That’s where increasing the height from which you pour comes in. This imparts more energy from gravity, per the authors, increasing the mixing of the granular coffee grounds. But again, there’s such a thing as pouring from too great a height, causing the water jet to break apart. The ideal height is no more than 50 centimeters (about 20 inches) above the filter. The classic goosenecked tea kettle turns out to be ideal for achieving that optimal height. Future research might explore the effects of varying the grain size of the coffee grounds.

Increasing extraction yields, and thereby reducing the amount of coffee grounds needed, matters because ongoing climate change is making it increasingly difficult to cultivate the most common species of coffee. “Coffee is getting harder to grow, and so, because of that, prices for coffee will likely increase in coming years,” co-author Arnold Mathijssen told New Scientist. “The idea for this research was really to see if we could help do something by reducing the amount of coffee beans that are needed while still keeping the same amount of extraction, so that you get the same strength of coffee.”

But the potential applications aren’t limited to brewing coffee. The authors note that this same liquid jet/submerged granular bed interplay is also involved in soil erosion from waterfalls, for example, as well as wastewater treatment—using liquid jets to aerate wastewater to enhance biodegradation of organic matter—and dam scouring, where the solid ground behind a dam is slowly worn away by water jets. “Although dams operate on a much larger scale, they may undergo similar dynamics, and finding ways to decrease the jet height in dams may decrease erosion and elongate dam health,” they wrote.

Physics of Fluids, 2025. DOI: 10.1063/5.0257924 (About DOIs).

Fewer beans = great coffee if you get the pour height right Read More »

de-extinction-company-announces-that-the-dire-wolf-is-back

De-extinction company announces that the dire wolf is back

On Monday, biotech company Colossal announced what it views as its first successful de-extinction: the dire wolf. These large predators were lost during the Late Pleistocene extinctions that eliminated many large land mammals from the Americas near the end of the most recent glaciation. Now, in a coordinated PR blitz, the company is claiming that clones of gray wolves with lightly edited genomes have essentially brought the dire wolf back. (Both Time and The New Yorker were given exclusive access to the animals ahead of the announcement.)

The dire wolf is a relative of the now-common gray wolf, with clear differences apparent between the two species’ skeletons. Based on the sequence of two new dire wolf genomes, the researchers at Colossal conclude that dire wolves formed a distinct branch within the canids over 2.5 million years ago. For context, that’s over twice as long as brown and polar bears are estimated to have been distinct species. Dire wolves are also large, typically the size of the largest gray wolf populations. Comparisons between the new genomes and those of other canids show that the dire wolf also had a light-colored coat.

That large of an evolutionary separation means there are likely a lot of genetic differences between the gray and dire wolves. Colossal’s internal and unpublished analysis suggested that key differences could be made by editing 14 different areas of the genome, with 20 total edits required. The new animals are reported to have had 15 variants engineered in. It’s unclear what accounts for the difference, and a Colossal spokesperson told Ars: “We are not revealing all of the edits that we made at this point.”

Nevertheless, the information that the company has released indicates that it was focused on recapitulating the appearance of a dire wolf, with an emphasis on large size and a white coat. For example, the researchers edited in a gene variant that’s found in gray wolf populations that are physically large, rather than the variant found in the dire wolf genome. A similar thing was done to achieve the light coat color. This is a cautious approach, as these changes are already known to be compatible with the rest of the gray wolf’s genome.

De-extinction company announces that the dire wolf is back Read More »

how-did-eastern-north-america-form?

How did eastern North America form?


Collisions hold lessons for how the edges of continents are built and change over time.

When Maureen Long talks to the public about her work, she likes to ask her audience to close their eyes and think of a landscape with incredible geology. She hears a lot of the same suggestions: Iceland, the Grand Canyon, the Himalayas. “Nobody ever says Connecticut,” says Long, a geologist at Yale University in New Haven in that state.

And yet Connecticut—along with much of the rest of eastern North America—holds important clues about Earth’s history. This region, which geologists call the eastern North American margin, essentially spans the US eastern seaboard and a little farther north into Atlantic Canada. It was created over hundreds of millions of years as slivers of Earth’s crust collided and merged. Mountains rose, volcanoes erupted and the Atlantic Ocean was born.

Much of this geological history has become apparent only in the past decade or so, after scientists blanketed the United States with seismometers and other instruments to illuminate geological structures hidden deep in Earth’s crust. The resulting findings include many surprises—from why there are volcanoes in Virginia to how the crust beneath New England is weirdly crumpled.

The work could help scientists better understand the edges of continents in other parts of the world; many say that eastern North America is a sort of natural laboratory for studying similar regions. And that’s important. “The story that it tells about Earth history and about this set of Earth processes … [is] really fundamental to how the Earth system works,” says Long, who wrote an in-depth look at the geology of eastern North America for the 2024 Annual Review of Earth and Planetary Sciences.

Born of continental collisions

The bulk of North America today is made of several different parts. To the west are relatively young and mighty mountain ranges like the Sierra Nevada and the Rockies. In the middle is the ancient heart of the continent, the oldest and most stable rocks around. And in the east is the long coastal stretch of the eastern North American margin. Each of these has its own geological history, but it is the story of the eastern bit that has recently come into sharper focus.

For decades, geologists have understood the broad picture of how eastern North America came to be. It begins with plate tectonics, the process in which pieces of Earth’s crust shuffle around over time, driven by churning motions in the underlying mantle. Plate tectonics created and then broke apart an ancient supercontinent known as Rodinia. By around 550 million years ago, a fragment of Rodinia had shuffled south of the equator, where it lay quietly for tens of millions of years. That fragment is the heart of what we know today as eastern North America.

Then, around 500 million years ago, tectonic forces started bringing fragments of other landmasses toward the future eastern North America. Carried along like parts on an assembly line, these continental slivers crashed into it, one after another. The slivers glommed together and built up the continental margin.

During that process, as more and more continental collisions crumpled eastern North America and thrust its agglomerated slivers into the sky, the Appalachian Mountains were born. To the west, the eastern North American margin had merged with ancient rocks that today make up the heart of the continent, west of the Appalachians and through the Midwest and into the Great Plains.

When one tectonic plate slides beneath another, slivers of Earth’s crust, known as terranes, can build up and stick together, forming a larger landmass. Such a process was key to the formation of eastern North America. Credit: Knowable Magazine

By around 270 million years ago, that action was done, and all the world’s landmasses had merged into a second single supercontinent, Pangaea. Then, around 200 million years ago, Pangaea began splitting apart, a geological breakup that formed the Atlantic Ocean, and eastern North America shuffled toward its current position on the globe.

Since then, erosion has worn down the peaks of the once-mighty Appalachians, and eastern North America has settled into a mostly quiet existence. It is what geologists call a “passive margin,” because although it is the edge of a continent, it is not the edge of a tectonic plate anymore: That lies thousands of miles out to the east, in the middle of the Atlantic Ocean.

In many parts of the world, passive continental margins are just that—passive, and pretty geologically boring. Think of the eastern edge of South America or the coastline around the United Kingdom; these aren’t places with active volcanoes, large earthquakes, or other major planetary activity.

But eastern North America is different. There’s so much going on there that some geologists have humorously dubbed it a “passive-aggressive margin.”

The eastern edge of North America, running along the US seaboard, contains fragments of different landscapes that attest to its complex birth. They include slivers of Earth’s crust that glommed together along what is now the east coast, with a more ancient mountain belt to their west and a chunk of even more ancient crust to the west of that. Credit: Knowable Magazine

That action includes relatively high mountains—for some reason, the Appalachians haven’t been entirely eroded away even after tens to hundreds of millions of years—as well as small volcanoes and earthquakes. Recent east-coast quakes include the magnitude-5.8 tremor near Mineral, Virginia, in 2011, and a magnitude-3.8 blip off the coast of Maine in January 2025. So geological activity exists in eastern North America. “It’s just not following your typical tectonic activity,” says Sarah Mazza, a petrologist at Smith College in Northampton, Massachusetts.

Crunching data on the crust

Over decades, geologists had built up a history of eastern North America by mapping rocks on Earth’s surface. But they got a much better look, and many fresh insights, starting around 2010. That’s after a federally funded research project known as EarthScope blanketed the continental United States with seismometers. One aim was to gather data on how seismic energy from earthquakes reverberated through the Earth’s crust and upper mantle. Like a CT scan of the planet, that information highlights structures that lie beneath the surface and would not otherwise be detected.

With EarthScope, researchers could suddenly see what parts of the crust were warm or cold, or strong or weak—information that told them what was happening underground. Having the new view was like astronomers going from looking at the stars with binoculars to using a telescope, Long says. “You can see more detail, and you can see finer structure,” she says. “A lot of features that we now know are present in the upper mantle beneath eastern North America, we really just did not know about.”

And then scientists got even better optics. Long and other researchers began putting out additional seismometers, packing them in dense lines and arrays over the ground in places where they wanted even better views into what was going on beneath the surface, including Georgia and West Virginia. Team members would find themselves driving around the countryside to carefully set up seismometer stations, hoping these would survive the snowfall and spiders of a year or two until someone could return to retrieve the data.

The approach worked—and geophysicists now have a much better sense of what the crust and upper mantle are doing under eastern North America. For one thing, they found that the thickness of the crust varies from place to place. Parts that are the remains of the original eastern North America landmass have a much thicker crust, around 45 kilometers. The crust beneath the continental slivers that attached later on to the eastern edge is much thinner, more like 25 to 30 kilometers thick. That difference probably traces back to the formation of the continent, Long says.

Seismic studies have revealed in recent years that Earth’s crust varies dramatically in thickness along the eastern seaboard—a legacy of how this region was pieced together from various landmasses over time. Credit: Knowable Magazine

But there’s something even weirder going on. Seismic images show that beneath parts of New England, it’s as if parts of the crust and upper mantle have slid atop one another. A 2022 study led by geoscientist Yantao Luo, a colleague of Long, found that the boundary that marks the bottom of Earth’s crust—often referred to as the Moho, after the Croatian seismologist Andrija Mohorovičić—was stacked double, like two overlapping pancakes, under southern New England.

The result was so surprising that at first Long didn’t think it could be right. But Luo double-checked and triple-checked, and the answer held. “It’s this super-unusual geometry,” Long says. “I’m not sure I’ve seen it anywhere else.”

It’s particularly odd because the Moho in this region apparently has managed to maintain its double-stacking for hundreds of millions of years, says Long. How that happened is a bit of a mystery. One idea is that the original landmass of eastern North America had an extremely strong and thick crust. When weaker continental slivers began arriving and glomming on to it, they squeezed up and over it in places.

How the Moho works

The force of that collision could have carried the Moho of the incoming pieces up and over the older landmass, resulting in a doubling of the Moho there, says Paul Karabinos, a geologist at Williams College in Williamstown, Massachusetts. Something similar might be happening in Tibet today as the tectonic plate carrying India rams into that of Asia and crumples the crust against the Tibetan plateau. Long and her colleagues are still trying to work out how widespread the stacked-Moho phenomenon is across New England; already, they have found more signs of it beneath northwestern Massachusetts.

A second surprising discovery that emerged from the seismic surveys is why 47-million-year-old volcanoes along the border of Virginia and West Virginia might have erupted. The volcanoes mark the youngest known eruptions in eastern North America. They are also a bit of a mystery, since there is no obvious source of molten rock in the passive continental margin that could have fueled them.

The answer, once again, came from detailed seismic scans of the Earth. These showed that a chunk was missing from the bottom of Earth’s crust beneath the volcanoes: For some reason, the bottom of the crust became heavy and dripped downward from the top part, leaving a gap. “That now needs to be filled,” says Mazza. Mantle rocks obligingly flowed into the gap, experiencing a drop in pressure as they moved upward. That change in pressure triggered the mantle rocks to melt—and created the molten reservoir that fueled the Virginia eruptions.

The same process could be happening in other passive continental margins, Mazza says. Finding it beneath Virginia is important because it shows that there are more and different ways to fuel volcanoes in these areas than scientists had previously thought possible: “It goes into these ideas that you have more ways to create melt than your standard tectonic process,” she says.

Long and her colleagues are looking to see whether other parts of the eastern North American margin also have this crustal drip. One clue is emerging from how seismic energy travels through the upper mantle throughout the region. The rocks beneath the Virginia volcanoes show a strange slowdown, or anomaly, as seismic energy passes through them. That could be related to the crustal dripping that is going on there.

Seismic surveys have revealed a similar anomaly in northern New England. To try to unravel what might be happening at this second anomaly, Long’s team currently has one string of seismometers across Massachusetts, Vermont, and New Hampshire, and a second dense array in eastern Massachusetts. “Maybe something like what went on in Virginia might be in process … elsewhere in eastern North America,” Long says. “This might be a process, not just something that happened one time.”

Long even has her eyes on pushing farther north, to do seismic surveys along the continental margin in Newfoundland, and even across to Ireland—which once lay next to the North American continental margin, until the Atlantic Ocean opened and separated them. Early results suggest there may be significant differences in how the passive margin behaved on the North American side and on the Irish side, Long’s collaborator Roberto Masis Arce of Rutgers University reported at the American Geophysical Union conference in December 2024.

All these discoveries go to show that the eastern North American margin, once deemed a bit of a snooze, has far more going for it than one might think. “Passive doesn’t mean geologically inactive,” Mazza says. “We live on an active planet.”

This article originally appeared in Knowable Magazine, a nonprofit publication dedicated to making scientific knowledge accessible to all. Sign up for Knowable Magazine’s newsletter.

Knowable Magazine explores the real-world significance of scholarly work through a journalistic lens.

How did eastern North America form? Read More »

editorial:-mammoth-de-extinction-is-bad-conservation

Editorial: Mammoth de-extinction is bad conservation


Anti-extinction vs. de-extinction

Ecosystems are inconveniently complex, and elephants won’t make good surrogates.

Are we ready for mammoths when we can’t handle existing human-pachyderm conflicts? Credit: chuchart duangdaw

The start-up Colossal Biosciences aims to use gene-editing technology to bring back the woolly mammoth and other extinct species. Recently, the company achieved major milestones: last year, they generated stem cells for the Asian elephant, the mammoth’s closest living relative, and this month they published photos of genetically modified mice with long, mammoth-like coats. According to the company’s founders, including Harvard and MIT professor George Church, these advances take Colossal a big step closer to their goal of using mammoths to combat climate change by restoring Arctic grassland ecosystems. Church also claims that Colossal’s woolly mammoth program will help protect endangered species like the Asian elephant, saying “we’re injecting money into conservation efforts.”

In other words, the scientific advances Colossal makes in their lab will result in positive changes from the tropics to the Arctic, from the soil to the atmosphere.

Colossal’s Jurassic Park-like ambitions have captured the imagination of the public and investors, bringing its latest valuation to $10 billion. And the company’s research does seem to be resulting in some technical advances. But I’d argue that the broader effort to de-extinct the mammoth is—as far as conservation efforts go—incredibly misguided. Ultimately, Colossal’s efforts won’t end up being about helping wild elephants or saving the climate. They’ll be about creating creatures for human spectacle, with insufficient attention to the costs and opportunity costs to human and animal life.

Shaky evidence

The Colossal website explains how they believe resurrected mammoths could help fight climate change: “cold-tolerant elephant mammoth hybrids grazing the grasslands… [will] scrape away layers of snow, so that the cold air can reach the soil.” This will reportedly help prevent permafrost from melting, blocking the release of greenhouse gasses currently trapped in the soil. Furthermore, by knocking down trees and maintaining grasslands, Colossal says, mammoths will help slow snowmelt, ensuring Arctic ecosystems absorb less sunlight.

Conservationists often claim that the reason to save charismatic species is that they are necessary for the sound functioning of the ecosystems that support humankind. Perhaps the most well-known of these stories is about the ecological changes wolves drove when they were reintroduced to Yellowstone National Park. Through some 25 peer-reviewed papers, two ecologists claimed to demonstrate that the reappearance of wolves in Yellowstone changed the behavior of elk, causing them to spend less time browsing the saplings of trees near rivers. This led to a chain of cause and effect (a trophic cascade) that affected beavers, birds, and even the flow of the river. A YouTube video on the phenomenon called “How Wolves Change Rivers” has been viewed more than 45 million times.

But other scientists were unable to replicate these findings—they discovered that the original statistics were flawed, and that human hunters likely contributed to elk population declines in Yellowstone. Ultimately, a 2019 review of the evidence by a team of researchers concluded that “the most robust science suggests trophic cascades are not evident in Yellowstone.” Similar ecological claims about tigers and sharks as apex predators also fail to withstand scientific scrutiny.

Elephants—widely described as “keystone species”—are also stars of a host of similar ecological stories. Many are featured on the Colossal website, including one of the most common claims about the role elephants play in seed dispersal. “Across all environments,” reads the website, “elephant dung filled with seeds serve to spread plants […] boosting the overall health of the ecosystem.” But would the disappearance of elephants really result in major changes in plant life? After all, some of the world’s grandest forests (like the Amazon) have survived for millennia after the disappearance of mammoth-sized megafauna.

For my PhD research in northeast India, I tried to systematically measure how important Asian elephants were for seed dispersal compared to other animals in the ecosystem; our team’s work, published in five peer-reviewed ecological journals (reviewed here), does find that elephants are uniquely good at dispersing the seeds of a few large-fruited species. But we also found that domestic cattle and macaques disperse some species’ seeds quite well, and that 80 percent of seeds dispersed in elephant dung end up eaten by ants. After several years of study, I cannot say with confidence that the forests where I worked would be drastically different in the absence of elephants.

The evidence for how living elephants affect carbon sequestration is also quite mixed. On the one hand, one paper finds that African forest elephants knock down softwood trees, making way for hardwood trees that sequester more carbon. But on the other hand, many more researchers looking at African savannas have found that elephants knock down lots of trees, converting forests into savannas and reducing carbon sequestration.

Colossal’s website offers links to peer-reviewed research that supports their suppositions on the ecological role of woolly mammoths. A key study offers intriguing evidence that keeping large herbivores—reindeer, Yakutian horses, moose, musk ox, European bison, yaks, and cold-adapted sheep—at artificially high levels in a tussock grassland helped achieve colder ground temperatures, ostensibly protecting permafrost. But the study raises lots of questions: is it possible to boost these herbivores’ populations across the high northern latitudes? If so, why do we need mammoths at all—why not just use species that already exist, which would surely be cheaper?

Plus, as ecologist Michelle Mack noted, as the winters warm due to climate change, too much trampling or sweeping away of snow could have the opposite effect, helping warm the soils underneath more quickly—if so, mammoths could be worse for the climate, not better.

All this is to say that ecosystems are diverse and messy, and those of us working in functional ecology don’t always discover consistent patterns. Researchers in the field often struggle to find robust evidence for how a living species affects modern-day ecosystems—surely it is far harder to understand how a creature extinct for around 10,000 years shaped its environment? And harder still to predict how it would shape tomorrow’s ecosystems? In effect, Colossal’s ecological narrative relies on that difficulty. But just because claims about the distant past are harder to fact-check doesn’t mean they are more likely to be true.

Ethical blind spots

Colossal’s website spells out 10 steps for mammoth resurrection. Steps nine and 10 are: “implant the early embryo into the healthy Asian or African elephant surrogates,” and “care for the surrogates in a world-class conservation facility for the duration of the gestation and afterward.”

Colossal’s cavalier plans to use captive elephants as surrogates for mammoth calves illustrate an old problem in modern wildlife conservation: indifference towards individual animal suffering. Leading international conservation NGOs lack animal welfare policies that would push conservationists to ask whether the costs of interventions in terms of animal welfare outweigh the biodiversity benefits. Over the years, that absence has resulted in a range of questionable decisions.

Colossal’s efforts take this apathy towards individual animals into hyperdrive. Despite society’s thousands of years of experience with Asian elephants, conservationists struggle to breed them in captivity. Asian elephants in modern zoo facilities suffer from infertility and lose their calves to stillbirth and infanticides almost twice as often as elephants in semi-wild conditions. Such problems will almost certainly be compounded when scientists try to have elephants deliver babies created in the lab, with a hodgepodge of features from Asian elephants and mammoths.

Even in the best-case scenario, there would likely be many, many failed efforts to produce a viable organism before Colossal gets to a herd that can survive. This necessarily trial-and-error process could lead to incredible suffering for both elephant mothers and mammoth calves along the way. Elephants in the wild have been observed experiencing heartbreaking grief when their calves die, sometimes carrying their babies’ corpses for days—a grief the mother elephants might very well be subjected to as they are separated from their calves or find themselves unable to keep their chimeric offspring alive.

For the calves that do survive, their edited genomes could lead to chronic conditions, and the ancient mammoth gut microbiome might be impossible to resurrect, leading to digestive dysfunction. Then there will likely be social problems. Research finds that Asian elephants in Western zoos don’t live as long as wild elephants, and elephant researchers often bemoan the limited space, stimulation, and companionship available to elephants in captivity. These problems will surely also plague surviving animals.

Introduction to the wild will probably result in even more suffering: elephant experts recommend against introducing captive animals “that have had no natural foraging experience at all” to the wild as they are likely to experience “significant hardship.” Modern elephants survive not just through instinct, but through culture—matriarch-led herds teach calves what to eat and how to survive, providing a nurturing environment. We have good reason to believe mammoths also needed cultural instruction to survive. How many elephant/mammoth chimeras will suffer false starts and tragic deaths in the punishing Arctic without the social conditions that allowed them to thrive millennia ago?

Opportunity costs

If Colossal (or Colossal’s investors) really wish to foster Asian elephant conservation or combat climate change, they have many better options. The opportunity costs are especially striking for Asian elephant conservation: while over a trillion dollars is spent combatting climate change annually, the funds available to address the myriad of problems facing wild Asian elephants are far smaller. Take the example of India, the country with the largest population of wild Asian elephants in the world (estimated at 27,000) in a sea of 1.4 billion human beings.

Indians generally revere elephants and tolerate a great deal of hardship to enable coexistence—about 500 humans are killed due to human-elephant conflict annually there. But India is a middle-income country continuing to struggle with widespread poverty, and its federal government typically budgets less than $4M for Project Elephant, its flagship elephant conservation program. That’s less than $200 per wild elephant and 1/2000th as much as Colossal has raised so far. India’s conservation NGOs generally have even smaller budgets for their elephant work. The result is that conservationists are a decade behind where they expected to be in mapping where elephants range.

With Colossal’s budget, Asian elephant conservation NGOs could tackle the real threats to the survival of elephants: human-elephant conflict, loss of habitat and connectivity, poaching, and the spread of invasive plants unpalatable to elephants. Some conservationists are exploring creative schemes to help keep people and elephants safe from each other. There are also community-based efforts to remove invasive species like Lantana camara and restore native vegetation. Funds could enable development of an AI-powered system that allows the automated identification and monitoring of individual elephants. There is also a need for improved compensation schemes to ensure those who lose crops or property to wild elephants are made whole again.

As a US-based synthetic biology company, Colossal could also use its employees’ skills much more effectively to fight climate change. Perhaps they could genetically engineer trees and shrubs to sequester more carbon. Or Colossal could help us learn to produce meat from modified microbes or cultivated lines of cow, pig, and chicken cells, developing alternative proteins that could more efficiently feed the planet, protecting wildlife habitat and reducing greenhouse gas emissions.

The question is whether Colossal’s leaders and supporters are willing to pivot from a project that grabs news headlines to ones that would likely make positive differences. By tempting us with the resurrection of a long-dead creature, Colossal forces us to ask: do we want conservation to be primarily about feeding an unreflective imagination? Or do we want evidence, logic, and ethics to be central to our relationships with other species? For anyone who really cares about the climate, elephants, or animals in general, de-extincting the mammoth represents a huge waste and a colossal mistake.

Nitin Sekar served as the national lead for elephant conservation at WWF India for five years and is now a member of the Asian Elephant Specialist Group of the International Union for Conservation of Nature’s Species Survival Commission. The views presented here are his own.

Editorial: Mammoth de-extinction is bad conservation Read More »