Science


RoboBee sticks the landing

The RoboBee, shown with its insect-inspired legs next to a US penny for scale, is only slightly larger than the coin. Credit: Harvard Microrobotics Laboratory

The first step was to perform experiments to determine the effects of oscillation on the newly designed robotic legs and leg joints. This involved manually disturbing the leg and then releasing it, capturing the resulting oscillations on high-speed video. This showed that the leg and joint essentially acted as an “underdamped spring-mass-damper model,” with a bit of “viscoelastic creep” for good measure. Next, the team performed a series of free-fall experiments with small fiberglass crash-test dummy vehicles with mass and inertia similar to RoboBee’s, capturing each free fall on high-speed video. This was followed by tests of different takeoff and landing approaches.
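To make the “underdamped spring-mass-damper” description concrete, here is a minimal Python sketch of that kind of free-oscillation response. The mass, stiffness, and damping values are illustrative assumptions for a millimeter-scale leg, not the team’s measured parameters, and the viscoelastic creep term is omitted.

```python
import numpy as np

# Minimal underdamped spring-mass-damper sketch: m*x'' + c*x' + k*x = 0.
# Parameter values are illustrative assumptions, not RoboBee's measured ones.
m = 1e-4   # effective mass (kg)
k = 10.0   # stiffness (N/m)
c = 0.005  # damping (N*s/m); c < 2*sqrt(k*m), so the motion is underdamped

omega_n = np.sqrt(k / m)                   # natural frequency (rad/s)
zeta = c / (2.0 * np.sqrt(k * m))          # damping ratio (< 1 here)
omega_d = omega_n * np.sqrt(1 - zeta**2)   # damped oscillation frequency

# Response to displacing the leg and releasing it: a decaying sinusoid,
# like the ring-down the team captured on high-speed video.
t = np.linspace(0.0, 0.1, 1000)            # seconds
x0 = 1e-3                                  # assumed 1 mm initial displacement
x = x0 * np.exp(-zeta * omega_n * t) * np.cos(omega_d * t)

print(f"damping ratio ~ {zeta:.3f}, damped frequency ~ {omega_d / (2 * np.pi):.0f} Hz")
```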

The final step was running experiments on consecutive takeoff and landing sequences using RoboBee, with the little robot taking off from one leaf, hovering, then moving laterally before hovering briefly and landing on another leaf nearby. The basic setup was the same as prior experiments, with the exception of placing a plant branch in the motion-capture arena. RoboBee was able to safely land on the second leaf (or similar uneven surfaces) over repeated trials with varying parameters.

Going forward, Wood’s team will seek to further improve the mechanical damping upon landing, drawing lessons from stingless bees and mosquitoes, as well as scaling up to larger vehicles. This would require an investigation into more complex leg geometries, per the authors. And RoboBee still needs to be tethered to off-board control systems. The team hopes one day to incorporate onboard electronics with built-in sensors.

“The longer-term goal is full autonomy, but in the interim we have been working through challenges for electrical and mechanical components using tethered devices,” said Wood. “The safety tethers were, unsurprisingly, getting in the way of our experiments, and so safe landing is one critical step to remove those tethers.” This would make RoboBee more viable for a range of practical applications, including environmental monitoring, disaster surveillance, or swarms of RoboBees engaged in artificial pollination.

Science Robotics, 2025. DOI: 10.1126/scirobotics.adq3059.



Looking at the Universe’s dark ages from the far side of the Moon


meet you in the dark side of the moon

Building an observatory on the Moon would be a huge challenge—but it would be worth it.

A composition of the Moon with the cosmos radiating behind it. Credit: Aurich Lawson | Getty Images

There is a signal, born in the earliest days of the cosmos. It’s weak. It’s faint. It can barely register on even the most sensitive of instruments. But it contains a wealth of information about the formation of the first stars, the first galaxies, and the mysteries of the origins of the largest structures in the Universe.

Despite decades of searching for this signal, astronomers have yet to find it. The problem is that our Earth is too noisy, making it nearly impossible to capture this whisper. The solution is to go to the far side of the Moon, using its bulk to shield our sensitive instruments from the cacophony of our planet.

Building telescopes on the far side of the Moon would be the greatest astronomical challenge ever considered by humanity. And it would be worth it.

The science

We have been scanning and mapping the wider cosmos for a century now, ever since Edwin Hubble discovered that the Andromeda “nebula” is actually a galaxy sitting 2.5 million light-years away. Our powerful Earth-based observatories have successfully mapped the detailed locations of millions of galaxies, and upcoming observatories like the Vera C. Rubin Observatory and Nancy Grace Roman Space Telescope will map millions more.

And for all that effort, all that technological might and scientific progress, we have surveyed less than 1 percent of the volume of the observable cosmos.

The vast bulk of the Universe will remain forever unobservable to traditional telescopes. The reason is twofold. First, most galaxies will simply be too dim and too far away. Even the James Webb Space Telescope, which is explicitly designed to observe the first generation of galaxies, has such a limited field of view that it can only capture a handful of targets at a time.

Second, there was a time, within the first few hundred million years after the Big Bang, before stars and galaxies had even formed. Dubbed the “cosmic dark ages,” this time naturally makes for a challenging astronomical target because there weren’t exactly a lot of bright sources to generate light for us to look at.

But there was neutral hydrogen. Most of the Universe is made of hydrogen, making it the most common element in the cosmos. Today, almost all of that hydrogen is ionized, existing in a super-heated plasma state. But before the first stars and galaxies appeared, the cosmic reserves of hydrogen were cool and neutral.

Neutral hydrogen is made of a single proton and a single electron. Each of these particles has a quantum property known as spin (which kind of resembles the familiar, macroscopic property of spin, but it’s not quite the same—though that’s a different article). In its lowest-energy state, the proton and electron will have spins oriented in opposite directions. But sometimes, through pure random quantum chance, the electron will spontaneously flip around. Very quickly, the hydrogen notices and gets the electron to flip back to where it belongs. This process releases a small amount of energy in the form of a photon with a wavelength of 21 centimeters.

This quantum transition is exceedingly rare, but with enough neutral hydrogen, you can build a substantial signal. Indeed, observations of 21-cm radiation have been used extensively in astronomy, especially to build maps of cold gas reservoirs within the Milky Way.

So the cosmic dark ages aren’t entirely dark; those clouds of primordial neutral hydrogen are emitting tremendous amounts of 21-cm radiation. But that radiation was emitted in the distant past, well over 13 billion years ago. As it has traveled through the cosmic distances, all those billions of light-years on its way to our eager telescopes, it has experienced the redshift effects of our expanding Universe.

By the time that dark age 21-cm radiation reaches us, it has stretched by a factor of 10, turning the neutral hydrogen signal into radio waves with wavelengths of around 2 meters.
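As a rough worked example of that stretching (the redshift value here is an assumption chosen to match the factor-of-10 figure above, not a figure from the article’s sources):

```python
# Cosmological redshift of the 21-cm line: lambda_obs = lambda_rest * (1 + z).
# z = 9 is an assumed illustrative value giving the factor-of-10 stretch above.
C = 299_792_458.0            # speed of light, m/s
LAMBDA_REST = 0.21           # rest wavelength of the hydrogen spin-flip line, m
NU_REST = C / LAMBDA_REST    # ~1.4 GHz rest frequency

z = 9.0
lambda_obs = LAMBDA_REST * (1 + z)   # ~2.1 m
nu_obs = NU_REST / (1 + z)           # ~140 MHz

print(f"observed wavelength ~ {lambda_obs:.1f} m, frequency ~ {nu_obs / 1e6:.0f} MHz")
```

Radiation from deeper in the dark ages arrives at even higher redshifts, pushing the signal to still longer wavelengths and lower frequencies.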

The astronomy

Humans have become rather fond of radio transmissions in the past century. Unfortunately, the peak of this primordial signal from the dark ages sits right below the FM dial of your radio, which pretty much makes it impossible to detect from Earth. Our emissions are simply too loud, too noisy, and too difficult to remove. Teams of astronomers have devised clever ways to reduce or eliminate interference, featuring arrays scattered around the most desolate deserts in the world, but they have not been able to confirm the detection of a signal.

So those astronomers have turned in desperation to the quietest desert they can think of: the far side of the Moon.

It wasn’t until 1959 that the Soviet Luna 3 probe gave us our first glimpse of the Moon’s far side, and it wasn’t until 2019 that the Chang’e 4 mission made the first soft landing. Compared to the near side, and especially low-Earth orbit, there is very little human activity there. We’ve had more active missions on the surface of Mars than on the lunar far side.

Chang’e 4 landing zone on the far side of the Moon. Credit: Xiao Xiao and others (CC BY 4.0)

And that makes the far side of the Moon the ideal location for a dark-age-hunting radio telescope, free from human interference and noise.

Ideas abound to make this a possibility. The first serious attempt was DARE, the Dark Ages Radio Explorer. Rather than attempting the audacious goal of building an actual telescope on the surface, DARE was a NASA-funded concept to develop an observatory (and when it comes to radio astronomy, an “observatory” can be as simple as a single antenna) to orbit the Moon and take data when it’s on the opposite side from the Earth.

For various bureaucratic reasons, NASA didn’t develop the DARE concept further. But creative astronomers have put forward even bolder proposals.

The FarView concept, for example, is a proposed radio telescope array that would dwarf anything on the Earth. It would be sensitive to frequency ranges between 5 and 40 MHz, allowing it to target the dark ages and the birth of the first stars. The proposed design contains 100,000 individual elements, with each element consisting of a single, simple dipole antenna, dispersed over a staggering 200 square kilometers. It would be infeasible to deliver that many antennae directly to the surface of the Moon. Instead, we’d have to build them, mining lunar regolith and turning it into the necessary components.

The design of this array is what’s called an interferometer. Instead of a single big dish, the individual antennae collect data on their own, and all of their signals are correlated together later. The effective resolution of an interferometer is the same as that of a single dish as wide as the largest distance between its elements. The downside of an interferometer is that most of the incoming radiation just hits dirt (or in this case, lunar regolith), so the interferometer has to collect a lot of data to build up a decent signal.
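For a rough sense of what that buys you, here is a back-of-the-envelope resolution estimate. The maximum baseline below is an assumed value inferred from the quoted 200-square-kilometer footprint, not a published FarView specification:

```python
import math

# Diffraction-limited resolution of an interferometer: theta ~ lambda / B,
# where B is the longest baseline between elements.
C = 299_792_458.0            # speed of light, m/s
freq_hz = 30e6               # 30 MHz, inside FarView's quoted 5-40 MHz range
wavelength = C / freq_hz     # ~10 m

# Assumed maximum baseline: a ~200 km^2 footprint is very roughly 15 km across.
baseline_m = 15_000.0

theta_rad = wavelength / baseline_m
theta_arcmin = math.degrees(theta_rad) * 60.0

print(f"lambda ~ {wavelength:.0f} m, angular resolution ~ {theta_arcmin:.1f} arcminutes")
```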

Attempting these kinds of observations on Earth requires constant maintenance and cleaning to remove radio interference, and that interference has essentially sunk all attempts to measure the dark ages so far. A lunar-based interferometer, by contrast, will have all the quiet observing time it needs, providing a much cleaner and easier-to-analyze stream of data.

If you’re not in the mood for building 100,000 antennae on the Moon’s surface, then another proposal seeks to use the Moon’s natural features—namely, its craters. If you squint hard enough, they kind of look like radio dishes already. The idea behind the project, named the Lunar Crater Radio Telescope, is to find a suitable crater and use it as the support structure for a gigantic, kilometer-wide telescope.

This idea isn’t without precedent. Both the beloved Arecibo and the newcomer FAST observatories used depressions in the natural landscape of Puerto Rico and China, respectively, to take most of the load off the engineering of their giant dishes. The Lunar Crater Radio Telescope would be larger than both of those combined, and it would be tuned to hunt for dark ages radio signals that we can’t observe using Earth-based observatories because they simply bounce off the Earth’s ionosphere (even before we have to worry about any additional human interference). Essentially, the only way that humanity can access those wavelengths is by going beyond our ionosphere, and the far side of the Moon is the best place to park an observatory.

The engineering

The engineering challenges we need to overcome to achieve these scientific dreams are not small. So far, humanity has soft-landed only a single mission on the far side of the Moon, and both of these proposals require an immense upgrade to our capabilities. That’s exactly why both far-side concepts were funded by NIAC, NASA’s Innovative Advanced Concepts program, which gives grants to researchers who need time to flesh out high-risk, high-reward ideas.

With NIAC funds, the designers of the Lunar Crater Radio Telescope, led by Saptarshi Bandyopadhyay at the Jet Propulsion Laboratory, have already thought through the challenges they will need to overcome to make the mission a success. Their mission leans heavily on another JPL concept, the DuAxel, which consists of a rover that can split into two single-axle rovers connected by a tether.

To build the telescope, several DuAxels are sent to the crater. One of each pair “sits” to anchor itself on the crater wall, while the other crawls down the slope. At the center, they are met by a telescope lander that has deployed guide wires and the wire-mesh frame of the telescope (again, it helps for assembly purposes that radio dishes are just strings of metal in various arrangements). The anchored rovers on the crater rim then hoist their companions back up, unfolding the mesh and lofting the receiver above the dish.

The FarView observatory is a much more capable instrument—if deployed, it would be the largest radio interferometer ever built—but it’s also much more challenging. Led by Ronald Polidan of Lunar Resources, Inc., it relies on in-situ manufacturing processes. Autonomous vehicles would dig up regolith, process and refine it, and spit out all the components that make an interferometer work: the 100,000 individual antennae, the kilometers of cabling to run among them, the solar arrays to power everything during lunar daylight, and batteries to store energy for round-the-lunar-clock observing.

If that sounds intense, it’s because it is, and it doesn’t stop there. An astronomical telescope is more than a data collection device. It also needs to crunch some numbers and get that precious information back to a human to actually study it. That means that any kind of far-side observing platform, especially ones like these proposals that would ingest truly massive amounts of data, would need to make one of two choices.

Choice one is to perform most of the data correlation and processing on the lunar surface, sending back only highly refined products to Earth for further analysis. Achieving that would require landing, installing, and running what is essentially a supercomputer on the Moon, which comes with its own weight, robustness, and power requirements.

The other choice is to keep the installation as lightweight as possible and send the raw data back to Earthbound machines to handle the bulk of the processing and analysis tasks. This kind of data throughput is outright impossible with current technology but could be achieved with experimental laser-based communication strategies.

The future

Astronomical observatories on the far side of the Moon face a bit of a catch-22. To deploy and run a world-class facility, either embedded in a crater or strung out over the landscape, we need some serious lunar manufacturing capabilities. But those same capabilities come with all the annoying radio fuzz that already bedevils Earth-based radio astronomy.

Perhaps the best solution is to open up the Moon to commercial exploitation but maintain the far side as a sort of out-world nature preserve, owned by no company or nation, left to scientists to study and use as a platform for pristine observations of all kinds.

It will take humanity several generations, if not more, to develop the capabilities needed to finally build far-side observatories. But it will be worth it, as those facilities will open up the unseen Universe for our hungry eyes, allowing us to pierce the ancient fog of our Universe’s past, revealing the machinations of hydrogen in the dark ages, the birth of the first stars, and the emergence of the first galaxies. It will be a fountain of cosmological and astrophysical data, the richest possible source of information about the history of the Universe.

From the time Galileo ground and polished his first lenses through the innovations that led to the explosion of digital cameras, astronomy has had a storied tradition of turning the technological triumphs needed to achieve science goals into the foundations of various everyday devices that make life on Earth much better. If we’re looking for reasons to industrialize and inhabit the Moon, the noble goal of pursuing a better understanding of the Universe makes for a fine motivation. And we’ll all be better off for it.




The physics of bowling strike after strike

More than 45 million people in the US are fans of bowling, with national competitions awarding millions of dollars. Bowlers usually rely on instinct and experience, earned through lots and lots of practice, to boost their strike percentage. A team of physicists has come up with a mathematical model to better predict ball trajectories, outlined in a new paper published in the journal AIP Advances. The resulting equations take into account such factors as the composition and resulting pattern of the oil used on bowling lanes, as well as the inevitable asymmetries of bowling balls and player variability.

The authors already had a strong interest in bowling. Three are regular bowlers and quite skilled at the sport; a fourth, Curtis Hooper of Loughborough University in the UK, is a coach for Team England at the European Youth Championships. Hooper has been studying the physics of bowling for several years, including an analysis of the 2017 Weber Cup, as well as papers devising mathematical models for the application of lane conditioners and oil patterns in bowling.

The calculations involved in such research are very complicated because there are so many variables that can affect a ball’s trajectory after it is thrown. Case in point: the thin layer of oil applied to bowling lanes, which Hooper found can vary widely in volume and shape among different venues. The layer is also not applied uniformly, which creates an uneven friction surface.

Per the authors, most research to date has relied on statistically analyzing empirical data, such as a 2018 report by the US Bowling Congress that looked at data generated by 37 bowlers. (Hooper relied on ball-tracking data for his 2017 Weber Cup analysis.) A 2009 analysis showed that the optimal location for the ball to strike the headpin is about 6 centimeters off-center, while the optimal entry angle is about 6 degrees. However, such an approach struggles to account for the inevitable player variability. No bowler hits their target 100 percent of the time, and per Hooper et al., while the best professionals can come within 0.1 degrees of the optimal launch angle, even this slight variation can result in a difference of several centimeters down-lane.
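The down-lane effect of that 0.1-degree error is simple trigonometry. A minimal sketch (the lane length is the standard 60 feet from foul line to headpin, and the calculation ignores the ball’s hook):

```python
import math

# How far a small launch-angle error displaces the ball by the time it
# reaches the pins, ignoring hook: offset ~ lane_length * tan(error).
lane_length_m = 18.29       # 60 ft from the foul line to the headpin
angle_error_deg = 0.1       # the angular miss quoted for top professionals

offset_cm = lane_length_m * math.tan(math.radians(angle_error_deg)) * 100.0
print(f"~{offset_cm:.1f} cm of drift at the pins from a 0.1-degree error")
```

That works out to roughly 3 cm, on the order of the several centimeters the authors describe.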



Here’s how a satellite ended up as a ghostly apparition on Google Earth

Regardless of the identity of the satellite, this image is remarkable for several reasons.

First, despite so many satellites flying in space, it’s still rare to see a real picture—not just an artist’s illustration—of what one actually looks like in orbit. For example, SpaceX has released photos of Starlink satellites in launch configuration, where dozens of the spacecraft are stacked together to fit inside the payload compartment of the Falcon 9 rocket. But there are fewer well-resolved views of a satellite in its operational environment, with solar arrays extended like the wings of a bird.

This is changing as commercial companies place more and more imaging satellites in orbit. Several companies provide “non-Earth imaging” services by repurposing Earth observation cameras to view other objects in space. These views can reveal information that can be useful in military or corporate espionage.

Second, the Google Earth capture offers a tangible depiction of a satellite’s speed. An object in low-Earth orbit must travel at more than 17,000 mph (more than 27,000 km per hour) to keep from falling back into the atmosphere.

While the B-2’s motion caused it to appear a little smeared in the Google Earth image a few years ago, the satellite’s velocity created a different artifact. The satellite appears five times in different colors, which tells us something about how the image was made. Airbus’ Pleiades satellites take pictures in multiple spectral bands: blue, green, red, panchromatic, and near-infrared.

At lower left, the black outline of the satellite is the near-infrared capture. Moving up, you can see the satellite in red, blue, and green, followed by the panchromatic, or black-and-white, snapshot with the sharpest resolution. Typically, the Pleiades satellites record these images a split-second apart and combine the colors to generate an accurate representation of what the human eye might see. But this doesn’t work so well for a target moving at nearly 5 miles per second.
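A rough sketch of why a split-second between band exposures is enough to separate a satellite into five distinct ghosts; the inter-band delay below is an assumed illustrative value, since the actual Pleiades timing isn’t given here:

```python
import math

# Orbital speed of a low-Earth-orbit satellite, and how far it travels
# between spectral-band exposures taken a fraction of a second apart.
MU_EARTH = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0        # mean Earth radius, m
altitude_m = 500_000.0       # assumed ~500 km altitude for the imaged satellite

v = math.sqrt(MU_EARTH / (R_EARTH + altitude_m))   # circular orbital speed

inter_band_delay_s = 0.2     # assumed gap between successive band exposures
displacement_m = v * inter_band_delay_s

print(f"orbital speed ~ {v / 1000:.1f} km/s (~{v * 2.23694:,.0f} mph)")
print(f"~{displacement_m / 1000:.1f} km traveled between exposures {inter_band_delay_s} s apart")
```

Even a gap of a fifth of a second translates to well over a kilometer of motion, which is why the target shows up in a different spot in each band.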



After Harvard says no to feds, $2.2 billion of research funding put on hold

The Trump administration has been using federal research funding as a cudgel. The government has blocked billions of dollars in research funds and threatened to put a hold on even more in order to compel universities to adopt what it presents as essential reforms. In the case of Columbia University, that includes changes in the leadership of individual academic departments.

On Friday, the government sent a list of demands that it presented as necessary to “maintain Harvard’s financial relationship with the federal government.” On Monday, Harvard responded that accepting these demands would “allow itself to be taken over by the federal government.” The university also changed its home page into an extensive tribute to the research that would be eliminated if the funds were withheld.

In response, the Trump administration later put $2.2 billion of Harvard’s research funding on hold.

Diversity, but only the right kind

Harvard posted the letter it received from federal officials, listing their demands. Some of it is what you expect, given the Trump administration’s interests. The admissions and hiring departments would be required to drop all diversity efforts, with data on faculty and students to be handed over to the federal government for auditing. As at other institutions, there are also some demands presented as efforts against antisemitism, such as the defunding of pro-Palestinian groups. More generally, it demands that university officials “prevent admitting students hostile to the American values and institutions.”

There are also a bunch of basic culture war items, such as a demand for a mask ban and a ban on “de-platforming” speakers on campus. In addition, the government wants the university to screen all faculty hires for plagiarism, the issue that led Harvard’s former president to resign after she gave testimony to Congress. Any violation of these updated conduct codes by a non-citizen would require an immediate report to the Department of Homeland Security and State Department, presumably so they can prepare to deport them.



Lunar Gateway’s skeleton is complete—its next stop may be Trump’s chopping block

Officials blame changing requirements for much of the delay and cost growth. NASA managers dramatically changed their plans for the Gateway program in 2020, when they decided to launch the PPE and HALO on the same rocket, prompting major changes to their designs.

Jared Isaacman, Trump’s nominee for NASA administrator, declined to commit to the Gateway program during a confirmation hearing before the Senate Commerce Committee on April 9. Sen. Ted Cruz (R-Texas), the committee’s chairman, pressed Isaacman on the Lunar Gateway. Cruz is one of the Gateway program’s biggest backers in Congress since it is managed by Johnson Space Center in Texas. If it goes ahead, Gateway would guarantee numerous jobs at NASA’s mission control in Houston throughout its 15-year lifetime.

“That’s an area that, if I’m confirmed, I would love to roll up my sleeves and further understand what’s working, right?” Isaacman replied to Cruz. “What are the opportunities the Gateway presents to us? And where are some of the challenges, because I think the Gateway is a component of many programs that are over budget and behind schedule.”

The pressure shell for the Habitation and Logistics Outpost (HALO) module arrived in Gilbert, Arizona, last week for internal outfitting. Credit: NASA/Josh Valcarcel

Checking in with Gateway

Nevertheless, the Gateway program achieved a milestone one week before Isaacman’s confirmation hearing. The metallic pressure shell for the HALO module was shipped from its factory in Italy to Arizona. The HALO module is only partially complete, and it lacks life support systems and other hardware it needs to operate in space.

Over the next couple of years, Northrop Grumman will outfit the habitat with those components and connect it with the Power and Propulsion Element under construction at Maxar Technologies in Silicon Valley. This stage of spacecraft assembly, along with prelaunch testing, often uncovers problems that can drive up costs and trigger more delays.

Ars recently spoke with Jon Olansen, a bio-mechanical engineer and veteran space shuttle flight controller who now manages the Gateway program at Johnson Space Center. A transcript of our conversation with Olansen is below. It is lightly edited for clarity and brevity.

Ars: The HALO module has arrived in Arizona from Italy. What’s next?

Olansen: This HALO module went through significant effort from the primary and secondary structure perspective out at Thales Alenia Space in Italy. That was most of their focus in getting the vehicle ready to ship to Arizona. Now that it’s in Arizona, Northrop is setting it up in their facility there in Gilbert to be able to do all of the outfitting of the systems we need to actually execute the missions we want to do, keep the crew safe, and enable the science that we’re looking to do. So, if you consider your standard spacecraft, you’re going to have all of your command-and-control capabilities, your avionics systems, your computers, your network management, all of the things you need to control the vehicle. You’re going to have your power distribution capabilities. HALO attaches to the Power and Propulsion Element, and it provides the primary power distribution capability for the entire station. So that’ll all be part of HALO. You’ll have your standard thermal systems for active cooling. You’ll have the vehicle environmental control systems that will need to be installed, [along with] some of the other crew systems that you can think of, from lighting, restraint, mobility aids, all the different types of crew systems. Then, of course, all of our science aspects. So we have payload lockers, both internally, as well as payload sites external that we’ll have available, so pretty much all the different systems that you would need for a human-rated spacecraft.

Ars: What’s the latest status of the Power and Propulsion Element?

Olansen: PPE is fairly well along in their assembly and integration activities. The central cylinder has been integrated with the propulsion tanks… Their propulsion module is in good shape. They’re working on the avionics shelves associated with that spacecraft. So, with both vehicles, we’re really trying to get the assembly done in the next year or so, so we can get into integrated spacecraft testing at that point in time.

Ars: What’s in the critical path in getting to the launch pad?

Olansen: The assembly and integration activity is really the key for us. It’s to get to the full vehicle level test. All the different activities that we’re working on across the vehicles are making substantive progress. So, it’s a matter of bringing them all in and doing the assembly and integration in the appropriate sequences, so that we get the vehicles put together the way we need them and get to the point where we can actually power up the vehicles and do all the testing we need to do. Obviously, software is a key part of that development activity, once we power on the vehicles, making sure we can do all the control work that we need to do for those vehicles.

[There are] a couple of key pieces I will mention along those lines. On the PPE side, we have the electrical propulsion system. The thrusters associated with that system are being delivered. Those will go through acceptance testing at the Glenn Research Center [in Ohio] and then be integrated on the spacecraft out at Maxar; so that work is ongoing as we speak. Out at ESA, ESA is providing the HALO lunar communication system. That’ll be delivered later this year. That’ll be installed on HALO as part of its integrated test and checkout and then launch on HALO. That provides the full communication capability down to the lunar surface for us, where PPE provides the communication capability back to Earth. So, those are key components that we’re looking to get delivered later this year.

Jon Olansen, manager of NASA’s Gateway program at Johnson Space Center in Houston. Credit: NASA/Andrew Carlsen

Ars: What’s the status of the electric propulsion thrusters for the PPE?

Olansen: The first one has actually been delivered already, so we’ll have the opportunity to go through, like I said, the acceptance testing for those. The other flight units are right on the heels of the first one that was delivered. They’ll make it through their acceptance testing, then get delivered to Maxar, like I said, for integration into PPE. So, that work is already in progress. [The Power and Propulsion Element will have three xenon-fueled 12-kilowatt Hall thrusters produced by Aerojet Rocketdyne, and four smaller 6-kilowatt thrusters.]

Ars: The Government Accountability Office (GAO) outlined concerns last year about keeping the mass of Gateway within the capability of its rocket. Has there been any progress on that issue? Will you need to remove components from the HALO module and launch them on a future mission? Will you narrow your launch windows to only launch on the most fuel-efficient trajectories?

Olansen: We’re working the plan. Now that we’re launching the two vehicles together, we’re working mass management. Mass management is always an issue with spacecraft development, so it’s no different for us. All of the things you described are all knobs that are in the trade space as we proceed, but fundamentally, we’re working to design the optimal spacecraft that we can, first. So, that’s the key part. As we get all the components delivered, we can measure mass across all of those components, understand what our integrated mass looks like, and we have several different options to make sure that we’re able to execute the mission we need to execute. All of those will be balanced over time based on the impacts that are there. There’s not a need for a lot of those decisions to happen today. Those that are needed from a design perspective, we’ve already made. Those that are needed from enabling future decisions, we’ve already made all of those. So, really, what we’re working through is being able to, at the appropriate time, make decisions necessary to fly the vehicle the way we need to, to get out to NRHO [Near Rectilinear Halo Orbit, an elliptical orbit around the Moon], and then be able to execute the Artemis missions in the future.

Ars: The GAO also discussed a problem with Gateway’s controllability with something as massive as Starship docked to it. What’s the latest status of that problem?

Olansen: There are a number of different risks that we work through as a program, as you’d expect. We continue to look at all possibilities and work through them with due diligence. That’s our job, to be able to do that on a daily basis. With the stack controllability [issue], where that came from for GAO, we were early in the assessments of what the potential impacts could be from visiting vehicles, not just any one [vehicle] but any visiting vehicle. We’re a smaller space station than ISS, so making sure we understand the implications of thruster firings as vehicles approach the station, and the implications associated with those, is where that stack controllability conversation came from.

The bus that Maxar typically designs doesn’t have to generally deal with docking. Part of what we’ve been doing is working through ways that we can use the capabilities that are already built into that spacecraft differently to provide us the control authority we need when we have visiting vehicles, as well as working with the visiting vehicles and their design to make sure that they’re minimizing the impact on the station. So, the combination of those two has largely, over the past year since that report came out, improved where we are from a stack controllability perspective. We still have forward work to close out all of the different potential cases that are there. We’ll continue to work through those. That’s standard forward work, but we’ve been able to make some updates, some software updates, some management updates, and logic updates that really allow us to control the stack effectively and have the right amount of control authority for the dockings and undockings that we will need to execute for the missions.



Scientists made a stretchable lithium battery you can bend, cut, or stab

The Li-ion batteries that power everything from smartphones to electric cars are usually packed in rigid, sealed enclosures that prevent stresses from damaging their components and keep air from coming into contact with their flammable and toxic electrolytes. It’s hard to use batteries like this in soft robots or wearables, so a team of scientists at the University of California, Berkeley, built a flexible, non-toxic, jelly-like battery that could survive bending, twisting, and even cutting with a razor.

While flexible batteries using hydrogel electrolytes have been achieved before, they came with significant drawbacks. “All such batteries could [only] operate [for] a short time, sometimes a few hours, sometimes a few days,” says Liwei Lin, a mechanical engineering professor at UC Berkeley and senior author of the study. The battery built by his team endured 500 complete charge cycles—about as many as the batteries in most smartphones are designed for.

Power in water

“Current-day batteries require a rigid package because the electrolyte they use is explosive, and one of the things we wanted to make was a battery that would be safe to operate without this rigid package,” Lin told Ars. Unfortunately, flexible packaging made of polymers or other stretchable materials can be easily penetrated by air or water, which will react with standard electrolytes, generating lots of heat, potentially resulting in fires and explosions. This is why, in 2017, scientists started to experiment with quasi-solid-state hydrogel electrolytes.

These hydrogels were made of a polymer net that gave them their shape, crosslinkers like borax or hydrogen bonds that held this net together, a liquid phase made of water, and salt or other electrolyte additives providing ions that moved through the watery gel as the battery charged or discharged.

But hydrogels like that had their own fair share of issues. The first was a fairly narrow electrochemical stability window—a safe zone of voltage the battery can be exposed to. “This really limits how much voltage your battery can output,” says Peisheng He, a researcher at UC Berkeley Sensor and Actuator Center and lead author of the study. “Nowadays, batteries usually operate at 3.3 volts, so their stability window must be higher than that, probably four volts, something like that.” Water, which was the basis of these hydrogel electrolytes, typically broke down into hydrogen and oxygen when exposed to around 1.2 volts. That problem was solved by using highly concentrated salt water loaded with highly fluorinated lithium salts, which made it less likely to break down. But this led the researchers straight into safety issues, as fluorinated lithium salts are highly toxic to humans.



Trump White House budget proposal eviscerates science funding at NASA

This week, as part of the process to develop a budget for fiscal-year 2026, the Trump White House shared the draft version of its budget request for NASA with the space agency.

This initial version of the administration’s budget request calls for an approximately 20 percent overall cut to the agency’s budget across the board, effectively $5 billion from an overall topline of about $25 billion. However, the majority of the cuts are concentrated within the agency’s Science Mission Directorate, which oversees all planetary science, Earth science, astrophysics research, and more.

According to the “passback” documents given to NASA officials on Thursday, the space agency’s science programs would receive nearly a 50 percent cut in funding. After the agency received $7.5 billion for science in fiscal-year 2025, the Trump administration has proposed a science topline budget of just $3.9 billion for the coming fiscal year.

Detailing the cuts

Among the proposals were: a two-thirds cut to astrophysics, down to $487 million; a greater-than-two-thirds cut to heliophysics, down to $455 million; a greater-than-50-percent cut to Earth science, down to $1.033 billion; and a 30 percent cut to planetary science, down to $1.929 billion.

Although the budget would continue support for ongoing missions such as the Hubble Space Telescope and the James Webb Space Telescope, it would kill the much-anticipated Nancy Grace Roman Space Telescope, an observatory seen as on par with those two world-class instruments that is already fully assembled and on budget for a launch in two years.

“Passback supports continued operation of the Hubble and James Webb Space Telescopes and assumes no funding is provided for other telescopes,” the document states.



Rocket Report: “No man’s land” in rocket wars; Isaacman lukewarm on SLS


China’s approach to space junk is worrisome as it begins launching its own megaconstellations.

A United Launch Alliance Atlas V rocket rolls to its launch pad in Florida in preparation for liftoff with 27 satellites for Amazon’s Kuiper broadband network. Credit: United Launch Alliance

Welcome to Edition 7.39 of the Rocket Report! Not getting your launch fix? Buckle up. We’re on the cusp of a boom in rocket launches as three new megaconstellations have either just begun or will soon begin deploying thousands of satellites to enable broadband connectivity from space. If the megaconstellations come to fruition, this will require more than a thousand launches in the next few years, on top of SpaceX’s blistering Starlink launch cadence. We discuss the topic of megaconstellations in this week’s Rocket Report.

As always, we welcome reader submissions. If you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets as well as a quick look ahead at the next three launches on the calendar.

So, what is SpinLaunch doing now? Ars Technica has mentioned SpinLaunch, the company that literally wants to yeet satellites into space, in previous Rocket Report newsletters. This company enjoyed some success in raising money for its so-crazy-it-just-might-work idea of catapulting rockets and satellites into the sky, a concept SpinLaunch calls “kinetic launch.” But SpinLaunch is now making a hard pivot to small satellites, a move that, on its face, seems puzzling after going all-in on kinetic launch and even performing several impressive hardware tests, throwing a projectile to altitudes of up to 30,000 feet. Ars got the scoop, with the company’s CEO detailing why and how it plans to build a low-Earth orbit telecommunications constellation with 280 satellites.

Traditional versus kinetic … The planned constellation, named Meridian, is an opportunity for SpinLaunch to diversify away from being solely a launch company, according to David Wrenn, the company’s CEO. We’ve observed this in a number of companies that started out as rocket developers before branching out to satellite manufacturing or space services. Wrenn said SpinLaunch could loft all of the Meridian satellites on a single large conventional rocket, or perhaps two medium-lift rockets, and then maintain the constellation with its own kinetic launch system. A satellite communications network presents a better opportunity for profit, Wrenn said. “The launch market is relatively small compared to the economic potential of satellite communication,” he said. “Launch has generally been more of a cost center than a profit center. Satcom will be a much larger piece of the overall industry.”


Peter Beck suggests Electron is here to stay. The conventional wisdom is that the small launch vehicle business isn’t a big moneymaker. There is really only one company, Rocket Lab, that has gained traction in selling dedicated rides to orbit for small satellites. Rocket Lab’s launcher, Electron, can place payloads of up to a few hundred pounds into orbit. As soon as Rocket Lab had some success, SpaceX began launching rideshare missions on its much larger Falcon 9 rocket, cobbling together dozens of satellites on a single vehicle to spread the cost of the mission among many customers. This offers customers a lower price point than buying a dedicated launch on Electron. But Peter Beck, Rocket Lab’s founder and CEO, says his company has found a successful market providing dedicated launches for small satellites, despite price pressure from SpaceX, Space News reports. “Dedicated small launch is a real market, and it should not be confused with rideshare,” he argued. “It’s totally different.”

No man’s land … Some small satellite companies that can afford the extra cost of a dedicated launch realize the value of controlling their schedule and orbit, traits that a dedicated launch offers over a rideshare, Beck said. It’s easy to blame SpaceX for undercutting the prices of Rocket Lab and other players in this segment of the launch business, but Beck said companies that have failed or withdrawn from the small launch market didn’t have a good business plan, a good product, or good engineering. He added that the capacity of the Electron vehicle is well-suited for dedicated launch, whereas slightly larger rockets in the one-ton-to-orbit class—a category that includes Firefly Aerospace’s Alpha and Isar Aerospace’s Spectrum rockets—are an ill fit. The one-ton performance range is “no man’s land” in the market, Beck said. “It’s too small to be a useful rideshare mission, and it’s too big to be a useful dedicated rocket” for smallsats. (submitted by EllPeaTea)

ULA scrubs first full-on Kuiper launch. A band of offshore thunderstorms near Florida’s Space Coast on Wednesday night forced United Launch Alliance to scrub a launch attempt of the first of dozens of missions on behalf of its largest commercial customer, Amazon, Spaceflight Now reports. The mission will use an Atlas V rocket to deploy 27 satellites for Amazon’s Project Kuiper network. It’s the first launch of what will eventually be more than 3,200 operational Kuiper satellites beaming broadband connectivity from space, a market currently dominated by SpaceX’s Starlink. As of Thursday, ULA hadn’t confirmed a new launch date, but airspace warning notices released by the FAA suggest the next attempt might occur Monday, April 14.

What’s a few more days? … This mission has been a long time coming. Amazon announced the Kuiper megaconstellation in 2019, and the company says it’s investing at least $10 billion in the project (the real number may be double that). Problems in manufacturing the Kuiper satellites, which Amazon is building in-house, delayed the program’s first full-on launch by a couple of years. Amazon launched a pair of prototype satellites in 2023, but the operational versions are different, and this mission fills the capacity of ULA’s Atlas V rocket. Amazon has booked more than 80 launches with ULA, Arianespace, Blue Origin, and SpaceX to populate the Kuiper network. (submitted by EllPeaTea)

Space Force swaps ULA for SpaceX. For the second time in six months, SpaceX will deploy a US military satellite that was sitting in storage, waiting for a slot on United Launch Alliance’s launch schedule, Ars reports. Space Systems Command, which oversees the military’s launch program, announced Monday that it is reassigning the launch of a Global Positioning System satellite from ULA’s Vulcan rocket to SpaceX’s Falcon 9. This satellite, designated GPS III SV-08 (Space Vehicle-08), will join the Space Force’s fleet of navigation satellites beaming positioning and timing signals for military and civilian users around the world. The move allows the GPS satellite to launch as soon as the end of May, the Space Force said. The military executed a similar rocket swap for a GPS mission that launched on a Falcon 9 in December.

Making ULA whole … The Space Force formally certified ULA’s Vulcan rocket for national security missions last month, so Vulcan may finally be on the cusp of delivering for the military. But there are several military payloads in the queue to launch on Vulcan before GPS III SV-08, which was already completed and in storage at its Lockheed Martin factory in Colorado. Meanwhile, SpaceX is regularly launching Falcon 9 rockets with ample capacity to add the GPS mission to the manifest. In exchange for losing the contract to launch this particular GPS satellite, the Space Force swapped a future GPS mission that was assigned to SpaceX to fly on ULA’s Vulcan instead.

Russia launches a former Navy SEAL to space. Jonny Kim, a former Navy SEAL, Harvard Medical School graduate, and now a NASA astronaut, blasted off with two cosmonaut crewmates aboard a Russian Soyuz rocket early Tuesday, CBS News reports. Three hours later, Kim and his Russian crewmates—Sergey Ryzhikov and Alexey Zubritsky—chased down the International Space Station and moved in for a picture-perfect docking aboard their Soyuz MS-27 spacecraft. “It was the trip of a lifetime and an honor to be here,” Kim told flight controllers during a traditional post-docking video conference.

Rotating back to Earth … Ryzhikov, Zubritsky, and Kim joined a crew of seven living aboard the International Space Station, temporarily raising the lab’s crew complement to 10 people. The new station residents are replacing an outgoing Soyuz crew—Alexey Ovchinin, Ivan Wagner, and Don Pettit—who launched to the ISS last September and who plan to return to Earth aboard their own spacecraft April 19 to wrap up a 219-day stay in space. This flight continues the practice of launching US astronauts on Russian Soyuz missions, part of a barter agreement between NASA and the Russian space agency that also reserves a seat on SpaceX Dragon missions for Russian cosmonauts.

China is littering in LEO. China’s construction of a pair of communications megaconstellations could cloud low Earth orbit with large spent rocket stages for decades or beyond, Space News reports. Launches for the government’s Guowang and Shanghai-backed but more commercially oriented Qianfan (Thousand Sails) constellation began in the second half of 2024, with each planned to consist of over 10,000 satellites, demanding more than a thousand launches in the coming years. Placing this number of satellites is enough to cause concern about space debris because China hasn’t disclosed its plans for removing the spacecraft from orbit at the end of their missions. It turns out there’s another big worry: upper stages.

An orbital time bomb … While Western launch providers typically deorbit their upper stages after dropping off megaconstellation satellites in space, China does not. This means China is leaving rockets in orbits high enough to persist in space for more than a century, according to Jim Shell, a space domain awareness and orbital debris expert at Novarum Tech. Space News reported on Shell’s commentary in a social media post, where he wrote that orbital debris mass in low-Earth orbit “will be dominated by PRC [People’s Republic of China] upper stages in short order unless something changes (sigh).” So far, China has launched five dedicated missions to deliver 90 Qianfan satellites into orbit. Four of these missions used China’s Long March 6A rocket, with an upper stage that has a history of breaking up in orbit, exacerbating the space debris problem. (submitted by EllPeaTea)

SpaceX wins another lunar lander launch deal. Intuitive Machines has selected a SpaceX Falcon 9 rocket to launch a lunar delivery mission scheduled for 2027, the Houston Chronicle reports. The upcoming IM-4 mission will carry six NASA payloads, including a European Space Agency-led drill suite designed to search for water at the lunar south pole. It will also include the launch of two lunar data relay satellites that support NASA’s so-called Near Space Network Services program. This will be the fourth lunar lander mission for Houston-based Intuitive Machines under the auspices of NASA’s Commercial Lunar Payload Services program.

Falcon 9 has the inside track … SpaceX almost certainly offered Intuitive Machines the best deal for this launch. The flight-proven Falcon 9 rocket is reliable and inexpensive compared to competitors and has already launched two Intuitive Machines missions, with a third one set to fly late this year. However, there’s another factor that made SpaceX a shoo-in for this contract. SpaceX has outfitted one of its launch pads in Florida with a unique cryogenic loading system to pump liquid methane and liquid oxygen propellants into the Intuitive Machines lunar lander as it sits on top of its rocket just before liftoff. The lander from Intuitive Machines uses these super-cold propellants to feed its main engine, and SpaceX’s propellant-loading infrastructure makes the Falcon 9 rocket the clear choice for launching it.

Time may finally be running out for SLS. Jared Isaacman, President Trump’s nominee for NASA administrator, said Wednesday in a Senate confirmation hearing that he wants the space agency to pursue human missions to the Moon and Mars at the same time, an effort that will undoubtedly require major changes to how NASA spends its money. My colleague Eric Berger was in Washington for the hearing and reported on it for Ars. Senators repeatedly sought Isaacman’s opinion on the Space Launch System, the NASA heavy-lifter designed to send astronauts to the Moon. The next SLS mission, Artemis II, is slated to launch a crew of four astronauts around the far side of the Moon next year. NASA’s official plans call for the Artemis III mission to launch on an SLS rocket later this decade and attempt a landing at the Moon’s south pole.

Limited runway … Isaacman sounded as if he were on board with flying the Artemis II mission as envisioned—no surprise, then, that the four Artemis II astronauts were in the audience—and said he wanted to get the Artemis III crew to the lunar surface as quickly as possible. But he questioned why it has taken NASA so long, and at such great expense, to get its deep space human exploration plans moving. In one notable exchange, Isaacman said NASA’s current architecture for the Artemis lunar plans, based on the SLS rocket and Orion spacecraft, is probably not the ideal “long-term” solution to NASA’s deep space transportation needs. The smart reading of this is that Isaacman may be willing to fly the Artemis II and Artemis III missions as conceived, given that much of the hardware is already built. But everything that comes after this, including SLS rocket upgrades and the Lunar Gateway, could be on the chopping block.

Welcome to the club, Blue Origin. Finally, the Space Force has signaled it’s ready to trust Jeff Bezos’ space company, Blue Origin, for launching the military’s most precious satellites, Ars reports. Blue Origin received a contract on April 4 to launch seven national security missions for the Space Force between 2027 and 2032, an opening that could pave the way for more launch deals in the future. These missions will launch on Blue Origin’s heavy-lift New Glenn rocket, which had a successful debut test flight in January. The Space Force hasn’t certified New Glenn for national security launches, but military officials expect to do so sometime next year. Blue Origin joins SpaceX and United Launch Alliance in the Space Force’s mix of most-trusted launch providers.

A different class … The contract Blue Origin received last week covers launch services for the Space Force’s most critical space missions, requiring rocket certification and a heavy dose of military oversight to ensure reliability. Blue Origin was already eligible to launch a separate batch of missions the Space Force set aside to fly on newer rockets. The military is more tolerant of risk on these lower-priority missions, which include launches of “cookie-cutter” satellites for the Pentagon’s large fleet of missile-tracking satellites and a range of experimental payloads.

Why is SpaceX winning so many Space Force contracts? In less than a week, the US Space Force awarded SpaceX a $5.9 billion deal to make Elon Musk’s space company the Pentagon’s leading launch provider, replacing United Launch Alliance in the top position. Then, the Space Force assigned most of this year’s most lucrative launch contracts to SpaceX. As we mentioned earlier in the Rocket Report, the military also swapped a ULA rocket for a SpaceX launch vehicle for an upcoming GPS mission. So, is SpaceX’s main competitor worried Elon Musk is tipping the playing field for lucrative government contracts by cozying up to President Trump?

It’s all good, man … Tory Bruno, ULA’s chief executive, doesn’t seem too worried in his public statements, Ars reports. In a roundtable with reporters this week at the annual Space Symposium conference in Colorado, Bruno was asked about Musk’s ties with Trump. “We have not been impacted by our competitor’s position advising the president, certainly not yet,” Bruno said. “I expect that the government will follow all the rules and be fair and follow all the laws, and so we’re behaving that way.” The reason Bruno can say Musk’s involvement in the Trump administration so far hasn’t affected ULA is simple. SpaceX is cheaper and has a ready-made line of Falcon 9 and Falcon Heavy rockets available to launch the Pentagon’s satellites. ULA’s Vulcan rocket is now certified to launch military payloads, but it reached this important milestone years behind schedule.

Two Texas lawmakers are still fighting the last war. NASA has a lot to figure out in the next couple of years. Moon or Mars? Should, or when should, the Space Launch System be canceled? Can the agency absorb a potential 50 percent cut to its science budget? If Senators John Cornyn and Ted Cruz get their way, NASA can add moving a space shuttle to its list. The Lone Star State’s two Republican senators introduced the “Bring the Space Shuttle Home Act” on Thursday, CollectSpace reports. If passed by Congress and signed into law, the bill would direct NASA to take the space shuttle Discovery from the national collection at the Smithsonian National Air and Space Museum and transport it to Space Center Houston, a museum and visitor attraction next to Johnson Space Center, home to mission control and NASA’s astronaut training base. Discovery has been on display at the Smithsonian since 2012. NASA awarded museums in California, Florida, and New York the other three surviving shuttle orbiters.

Dollars and nonsense … Moving a space shuttle from Virginia to Texas would be a logistical nightmare, cost an untold amount of money, and would create a distraction for NASA when its focus should be on future space exploration. In a statement, Cruz said Houston deserves one of NASA’s space shuttles because of the city’s “unique relationship” with the program. Cornyn alleged in a statement that the Obama administration blocked Houston from receiving a space shuttle for political reasons. NASA’s inspector general found no evidence of this. On the contrary, transferring a space shuttle to Texas now would be an unequivocal example of political influence. The Boeing 747s that NASA used to move space shuttles across the country are no longer flightworthy, and NASA scrapped the handling equipment needed to prepare a shuttle for transport. Moving the shuttle by land or sea would come with its own challenges. “I can easily see this costing a billion dollars,” Dennis Jenkins, a former shuttle engineer who directed NASA’s shuttle transition and retirement program more than a decade ago, told CollectSpace in an interview. On a personal note, the presentation of Discovery at the Smithsonian is remarkable to see in person, with aerospace icons like the Concorde and the SR-71 spy plane under the same roof. Space Center Houston can’t match that.

Next three launches

April 12: Falcon 9 | Starlink 12-17 | Kennedy Space Center, Florida | 01:15 UTC

April 12: Falcon 9 | NROL-192 | Vandenberg Space Force Base, California | 12:17 UTC

April 14: Falcon 9 | Starlink 6-73 | Cape Canaveral Space Force Station, Florida | 01:59 UTC

Photo of Stephen Clark

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

Rocket Report: “No man’s land” in rocket wars; Isaacman lukewarm on SLS Read More »

a-guide-to-the-“platonic-ideal”-of-a-negroni-and-other-handy-tips

A guide to the “platonic ideal” of a Negroni and other handy tips


Perfumer by day, mixologist by night, Kevin Peterson specializes in crafting scent-paired cocktails.

Kevin Peterson is a “nose” for his own perfume company, Sfumato Fragrances, by day. By night, Sfumato’s retail store in Detroit transforms into Peterson’s craft cocktail bar, Castalia, where he is chief mixologist and designs drinks that pair with carefully selected aromas. He’s also the author of Cocktail Theory: A Sensory Approach to Transcendent Drinks, which grew out of his many (many!) mixology experiments and popular YouTube series, Objective Proof: The Science of Cocktails.

It’s fair to say that Peterson has had an unusual career trajectory. He worked as a line cook and an auto mechanic and put in time on the production line of a butter factory, among other gigs, before attending culinary school in hopes of becoming a chef. However, he soon realized it wasn’t really what he wanted out of life and went to college, earning an undergraduate degree in physics from Carleton College and a PhD in mechanical engineering from the University of Michigan.

After 10 years as an engineer, he switched focus again and became more serious about his side hobby, perfumery. “Not being in kitchens anymore, I thought—this is a way to keep that little flavor part of my brain engaged,” Peterson told Ars. “I was doing problem sets all day. It was my escape to the sensory realm. ‘OK, my brain is melting—I need a completely different thing to do. Let me go smell smells, escape to my little scent desk.'” He and his wife, Jane Larson, founded Sfumato, which led to opening Castalia, and Peterson finally found his true calling.

Peterson spent years conducting mixology experiments to gather empirical data about the interplay between scent and flavor, correct ratios of ingredients, temperature, and dilution for all the classic cocktails—seeking a “Platonic ideal” for each, if you will. He supplemented this with customer feedback data from the drinks served at Castalia. All that culminated in Cocktail Theory, which delves into the chemistry of scent and taste, introducing readers to flavor profiles, textures, visual presentation, and other factors that contribute to one’s enjoyment (or lack thereof) of a cocktail. And yes, there are practical tips for building your own home bar, as well as recipes for many of Castalia’s signature drinks.

In essence, Peterson’s work adds scientific rigor to what is frequently called the “Mr. Potato Head” theory of cocktails, a phrase coined by the folks at Death & Company, who operate several craft cocktail bars in key cities. “Let’s say you’ve got some classic cocktail, a daiquiri, that has this many parts of rum, this many parts of lime, this many parts of sugar,” said Peterson, who admits to having a Mr. Potato Head doll sitting on Castalia’s back bar in honor of the sobriquet. “You can think about each ingredient in a more general way: instead of rum, this is the spirit; instead of lime, this is the citrus; sugars are sweetener. Now you can start to replace those things with other things in the same categories.”
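
For readers who like to see the idea in code, here is a minimal sketch of that substitution logic. It is purely illustrative and my own framing, not Peterson’s or Death & Company’s; the template, pantry, and ratios below are invented for the example.

# A toy illustration of the "Mr. Potato Head" idea: a classic recipe is a set
# of category slots with ratios, and any ingredient tagged with the right
# category can fill a slot. (Illustrative sketch; not from Cocktail Theory.)
DAIQUIRI_TEMPLATE = {
    "spirit": 2.0,       # parts, not absolute volumes
    "citrus": 1.0,
    "sweetener": 0.75,
}

PANTRY = {
    "white rum": "spirit", "mezcal": "spirit", "gin": "spirit",
    "lime juice": "citrus", "grapefruit juice": "citrus",
    "simple syrup": "sweetener", "honey syrup": "sweetener",
}

def build_variation(template, choices):
    """Swap each category slot for a chosen ingredient, keeping the ratios."""
    drink = {}
    for category, parts in template.items():
        ingredient = choices[category]
        if PANTRY.get(ingredient) != category:
            raise ValueError(f"{ingredient!r} is not tagged as {category!r}")
        drink[ingredient] = parts
    return drink

# Same daiquiri skeleton, different "potato parts":
print(build_variation(DAIQUIRI_TEMPLATE,
                      {"spirit": "mezcal",
                       "citrus": "grapefruit juice",
                       "sweetener": "honey syrup"}))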

We caught up with Peterson to learn more.

Ars Technica: How did you start thinking about the interplay between perfumery and cocktail design and the role that aroma plays in each?

Kevin Peterson: The first step was from food over to perfumery, where I think about building a flavor for a soup, for a sauce, for a curry, in a certain way. “Oh, there’s a gap here that needs to be filled in by some herbs, some spice.” It’s almost an intuitive kind of thing. When I was making scents, I had those same ideas: “OK, the shape of this isn’t quite right. I need this to roughen it up or to smooth out this edge.”

Then I did the same thing for cocktails and realized that those two worlds didn’t really talk to each other. You’ve got two groups of people that study all the sensory elements and how to create the most intriguing sensory impression, but they use different language; they use different toolkits. They’re going for almost the same thing, but there was very little overlap between the two. So I made that my niche: What can perfumery teach bartenders? What can the cocktail world teach perfumery?

Ars Technica: In perfumery you talk about a top, a middle, and a base note. There must be an equivalent in cocktail theory?

Kevin Peterson: In perfumery, that is mostly talking about the time element: top notes perceived first, then middle notes, then base notes as you wear it over the course of a few hours. In the cocktail realm, there is that time element as well. You get some impression when you bring the glass to your nose, something when you sip, something in the aftertaste. But there can also be a spatial element. Some things you feel right at the tip of your tongue, some things you feel in different parts of your face and head, whether that’s a literal impression or you just kind of feel it somewhere where there’s not a literal nerve ending. It’s about filling up that space, or not filling it up, depending on what impression you’re going for—building out the full sensory space.

Ars Technica: You also talk about motifs and supportive effects or ornamental flourishes: themes that you can build on in cocktails.

Kevin Peterson: Something I see in the cocktail world occasionally is that people just put a bunch of ingredients together and figure, “This tastes fine.” But what were you going for here? There are 17 things in here, and it just kind of tastes like you were finger painting: “Hey, I made brown.” Brown is nice. But the motifs that I think about—maybe there’s just one particular element that I want to highlight. Say I’ve got this really great jasmine essence. Everything else in the blend is just there to highlight the jasmine.

If you’re dealing with a really nice mezcal or bourbon or some unique herb or spice, that’s going to be the centerpiece. You’re not trying to get overpowered by some smoky scotch, by some other more intense ingredient. The motif could just be a harmonious combination of elements. I think the perfect old-fashioned is where everything is present and nothing’s dominating. It’s not like the bitters or the whiskey totally took over. There’s the bitters, there’s a little bit of sugar, there’s the spirit. Everything’s playing nicely.

Another motif, I call it a jazz note. A Sazerac is almost the same as an old-fashioned, but it’s got a little bit of absinthe in it. You get all the harmony of the old-fashioned, but then you’re like, “Wait, what’s this weird thing pulling me off to the side? Oh, this absinthe note is kind of separate from everything else that’s going on in the drink.” It’s almost like that tension in a musical composition: “Well, these notes sound nice, but then there’s one that’s just weird.” But that’s what makes it interesting, that weird note. For me, formalizing some of those motifs helps me make it clearer. Even if I don’t tell that to the guest during the composition stage, I know this is the effect I’m going for. It helps me build more intentionally when I’ve got a motif in mind.

Ars Technica: I tend to think about cocktails more in terms of chemistry, but there are many elements to taste and perception and flavor. You talk about ingredient matching, molecular matching, and impression matching, i.e., how certain elements will overlap in the brain. What role do each of those play?

Kevin Peterson: A lot of those ideas relate to how we pair scents with cocktails. At my perfume company, we make eight fragrances as our main line. Each scent then gets a paired drink on the cocktail menu. For example, this scent has coriander, cardamom, and nutmeg. What does it mean that the drink is paired with that? Does it need to literally have coriander, cardamom, and nutmeg in it? Does it need to have every ingredient? If the scent has 15 things, do I need to hit every note?

chart with sad, neutral, and happy faces showing the optimal temperature and dilution for a daiquiri

Peterson made over 100 daiquiris to find the “Platonic ideal” of the classic cocktail Credit: Kevin Peterson

The literal matching is the most obvious. “This has cardamom, that has cardamom.” I can see how that pairs. The molecular matching is essentially just one more step removed: Rosemary has alpha-pinene in it, and juniper berries have alpha-pinene in them. So if the scent has rosemary and the cocktail has gin, they’re both sharing that same molecule, so it’s still exciting that same scent receptor. What I’m thinking about is kind of resonant effects. You’re approaching the same receptor or the same neural structure in two different ways, and you’re creating a bigger peak with that.

The most hand-wavy one to me is the impression matching. Rosemary smells cold, and Fernet-Branca tastes cold even when it’s room temperature. If the scent has rosemary, is Fernet now a good match for that? Some of the neuroscience stuff that I’ve read has indicated that these more abstract ideas are represented by the same sort of neural-firing patterns. Initially, I was hesitant; cold and cold, it doesn’t feel as fulfilling to me. But then I did some more reading and realized there’s some science behind it and have been more intrigued by that lately.

Ars Technica: You do come up with some surprising flavor combinations, like a drink that combined blueberry and horseradish, which frankly sounds horrifying. 

Kevin Peterson: It was a hit on the menu. I would often give people a little taste of the blueberry and then a little taste of the horseradish tincture, and they’d say, “Yeah, I don’t like this.” And then I’d serve them the cocktail, and they’d be like, “Oh my gosh, it actually worked. I can’t believe it.”  Part of the beauty is you take a bunch of things that are at least not good and maybe downright terrible on their own, and then you stir them all together and somehow it’s lovely. That’s basically alchemy right there.

Ars Technica: Harmony between scent and the cocktail is one thing, but you also talk about constructive interference to get a surprising, unexpected, and yet still pleasurable result.

Kevin Peterson: The opposite is destructive interference, where there’s just too much going on. When I’m coming up with a drink, sometimes that’ll happen, where I’m adding more, but the flavor impression is going down. It’s sort of a weird non-linearity of flavor, where sometimes two plus two equals four, sometimes it equals three, sometimes it equals 17. I now have intuition about that, having been in this world for a lot of years, but I still get surprised sometimes when I put a couple things together.

Often with my end-of-the-shift drink, I’ll think, “Oh, we got this new bottle in. I’m going to try that in a Negroni variation.” Then I lose track and finish mopping, and then I sip, and I’m like, “What? Oh my gosh, I did not see this coming at all.” That little spark, or whatever combo creates that, will then often be the first step on some new cocktail development journey.

man's torso in a long-sleeved button down white shirt, with a small glass filled with juniper berries in front of him

Pairing scents with cocktails involves experimenting with many different ingredients Credit: EE Berger

Ars Technica: Smoked cocktails are a huge trend right now. What’s the best way to get a consistently good smoky element?

Kevin Peterson: Smoke is tricky to make repeatable. How many parts per million of smoke are you getting in the cocktail? You could standardize the amount of time that it’s in the box [filled with smoke]. Or you could always burn, say, exactly three grams of hickory or whatever. One thing that I found, because I was writing the book while still running the bar: People have a lot of expectations around how the drink is going to be served. Big ice cubes are not ideal for serving drinks, but people want a big ice cube in their old-fashioned. So we’re still using big ice cubes. There might be a Platonic ideal in terms of temperature, dilution, etc., but maybe it’s not the ideal in terms of visuals or tactile feel, and that is a part of the experience.

With the smoker, you open the doors, smoke billows out, your drink emerges from the smoke, and people say, “Wow, this is great.” So whether you get 100 PPM one time and 220 PPM the next, maybe that gets outweighed by the awesomeness of the presentation. If I’m trying to be very dialed in about it, I’ll either use a commercial smoky spirit—Laphroaig scotch, a smoky mezcal—where I decide that a quarter ounce is the amount of smokiness that I want in the drink. I can just pour the smoke instead of having to burn and time it.

Or I might even make my own smoke: light something on fire and then hold it under a bottle, tip it back up, put some vodka or something in there, shake it up. Now I’ve got smoke particles in my vodka. Maybe I can say, “OK, it’s always going to be one milliliter,” but then you miss out on the presentation—the showmanship, the human interaction, the garnish. I rarely garnish my own drinks, but I rarely send a drink out to a guest ungarnished, even if it’s just a simple orange peel.

Ars Technica: There’s always going to be an element of subjectivity, particularly when it comes to our sensory perceptions. Sometimes you run into a person who just can’t appreciate a certain note.

Kevin Peterson: That was something I grappled with. On the one hand, we’re all kind of living in our own flavor world. Some people are more sensitive to bitter. Different scent receptors are present in different people. It’s tempting to just say, “Well, everything’s so unique. Maybe we just can’t say anything about it at all.” But that’s not helpful either. Somehow, we keep having delicious food and drink and scents that come our way.

A sample page from Cocktail Theory discussing temperature and dilution

A sample page from Cocktail Theory discussing temperature and dilution. Credit: EE Berger

I’ve been taking a lot of survey data in my bar more recently, and definitely the individuality of preference has shown through in the surveys. But another thing that has shown through is that there are some universal trends. There are certain categories. There’s the spirit-forward, bittersweet drinkers, there’s the bubbly citrus folks, there’s the texture folks who like vodka soda. What is the taste? What is the aroma? It’s very minimal, but it’s a very intense texture. Having some awareness of that is critical when you’re making drinks.

One of the things I was going for in my book was to find, for example, the platonically ideal gin and tonic. What are the ratios? What is the temperature? How much dilution to how much spirit is the perfect amount? But if you don’t like gin and tonics, it doesn’t matter if it’s a platonically ideal gin and tonic. So that’s my next project. It’s not just getting the drink right. How do you match that to the right person? What questions do I have to ask you, or do I have to give you taste tests? How do I draw that information out of the customer to determine the perfect drink for them?

We offer a tasting menu, so our full menu is eight drinks, and you get a mini version of each drink. I started giving people surveys when they would do the tasting menu, asking, “Which drink do you think you like the most? Which drink do you think you like the least?” I would have them rate it. Less than half of people predicted their most liked and least liked, meaning if you were just going to order one drink off the menu, your odds are less than a coin flip that you would get the right drink.

Ars Technica: How does all this tie into your “cocktails as storytelling” philosophy? 

Kevin Peterson: So much of flavor impression is non-verbal. Scent is very hard to describe. You can maybe describe taste, but we only have five-ish words, things like bitter, sour, salty, sweet. There’s not a whole lot to say about that: “Oh, it was perfectly balanced.” So at my bar, when we design menus, we’ll put the drinks together, but then we’ll always give the menu a theme. The last menu that we did was the scientist menu, where every drink was made in honor of some scientist who didn’t get the credit they were due in the time they were alive.

Having that narrative element, I think, helps people remember the drink better. It helps them in the moment to latch onto something that they can more firmly think about. There’s a conceptual element. If I’m just doing chores around the house, I drink a beer, it doesn’t need to have a conceptual element. If I’m going out and spending money and it’s my night and I want this to be a more elevated experience, having that conceptual tie-in is an important part of that.

two martini glasses side by side with a cloudy liquid in them a bright red cherry at the bottom of the glass

My personal favorite drink, Corpse Reviver No. 2, has just a hint of absinthe. Credit: Sean Carroll

Ars Technica: Do you have any simple tips for people who are interested in taking their cocktail game to the next level?

Kevin Peterson:  Old-fashioneds are the most fragile cocktail. You have to get all the ratios exactly right. Everything has to be perfect for an old-fashioned to work. Anecdotally, I’ve gotten a lot of old-fashioneds that were terrible out on the town. In contrast, the Negroni is the most robust drink. You can miss the ratios. It’s got a very wide temperature and dilution window where it’s still totally fine. I kind of thought of them in the same way prior to doing the test. Then I found that this band of acceptability is much bigger for the Negroni. So now I think of old-fashioneds as something that either I make myself or I order when I either trust the bartender or I’m testing someone who wants to come work for me.

My other general piece of advice: It can be a very daunting world to try to get into. You may say, “Oh, there’s all these classics that I’m going to have to memorize, and I’ve got to buy all these weird bottles.” My advice is to pick a drink you like and take baby steps away from that drink. Say you like Negronis. That’s three bottles: vermouth, Campari, and gin. Start with that. When you finish that bottle of gin, buy a different type of gin. When you finish the Campari, try a different bittersweet liqueur. See if that’s going to work. You don’t have to drop hundreds of dollars, thousands of dollars, to build out a back bar. You can do it with baby steps.

Photo of Jennifer Ouellette

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

A guide to the “platonic ideal” of a Negroni and other handy tips Read More »

quantum-hardware-may-be-a-good-match-for-ai

Quantum hardware may be a good match for AI

Quantum computers don’t have that sort of separation. While they could include some quantum memory, the data is generally housed directly in the qubits, and computation involves performing operations, called gates, directly on the qubits themselves. In fact, there has been a demonstration that, for supervised machine learning, where a system can learn to classify items after training on pre-classified data, a quantum system can outperform classical ones, even when the data being processed is housed on classical hardware.

This form of machine learning relies on what are called variational quantum circuits. These are built from two-qubit gate operations, each of which takes an additional factor that can be held on the classical side of the hardware and imparted to the qubits via the control signals that trigger the gate operation. You can think of this as analogous to the communications involved in a neural network, with a two-qubit gate operation equivalent to the passing of information between two artificial neurons and the factor analogous to the weight given to the signal.
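
As a rough illustration of that analogy, the following sketch simulates a tiny variational circuit in plain NumPy: classical features set the angles of single-qubit rotations, and a single classical weight sets the angle of a parameterized two-qubit gate whose output a classical optimizer would tune against labeled data. This is a toy model built on my own assumptions, not the circuit used in the work described below.

import numpy as np

I2 = np.eye(2, dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def ry(theta):
    """Single-qubit Y rotation; used here to encode one classical feature."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def rzz(theta):
    """Parameterized two-qubit gate exp(-i*theta/2 * ZZ); theta plays the
    role of the classical 'weight' imparted through the control signal."""
    zz = np.kron(Z, Z)
    return np.cos(theta / 2) * np.eye(4) - 1j * np.sin(theta / 2) * zz

def circuit(features, weight):
    """Encode two classical features, apply one weighted entangling gate,
    and return <Z> on the first qubit as the classifier's raw output."""
    state = np.zeros(4, dtype=complex)
    state[0] = 1.0                                             # start in |00>
    state = np.kron(ry(features[0]), ry(features[1])) @ state  # data encoding
    state = rzz(weight) @ state                                # trainable gate
    state = np.kron(H, I2) @ state                             # readout basis change
    z0 = np.kron(Z, I2)
    return float(np.real(state.conj() @ z0 @ state))

# A classical optimizer would nudge `weight` until the sign of the output
# matches the training labels.
print(circuit(features=[0.7, 1.1], weight=1.3))

Running this shows the output swinging with both the encoded features and the weight, which is the handle a classical training loop exploits.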

That’s exactly the system that a team from the Honda Research Institute worked on in collaboration with a quantum software company called Blue Qubit.

Pixels to qubits

The focus of the new work was mostly on how to get data from the classical world into the quantum system for characterization. But the researchers ended up testing the results on two different quantum processors.

The problem they were testing is one of image classification. The raw material was from the Honda Scenes dataset, which has images taken from roughly 80 hours of driving in Northern California; the images are tagged with information about what’s in the scene. And the question the researchers wanted the machine learning to handle was a simple one: Is it snowing in the scene?
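
The excerpt doesn’t spell out how the team loaded pixels into qubits, so the sketch below shows one common textbook approach, amplitude encoding, in which a downsampled grayscale image becomes the amplitudes of an n-qubit state. The image size and qubit count are arbitrary choices for illustration, not details from the paper.

import numpy as np

def amplitude_encode(image, n_qubits):
    """Map a grayscale image to a normalized vector of 2**n_qubits amplitudes
    (a standard encoding scheme; not necessarily the one the authors used)."""
    flat = np.asarray(image, dtype=float).ravel()
    dim = 2 ** n_qubits
    if flat.size > dim:
        raise ValueError("image has more pixels than available amplitudes")
    padded = np.zeros(dim)
    padded[: flat.size] = flat
    norm = np.linalg.norm(padded)
    if norm == 0:
        raise ValueError("image is all zeros")
    return padded / norm  # amplitudes of a quantum state must form a unit vector

# Example: an 8x8 thumbnail (64 pixels) fits exactly into 6 qubits (2**6 = 64).
thumbnail = np.random.rand(8, 8)  # stand-in for a downsampled driving-scene image
state = amplitude_encode(thumbnail, n_qubits=6)
print(state.shape, np.isclose(np.linalg.norm(state), 1.0))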

Quantum hardware may be a good match for AI Read More »

new-simulation-of-titanic’s-sinking-confirms-historical-testimony

New simulation of Titanic’s sinking confirms historical testimony


NatGeo documentary follows a cutting-edge undersea scanning project to make a high-resolution 3D digital twin of the ship.

The bow of the Titanic Digital Twin, seen from above at forward starboard side. Credit: Magellan Limited/Atlantic Productions

In 2023, we reported on the unveiling of the first full-size 3D digital scan of the remains of the RMS Titanic—a “digital twin” that captured the wreckage in unprecedented detail. Magellan Ltd, a deep-sea mapping company, and Atlantic Productions conducted the scans over a six-week expedition. That project is the subject of the new National Geographic documentary Titanic: The Digital Resurrection, detailing several fascinating initial findings from experts’ ongoing analysis of that full-size scan.

Titanic met its doom just four days into the Atlantic crossing, roughly 375 miles (600 kilometers) south of Newfoundland. At 11:40 pm ship’s time on April 14, 1912, Titanic hit that infamous iceberg and began taking on water, flooding five of its 16 watertight compartments, thereby sealing its fate. More than 1,500 passengers and crew perished; only around 710 of those on board survived.

Titanic remained undiscovered at the bottom of the Atlantic Ocean until an expedition led by Jean-Louis Michel and Robert Ballard reached the wreck on September 1, 1985. The ship split apart as it sank, with the bow and stern sections lying roughly one-third of a mile apart. The bow proved to be surprisingly intact, while the stern showed severe structural damage, likely flattened from the impact as it hit the ocean floor. There is a debris field spanning a 5×3-mile area, filled with furniture fragments, dinnerware, shoes and boots, and other personal items.

The joint mission by Magellan and Atlantic Productions deployed two submersibles nicknamed Romeo and Juliet to map every millimeter of the wreck, including the debris field spanning some three miles. The result was a whopping 16 terabytes of data, along with over 715,000 still images and 4K video footage. That raw data was then processed to create the 3D digital twin. The resolution is so good, one can make out part of the serial number on one of the propellers.

“I’ve seen the wreck in person from a submersible, and I’ve also studied the products of multiple expeditions—everything from the original black-and-white imagery from the 1985 expedition to the most modern, high-def 3D imagery,” deep ocean explorer Parks Stephenson told Ars. “This still managed to blow me away with its immense scale and detail.”

The Juliet ROV scans the bow railing of the Titanic wreck site. Magellan Limited/Atlantic Productions

The NatGeo series focuses on some of the fresh insights gained from analyzing the digital scan, enabling Titanic researchers like Stephenson to test key details from eyewitness accounts. For instance, some passengers reported ice coming into their cabins after the collision. The scan shows there is a broken porthole that could account for those reports.

One of the clearest portions of the scan is Titanic’s enormous boiler rooms, right at the rear of the bow section where the ship snapped in half. Eyewitness accounts reported that the ship’s lights were still on right up until the sinking, thanks to the tireless efforts of Joseph Bell and his team of engineers, all of whom perished. The boilers show up as concave on the digital replica of Titanic, and one of the valves is in an open position, supporting those accounts.

The documentary spends a significant chunk of time on a new simulation of the actual sinking, taking into account the ship’s original blueprints, as well as information on speed, direction, and position. Researchers at University College London were also able to extrapolate how the flooding progressed. Furthermore, a substantial portion of the bow hit the ocean floor with so much force that much of it remains buried under mud. Romeo’s scans of the debris field scattered across the ocean floor enabled researchers to reconstruct the damage to the buried portion.

Titanic was famously designed to stay afloat if up to four of its watertight compartments flooded. But the ship struck the iceberg from the side, causing a series of punctures along the hull across 18 feet, affecting six of the compartments. Some of those holes were quite small, about the size of a piece of paper, but water could nonetheless seep in and eventually flood the compartments. So the analysis confirmed the testimony of naval architect Edward Wilding—who helped design Titanic—as to how a ship touted as unsinkable could have met such a fate. And as Wilding hypothesized, the simulations showed that had Titanic hit the iceberg head-on, she would have stayed afloat.
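
To get a feel for how holes that small can still sink a ship, here is a back-of-the-envelope orifice-flow estimate. The hole size, depth below the waterline, and discharge coefficient are my own illustrative assumptions, not figures from the documentary or the University College London simulation.

import math

def inflow_m3_per_s(area_m2, depth_m, discharge_coeff=0.6, g=9.81):
    """Volumetric inflow through a submerged breach: Q = Cd * A * sqrt(2*g*h)."""
    return discharge_coeff * area_m2 * math.sqrt(2 * g * depth_m)

hole_area = 0.216 * 0.279  # roughly one sheet of letter paper, in square meters
depth = 4.0                # assumed depth of the breach below the waterline, in meters

q = inflow_m3_per_s(hole_area, depth)
# Seawater is roughly 1 tonne per cubic meter, so m^3 per hour ~ tonnes per hour.
print(f"{q:.2f} m^3/s, or roughly {q * 3600:.0f} tonnes of seawater per hour")

Under these assumed numbers, even a single paper-sized breach admits on the order of a thousand tonnes of water per hour, which is why a scatter of small punctures across six compartments was enough to overwhelm the ship’s margins.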

These are the kinds of insights that can be gleaned from the 3D digital model, according to Atlantic Productions CEO Anthony Geffen, who produced the NatGeo series. “It’s not really a replica. It is a digital twin, down to the last rivet,” he told Ars. “That’s the only way that you can start real research. The detail here is what we’ve never had. It’s like a crime scene. If you can see what the evidence is, in the context of where it is, you can actually piece together what happened. You can extrapolate what you can’t see as well. Maybe we can’t physically go through the sand or the silt, but we can simulate anything because we’ve actually got the real thing.”

Ars caught up with Stephenson and Geffen to learn more.

A CGI illustration of the bow of the Titanic as it sinks into the ocean. National Geographic

Ars Technica: What is so unique and memorable about experiencing the full-size 3D scan of Titanic, especially for those lucky enough to have seen the actual wreckage first-hand via submersible?

Parks Stephenson: When you’re in the submersible, you are restricted to a 7-inch viewport and as far as your light can travel, which is less than 100 meters or so. If you have a camera attached to the exterior of the submersible, you can only get what comes into the frame of the camera. In order to get the context, you have to stitch it all together somehow, and, even then, you still have human bias that tends to make the wreck look more like the original Titanic of 1912 than it actually does today. So in addition to seeing it full-scale and well-lit wherever you looked, able to wander around the wreck site, you’re also seeing it for the first time as a purely data-driven product that has no human bias. As an analyst, this is an analytical dream come true.

Ars Technica: One of the most visually arresting images from James Cameron’s blockbuster film Titanic was the ship’s stern sticking straight up out of the water after breaking apart from the bow. That detail was drawn from eyewitness accounts, but a 2023 computer simulation called it into question. What might account for this discrepancy? 

Parks Stephenson: One thing that’s not included in most pictures of Titanic sinking is the port heel that she had as she’s going under. Most of them show her sinking on an even keel. So when she broke with about a 10–12-degree port heel that we’ve reconstructed from eyewitness testimony, that stern would tend to then roll over on her side and go under that way. The eyewitness testimony talks about the stern sticking up as a finger pointing to the sky. If you even take a shallow angle and look at it from different directions—if you put it in a 3D environment and put lifeboats around it and see the perspective of each lifeboat—there is a perspective where it does look like she’s sticking up like a finger in the sky.

Titanic analyst Parks Stephenson, metallurgist Jennifer Hooper, and master mariner Captain Chris Hearn find evidence exonerating First Officer William Murdoch, long accused of abandoning his post.

This points to a larger thing: the Titanic narrative as we know it today can be challenged. I would go as far as to say that most of what we know about Titanic now is wrong. With all of the human eyewitnesses having passed away, the wreck is our only remaining witness to the disaster. This photogrammetry scan is providing all kinds of new evidence that will help us reconstruct that timeline and get closer to the truth.

Ars Technica: What more are you hoping to learn about Titanic‘s sinking going forward? And how might those lessons apply more broadly?

Parks Stephenson: The data gathered in this 2022 expedition yielded more new information that could be put into this program. There’s enough material already to have a second show. There are new indicators about the condition of the wreck and how long she’s going to be with us and what happens to these wrecks in the deep ocean environment. I’ve already had a direct application of this. My dives to Titanic led me to another shipwreck, which led me to my current position as executive director of a museum ship in Louisiana, the USS Kidd.

She’s now in dry dock, and there’s a lot that I’m understanding about some of the corrosion issues that we experienced with that ship based on corrosion experiments that have been conducted at the Titanic wreck sites—specifically how metal acts underwater over time if it’s been stressed on the surface. It corrodes differently than just metal that’s been submerged. There’s all kinds of applications for this information. This is a new ecosystem that has taken root in Titanic. I would say between my dive in 2005 and 2019, I saw an explosion of life over that 14-year period. It’s its own ecosystem now. It belongs more to the creatures down there than it does to us anymore.

The bow of the Titanic Digital Twin. Magellan Limited/Atlantic Productions

As far as Titanic itself is concerned, this is key to establishing the wreck site, which is one of the world’s largest archeological sites, as an archeological site that follows archeological rigor and standards. This underwater technology—that Titanic has accelerated because of its popularity—is the way of the future for deep-ocean exploration. And the deep ocean is where our future is. It’s where green technology is going to continue to get its raw elements and minerals from. If we don’t do it responsibly, we could screw up the ocean bottom in ways that would destroy our atmosphere faster than all the cars on Earth could do. So it’s not just for the Titanic story, it’s for the future of deep-ocean exploration.

Anthony Geffen: This is the beginning of the work on the digital scan. It’s a world first. Nothing’s ever been done like this under the ocean before. This film looks at the first set of things [we’ve learned], and they’re very substantial. But what’s exciting about the digital twin is, we’ll be able to take it to location-based experiences where the public will be able to engage with the digital twin themselves, walk on the ocean floor. Headset technology will allow the audience to do what Parks did. I think that’s really important for citizen science. I also think the next generation is going to engage with the story differently. New tech and new platforms are going to be the way the next generation understands the Titanic. Any kid, anywhere on the planet, will be able to walk in and engage with the story. I think that’s really powerful.

Titanic: The Digital Resurrection premieres on April 11, 2025, on National Geographic. It will be available for streaming on Disney+ and Hulu on April 12, 2025.

Photo of Jennifer Ouellette

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

New simulation of Titanic’s sinking confirms historical testimony Read More »