Physics


The physics of bowling strike after strike

More than 45 million people in the US are fans of bowling, with national competitions awarding millions of dollars. Bowlers usually rely on instinct and experience, earned through lots and lots of practice, to boost their strike percentage. A team of physicists has come up with a mathematical model to better predict ball trajectories, outlined in a new paper published in the journal AIP Advances. The resulting equations take into account such factors as the composition and resulting pattern of the oil used on bowling lanes, as well as the inevitable asymmetries of bowling balls and player variability.

The authors already had a strong interest in bowling. Three are regular bowlers and quite skilled at the sport; a fourth, Curtis Hooper of Loughborough University in the UK, is a coach for Team England at the European Youth Championships. Hooper has been studying the physics of bowling for several years, including an analysis of the 2017 Weber Cup, as well as papers devising mathematical models for the application of lane conditioners and oil patterns in bowling.

The calculations involved in such research are very complicated because there are so many variables that can affect a ball’s trajectory after being thrown. Case in point: the thin layer of oil applied to bowling lanes, which Hooper found can vary widely in volume and shape among different venues. The oil is also rarely applied uniformly, which creates an uneven friction surface.

Per the authors, most research to date has relied on statistically analyzing empirical data, such as a 2018 report by the US Bowling Congress that looked at data generated by 37 bowlers. (Hooper relied on ball-tracking data for his 2017 Weber Cup analysis.) A 2009 analysis showed that the optimal location for the ball to strike the headpin is about 6 centimeters off-center, while the optimal entry angle is about 6 degrees. However, such an approach struggles to account for the inevitable player variability. No bowler hits their target 100 percent of the time, and per Hooper et al., while the best professionals can come within 0.1 degrees of the optimal launch angle, this slight variation can nonetheless result in a difference of several centimeters down-lane.



Quantum hardware may be a good match for AI

Quantum computers don’t have that sort of separation. While they could include some quantum memory, the data is generally housed directly in the qubits, while computation involves performing operations, called gates, directly on the qubits themselves. In fact, there has been a demonstration that, for supervised machine learning, where a system can learn to classify items after training on pre-classified data, a quantum system can outperform classical ones, even when the data being processed is housed on classical hardware.

This form of machine learning relies on what are called variational quantum circuits. At their heart is a two-qubit gate operation that takes an additional factor, one that can be held on the classical side of the hardware and imparted to the qubits via the control signals that trigger the gate operation. You can think of this as analogous to the communications involved in a neural network, with the two-qubit gate operation equivalent to the passing of information between two artificial neurons and the factor analogous to the weight given to the signal.
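To make the analogy concrete, here is a minimal numpy sketch of a parameterized two-qubit gate, a ZZ rotation whose angle is stored classically. This is an illustration of the general idea, not the circuits used by the Honda and Blue Qubit researchers; the state preparation and gate choice are assumptions.

```python
import numpy as np

# Parameterized two-qubit gate: exp(-i * theta/2 * Z⊗Z).
# theta lives on the classical side and is imprinted on the qubits via
# the control pulse that implements the gate -- loosely analogous to a
# trainable weight in a neural network. (Illustrative sketch only.)
Z = np.diag([1.0, -1.0])
zz_diagonal = np.diag(np.kron(Z, Z))          # eigenvalues of Z⊗Z: +1, -1, -1, +1

def zz_rotation(theta):
    """Unitary for a ZZ rotation by angle theta (diagonal in the computational basis)."""
    return np.diag(np.exp(-1j * theta / 2 * zz_diagonal))

# Put both qubits in an equal superposition, (|0> + |1>)/sqrt(2) each.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
state = np.kron(plus, plus)

theta = 0.3                                   # the classically held "weight"
state = zz_rotation(theta) @ state
print(np.round(state, 3))                     # amplitudes now carry theta-dependent phases
```

During training, the classical side adjusts theta (and its counterparts on other gates) to improve classification accuracy, much as a neural network adjusts its weights.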

That’s exactly the system that a team from the Honda Research Institute worked on in collaboration with a quantum software company called Blue Qubit.

Pixels to qubits

The focus of the new work was mostly on how to get data from the classical world into the quantum system for characterization. But the researchers ended up testing the results on two different quantum processors.

The problem they were testing is one of image classification. The raw material was from the Honda Scenes dataset, which has images taken from roughly 80 hours of driving in Northern California; the images are tagged with information about what’s in the scene. And the question the researchers wanted the machine learning to handle was a simple one: Is it snowing in the scene?



Fewer beans = great coffee if you get the pour height right

Based on their findings, the authors recommend pouring hot water over your coffee grounds slowly to give the beans more time immersed in the water. But pour the water too slowly and the resulting jet will stick to the spout (the “teapot effect”) and there won’t be sufficient mixing of the grounds; they’ll just settle to the bottom instead, decreasing extraction yield. “If you have a thin jet, then it tends to break up into droplets,” said co-author Margot Young. “That’s what you want to avoid in these pour-overs, because that means the jet cannot mix the coffee grounds effectively.”


Smaller jet diameter impact on dynamics. Credit: E. Park et al., 2025

That’s where increasing the height from which you pour comes in. This imparts more energy from gravity, per the authors, increasing the mixing of the granular coffee grounds. But again, there’s such a thing as pouring from too great a height, causing the water jet to break apart. The ideal height is no more than 50 centimeters (about 20 inches) above the filter. The classic goosenecked tea kettle turns out to be ideal for achieving that optimal height. Future research might explore the effects of varying the grain size of the coffee grounds.
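For a rough sense of the numbers, free-fall kinematics relates pour height to the speed at which the jet reaches the grounds. The snippet below is a back-of-the-envelope sketch with assumed heights, not a calculation from the paper.

```python
import math

g = 9.81  # m/s^2

def impact_speed(height_m, spout_speed=0.0):
    """Jet speed at the coffee bed from energy conservation: v^2 = v0^2 + 2*g*h."""
    return math.sqrt(spout_speed**2 + 2 * g * height_m)

for h_cm in (5, 20, 50):
    print(f"pour height {h_cm:2d} cm -> jet speed ~{impact_speed(h_cm / 100):.2f} m/s")
```

Higher pours deliver more kinetic energy to the granular bed, which is the mixing benefit the authors describe; the tradeoff is that a faster, thinner jet is more prone to breaking into droplets.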

Increasing extraction yields and, by extension, reducing how much coffee grounds one uses matters because it is becoming increasingly difficult to cultivate the most common species of coffee because of ongoing climate change. “Coffee is getting harder to grow, and so, because of that, prices for coffee will likely increase in coming years,” co-author Arnold Mathijssen told New Scientist. “The idea for this research was really to see if we could help do something by reducing the amount of coffee beans that are needed while still keeping the same amount of extraction, so that you get the same strength of coffee.”

But the potential applications aren’t limited to brewing coffee. The authors note that this same liquid jet/submerged granular bed interplay is also involved in soil erosion from waterfalls, for example, as well as wastewater treatment—using liquid jets to aerate wastewater to enhance biodegradation of organic matter—and dam scouring, where the solid ground behind a dam is slowly worn away by water jets. “Although dams operate on a much larger scale, they may undergo similar dynamics, and finding ways to decrease the jet height in dams may decrease erosion and elongate dam health,” they wrote.

Physics of Fluids, 2025. DOI: 10.1063/5.0257924 (About DOIs).



First tokamak component installed in a commercial fusion plant


A tokamak moves forward as two companies advance plans for stellarators.

There are a remarkable number of commercial fusion power startups, considering that it’s a technology that’s built a reputation for being perpetually beyond the horizon. Many of them focus on radically new technologies for heating and compressing plasmas, or fusing unusual combinations of isotopes. These technologies are often difficult to evaluate—they can clearly generate hot plasmas, but it’s tough to determine whether they can get hot enough, often enough to produce usable amounts of power.

On the other end of the spectrum are a handful of companies that are trying to commercialize designs that have been extensively studied in the academic world. And there have been some interesting signs of progress here. Recently, Commonwealth Fusion, which is building a demonstration tokamak in Massachusetts, started construction of the cooling system that will keep its magnets superconducting. And two companies that are hoping to build a stellarator did some important validation of their concepts.

Doing donuts

A tokamak is a donut-shaped fusion chamber that relies on intense magnetic fields to compress and control the plasma within it. A number of tokamaks have been built over the years, but the big one that is expected to produce more energy than required to run it, ITER, has faced many delays and now isn’t expected to achieve its potential until the 2040s. Back in 2015, however, some physicists calculated that high-temperature superconductors would allow ITER-style performance in a far smaller and easier-to-build package. That idea was commercialized as Commonwealth Fusion.

The company is currently trying to build an ITER equivalent: a tokamak that can achieve fusion but isn’t large enough and lacks some critical hardware needed to generate electricity from that reaction. Work on the planned facility, SPARC, is already underway, with most of the supporting facility in place and the superconducting magnets being constructed. But in late March, the company took a major step by installing the first component of the tokamak itself, the cryostat base, which will support the hardware that keeps its magnets cool.

Alex Creely, Commonwealth Fusion’s tokamak operations director and SPARC’s chief engineer, told Ars that the cryostat’s materials have to be chosen to be capable of handling temperatures in the area of 20 Kelvin, and be able to tolerate neutron exposure. Fortunately, stainless steel is still up to the task. It will also be part of a structure that has to handle an extreme temperature gradient. Creely said that it only takes about 30 centimeters to go from the hundreds of millions of degrees C of the plasma down to about 1,000° C, after which it becomes relatively simple to reach cryostat temperatures.

He said that construction is expected to wrap up about a year from now, after which there will be about a year of commissioning the hardware, with fusion experiments planned for 2027. And, while ITER may be facing ongoing delays, Creely said that it was critical for keeping Commonwealth on a tight schedule. Not only is most of the physics of SPARC the same as that of ITER, but some of the hardware will be as well. “We’ve learned a lot from their supply chain development,” Creely said. “So some of the same vendors that are supplying components for the ITER tokamak, we are also working with those same vendors, which has been great.”

Great in the sense that Commonwealth is now on track to see plasma well in advance of ITER. “Seeing all of this go from a bunch of sketches or boxes on slides—clip art effectively—to real metal and concrete that’s all coming together,” Creely said. “You’re transitioning from building the facility, building the plant around the tokamak to actually starting to build the tokamak itself. That is an awesome milestone.”

Seeing stars?

The plasma inside a tokamak is dynamic, meaning that it requires a lot of magnetic intervention to keep it stable, and fusion comes in pulses. There’s an alternative approach called a stellarator, which produces an extremely complex magnetic field that can support a simpler, stable plasma and steady fusion. As implemented by the Wendelstein 7-X stellarator in Germany, this meant a series of complex-shaped magnets manufactured with extremely low tolerance for deviation. But a couple of companies have decided they’re up for the challenge.

One of those, Type One Energy, has basically reached the stage that launched Commonwealth Fusion: It has made a detailed case for the physics underlying its stellarator design. In this instance, the case may even be considerably more detailed: six peer-reviewed articles in the Journal of Plasma Physics. The papers detail the structural design, the behavior of the plasma within it, handling of the helium produced by fusion, generation of tritium from the neutrons produced, and obtaining heat from the whole thing.

The company is partnering with Oak Ridge National Lab and the Tennessee Valley Authority to build a demonstration reactor on the site of a former fossil fuel power plant. (It’s also cooperating with Commonwealth on magnet development.) As with the SPARC tokamak, this will be a mix of technology demonstration and learning experience, rather than a functioning power plant.

Another company that’s pursuing a stellarator design is called Thea Energy. Brian Berzin, its CEO, told Ars that the company’s focus is on simplifying the geometry of the magnets needed for a stellarator and is using software to get them to produce an equivalent magnetic field. “The complexity of this device has always been really, really limiting,” he said, referring to the stellarator. “That’s what we’re really focused on: How can you make simpler hardware? Our way of allowing for simpler hardware is using really, really complicated software, which is something that has taken over the world.”

He said that the simplicity of the hardware will be helpful for an operational power plant, since it allows them to build multiple identical segments as spares, so things can be swapped out and replaced when maintenance is needed.

Like Commonwealth Fusion, Thea Energy is using high-temperature superconductors to build its magnets, with a flat array of smaller magnets substituting for the three-dimensional magnets used at Wendelstein. “We are able to really precisely recreate those magnetic fields required for a stellarator, but without any wiggly, complicated, precise, expensive, costly, time-consuming hardware,” Berzin said. And the company recently released a preprint of some testing with the magnet array.

Thea is also planning on building a test stellarator. In its case, however, it’s going to be using deuterium-deuterium fusion, which is much less efficient than the deuterium-tritium fusion that will be needed for a power plant. But Berzin said that the design will incorporate a layer of lithium that will form tritium when bombarded by neutrons from the stellarator. If things go according to plan, the reactor will both validate Thea’s design and serve as a fuel source for the rest of the industry.

Of course, nobody will operate a fusion power plant until sometime in the next decade—probably about at the same time that we might expect some of the first small modular fission plants to be built. Given the vast expansion in renewable production that is in progress, it’s difficult to predict what the energy market will look like at that point. So, these test reactors will be built in a very uncertain environment. But that uncertainty hasn’t stopped these companies from pursuing fusion.


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



Hints grow stronger that dark energy changes over time

In its earliest days, the Universe was a hot, dense soup of subatomic particles, including hydrogen and helium nuclei, aka baryons. Tiny fluctuations created a rippling pattern through that early ionized plasma, which froze into place in three dimensions as the Universe expanded and cooled. Those ripples, or bubbles, are known as baryon acoustic oscillations (BAO). It’s possible to use BAOs as a kind of cosmic ruler to investigate the effects of dark energy over the history of the Universe.


DESI is a state-of-the-art instrument that can capture light from up to 5,000 celestial objects simultaneously.

That’s what DESI was designed to do: take precise measurements of the apparent size of these bubbles (both near and far) by determining the distances to galaxies and quasars over 11 billion years. That data can then be sliced into chunks to determine how fast the Universe was expanding at each point in time in the past, the better to model how dark energy was affecting that expansion.
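For readers who want the mechanics, the expansion history those redshift slices probe is usually written with a time-varying dark energy equation of state, w(z) = w0 + wa·z/(1+z). The sketch below uses that standard parameterization with purely illustrative numbers (assumed values, not DESI’s fitted results) to show how an evolving w changes the expansion rate H(z).

```python
import numpy as np

# Flat-universe expansion rate with the standard CPL parameterization,
# w(z) = w0 + wa * z / (1 + z).  Parameter values are illustrative.
H0 = 70.0          # Hubble constant, km/s/Mpc (assumed)
omega_m = 0.31     # matter density fraction today (assumed)

def hubble(z, w0=-1.0, wa=0.0):
    """Expansion rate H(z) in km/s/Mpc for dark energy with w(z) = w0 + wa*z/(1+z)."""
    omega_de = 1.0 - omega_m
    de_density = (1 + z) ** (3 * (1 + w0 + wa)) * np.exp(-3 * wa * z / (1 + z))
    return H0 * np.sqrt(omega_m * (1 + z) ** 3 + omega_de * de_density)

for z in (0.5, 1.0, 2.0):
    print(f"z={z}: Lambda-CDM H={hubble(z):.1f}, evolving-DE H={hubble(z, w0=-0.9, wa=-0.7):.1f}")
```

Measuring the BAO scale in each redshift slice pins down H(z) at that epoch, which is what lets the survey distinguish a constant dark energy from one whose strength changes over time.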

An upward trend

Last year’s results were based on analysis of a full year’s worth of data taken from seven different slices of cosmic time, including 450,000 quasars, the largest such sample ever collected, with a record-setting 0.82 percent precision for the most distant epoch (8 to 11 billion years back). While there was basic agreement with the Lambda-CDM model, when those first-year results were combined with data from other studies (involving the cosmic microwave background radiation and Type Ia supernovae), some subtle differences cropped up.

Essentially, those differences suggested that the dark energy might be getting weaker. In terms of confidence, the results amounted to a 2.6-sigma level for DESI’s data combined with CMB datasets. When the supernovae data were added, those numbers shifted to 2.5-sigma, 3.5-sigma, or 3.9-sigma levels, depending on which particular supernova dataset was used.

It’s important to combine the DESI data with other independent measurements because “we want consistency,” said DESI co-spokesperson Will Percival of the University of Waterloo. “All of the different experiments should give us the same answer to how much matter there is in the Universe at present day, how fast the Universe is expanding. It’s no good if all the experiments agree with the Lambda-CDM model, but then give you different parameters. That just doesn’t work. Just saying it’s consistent to the Lambda-CDM, that’s not enough in itself. It has to be consistent with Lambda-CDM and give you the same parameters for the basic properties of that model.”



SpiderBot experiments hint at “echolocation” to locate prey

It’s well understood that spiders have poor eyesight and thus sense the vibrations in their webs whenever prey (like a fly) gets caught; the web serves as an extension of their sensory system. But spiders also exhibit less-understood behaviors to locate struggling prey. Most notably, they take on a crouching position, sometimes moving up and down to shake the web or plucking at the web by pulling in with one leg. The crouching seems to be triggered when prey is stationary and stops when the prey starts moving.

But it can be difficult to study the underlying mechanisms of this behavior because there are so many variables at play when observing live spiders. To simplify matters, researchers at Johns Hopkins University’s Terradynamics Laboratory are building crouching spider robots and testing them on synthetic webs. The results provide evidence for the hypothesis that spiders crouch to sense differences in web frequencies to locate prey that isn’t moving—something analogous to echolocation. The researchers presented their initial findings today at the American Physical Society’s Global Physics Summit in Anaheim, California.

“Our lab investigates biological problems using robot physical models,” team member Eugene Lin told Ars. “Animal experiments are really hard to reproduce because it’s hard to get the animal to do what you want to do.” Experiments with robot physical models, by contrast, “are completely repeatable. And while you’re building them, you get a better idea of the actual [biological] system and how certain behaviors happen.” The lab has also built robots inspired by cockroaches and fish.

The research was done in collaboration with two other labs at JHU. Andrew Gordus’ lab studies spider behavior, particularly how they make their webs, and provided biological expertise as well as videos of the particular spider species (U. diversus) of interest. Jochen Mueller’s lab provided expertise in silicone molding, allowing the team to use their lab to 3D-print their spider robot’s flexible joints.

Crouching spider, good vibrations


A spider exhibiting crouching behavior. Credit: YouTube/Terradynamics Lab/JHU

The first spider robot model didn’t really move or change its posture; it was designed to sense vibrations in the synthetic web. But Lin et al. later modified it with actuators so it could move up and down. Also, there were only four legs, with two joints in each and two accelerometers on each leg; real spiders have eight legs and many more joints. But the model was sufficient for experimental proof of principle. There was also a stationary prey robot.
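As a toy illustration of the sensing problem (not the JHU team’s actual analysis pipeline), the snippet below estimates the dominant vibration frequency seen by one leg-mounted accelerometer from synthetic data; the sampling rate and signal are assumptions. Comparing such estimates across legs, and between crouched and uncrouched postures, is the kind of frequency information the hypothesis says a spider or SpiderBot could exploit.

```python
import numpy as np

# Synthetic accelerometer trace: a 37 Hz web vibration plus noise.
fs = 1000.0                          # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
signal = np.sin(2 * np.pi * 37 * t) + 0.3 * np.random.randn(t.size)

# Estimate the dominant vibration frequency from the spectrum.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
dominant = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
print(f"dominant web vibration ~{dominant:.1f} Hz")
```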



Physicists unlock another clue to brewing the perfect espresso

The team initially tried to use a simple home coffee machine for their experiments but eventually partnered with Coffeelab, a major roaster in Poland, and CoffeeMachineSale, the largest global distributor of roasting gear. This brought industrial-grade equipment and much professional coffee expertise to the project: state-of-the-art grinders, for instance, and a cafe-grade espresso machine, tricked out with a pressure sensor, flow meter, and a set of scales. The entire setup was connected to laboratory laptops via a microchip and controlled with custom software that allowed the scientists to precisely monitor pressure, mass, and water flowing through the coffee.

The scientists measured the total dissolved solids to determine the rate at which coffee is dissolved, comparing brews without a channel to those with artificially induced channels. They found that, indeed, channeling adversely affected extraction yields. Channeling did not, however, affect the rate at which water flowed through the espresso puck.

“That is mostly due to the structural rearrangement of coffee grounds under pressure,” Lisicki said. “When the dry coffee puck is hit with water under high pressure—as high as 10 times the atmospheric pressure, so roughly the pressure 100 meters below the sea surface—it compacts and swells up. So even though water can find a preferential path, there is still significant resistance limiting the flow.”
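As a quick sanity check on that comparison (textbook hydrostatics, not a calculation from the paper), the gauge pressure ρgh at 100 meters of seawater does come out to roughly 10 atmospheres:

```python
rho_seawater = 1025.0   # kg/m^3, typical seawater density (assumed)
g = 9.81                # m/s^2
depth = 100.0           # m

hydrostatic = rho_seawater * g * depth          # gauge pressure, Pa
atmospheres = hydrostatic / 101_325
print(f"{atmospheres:.1f} atm of water pressure at {depth:.0f} m depth")
# ~9.9 atm, consistent with the ~10x atmospheric pressure quoted above.
```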

The team is now factoring their results into numerical and theoretical models of porous bed extraction. They are also compiling an atlas of the different kinds of espresso pucks based on micro-CT imaging of the coffee.

“What we have found can help the coffee industry brew with more knowledge,” said Myck. “Many people follow procedures based on unconfirmed intuitions or claims which prove to have confirmation. What’s more, we have really interesting data regarding pressure-induced flow in coffee, the results of which have been a surprise to us as well. Our approach may let us finally understand the magic that happens inside your coffee machine.”



Small charges in water spray can trigger the formation of key biochemicals

Once his team nailed how droplets become electrically charged and how the micro-lightning phenomenon works, they recreated the Miller-Urey experiment. Only without the spark plugs.

Ingredients of life

After micro-lightnings started jumping between droplets in a mixture of gases similar to that used by Miller and Urey, the team examined their chemical composition with a mass spectrometer. They confirmed glycine, uracil, urea, cyanoethylene, and lots of other chemical compounds were made. “Micro-lightnings made all organic molecules observed previously in the Miller-Urey experiment without any external voltage applied,” Zare claims.

But does it really bring us any closer to explaining the beginnings of life? After all, Miller and Urey already demonstrated those molecules could be produced by electrical discharges in a primordial Earth’s atmosphere—does it matter all that much where those discharges came from?  Zare argues that it does.

“Lightning is intermittent, so it would be hard for these molecules to concentrate. But if you look at waves crashing into rocks, you can think the spray would easily go into the crevices in these rocks,” Zare suggests. He envisions the water in these crevices evaporating, with new spray entering and evaporating, again and again. The cyclic drying would allow the chemical precursors to build into more complex molecules. “When you go through such a dry cycle, it causes polymerization, which is how you make DNA,” Zare argues. Since sources of spray were likely common on the early Earth, Zare thinks this process could produce far more organic chemicals than potential alternatives like lightning strikes, hydrothermal vents, or impacting comets.

But even if micro-lightning really produced the basic building blocks of life on Earth, we’re still not sure how those combined into living organisms. “We did not make life. We just demonstrated a possible mechanism that gives us some chemical compounds you find in life,” Zare says. “It’s very important to have a lot of humility with this stuff.”

Science Advances, 2025.  DOI: 10.1126/sciadv.adt8979



D-Wave quantum annealers solve problems classical algorithms struggle with


The latest claim of clear quantum supremacy involves solving a useful problem.

Right now, quantum computers are small and error-prone compared to where they’ll likely be in a few years. Even within those limitations, however, there have been regular claims that the hardware can perform in ways that are impossible to match with classical computation (one of the more recent examples coming just last year). In most cases to date, however, those claims were quickly followed by some tuning and optimization of classical algorithms that boosted their performance, making them competitive once again.

Today, we have a new entry into the claims department—or rather a new claim by an old entry. D-Wave is a company that makes quantum annealers, specialized hardware that is most effective when applied to a class of optimization problems. The new work shows that the hardware can track the behavior of a quantum system called an Ising model far more efficiently than any of the current state-of-the-art classical algorithms.

Knowing what will likely come next, however, the team behind the work writes, “We hope and expect that our results will inspire novel numerical techniques for quantum simulation.”

Real physics vs. simulation

Most of the claims regarding quantum computing superiority have come from general-purpose quantum hardware, like that of IBM and Google. These machines can run a wide range of algorithms but have been limited by the frequency of errors in their qubits. Those errors also turned out to be the reason classical algorithms have often been able to catch up with the claims from the quantum side. They limit the size of the collection of qubits that can be entangled at once, allowing algorithms that focus on interactions among neighboring qubits to perform reasonable simulations of the hardware’s behavior.

In any case, most of these claims have involved quantum computers that weren’t solving any particular algorithm, but rather simply behaving like a quantum computer. Google’s claims, for example, are based around what are called “random quantum circuits,” which is exactly what it sounds like.

Off in its own corner is a company called D-Wave, which makes hardware that relies on quantum effects to perform calculations, but isn’t a general-purpose quantum computer. Instead, its collections of qubits, once configured and initialized, are left to find their way to a ground energy state, which will correspond to a solution to a problem. This approach, called quantum annealing, is best suited to solving problems that involve finding optimal solutions to complex scheduling problems.

D-Wave was likely the first company to experience the “we can outperform classical” claim being followed by an “oh no you can’t” from algorithm developers, and since then it has typically been far more circumspect. In the meantime, a number of companies have put D-Wave’s computers to use on problems that align with where the hardware is most effective.

But on Thursday, D-Wave will release a paper that will once again claim, as its title indicates, “beyond classical computation.” And it will be doing it on a problem that doesn’t involve random circuits.

You sing, Ising

The new paper describes using D-Wave’s hardware to compute the evolution over time of something called an Ising model. A simple version of this model is a two-dimensional grid of objects, each of which can be in two possible states. The state that any one of these objects occupies is influenced by the state of its neighbors. So, it’s easy to put an Ising model into an unstable state, after which values of the objects within it will flip until it reaches a low-energy, stable state. Since this is also a quantum system, however, random noise can sometimes flip bits, so the system will continue to evolve over time. You can also connect the objects into geometries that are far more complicated than a grid, allowing more complex behaviors.
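For readers who haven’t met the model before, here is a minimal classical Metropolis simulation of a small 2D Ising grid relaxing toward a low-energy state, with thermal noise occasionally flipping spins. It illustrates the model itself, not the quantum dynamics D-Wave’s annealer tracks (which is why Schrödinger’s equation enters the picture below); the grid size, temperature, and step count are arbitrary choices.

```python
import numpy as np

# Classical Metropolis sketch of a ferromagnetic 2D Ising model (J = 1).
rng = np.random.default_rng(0)
L, steps, temperature = 16, 20_000, 1.5
spins = rng.choice([-1, 1], size=(L, L))        # a deliberately disordered start

def neighbor_sum(s, i, j):
    """Sum of the four neighboring spins with periodic boundaries."""
    return (s[(i + 1) % L, j] + s[(i - 1) % L, j] +
            s[i, (j + 1) % L] + s[i, (j - 1) % L])

for _ in range(steps):
    i, j = rng.integers(L, size=2)
    dE = 2 * spins[i, j] * neighbor_sum(spins, i, j)   # energy cost of flipping this spin
    if dE <= 0 or rng.random() < np.exp(-dE / temperature):
        spins[i, j] *= -1                              # accept the flip

print("average magnetization:", spins.mean())
```

As the system relaxes, neighboring spins increasingly agree, and the occasional thermally accepted flips are the classical analog of the noise-driven evolution described above.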

Someone took great notes from a physics lecture on Ising models that explains their behavior and role in physics in more detail. But there are two things you need to know to understand this news. One is that Ising models don’t involve a quantum computer merely acting like an array of qubits—it’s a problem that people have actually tried to find solutions to. The second is that D-Wave’s hardware, which provides a well-connected collection of quantum devices that can flip between two values, is a great match for Ising models.

Back in 2023, D-Wave used its 5,000-qubit annealer to demonstrate that its output when performing Ising model evolution was best described using Schrödinger’s equation, a central way of describing the behavior of quantum systems. And, as quantum systems become increasingly complex, Schrödinger’s equation gets much, much harder to solve using classical hardware—the implication being that modeling the behavior of 5,000 of these qubits could quite possibly be beyond the capacity of classical algorithms.
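The brute-force version of that scaling argument is easy to state: a full state vector for n qubits holds 2^n complex amplitudes. Smarter methods like tensor networks do far better in many cases, which is exactly what the rest of the paper probes, but the naive count below (illustrative arithmetic, not a figure from the paper) shows why the problem gets out of hand.

```python
# Memory needed to store a full state vector of n qubits:
# 2**n complex amplitudes at 16 bytes each (complex128).
for n in (30, 50, 100):
    amplitudes = 2 ** n
    bytes_needed = amplitudes * 16
    print(f"{n:3d} qubits -> {bytes_needed / 1e9:.3g} GB")
```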

Still, having been burned before by improvements to classical algorithms, the D-Wave team was very cautious about voicing that implication. As they write in their latest paper, “It remains important to establish that within the parametric range studied, despite the limited correlation length and finite experimental precision, approximate classical methods cannot match the solution quality of the [D-Wave hardware] in a reasonable amount of time.”

So it’s important that they now have a new paper that indicates that classical methods in fact cannot do that in a reasonable amount of time.

Testing alternatives

The team, which is primarily based at D-Wave but includes researchers from a handful of high-level physics institutions from around the world, focused on three different methods of simulating quantum systems on classical hardware. They were put up against a smaller version of what will be D-Wave’s Advantage 2 system, designed to have a higher qubit connectivity and longer coherence times than its current Advantage. The work essentially involved finding where the classical simulators bogged down as either the simulation went on for too long, or the complexity of the Ising model’s geometry got too high (all while showing that D-Wave’s hardware could perform the same calculation).

Three different classical approaches were tested. Two of them involved a tensor network, one called MPS, for matrix product states, and the second called projected entangled-pair states (PEPS). They also tried a neural network, as a number of these have been trained successfully to predict the output of Schrödinger’s equation for different systems.

These approaches were first tested on a simple 8×8 grid of objects rolled up into a cylinder, which increases the connectivity by eliminating two of the edges. And, for this simple system that evolved over a short period, the classical methods and the quantum hardware produced answers that were effectively indistinguishable.

Two of the classical algorithms, however, were relatively easy to eliminate from serious consideration. The neural network provided good results for short simulations but began to diverge rapidly once the system was allowed to evolve for longer times. And PEPS works by focusing on local entanglement and failed as entanglement was spread to ever-larger systems. That left MPS as the classical representative as more complex geometries were run for longer times.

By identifying where MPS started to fail, the researchers could estimate the amount of classical hardware that would be needed to allow the algorithm to keep pace with the Advantage 2 hardware on the most complex systems. And, well, it’s not going to be realistic any time soon. “On the largest problems, MPS would take millions of years on the Frontier supercomputer per input to match [quantum hardware] quality,” they conclude. “Memory requirements would exceed its 700PB storage, and electricity requirements would exceed annual global consumption.” By contrast, it took a few minutes on D-Wave’s hardware.

Again, in the paper, the researchers acknowledge that this may lead to another round of optimizations that bring classical algorithms back into competition. And apparently those efforts have already started, now that a draft of the paper has been placed on the arXiv. At a press conference happening as this report was being prepared, one of D-Wave’s scientists, Andrew King, noted that two preprints have already appeared on the arXiv that describe improvements to classical algorithms.

While these allow classical simulations to reproduce more of the results demonstrated in the new paper, they don’t involve simulating the most complicated geometries, and they require shorter times and fewer total qubits. Nature talked to one of the people behind these algorithm improvements, who was optimistic that they could eventually replicate all of D-Wave’s results using non-quantum algorithms. D-Wave, obviously, is skeptical. And King said that a new, larger Advantage 2 test chip with over 4,000 qubits available had recently been calibrated, and he had already tested even larger versions of these same Ising models on it—ones that would be considerably harder for classical methods to catch up to.

In any case, the company is acting like things are settled. During the press conference describing the new results, people frequently referred to D-Wave having achieved quantum supremacy, and its CEO, Alan Baratz, in responding to skepticism sparked by the two draft manuscripts, said, “Our work should be celebrated as a significant milestone.”

Science, 2025. DOI: 10.1126/science.ado6285  (About DOIs).


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



These hot oil droplets can bounce off any surface

The Hong Kong physicists were interested in hot droplets striking cold surfaces. Prior research showed there was less of a bouncing effect in such cases involving heated water droplets, with the droplets sticking to the surface instead thanks to various factors such as reduced droplet surface tension. The Hong Kong team discovered they could achieve enhanced bouncing by using hot droplets of less volatile liquids—namely, n-hexadecane, soybean oil, and silicon oil, which have lower saturation pressures compared to water.

Follow the bouncing droplet

The researchers tested these hot droplets (as well as burning and room-temperature droplets) on various solid, cold surfaces, including scratched glass, smooth glass, acrylic surfaces, surfaces with liquid-repellent coatings made from candle soot, and surfaces coated with nanoparticles of varying “wettability” (i.e., how readily liquids spread across and stick to the surface). They captured the droplet behavior with both high-speed and thermal cameras, augmented with computer modeling.

The room-temperature droplets stuck to all the surfaces as expected, but the hot and burning droplets bounced. The team found that the bottom of a hot droplet cools faster than the top as it approaches a room-temperature surface, which causes hotter liquid within the droplet to flow from the edges toward the bottom. The air that is dragged to the bottom with it forms a thin cushion there and prevents the droplet from making contact with the surface, bouncing off instead. They dubbed the behavior “self-lubricated bouncing.”

“It is now clear that droplet-bouncing strategies are not isolated to engineering the substrate and that the thermophysical properties of droplets themselves are critical,” Jonathan B. Boreyko of Virginia Tech, who was not involved in the research, wrote in an accompanying commentary.

Future applications include improving the combustion efficiency of fuels or developing better fire-retardant coatings. “If burning droplets can’t stick to surfaces, they won’t be able to ignite new materials and allow fires to propagate,” co-author Pingan Zhu said. “Our study could help protect flammable materials like textiles from burning droplets. Confining fires to a smaller area and slowing their spread could give firefighters more time to put them out.”

Newton, 2025. DOI: 10.1016/j.newton.2025.100014  (About DOIs).



Research roundup: 7 cool science stories from February


Dancing sea turtles, the discovery of an Egyptian pharaoh’s tomb, perfectly boiled eggs, and more.

X-ray image of the PHerc.172 scroll Credit: Vesuvius Challenge

It’s a regrettable reality that there is never time to cover all the interesting scientific stories we come across each month. In the past, we’ve featured year-end roundups of cool science stories we (almost) missed. This year, we’re experimenting with a monthly collection. February’s list includes dancing sea turtles, the secret to a perfectly boiled egg, the latest breakthrough in deciphering the Herculaneum scrolls, the discovery of an Egyptian pharaoh’s tomb, and more.

Dancing sea turtles

There is growing evidence that certain migratory animal species (turtles, birds, some species of fish) are able to exploit the Earth’s magnetic field for navigation, using it both as a compass to determine direction and as a kind of “map” to track their geographical position while migrating. A paper published in the journal Nature offers evidence of a possible mechanism for this unusual ability, at least in loggerhead sea turtles, who perform an energetic “dance” when they follow magnetic fields to a tasty snack.

Sea turtles make impressive 8,000-mile migrations across oceans and tend to return to the same feeding and nesting sites. The authors believe they achieve this through their ability to remember the magnetic signature of those areas and store them in a mental map. To test that hypothesis, the scientists placed juvenile sea turtles into two large tanks of water outfitted with large coils to create magnetic signatures at specific locations within the tanks. One tank featured such a location with food; the other had a similar location without food.

They found that the sea turtles in the first tank performed distinctive “dancing” moves when they arrived at the area associated with food: tilting their bodies, dog-paddling, spinning in place, or raising their head near or above the surface of the water. When they ran a second experiment using different radio frequencies, they found that the change interfered with the turtles’ internal compass, and they could not orient themselves while swimming. The authors concluded that this is compelling evidence that the sea turtles can distinguish between magnetic fields, possibly relying on complex chemical reactions, i.e., “magnetoreception.” The map sense, however, likely relies on a different mechanism.

Nature, 2025. DOI: 10.1038/s41586-024-08554-y  (About DOIs).

Long-lost tomb of Thutmose II


Archaeologists found a simple tomb near Luxor and identified it as the 3,500-year-old burial site of King Thutmose II. Credit: Egypt’s Ministry of Tourism and Antiquities

Thutmose II was the fourth pharaoh of the Tutankhamun (18th) dynasty. He reigned only about 13 years and married his half-sister Hatshepsut (who went on to become the sixth pharaoh in the dynasty). Archaeologists have now confirmed that a tomb built underneath a waterfall in the mountains in Luxor and discovered in 2022 is the final resting place of Thutmose II. It’s the last of the 18th dynasty royal tombs to be found, more than a century after Tutankhamun’s tomb was found in 1922.

When it was first found, archaeologists thought the tomb might be that of a king’s wife, given its close proximity to Hatshepsut’s tomb and those of the wives of Thutmose III. But they found fragments of alabaster vases inscribed with Thutmose II’s name, along with scraps of religious burial texts and plaster fragments on the partially intact ceiling with traces of blue paint and yellow stars—typically only found in kings’ tombs. Something crucial was missing, however: the actual mummy and grave goods of Thutmose II.

It’s long been assumed that the king’s mummy was discovered in the 19th century at another site called Deir el-Bahari. But archaeologist Piers Litherland, who headed the British team that discovered the tomb, thinks that identification was in error. An inscription stated that Hatshepsut had the tomb’s contents relocated due to flooding. Litherland believes the pharaoh’s actual mummy is buried in a second tomb. Confirmation (or not) of his hypothesis won’t come until after archaeologists finish excavating what he thinks is the site of that second tomb, which is currently buried under multiple layers of rock and plaster.

Hidden images in Pollock paintings

“Troubled Queen” reveals a “hidden” figure, possibly a soldier. Credit: D.A. Morrissette et al., CNS Spectrums 2025

Physicists have long been fascinated by the drip paintings of “splatter master” Jackson Pollock, pondering the presence of fractal patterns (or lack thereof), as well as the presence of curls and coils in his work and whether the artist deliberately exploited a well-known fluid dynamics effect to achieve them—or deliberately avoided them. Now psychiatrists are getting into the game, arguing in a paper published in CNS Spectrums that Pollock—known to incorporate images into his early pre-drip paintings—also used many of the same images repeatedly in his later abstract drip paintings.

People have long claimed to see images in those drip paintings, but the phenomenon is usually dismissed by art critics as a trick of human perception, much like the fractal edges of Rorschach ink blots can fool the eye and mind. The authors of this latest paper analyzed Pollock’s early painting “Troubled Queen” and found multiple images incorporated into the painting, which they believe establishes a basis for their argument that Pollock also incorporated such images into his later drip painting, albeit possibly subconsciously.

“Seeing an image once in a drip painting could be random,” said co-author Stephen M. Stahl of the University of California, San Diego. “Seeing the same image twice in different paintings could be a coincidence. Seeing it three or more times—as is the case for booze bottles, monkeys and gorillas, elephants, and many other subjects and objects in Pollock’s paintings—makes those images very unlikely to be randomly provoked perceptions without any basis in reality.”

CNS Spectrums, 2025. DOI: 10.1017/S1092852924001470

Solving a fluid dynamics mystery

Soap opera in the maze: Geometry matters in Marangoni flows.

Every fall, the American Physical Society exhibits a Gallery of Fluid Motion, which recognizes the innate artistry of images and videos derived from fluid dynamics research. Several years ago, physicists at the University of California, Santa Barbara (UCSB) submitted an entry featuring a pool of red dye, propelled by a few drops of soap acting as a surfactant, that seemed to “know” how to solve a maze whose corridors were filled with milk. This is unusual since one would expect the dye to diffuse more uniformly. The team has now solved that puzzle, according to a paper published in Physical Review Letters.

The key factor is surface tension, specifically a phenomenon known as the Marangoni effect, which also drives the “coffee ring effect” and the “tears of wine” phenomenon. If you spread a thin film of water on your kitchen counter and place a single drop of alcohol in the center, you’ll see the water flow outward, away from the alcohol. The difference in their alcohol concentrations creates a surface tension gradient, driving the flow.
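For reference, the textbook form of the Marangoni stress balance (a standard fluid dynamics relation, not a formula quoted from the new paper) equates the gradient of surface tension along the interface to the viscous shear stress it sets up in the liquid:

```latex
\mu \left.\frac{\partial u}{\partial z}\right|_{\text{surface}} = \frac{\partial \gamma}{\partial x}
```

Here γ is the surface tension, μ is the liquid’s viscosity, and u is the flow speed along the surface: liquid at the surface is dragged from regions of low surface tension (soapy) toward regions of high surface tension, which is what sets the dye in motion.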

In the case of the UCSB experiment, the soap reduces local surface tension around the red dye to set the dye in motion. There are also already surfactants in the milk that work in combination with the soapy surfactant to “solve” the maze. The milk surfactants create varying points of resistance as the dye makes its way through the maze. A dead end or a small space will have more resistance, redirecting the dye toward routes with less resistance—and ultimately to the maze’s exit. “That means the added surfactant instantly knows the layout of the maze,” said co-author Paolo Luzzatto-Fegiz.

Physical Review Letters, 2025. DOI: 10.1073/pnas.1802831115

How to cook a perfectly boiled egg

Credit: YouTube/Epicurious

There’s more than one way to boil an egg, whether one likes it hard-boiled, soft-boiled, or somewhere in between. The challenge is that eggs have what physicists call a “two-phase” structure: The yolk cooks at 65° Celsius, while the white (albumen) cooks at 85° Celsius. This often results in overcooked yolks or undercooked whites when conventional methods are used. Physicists at the Italian National Research Council think they’ve cracked the case: The perfectly cooked egg is best achieved via a painstaking process called “periodic cooking,” according to a paper in the journal Communications Engineering.

They started with a few fluid dynamics simulations to develop a method and then tested that method in the laboratory. The process involves transferring a cooking egg every two minutes—for 32 minutes—between a pot of boiling water (100° Celsius) and a bowl of cold water (30° Celsius). They compared their periodically cooked eggs with traditionally prepared hard-boiled and soft-boiled eggs, as well as eggs prepared using sous vide. The periodically cooked eggs ended up with soft yolks (typical of sous vide eggs) and a solidified egg white with a consistency between sous vide and soft-boiled eggs. Chemical analysis showed the periodically cooked eggs also contained more healthy polyphenols. “Periodic cooking clearly stood out as the most advantageous cooking method in terms of egg nutritional content,” the authors concluded.
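The snippet below is a crude one-dimensional heat-conduction sketch of the periodic-cooking idea, not the authors’ simulations: the egg is treated as a 2 cm slab with an assumed egg-like thermal diffusivity, and its surface temperature alternates between 100° Celsius and 30° Celsius every two minutes for 32 minutes. Even in this toy version, the outer layers repeatedly swing through white-setting temperatures while the interior warms far more gently toward the yolk-setting range.

```python
import numpy as np

# Toy 1D heat-conduction model of "periodic cooking" (illustrative only).
# Assumptions: slab geometry, 2 cm from surface to center, and an
# egg-like thermal diffusivity of 1.4e-7 m^2/s.
alpha = 1.4e-7                 # m^2/s, assumed thermal diffusivity
depth = 0.02                   # m, surface-to-center distance
n = 50
dx = depth / n
dt = 0.2                       # s; satisfies the stability limit dt < dx**2 / (2 * alpha)
T = np.full(n + 1, 20.0)       # degrees C, starting at room temperature

steps = round(32 * 60 / dt)
for step in range(steps):
    t = step * dt
    T[0] = 100.0 if int(t // 120) % 2 == 0 else 30.0     # alternating bath at the surface
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[-1] = T[-2]                                        # symmetry: no heat flux through the center

print(f"outer layer: {T[1]:.0f} C, center: {T[-1]:.0f} C after 32 minutes")
```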

Communications Engineering, 2025. DOI: 10.1038/s44172-024-00334-w

More progress on deciphering Herculaneum scrolls


X-ray scans and AI reveal the inside of an ancient scroll. Credit: Vesuvius Challenge

The Vesuvius Challenge is an ongoing project that employs “digital unwrapping” and crowd-sourced machine learning to decipher the first letters from previously unreadable ancient scrolls found in an ancient Roman villa at Herculaneum. The 660-plus scrolls stayed buried under volcanic mud until they were excavated in the 1700s from a single room that archaeologists believe held the personal working library of an Epicurean philosopher named Philodemus. The badly singed, rolled-up scrolls were so fragile that it was long believed they would never be readable, as even touching them could cause them to crumble.

In 2023, the Vesuvius Challenge made its first award for deciphering the first letters, and last year, the project awarded the grand prize of $700,000 for producing the first readable text. The latest breakthrough is the successful generation of the first X-ray image of the inside of a scroll (PHerc. 172) housed in Oxford University’s Bodleian Libraries—a collaboration with the Vesuvius Challenge. The scroll’s ink has a unique chemical composition, possibly containing lead, which means it shows up more clearly in X-ray scans than other Herculaneum scrolls that have been scanned.

The machine learning aspect of this latest breakthrough focused primarily on detecting the presence of ink, not deciphering the characters or text. Oxford scholars are currently working to interpret the text. The first word to be translated was the Greek word for “disgust,” which appears twice in nearby columns of text. Meanwhile, the Vesuvius Challenge collaborators continue to work to further refine the image to make the characters even more legible and hope to digitally “unroll” the scroll all the way to the end, where the text likely indicates the title of the work.

What ancient Egyptian mummies smell like


Mummified bodies in the exhibition area of the Egyptian Museum in Cairo. Credit: Emma Paolin

Much of what we know about ancient Egyptian embalming methods for mummification comes from ancient texts, but there are very few details about the specific spices, oils, resins, and other ingredients used. Science can help tease out the secret ingredients. For instance, a 2018 study analyzed organic residues from a mummy’s wrappings with gas chromatography-mass spectrometry and found that the wrappings were saturated with a mixture of plant oil, an aromatic plant extract, a gum or sugar, and heated conifer resin. Researchers at University College London have now identified the distinctive smells associated with Egyptian mummies—predominantly “woody,” “spicy,” and “sweet,” according to a paper published in the Journal of the American Chemical Society.

The team coupled gas chromatography with mass spectrometry to measure chemical molecules emitted by nine mummified bodies on display at the Egyptian Museum in Cairo and then asked a panel of trained human “sniffers” to describe the samples’ smells, rating them by quality, intensity, and pleasantness. This enabled them to identify whether a given odor molecule came from the mummy itself, conservation products, pesticides, or the body’s natural deterioration. The work offers additional clues into the materials used in mummification, as well as making it possible for the museum to create interactive “smellscapes” in future displays so visitors can experience the scents as well as the sights of ancient Egyptian mummies.

Journal of the American Chemical Society, 2025. DOI: 10.1021/jacs.4c15769


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



Amazon uses quantum “cat states” with error correction


The company shows off a mix of error-resistant hardware and error correction.

Following up on Microsoft’s announcement of a qubit based on completely new physics, Amazon is publishing a paper describing a very different take on quantum computing hardware. The system mixes two different types of qubit hardware to improve the stability of the quantum information they hold. The idea is that one type of qubit is resistant to errors, while the second can be used for implementing an error-correction code that catches the problems that do happen.

While there have been more effective demonstrations of error correction in the past, a number of companies are betting that Amazon’s general approach is the best route to getting logical qubits that are capable of complex algorithms. So, in that sense, it’s an important proof of principle.

Herding cats

The basic idea behind Amazon’s approach is to use one type of qubit to hold data and a second to enable error correction. The data qubit is extremely resistant to one type of error, but prone to a second. Those errors are where the second type of qubit comes in; it’s used to run an error-correction code that’s effective at picking up the problems the data qubits are prone to. Combined, the two are hoped to allow error correction to be handled by far fewer hardware qubits.

In a standard computer, there’s really only one type of error to worry about: a bit that no longer holds the value it was set to. This is called a bit flip, since the value goes from either zero to one, or one to zero. As with most things quantum computing, things are considerably more complicated with qubits. Since they don’t hold binary values, but rather probabilities, you can’t just flip the value of the qubit. Instead, bit flips in quantum land involve inverting the probabilities—going from 60:40 to 40:60 or similar.
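A two-line numerical sketch of that statement (illustrative, not taken from the paper): the quantum bit-flip operator is the Pauli-X gate, and applying it to a qubit whose measurement odds are 60:40 swaps them to 40:60.

```python
import numpy as np

# A qubit with a 60:40 chance of measuring 0 vs. 1; amplitudes are the
# square roots of those probabilities.
state = np.array([np.sqrt(0.6), np.sqrt(0.4)])

X = np.array([[0.0, 1.0],
              [1.0, 0.0]])                 # Pauli-X, the quantum "bit flip"
flipped = X @ state

print("before:", np.round(state**2, 2))    # [0.6 0.4]
print("after: ", np.round(flipped**2, 2))  # [0.4 0.6]
```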

But bit flips aren’t the only problems that can occur. Qubits can also suffer from what are called phase flip errors. These have no equivalent in classical computers, but they can also keep quantum computers from operating as expected.

In the past, Amazon demonstrated qubits that made it trivially easy to detect when a bit flip error occurred. For the new work, they moved on to something different: a qubit that greatly reduces the probability of bit flip errors.

They do this by using what are called “cat qubits,” after the famed Schrödinger’s cat, which existed in two states at once. While most qubits are based on a single quantum object being placed in this sort of superposition of states, a cat qubit has a collection of objects in a single superposition. (Put differently, the superposition state is distributed across the collection of objects.) In the case of the cat qubits demonstrated so far by companies like Alice & Bob, the objects are photons, which are all held in a single resonator, and Amazon is using similar tech.

Cat qubits have a distinctive feature compared to other options: bit flips are improbable, and get even less probable as you pump more photons into the resonator. But this has a drawback: more photons mean that phase flips become more probable.

Flipping cats

Those phase flips are why a second set of qubits, called transmons, were brought in. (Transmons are a commonly used type of qubit based on a loop of superconducting wire linked to a microwave resonator and used by companies like IBM and Google.) These were used to create a chain of qubits, alternating between cat qubits and transmons. This allowed the team to create a logical, error-corrected qubit using a simple error-correction code called a repetition code.


The layout of Amazon’s hardware. Data-holding cat qubits (blue) alternate with transmons (orange), which can be measured to detect errors. Credit: Putterman et al.

Here, each of the cat qubits starts off in the same state and is entangled with its neighboring transmons. This allowed the transmons to track what was going on in the cat qubits by performing what are called weak measurements. These don’t destroy the quantum state like a full measurement would but can allow the detection of changes in the neighboring cat qubits and provide the information needed to fix any errors.

So, the combination of the two means that almost all the errors that occur are phase flips, and the phase flips are detected and fixed.

In more typical error-correction schemes, you need enough qubits around to do measurements to identify both the location of an error and the nature of the error (phase or bit flip). Here, Amazon is assuming all errors are phase flips, and its team can identify the location of the flip based on which of the transmons detects an error, as shown by the red flags in the diagram above. It allows for a logical qubit that uses far fewer hardware qubits and measurements to get a given level of error correction.
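A classical toy version of that scheme (a sketch of the idea, not Amazon’s decoder) makes the bookkeeping concrete: three data qubits are prepared identically, two checks report whether neighboring pairs still agree, and, under the assumption that every error is a phase flip, the pattern of tripped checks pinpoints the qubit to correct.

```python
import numpy as np

# Distance-3 repetition code for phase flips, simulated classically.
# Assumes the only errors are independent phase flips on the data qubits.
rng = np.random.default_rng(1)
p = 0.05                                   # per-qubit phase-flip probability (assumed)
trials, logical_failures = 100_000, 0

for _ in range(trials):
    flips = rng.random(3) < p              # which data qubits suffered a phase flip
    syndrome = (int(flips[0] ^ flips[1]),  # check between qubits 0 and 1
                int(flips[1] ^ flips[2]))  # check between qubits 1 and 2
    correction = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome)
    corrected = flips.copy()
    if correction is not None:
        corrected[correction] ^= True      # undo the flip the syndrome points to
    if corrected.any():                    # any residual flip is a logical error
        logical_failures += 1

print("physical flip rate:", p)
print("logical error rate:", logical_failures / trials)
```

When only one flip occurs per round, the decoder always fixes it; when two land at once, it corrects the wrong qubit and the logical qubit fails, which mirrors the limitations discussed below.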

The challenge of any error-correction setup is that each hardware qubit involved is error-prone. Adding too many into the error-correction system will mean that multiple errors are likely to occur simultaneously in a way that causes error correction to become impossible. Once the error rate of the hardware qubits gets low enough, however, adding additional qubits will bring the error rate down.

So, the key measurement done here is comparing a chain that has three cat qubits and two transmons to one that has five cat qubits and four transmons. These measurements showed that the larger chain had a lower error rate than the smaller one. This shows that the hardware is now at the point where error correction provides a benefit.

The characterization of the system indicated a couple of major limits, though. Cat qubits make bit flips extremely unlikely, but not impossible. By focusing error correction only on phase flips, any bit flips that do occur inescapably trigger the failure of the entire logical, error-corrected qubit. “Achieving long logical bit-flip times is challenging because any single cat qubit bit flip event in any part of the repetition code directly causes a logical bit flip error,” the authors note. The other issue is that the transmons used for error correction still suffer from both bit and phase flips, which can also mess up the entire error-corrected qubit.

Where does this leave us?

There are a number of companies like Amazon that are betting that using a somehow less error-prone hardware qubit will allow them to get effective error correction using fewer total hardware qubits. If they’re correct, they’ll be able to build error-corrected quantum computers using far fewer qubits, and so potentially perform useful computation sooner. For them, this paper is an important validation of the idea. You can do a sort of mixed-mode error correction, with a robust hardware qubit paired with a compact error-correction code.

But beyond that, the messages are pretty mixed. The hardware still had to rely on less robust hardware qubits (the transmons) to do error correction, and the very low error rate was still not low enough to avoid having occasional bit flips. And, ultimately, the error rate improvements gained by increasing the size of the logical qubit aren’t on a trajectory that would get you a useful level of error correction without needing an unrealistically large number of hardware qubits.

In short, the underlying hardware isn’t currently good enough to enable any sort of complex calculation, and it would need radical improvements before it can be. And there’s not an obvious alternate route to effective error correction. The potential of this approach is still there, but it’s not obvious how we’re going to build hardware that lives up to that potential.

As for Amazon, the picture is even less clear, given that this is the second qubit technology that it has talked about publicly. It’s unclear whether the company is going to go all-in on this approach, or is still looking for a technology that it’s willing to commit to.

Nature, 2025. DOI: 10.1038/s41586-025-08642-7  (About DOIs).


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.
