Physics


IBM is now detailing what its first quantum compute system will look like


Company is moving past focus on qubits, shifting to functional compute units.

A rendering of what IBM expects will be needed to house a Starling quantum computer. Credit: IBM

On Tuesday, IBM released its plans for building a system that should push quantum computing into entirely new territory: a system that can perform useful calculations while catching and fixing errors, and that will be utterly impossible to model using classical computing methods. The hardware, which will be called Starling, is expected to be able to perform 100 million operations without error on a collection of 200 logical qubits. And the company expects to have it available for use in 2029.

Perhaps just as significant, IBM is also committing to a detailed description of the intermediate steps to Starling. These include a number of processors that will be configured to host a collection of error-corrected qubits, essentially forming a functional compute unit. This marks a major transition for the company, as it involves moving away from talking about collections of individual hardware qubits and focusing instead on units of functional computational hardware. If all goes well, it should be possible to build Starling by chaining a sufficient number of these compute units together.

“We’re updating [our roadmap] now with a series of deliverables that are very precise,” IBM VP Jay Gambetta told Ars, “because we feel that we’ve now answered basically all the science questions associated with error correction and it’s becoming more of a path towards an engineering problem.”

New architectures

Error correction on quantum hardware involves entangling a group of qubits in a way that distributes one or more quantum bit values among them and includes additional qubits that can be used to check the state of the system. It can be helpful to think of these as data and measurement qubits. Performing weak quantum measurements on the measurement qubits produces what’s called “syndrome data,” which can be interpreted to determine whether anything about the data qubits has changed (indicating an error) and how to correct it.

There are lots of potential ways to arrange different combinations of data and measurement qubits for this to work, each referred to as a code. But, as a general rule, the more hardware qubits committed to the code, the more robust it will be to errors, and the more logical qubits that can be distributed among its hardware qubits.
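The data/measurement split described above has a classical analogue that makes the idea concrete. Below is a toy sketch using a 3-bit repetition code, where two parity checks play the role of the measurement qubits; this is purely illustrative and is not the code IBM uses.

```python
# Toy classical analogue of syndrome-based error correction: a 3-bit
# repetition code. Two parity checks act like "measurement qubits,"
# producing syndrome data that locates a single bit flip.

def syndrome(data):
    """Parity checks on neighboring data bits (the 'syndrome data')."""
    return (data[0] ^ data[1], data[1] ^ data[2])

def correct(data):
    """Interpret the syndrome to locate and fix a single bit flip."""
    s = syndrome(data)
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(s)
    if flip is not None:
        data = list(data)
        data[flip] ^= 1
    return tuple(data)

# One logical bit (0) encoded as (0, 0, 0); flip the middle bit.
assert correct((0, 1, 0)) == (0, 0, 0)
assert syndrome((0, 0, 0)) == (0, 0)   # no error -> trivial syndrome
```

Note that the error is diagnosed entirely from the parity checks, without reading the data bits directly — the same trick that lets quantum codes detect errors without collapsing the stored quantum state.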

Some quantum hardware, like that based on trapped ions or neutral atoms, is relatively flexible when it comes to hosting error-correction codes. The hardware qubits can be moved around so that any two can be entangled, so it’s possible to adopt a huge range of configurations, albeit at the cost of the time spent moving atoms around. IBM’s technology is quite different. It relies on qubits made of superconducting electronics laid out on a chip, with entanglement mediated by wiring that runs between qubits. The layout of this wiring is set during the chip’s manufacture, and so the design of the chip commits it to a limited number of potential error-correction codes.

Unfortunately, this wiring can also enable cross-talk between neighboring qubits, causing them to lose their state. To avoid this, existing IBM processors have their qubits wired in what they term a “heavy hex” configuration, named for its hexagonal arrangements of connections among its qubits. This has worked well to keep the error rate of its hardware down, but it also poses a challenge, since IBM has decided to go with an error-correction code that’s incompatible with the heavy hex geometry.

A couple of years back, an IBM team described a compact error correction code called a low-density parity-check (LDPC) code. This requires a square grid of nearest-neighbor connections among its qubits, as well as wiring to connect qubits that are relatively distant on the chip. To get its chips and error-correction scheme in sync, IBM has made two key advances. The first is in its chip packaging, which now uses several layers of wiring sitting above the hardware qubits to enable all of the connections needed for the LDPC code.

We’ll see that first in a processor called Loon that’s on the company’s developmental roadmap. “We’ve already demonstrated these three things: high connectivity, long-range couplers, and couplers that break the plane [of the chip] and connect to other qubits,” Gambetta said. “We have to combine them all as a single demonstration showing that all these parts of packaging can be done, and that’s what I want to achieve with Loon.” Loon will be made public later this year.


On the left, the simple layout of the connections in a current-generation Heron processor. At right, the complicated web of connections that will be present in Loon. Credit: IBM

The second advance IBM has made is to eliminate the crosstalk that the heavy hex geometry was used to minimize, so heavy hex will be going away. “We are releasing this year a bird for near term experiments that is a square array that has almost zero crosstalk,” Gambetta said, “and that is Nighthawk.” The more densely connected qubits cut the overhead needed to perform calculations by a factor of 15, Gambetta told Ars.

Nighthawk is a 2025 release on a parallel roadmap that you can think of as user-facing. Iterations on its basic design will be released annually through 2028, each enabling more operations without error (going from 5,000 gate operations this year to 15,000 in 2028). Each individual Nighthawk processor will host 120 hardware qubits, but 2026 will see three of them chained together and operating as a unit, providing 360 hardware qubits. That will be followed in 2027 by a machine with nine linked Nighthawk processors, boosting the hardware qubit number over 1,000.

Riding the bicycle

The real future of IBM’s hardware, however, will be happening over on the developmental line of processors, where talk about hardware qubit counts will become increasingly irrelevant. In a technical document released today, IBM is describing the specific LDPC code it will be using, termed a bivariate bicycle code due to some cylindrical symmetries in its details that vaguely resemble bicycle wheels. The details of the connections matter less than the overall picture of what it takes to use this error code in practice.

IBM describes two implementations of this form of LDPC code. In the first, 144 hardware qubits are arranged so that they play host to 12 logical qubits and all of the measurement qubits needed to perform error checks. The standard measure of a code’s ability to catch and correct errors is called its distance, and in this case, the distance is 12. As an alternative, they also describe a code that uses 288 hardware qubits to host the same 12 logical qubits but boost the distance to 18, meaning it’s more resistant to errors. IBM will make one of these collections of logical qubits available as a Kookaburra processor in 2026, which will use them to enable stable quantum memory.
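Some quick bookkeeping puts those two codes in perspective. The snippet below computes the hardware-per-logical-qubit overhead of each, alongside a rough, commonly quoted estimate of roughly 2d² physical qubits per logical qubit for a conventional surface code; the surface-code figure is an outside assumption, not a number from IBM's paper.

```python
# Overhead of the two bivariate bicycle codes described in the article,
# versus a rough surface-code estimate at comparable distance. The
# surface-code figure (~2*d*d physical qubits per logical qubit) is a
# commonly quoted approximation, not a number from IBM's paper.

def overhead(physical, logical):
    return physical / logical

bb_12 = overhead(144, 12)   # 144 hardware qubits, 12 logical, distance 12
bb_18 = overhead(288, 12)   # 288 hardware qubits, 12 logical, distance 18
surface_d12 = 2 * 12 * 12   # rough cost of ONE distance-12 surface-code qubit

print(bb_12)         # 12.0 hardware qubits per logical qubit
print(bb_18)         # 24.0
print(surface_d12)   # 288 per logical qubit
```

In other words, the 288-qubit bicycle code hosts a dozen distance-18 logical qubits in roughly the hardware a surface code would spend on a single distance-12 one — the payoff of the denser, long-range connectivity.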

The follow-on will bundle these with a handful of additional qubits that can produce quantum states that are needed for some operations. Those, plus hardware needed for the quantum memory, form a single, functional computation unit, built on a single chip, that is capable of performing all the operations needed to implement any quantum algorithm.

That will appear with the Cockatoo chip, which will also enable multiple processing units to be linked on a single bus, allowing the logical qubit count to grow beyond 12. (The company says that one of the dozen logical qubits in each unit will be used to mediate entanglement with other units and so won’t be available for computation.) That will be followed by the first test versions of Starling, which will allow universal computations on a limited number of logical qubits spread across multiple chips.

Separately, IBM is releasing a document that describes a key component of the system that will run on classical computing hardware. Full error correction requires evaluating the syndrome data derived from the state of all the measurement qubits in order to determine the state of the logical qubits and whether any corrections need to be made. As the complexity of the logical qubits grows, the computational burden of evaluating the syndrome data grows with it. If this evaluation can’t be executed in real time, then it becomes impossible to perform error-corrected calculations.

To address this, IBM has developed a message-passing decoder that can perform parallel evaluations of the syndrome data. The system explores more of the solution space by randomizing the weight given to the memory of past solutions and by handing any seemingly non-optimal solutions on to new instances for additional evaluation. The key thing is that IBM estimates that this can be run in real time using FPGAs, ensuring that the system works.
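The flavor of that ensemble approach can be sketched in a few lines: several randomized greedy decoders run "in parallel" on the same syndrome, and the candidate correction that best explains the checks wins. The parity-check matrix below is a tiny repetition code chosen for illustration; it is not IBM's bicycle code, and real decoders use message passing rather than this greedy rule.

```python
import random

# Ensemble decoding sketch in the spirit of IBM's parallel decoder:
# several randomized greedy bit-flip decoders attack the same syndrome,
# and the best candidate (fewest unexplained checks) is kept.
# H is a 5-bit repetition code, purely for illustration.

H = [[1, 1, 0, 0, 0],
     [0, 1, 1, 0, 0],
     [0, 0, 1, 1, 0],
     [0, 0, 0, 1, 1]]

def syndrome(err):
    return [sum(h * e for h, e in zip(row, err)) % 2 for row in H]

def greedy_decode(synd, rng, iters=20):
    err = [0] * len(H[0])
    for _ in range(iters):
        s = [a ^ b for a, b in zip(syndrome(err), synd)]
        if not any(s):
            return err
        # flip the bit touching the most unsatisfied checks,
        # breaking ties randomly (the "randomized" ingredient)
        scores = [sum(H[i][j] for i in range(len(H)) if s[i])
                  for j in range(len(err))]
        best = max(scores)
        j = rng.choice([k for k, v in enumerate(scores) if v == best])
        err[j] ^= 1
    return err

def ensemble_decode(synd, n_instances=8, seed=0):
    rng = random.Random(seed)
    cands = [greedy_decode(synd, rng) for _ in range(n_instances)]
    # keep the candidate that explains the syndrome with the fewest
    # leftover unsatisfied checks, preferring lighter-weight errors
    return min(cands, key=lambda e: (sum(a ^ b for a, b in
               zip(syndrome(e), synd)), sum(e)))

true_error = [0, 0, 1, 0, 0]
guess = ensemble_decode(syndrome(true_error))
assert syndrome(guess) == syndrome(true_error)
```

The real-time constraint mentioned above is what makes this parallel structure matter: each instance is simple enough to map onto FPGA logic, and the instances don't need to communicate until a winner is chosen.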

A quantum architecture

There are a lot more details beyond those, as well. Gambetta described the linkage between each computational unit—IBM is calling it a Universal Bridge—which requires one microwave cable for each code distance of the logical qubits being linked. (In other words, a distance 12 code would need 12 microwave-carrying cables to connect each chip.) He also said that IBM is developing control hardware that can operate inside the refrigeration hardware, based on what they’re calling “cold CMOS,” which is capable of functioning at 4 Kelvin.

The company is also releasing renderings of what it expects Starling to look like: a series of dilution refrigerators, all connected by a single pipe that contains the Universal Bridge. “It’s an architecture now,” Gambetta said. “I have never put details in the roadmap that I didn’t feel we could hit, and now we’re putting a lot more details.”

The striking thing to me about this is that it marks a shift away from a focus on individual qubits, their connectivity, and their error rates. The hardware error rates are now good enough (4 × 10⁻⁴) for this to work, although Gambetta felt that a few more improvements should be expected. And connectivity will now be directed exclusively toward creating a functional computational unit.

That said, there’s still a lot of space beyond Starling on IBM’s roadmap. The 200 logical qubits it promises will be enough to handle some problems, but not enough to perform the complex algorithms needed to do things like break encryption. That will need to wait for something closer to Blue Jay, a 2033 system that IBM expects will have 2,000 logical qubits. And, as of right now, it’s the only thing listed beyond Starling.


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



Startup puts a logical qubit in a single piece of hardware

This time around, the company is showing that it can get an actual logical qubit into a variant of the same hardware. In the earlier version of its equipment, the resonator cavity had a single post and supported a single frequency. In the newer iteration, there were two posts and two frequencies. Each of those frequencies creates its own quantum resonator in the same cavity, with its own set of modes. “It’s this ensemble of photons inside this cavity that creates the logical qubit,” Lemyre told Ars.

The additional quantum information that can now be stored in the system enables it to identify more complex errors than the loss of a photon.

Catching, but not fixing errors

The company did two experiments with this new hardware. First, it ran multiple rounds of error detection on data stored in the logical qubit, essentially testing its ability to act like a quantum memory and retain the information stored there. The system corrected for photon loss, but left other errors uncorrected. These occurred at a rate of a bit over two percent each round, so by the time the system reached the 25th measurement, many instances had already encountered an error.

The second time through, the company repeated the process, discarding any instances in which an error occurred. In almost every instance, that meant the results were discarded long before they got through two dozen rounds of measurement. But at these later stages, none of the remaining instances were in an erroneous state. That indicates that actually correcting the detected errors—something the team didn’t attempt—would have left the system error-free.

“When we do this, we don’t have any errors left,” Lemyre said. “And this builds confidence into this approach, meaning that if we build the next generation of codes that is now able to correct these errors that were detected in this two-mode approach, we should have that flat line of no errors occurring over a long period of time.”



Research roundup: 7 stories we almost missed


Ping-pong bots, drumming chimps, picking styles of two jazz greats, and an ancient underground city’s soundscape

Time lapse photos show a new ping-pong-playing robot performing a top spin. Credit: David Nguyen, Kendrick Cancio and Sangbae Kim

It’s a regrettable reality that there is never time to cover all the interesting scientific stories we come across each month. In the past, we’ve featured year-end roundups of cool science stories we (almost) missed. This year, we’re experimenting with a monthly collection. May’s list includes a nifty experiment to make a predicted effect of special relativity visible; a ping-pong playing robot that can return hits with 88 percent accuracy; and the discovery of the rare genetic mutation that makes orange cats orange, among other highlights.

Special relativity made visible

The Terrell-Penrose-Effect: Fast objects appear rotated

Credit: TU Wien

Perhaps the best-known features of Albert Einstein’s special theory of relativity are time dilation and length contraction. In 1959, two physicists predicted another feature of relativistic motion: an object moving near the speed of light should also appear to be rotated. It hadn’t been possible to demonstrate this experimentally, however—until now. Physicists at the Vienna University of Technology figured out how to reproduce this rotational effect in the lab using laser pulses and precision cameras, according to a paper published in the journal Communications Physics.

They found their inspiration in art, specifically an earlier collaboration with an artist named Enar de Dios Rodriguez, who collaborated with VUT and the University of Vienna on a project involving ultra-fast photography and slow light. For this latest research, they used objects shaped like a cube and a sphere and moved them around the lab while zapping them with ultrashort laser pulses, recording the flashes with a high-speed camera.

Getting the timing just right effectively simulates a light speed of just 2 meters per second. After photographing the objects many times using this method, the team combined the still images into a single image. The results: the cube looked twisted, and the sphere’s North Pole was in a different location—a demonstration of the rotational effect predicted back in 1959.
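The size of the effect is easy to estimate. For an object passing perpendicular to the line of sight, the Terrell rotation at closest approach is often quoted as sin θ = v/c; with the experiment's effective light speed of 2 m/s, even a slowly moving object looks noticeably rotated. The calculation below uses that standard formula, not numbers from the paper itself.

```python
import math

# Apparent (Terrell) rotation for an object moving perpendicular to the
# line of sight, using the commonly quoted sin(theta) = v/c at closest
# approach. c_eff is the experiment's effective light speed.

def terrell_angle_deg(v, c):
    """Apparent rotation angle, in degrees, for speed v and light speed c."""
    return math.degrees(math.asin(v / c))

c_eff = 2.0   # effective light speed in the lab setup, per the article
print(round(terrell_angle_deg(1.0, c_eff), 1))   # object at 1 m/s -> 30.0
```

So an object crossing the lab at walking pace, under the slowed effective light speed, should appear rotated by tens of degrees — large enough to see directly in the composite images.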

DOI: Communications Physics, 2025. 10.1038/s42005-025-02003-6  (About DOIs).

Drumming chimpanzees

A chimpanzee feeling the rhythm. Credit: Current Biology/Eleuteri et al., 2025.

Chimpanzees are known to “drum” on the roots of trees as a means of communication, often combining that action with what are known as “pant-hoot” vocalizations (see above video). Scientists have found that the chimps’ drumming exhibits key elements of musical rhythm much like humans, according to  a paper published in the journal Current Biology—specifically non-random timing and isochrony. And chimps from different geographical regions have different drumming rhythms.

Back in 2022, the same team observed that individual chimps had unique styles of “buttress drumming,” which served as a kind of communication, letting others in the same group know their identity, location, and activity. This time around they wanted to know if this was also true of chimps living in different groups and whether their drumming was rhythmic in nature. So they collected video footage of the drumming behavior among 11 chimpanzee communities across six populations in East Africa (Uganda) and West Africa (Ivory Coast), amounting to 371 drumming bouts.

Their analysis of the drum patterns confirmed their hypothesis. The western chimps drummed in regularly spaced hits, used faster tempos, and started drumming earlier during their pant-hoot vocalizations. Eastern chimps would alternate between shorter and longer spaced hits. Since this kind of rhythmic percussion is one of the earliest evolved forms of human musical expression and is ubiquitous across cultures, findings such as this could shed light on how our love of rhythm evolved.
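One simple way to quantify the isochrony the researchers looked for is the coefficient of variation (CV) of the intervals between hits: evenly spaced hits give a low CV, alternating short/long intervals a high one. The sketch below uses invented interval data purely to illustrate the measure; it is not the team's dataset or their exact statistic.

```python
# Coefficient of variation (CV) of inter-hit intervals as a rough
# isochrony measure. Low CV = regular, metronome-like drumming;
# high CV = alternating short/long hits. Data below are invented.

def cv(intervals):
    mean = sum(intervals) / len(intervals)
    var = sum((x - mean) ** 2 for x in intervals) / len(intervals)
    return (var ** 0.5) / mean

western = [0.20, 0.21, 0.19, 0.20, 0.20]   # evenly spaced (isochronous)
eastern = [0.10, 0.30, 0.10, 0.30, 0.10]   # alternating short/long

assert cv(western) < 0.05 < cv(eastern)
```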

DOI: Current Biology, 2025. 10.1016/j.cub.2025.04.019  (About DOIs).

Distinctive styles of two jazz greats

Wes Montgomery (left) and Joe Pass (right) playing guitars

Jazz lovers likely need no introduction to Joe Pass and Wes Montgomery, 20th century guitarists who influenced generations of jazz musicians with their innovative techniques. Montgomery, for instance, didn’t use a pick, preferring to pluck the strings with his thumb—a method he developed because he practiced at night after working all day as a machinist and didn’t want to wake his children or neighbors. Pass developed his own range of picking techniques, including fingerpicking, hybrid picking, and “flat picking.”

Chirag Gokani and Preston Wilson, both with Applied Research Laboratories and the University of Texas, Austin, greatly admired both Pass and Montgomery and decided to explore the acoustics underlying their distinctive playing, modeling the interactions of the thumb, fingers, and pick with a guitar string. They described their research during a meeting of the Acoustical Society of America in New Orleans, LA.

Among their findings: Montgomery achieved his warm tone by playing closer to the bridge and mostly plucking at the string. Pass’s rich tone arose from a combination of using a pick and playing closer to the guitar neck. There were also differences in how much a thumb, finger, and pick slip off the string:  use of the thumb (Montgomery) produced more of a “pluck” compared to the pick (Pass), which produced more of a “strike.” Gokani and Wilson think their model could be used to synthesize digital guitars with a more realistic sound, as well as helping guitarists better emulate Pass and Montgomery.
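The digital-guitar application they mention has a classic starting point: Karplus-Strong plucked-string synthesis, where a burst of noise circulating through a filtered delay line mimics a plucked string. The sketch below is that standard textbook algorithm, not Gokani and Wilson's model; the relevant idea is that the initial excitation shapes the timbre, much as thumb versus pick does on a real string.

```python
import random

# Minimal Karplus-Strong plucked-string synthesis: a noise burst (the
# "pluck") circulates through a delay line with an averaging filter, so
# energy decays and high frequencies die off fastest -- a string-like tone.

def karplus_strong(freq, duration, sample_rate=44100, seed=0):
    rng = random.Random(seed)
    n = int(sample_rate / freq)                   # delay-line length
    buf = [rng.uniform(-1, 1) for _ in range(n)]  # initial excitation
    out = []
    for _ in range(int(duration * sample_rate)):
        out.append(buf[0])
        # lowpass + slight loss: smooths the waveform each pass
        buf.append(0.996 * 0.5 * (buf[0] + buf[1]))
        buf.pop(0)
    return out

samples = karplus_strong(196.0, 0.5)   # half a second of a guitar's G string
print(len(samples))                    # 22050 samples
```

Shaping that initial buffer differently — smoother for a thumb "pluck," sharper for a pick "strike" — is one simple way a synthesizer could capture the kind of timbral difference the researchers measured.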

Sounds of an ancient underground city

A collection of images from the underground tunnels of Derinkuyu.

Credit: Sezin Nas

Turkey is home to the underground city Derinkuyu, originally carved out inside soft volcanic rock around the 8th century BCE. It was later expanded to include four main ventilation channels (and some 50,000 smaller shafts) serving seven levels, which could be closed off from the inside with a large rolling stone. The city could hold up to 20,000 people and it  was connected to another underground city, Kaymakli, via tunnels. Derinkuyu helped protect Arab Muslims during the Arab-Byzantine wars, served as a refuge from the Ottomans in the 14th century, and as a haven for Armenians escaping persecution in the early 20th century, among other functions.

The tunnels were rediscovered in the 1960s and about half of the city has been open to visitors since 2016. The site is naturally of great archaeological interest, but there has been little to no research on the acoustics of the site, particularly the ventilation channels—one of Derinkuyu’s most unique features, according to Sezin Nas, an architectural acoustician at Istanbul Galata University in Turkey.  She gave a talk at a meeting of the Acoustical Society of America in New Orleans, LA, about her work on the site’s acoustic environment.

Nas analyzed a church, a living area, and a kitchen, measuring sound sources and reverberation patterns, among other factors, to create a 3D virtual soundscape. The hope is that a better understanding of this aspect of Derinkuyu could improve the design of future underground urban spaces—and that her virtual soundscape could one day let visitors experience the sounds of the city for themselves.

MIT’s latest ping-pong robot

Robots playing ping-pong have been a thing since the 1980s, of particular interest to scientists because it requires the robot to combine the slow, precise ability to grasp and pick up objects with dynamic, adaptable locomotion. Such robots need high-speed machine vision, fast motors and actuators, precise control, and the ability to make accurate predictions in real time, not to mention being able to develop a game strategy. More recent designs use AI techniques to allow the robots to “learn” from prior data to improve their performance.

MIT researchers have built their own version of a ping-pong playing robot, incorporating a lightweight design and the ability to precisely return shots. They built on prior work developing the Humanoid, a small bipedal two-armed robot—specifically, modifying the Humanoid’s arm by adding an extra degree of freedom to the wrist so the robot could control a ping-pong paddle. They tested their robot by mounting it on a ping-pong table and lobbing 150 balls at it from the other side of the table, capturing the action with high-speed cameras.

The new bot can execute three different swing types (loop, drive, and chip), and during the trial runs it returned the ball with impressive accuracy across all three: 88.4 percent, 89.2 percent, and 87.5 percent, respectively. Subsequent tweaks to their system brought the robot’s strike speed up to 19 meters per second (about 42 MPH), within the 12 to 25 meters per second range of advanced human players. The addition of control algorithms gave the robot the ability to aim. The robot still has limited mobility and reach because it has to be fixed to the ping-pong table, but the MIT researchers plan to rig it to a gantry or wheeled platform in the future to address that shortcoming.

Why orange cats are orange


Cat lovers know orange cats are special for more than their unique coloring, but that’s the quality that has intrigued scientists for almost a century. Sure, lots of animals have orange, ginger, or yellow hues, like tigers, orangutans, and golden retrievers. But in domestic cats that color is specifically linked to sex. Almost all orange cats are male. Scientists have now identified the genetic mutation responsible and it appears to be unique to cats, according to a paper published in the journal Current Biology.

Prior work had narrowed down the region on the X chromosome most likely to contain the relevant mutation. The scientists knew that females usually have just one copy of the mutation and in that case have tortoiseshell (partially orange) coloring, although in rare cases, a female cat will be orange if both X chromosomes have the mutation. Over the last five to ten years, there has been an explosion in genome resources (including complete sequenced genomes) for cats which greatly aided the team’s research, along with taking additional DNA samples from cats at spay and neuter clinics.

From an initial pool of 51 candidate variants, the scientists narrowed it down to three genes, only one of which was likely to play any role in gene regulation: Arhgap36. It wasn’t known to play any role in pigment cells in humans, mice, or non-orange cats. But orange cats are special; their mutation (sex-linked orange) turns on Arhgap36 expression in pigment cells (and only pigment cells), thereby interfering with the molecular pathway that controls coat color in other orange-shaded mammals. The scientists suggest that this is an example of how genes can acquire new functions, thereby enabling species to better adapt and evolve.

DOI: Current Biology, 2025. 10.1016/j.cub.2025.03.075  (About DOIs).

Not a Roman “massacre” after all

Two of the skeletons excavated by Mortimer Wheeler in the 1930s, dating from the 1st century AD.

Credit: Martin Smith

In 1936, archaeologists excavating the Iron Age hill fort Maiden Castle in the UK unearthed dozens of human skeletons, all showing signs of lethal injuries to the head and upper body—likely inflicted with weaponry. At the time, this was interpreted as evidence of a pitched battle between the Britons of the local Durotriges tribe and invading Romans. The Romans slaughtered the native inhabitants, thereby bringing a sudden violent end to the Iron Age. At least that’s the popular narrative that has prevailed ever since in countless popular articles, books, and documentaries.

But a paper published in the Oxford Journal of Archaeology calls that narrative into question. Archaeologists at Bournemouth University have re-analyzed those burials, incorporating radiocarbon dating into their efforts. They concluded that those individuals didn’t die in a single brutal battle. Rather, it was Britons killing other Britons over multiple generations between the first century BCE and the first century CE—most likely in periodic localized outbursts of violence in the lead-up to the Roman conquest of Britain. It’s possible there are still many human remains waiting to be discovered at the site, which could shed further light on what happened at Maiden Castle.

DOI: Oxford Journal of Archaeology, 2025. 10.1111/ojoa.12324  (About DOIs).


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



The key to a successful egg drop experiment? Drop it on its side.

There was a key difference, however, between how vertically and horizontally squeezed eggs deformed in the compression experiments—namely, the former deformed less than the latter. The shell’s greater rigidity along its long axis was an advantage because the heavy load was distributed over the surface. (It’s why the one-handed egg-cracking technique targets the center of a horizontally held egg.)

But the authors found that this advantage when under static compression proved to be a disadvantage when dropping eggs from a height, with the horizontal position emerging as the optimal orientation.  It comes down to the difference between stiffness—how much force is needed to deform the egg—and toughness, i.e., how much energy the egg can absorb before it cracks.

Cohen et al.’s experiments showed that eggs are tougher when loaded horizontally along their equator, and stiffer when compressed vertically, suggesting that “an egg dropped on its equator can likely sustain greater drop heights without cracking,” they wrote. “Even if eggs could sustain a higher force when loaded in the vertical direction, it does not necessarily imply that they are less likely to break when dropped in that orientation. In contrast to static loading, to remain intact following a dynamic impact, a body must be able to absorb all of its kinetic energy by transferring it into reversible deformation.”
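The stiffness/toughness distinction has a simple quantitative form: the energy a body absorbs before cracking is the area under its force-displacement curve, roughly ½·k·x² for a linear spring of stiffness k deflecting by x. The numbers below are invented to illustrate the trade-off and are not measurements from the egg paper.

```python
# Toy numbers separating stiffness from toughness: energy absorbed before
# failure is the area under the force-displacement curve, approximately
# 0.5 * k * x_max**2 for a linear spring. Values are invented.

def energy_to_failure(stiffness, max_deflection):
    return 0.5 * stiffness * max_deflection ** 2

# stiff orientation: high stiffness, but cracks after a small deflection
vertical = energy_to_failure(stiffness=120.0, max_deflection=0.10)
# compliant orientation: half the stiffness, twice the deflection
horizontal = energy_to_failure(stiffness=60.0, max_deflection=0.20)

# the compliant case absorbs more energy despite a lower peak force
assert horizontal > vertical
print(vertical, horizontal)   # 0.6 1.2
```

Because the absorbed energy scales with the square of the deflection but only linearly with stiffness, a more compliant orientation can win decisively in a drop even while losing in a static squeeze — exactly the pattern Cohen et al. report.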

“Eggs need to be tough, not stiff, in order to survive a fall,” Cohen et al. concluded, pointing to our intuitive understanding that we should bend our knees rather than lock them into a straightened position when landing after a jump, for example. “Our results and analysis serve as a cautionary tale about how language can affect our understanding of a system, and improper framing of a problem can lead to misunderstanding and miseducation.”

DOI: Communications Physics, 2025. 10.1038/s42005-025-02087-0  (About DOIs).



The physics of frilly Swiss cheese “flowers”

For their experiments, the authors of the PRL paper selected samples of Monk’s head cheese wheels from the Fromagerie de Bellelay brand that had been aged between three and six months. They cut each cheese wheel in half and mounted each half on a Girolle, motorizing the base to ensure a constant speed of rotation and making sure the blade was in a fixed position. Their measurements of how the cheese deformed during scraping enabled them to build a model based on metal dynamics on a two-dimensional surface that had “cheese-like properties.”

The results showed that there was a variable friction between the core and the edge of the cheese wheel, because the core stayed fresher during the ripening process. Because the harder outer edge had lower friction with the blade, the edges of the cheese shavings were uneven in thickness—hence the resemblance to frilly rosettes.

This essentially amounts to a new shaping mechanism with the possibility of being able to one day program complex shaping from “a simple scraping process,” per the authors. “Our analysis provides the tools for a better control of flower chip morphogenesis through plasticity in the shaping of other delicacies, but also in metal cutting,” they concluded. Granted, “flower-shaped chips have never been reported in metal cutting. But even in such uniform materials, the fact that friction properties control the metric change is particularly interesting for material shaping.”

Physical Review Letters, 2025. DOI: 10.1103/PhysRevLett.134.208201  (About DOIs).



Beyond qubits: Meet the qutrit (and ququart)

The world of computers is dominated by binary. Silicon transistors are either conducting or they’re not, and so we’ve developed a whole world of math and logical operations around those binary capabilities. And, for the most part, quantum computing has been developing along similar lines, using qubits that, when measured, will be found in one of two states.

In some cases, the use of binary values is a feature of the object being used to hold the qubit. For example, a technology called dual-rail qubits takes its value from which of two linked resonators holds a photon. But there are many other quantum objects that have access to far more than two states—think of something like all the possible energy states an electron could occupy when orbiting an atom. We can use things like this as qubits by only relying on the lowest two energy levels. But there’s nothing stopping us from using more than two.

In Wednesday’s issue of Nature, researchers describe creating qudits—short for quantum digits, the generic term for quantum systems that hold information in any number of states. Using systems that can be in three or four possible states (qutrits and ququarts, respectively), they demonstrate the first error correction of higher-order quantum memory.

Making ququarts

More complex qudits haven’t been as popular with the people who are developing quantum computing hardware for a number of reasons; one of them is simply that some hardware only has access to two possible states. In other cases, the energy differences between additional states become small and difficult to distinguish. Finally, some operations can be challenging to execute when you’re working with qudits that can hold multiple values—a completely different programming model than qubit operations might be required.

Still, there’s a strong case to be made that moving beyond qubits could be valuable: it lets us do a lot more with less hardware. Right now, all the major quantum computing efforts are hardware-constrained—we can’t build enough qubits and link them together for error correction to let us do useful calculations. But if we could fit more information in less hardware, it should, in theory, let us get to useful calculations sooner.
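The hardware-savings argument comes down to simple arithmetic: n qudits of dimension d span the same state space as n·log₂(d) qubits. A quick illustration (the specific counts here are assumed for the example, not taken from the article):

```python
import math

def qubits_equivalent(n_qudits, d):
    """Number of qubits whose state space matches n qudits of dimension d."""
    return n_qudits * math.log2(d)

# 10 ququarts (d=4) span 4**10 ≈ 1.05 million states, the same
# state space as 20 qubits.
assert qubits_equivalent(10, 4) == 20.0

# Qutrits sit in between: 10 qutrits cover ~15.85 qubits' worth of space.
print(round(qubits_equivalent(10, 3), 2))  # 15.85
```

Doubling the information per physical object, as ququarts do, halves the number of devices that must be fabricated, wired, and error-corrected.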


The physics of bowling strike after strike

More than 45 million people in the US are fans of bowling, with national competitions awarding millions of dollars. Bowlers usually rely on instinct and experience, earned through lots and lots of practice, to boost their strike percentage. A team of physicists has come up with a mathematical model to better predict ball trajectories, outlined in a new paper published in the journal AIP Advances. The resulting equations take into account such factors as the composition and resulting pattern of the oil used on bowling lanes, as well as the inevitable asymmetries of bowling balls and player variability.

The authors already had a strong interest in bowling. Three are regular bowlers and quite skilled at the sport; a fourth, Curtis Hooper of Loughborough University in the UK, is a coach for Team England at the European Youth Championships. Hooper has been studying the physics of bowling for several years, including an analysis of the 2017 Weber Cup, as well as papers devising mathematical models for the application of lane conditioners and oil patterns in bowling.

The calculations involved in such research are very complicated because there are so many variables that can affect a ball’s trajectory after being thrown. Case in point: the thin layer of oil that is applied to bowling lanes, which Hooper found can vary widely in volume and shape among different venues, plus the lack of uniformity in applying the layer, which creates an uneven friction surface.

Per the authors, most research to date has relied on statistically analyzing empirical data, such as a 2018 report by the US Bowling Congress that looked at data generated by 37 bowlers. (Hooper relied on ball-tracking data for his 2017 Weber Cup analysis.) A 2009 analysis showed that the optimal location for the ball to strike the headpin is about 6 centimeters off-center, while the optimal entry angle is about 6 degrees. However, such an approach struggles to account for the inevitable player variability. No bowler hits their target 100 percent of the time, and per Hooper et al., while the best professionals can come within 0.1 degrees of the optimal launch angle, even this slight variation can result in a difference of several centimeters down-lane.
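The down-lane cost of that 0.1-degree error can be checked with straight-line geometry. This is a simplified sketch that ignores the hook and oil effects the full model accounts for; the 18.29 m figure is the standard 60-foot regulation distance from foul line to headpin, an assumption not stated in the article:

```python
import math

LANE_LENGTH_M = 18.29  # regulation 60 ft, foul line to headpin (assumed)

def downlane_offset_cm(angle_error_deg):
    """Lateral miss at the pins from a small launch-angle error,
    treating the trajectory as a straight line (no hook)."""
    return 100 * LANE_LENGTH_M * math.tan(math.radians(angle_error_deg))

# Even an elite bowler's 0.1-degree error moves the ball ~3 cm at the
# pins, half of the ~6 cm optimal off-center strike position above.
print(round(downlane_offset_cm(0.1), 1))  # ~3.2
```

That back-of-the-envelope number is consistent with the “several centimeters down-lane” the authors cite, which is why sub-tenth-of-a-degree repeatability matters so much.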


Quantum hardware may be a good match for AI

Quantum computers don’t have that sort of separation between memory and processing. While they could include some quantum memory, the data is generally housed directly in the qubits, and computation involves performing operations, called gates, directly on the qubits themselves. In fact, there has been a demonstration that, for supervised machine learning, where a system can learn to classify items after training on pre-classified data, a quantum system can outperform classical ones, even when the data being processed is housed on classical hardware.

This form of machine learning relies on what are called variational quantum circuits. These are built from two-qubit gate operations that take an additional factor, which can be held on the classical side of the hardware and imparted to the qubits via the control signals that trigger the gate operation. You can think of this as analogous to the communications involved in a neural network, with the two-qubit gate operation equivalent to the passing of information between two artificial neurons and the factor analogous to the weight given to the signal.
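As a loose illustration of the “weight” analogy, here is a NumPy sketch of one common parameterized two-qubit gate, RZZ(θ). The article doesn’t say which gate the Honda/Blue Qubit team used, so treat this particular choice as an assumption for illustration:

```python
import numpy as np

def rzz(theta):
    """A parameterized two-qubit gate, exp(-i*theta/2 * Z⊗Z), written as a
    diagonal matrix over the Z⊗Z eigenvalues (+1, -1, -1, +1). The angle
    theta is the classically held 'weight' passed in via the control signal."""
    phases = np.exp(-1j * theta / 2 * np.array([1, -1, -1, 1]))
    return np.diag(phases)

# Apply it to an equal superposition of two qubits; the classical parameter
# theta changes the relative phases, analogous to reweighting a connection.
state = np.full(4, 0.5, dtype=complex)
out = rzz(np.pi / 2) @ state

assert np.allclose(np.linalg.norm(out), 1.0)  # unitary: norm is preserved
assert np.allclose(rzz(0.0), np.eye(4))       # zero weight does nothing
```

Training a variational circuit then means a classical optimizer adjusting angles like θ between runs, exactly the role gradient descent plays for neural-network weights.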

That’s exactly the system that a team from the Honda Research Institute worked on in collaboration with a quantum software company called Blue Qubit.

Pixels to qubits

The focus of the new work was mostly on how to get data from the classical world into the quantum system for characterization. But the researchers ended up testing the results on two different quantum processors.

The problem they were testing is one of image classification. The raw material was from the Honda Scenes dataset, which has images taken from roughly 80 hours of driving in Northern California; the images are tagged with information about what’s in the scene. And the question the researchers wanted the machine learning to handle was a simple one: Is it snowing in the scene?


Fewer beans = great coffee if you get the pour height right

Based on their findings, the authors recommend pouring hot water over your coffee grounds slowly to give the beans more time immersed in the water. But pour the water too slowly and the resulting jet will stick to the spout (the “teapot effect”) and there won’t be sufficient mixing of the grounds; they’ll just settle to the bottom instead, decreasing extraction yield. “If you have a thin jet, then it tends to break up into droplets,” said co-author Margot Young. “That’s what you want to avoid in these pour-overs, because that means the jet cannot mix the coffee grounds effectively.”

Smaller jet diameter impact on dynamics. Credit: E. Park et al., 2025

That’s where increasing the height from which you pour comes in. This imparts more energy from gravity, per the authors, increasing the mixing of the granular coffee grounds. But again, there’s such a thing as pouring from too great a height, causing the water jet to break apart. The ideal height is no more than 50 centimeters (about 20 inches) above the filter. The classic goosenecked tea kettle turns out to be ideal for achieving that optimal height. Future research might explore the effects of varying the grind size of the coffee grounds.
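The extra energy from pour height follows from simple free-fall kinematics. A sketch, ignoring air drag and the jet breakup that caps the useful height:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def impact_speed(height_m, spout_speed=0.0):
    """Speed of the water jet when it reaches the grounds, from energy
    conservation: v^2 = v0^2 + 2*g*h (drag and jet breakup ignored)."""
    return math.sqrt(spout_speed**2 + 2 * G * height_m)

# At the recommended maximum of 50 cm, the jet arrives at roughly 3.1 m/s,
# carrying far more mixing energy than a pour from just above the filter.
print(round(impact_speed(0.5), 1))  # ~3.1
```

Since kinetic energy grows linearly with drop height, the 50 cm ceiling is really a trade-off: more stirring energy versus keeping the jet intact.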

Increasing extraction yields and, by extension, reducing how much coffee grounds one uses matters because it is becoming increasingly difficult to cultivate the most common species of coffee because of ongoing climate change. “Coffee is getting harder to grow, and so, because of that, prices for coffee will likely increase in coming years,” co-author Arnold Mathijssen told New Scientist. “The idea for this research was really to see if we could help do something by reducing the amount of coffee beans that are needed while still keeping the same amount of extraction, so that you get the same strength of coffee.”

But the potential applications aren’t limited to brewing coffee. The authors note that this same liquid jet/submerged granular bed interplay is also involved in soil erosion from waterfalls, for example, as well as wastewater treatment—using liquid jets to aerate wastewater to enhance biodegradation of organic matter—and dam scouring, where the solid ground behind a dam is slowly worn away by water jets. “Although dams operate on a much larger scale, they may undergo similar dynamics, and finding ways to decrease the jet height in dams may decrease erosion and elongate dam health,” they wrote.

Physics of Fluids, 2025. DOI: 10.1063/5.0257924 (About DOIs).


First tokamak component installed in a commercial fusion plant


A tokamak moves forward as two companies advance plans for stellarators.

There are a remarkable number of commercial fusion power startups, considering that it’s a technology that’s built a reputation for being perpetually beyond the horizon. Many of them focus on radically new technologies for heating and compressing plasmas, or fusing unusual combinations of isotopes. These technologies are often difficult to evaluate—they can clearly generate hot plasmas, but it’s tough to determine whether they can get hot enough, often enough to produce usable amounts of power.

On the other end of the spectrum are a handful of companies that are trying to commercialize designs that have been extensively studied in the academic world. And there have been some interesting signs of progress here. Recently, Commonwealth Fusion, which is building a demonstration tokamak in Massachusetts, started construction of the cooling system that will keep its magnets superconducting. And two companies that are hoping to build a stellarator did some important validation of their concepts.

Doing donuts

A tokamak is a donut-shaped fusion chamber that relies on intense magnetic fields to compress and control the plasma within it. A number of tokamaks have been built over the years, but the big one that is expected to produce more energy than required to run it, ITER, has faced many delays and now isn’t expected to achieve its potential until the 2040s. Back in 2015, however, some physicists calculated that high-temperature superconductors would allow ITER-style performance in a far smaller and easier-to-build package. That idea was commercialized as Commonwealth Fusion.

The company is currently trying to build an ITER equivalent: a tokamak that can achieve fusion but isn’t large enough and lacks some critical hardware needed to generate electricity from that reaction. The planned facility, SPARC, is already well underway, with most of the supporting facility in place and superconducting magnets being constructed. But in late March, the company took a major step by installing the first component of the tokamak itself: the cryostat base, which will support the hardware that keeps its magnets cool.

Alex Creely, Commonwealth Fusion’s tokamak operations director and SPARC’s chief engineer, told Ars that the cryostat’s materials have to be chosen to handle temperatures in the area of 20 Kelvin and to tolerate neutron exposure. Fortunately, stainless steel is still up to the task. It will also be part of a structure that has to handle an extreme temperature gradient. Creely said that it only takes about 30 centimeters to go from the hundreds of millions of degrees C of the plasma down to about 1,000°C, after which it becomes relatively simple to reach cryostat temperatures.

He said that construction is expected to wrap up about a year from now, after which there will be about a year of commissioning the hardware, with fusion experiments planned for 2027. And, while ITER may be facing ongoing delays, Creely said that it was critical for keeping Commonwealth on a tight schedule. Not only is most of the physics of SPARC the same as that of ITER, but some of the hardware will be as well. “We’ve learned a lot from their supply chain development,” Creely said. “So some of the same vendors that are supplying components for the ITER tokamak, we are also working with those same vendors, which has been great.”

Great in the sense that Commonwealth is now on track to see plasma well in advance of ITER. “Seeing all of this go from a bunch of sketches or boxes on slides—clip art effectively—to real metal and concrete that’s all coming together,” Creely said. “You’re transitioning from building the facility, building the plant around the tokamak to actually starting to build the tokamak itself. That is an awesome milestone.”

Seeing stars?

The plasma inside a tokamak is dynamic, meaning that it requires a lot of magnetic intervention to keep it stable, and fusion comes in pulses. There’s an alternative approach called a stellarator, which produces an extremely complex magnetic field that can support a simpler, stable plasma and steady fusion. As implemented by the Wendelstein 7-X stellarator in Germany, this meant a series of complex-shaped magnets manufactured with extremely low tolerance for deviation. But a couple of companies have decided they’re up for the challenge.

One of those, Type One Energy, has basically reached the stage that launched Commonwealth Fusion: It has made a detailed case for the physics underlying its stellarator design. In this instance, the case may even be considerably more detailed: six peer-reviewed articles in the Journal of Plasma Physics. The papers detail the structural design, the behavior of the plasma within it, handling of the helium produced by fusion, generation of tritium from the neutrons produced, and obtaining heat from the whole thing.

The company is partnering with Oak Ridge National Lab and the Tennessee Valley Authority to build a demonstration reactor on the site of a former fossil fuel power plant. (It’s also cooperating with Commonwealth on magnet development.) As with the SPARC tokamak, this will be a mix of technology demonstration and learning experience, rather than a functioning power plant.

Another company that’s pursuing a stellarator design is called Thea Energy. Brian Berzin, its CEO, told Ars that the company’s focus is on simplifying the geometry of the magnets needed for a stellarator and is using software to get them to produce an equivalent magnetic field. “The complexity of this device has always been really, really limiting,” he said, referring to the stellarator. “That’s what we’re really focused on: How can you make simpler hardware? Our way of allowing for simpler hardware is using really, really complicated software, which is something that has taken over the world.”

He said that the simplicity of the hardware will be helpful for an operational power plant, since it allows them to build multiple identical segments as spares, so things can be swapped out and replaced when maintenance is needed.

Like Commonwealth Fusion, Thea Energy is using high-temperature superconductors to build its magnets, with a flat array of smaller magnets substituting for the three-dimensional magnets used at Wendelstein. “We are able to really precisely recreate those magnetic fields required for [a stellarator], but without any wiggly, complicated, precise, expensive, costly, time-consuming hardware,” Berzin said. And the company recently released a preprint of some testing with the magnet array.

Thea is also planning on building a test stellarator. In its case, however, it’s going to use deuterium-deuterium fusion, which is much less efficient than the deuterium-tritium fusion that will be needed for a power plant. But Berzin said that the design will incorporate a layer of lithium that will form tritium when bombarded by neutrons from the stellarator. If things go according to plan, the reactor will validate Thea’s design and serve as a fuel source for the rest of the industry.

Of course, nobody will operate a fusion power plant until sometime in the next decade—probably about at the same time that we might expect some of the first small modular fission plants to be built. Given the vast expansion in renewable production that is in progress, it’s difficult to predict what the energy market will look like at that point. So, these test reactors will be built in a very uncertain environment. But that uncertainty hasn’t stopped these companies from pursuing fusion.


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.


Hints grow stronger that dark energy changes over time

In its earliest days, the Universe was a hot, dense soup of subatomic particles, including hydrogen and helium nuclei, aka baryons. Tiny fluctuations created a rippling pattern through that early ionized plasma, which froze into place as a three-dimensional pattern as the Universe expanded and cooled. Those ripples, or bubbles, are known as baryon acoustic oscillations (BAO). It’s possible to use BAOs as a kind of cosmic ruler to investigate the effects of dark energy over the history of the Universe.

DESI is a state-of-the-art instrument that can capture light from up to 5,000 celestial objects simultaneously.

That’s what DESI was designed to do: take precise measurements of the apparent size of these bubbles (both near and far) by determining the distances to galaxies and quasars over 11 billion years. That data can then be sliced into chunks to determine how fast the Universe was expanding at each point of time in the past, the better to model how dark energy was affecting that expansion.
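The “cosmic ruler” idea reduces to small-angle geometry: the same-sized BAO feature subtends a smaller angle the farther away it is. A toy flat-space sketch; the ~150 Mpc scale is an approximate literature value for the BAO sound horizon, and real analyses use full cosmological distance measures rather than this simple division:

```python
import math

R_BAO_MPC = 150.0  # approximate comoving BAO scale (assumed round number)

def apparent_size_deg(comoving_distance_mpc):
    """Small-angle approximation for the angle a 150 Mpc 'ruler' subtends
    at a given comoving distance, ignoring spacetime curvature."""
    return math.degrees(R_BAO_MPC / comoving_distance_mpc)

# At ~3,400 Mpc (roughly redshift 1), the BAO feature spans a couple of
# degrees on the sky; measuring that angle pins down the distance.
print(round(apparent_size_deg(3400), 1))  # ~2.5
```

Inverting the relation is the whole game: measure the apparent size, infer the distance, and repeat across cosmic epochs to trace the expansion history.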

An upward trend

Last year’s results were based on an analysis of a full year’s worth of data taken from seven different slices of cosmic time, including 450,000 quasars (the largest such sample ever collected), with a record-setting 0.82 percent precision for the most distant epoch (between 8 and 11 billion years back). While there was basic agreement with the Lambda-CDM model, when those first-year results were combined with data from other studies (involving the cosmic microwave background radiation and Type Ia supernovae), some subtle differences cropped up.

Essentially, those differences suggested that dark energy might be getting weaker. In terms of confidence, the results amounted to a 2.6-sigma level for DESI’s data combined with CMB datasets. When the supernovae data was added, those numbers shifted to 2.5-, 3.5-, or 3.9-sigma levels, depending on which particular supernova dataset was used.
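Those sigma levels map onto probabilities under a Gaussian assumption. A quick two-sided conversion using the standard error-function identity:

```python
import math

def two_sided_p(sigma):
    """Chance of a fluctuation at least this large if the baseline model
    (here Lambda-CDM) were exactly right: p = erfc(sigma / sqrt(2))."""
    return math.erfc(sigma / math.sqrt(2))

# 2.6 sigma is suggestive but not rare; 3.9 sigma is about a 1-in-10,000
# fluke. (Physics reserves "discovery" for 5 sigma, p ~ 6e-7.)
print(f"{two_sided_p(2.6):.4f}")  # ~0.0093
print(f"{two_sided_p(3.9):.6f}")  # ~0.000096
```

This is why the spread across supernova datasets matters: the difference between 2.5 and 3.9 sigma is roughly two orders of magnitude in how surprising the result would be.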

It’s important to combine the DESI data with other independent measurements because “we want consistency,” said DESI co-spokesperson Will Percival of the University of Waterloo. “All of the different experiments should give us the same answer to how much matter there is in the Universe at present day, how fast the Universe is expanding. It’s no good if all the experiments agree with the Lambda-CDM model, but then give you different parameters. That just doesn’t work. Just saying it’s consistent to the Lambda-CDM, that’s not enough in itself. It has to be consistent with Lambda-CDM and give you the same parameters for the basic properties of that model.”


SpiderBot experiments hint at “echolocation” to locate prey

It’s well understood that spiders have poor eyesight and thus sense the vibrations in their webs whenever prey (like a fly) gets caught; the web serves as an extension of their sensory system. But spiders also exhibit less-understood behaviors to locate struggling prey. Most notably, they take on a crouching position, sometimes moving up and down to shake the web or plucking at the web by pulling in with one leg. The crouching seems to be triggered when prey is stationary and stops when the prey starts moving.

But it can be difficult to study the underlying mechanisms of this behavior because there are so many variables at play when observing live spiders. To simplify matters, researchers at Johns Hopkins University’s Terradynamics Laboratory are building crouching spider robots and testing them on synthetic webs. The results provide evidence for the hypothesis that spiders crouch to sense differences in web frequencies to locate prey that isn’t moving—something analogous to echolocation. The researchers presented their initial findings today at the American Physical Society’s Global Physics Summit in Anaheim, California.

“Our lab investigates biological problems using robot physical models,” team member Eugene Lin told Ars. “Animal experiments are really hard to reproduce because it’s hard to get the animal to do what you want to do.” Experiments with robot physical models, by contrast, “are completely repeatable. And while you’re building them, you get a better idea of the actual [biological] system and how certain behaviors happen.” The lab has also built robots inspired by cockroaches and fish.

The research was done in collaboration with two other labs at JHU. Andrew Gordus’ lab studies spider behavior, particularly how they make their webs, and provided biological expertise as well as videos of the particular spider species (U. diversus) of interest. Jochen Mueller’s lab provided expertise in silicone molding, allowing the team to use their lab to 3D-print their spider robot’s flexible joints.

Crouching spider, good vibrations

A spider exhibiting crouching behavior. Credit: YouTube/Terradynamics Lab/JHU

The first spider robot model didn’t really move or change its posture; it was designed to sense vibrations in the synthetic web. But Lin et al. later modified it with actuators so it could move up and down. Also, there were only four legs, with two joints in each and two accelerometers on each leg; real spiders have eight legs and many more joints. But the model was sufficient for experimental proof of principle. There was also a stationary prey robot.
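The localization idea can be caricatured in a few lines: simulate a vibration trace at each leg that weakens with distance from the source, then pick the strongest. This amplitude-based toy is purely an assumption for illustration; the actual experiments probe how web vibration frequencies differ, which is the behavior the crouching is thought to exploit:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1.0, 2000)

def leg_signal(distance):
    """Toy accelerometer trace: a 40 Hz web vibration whose amplitude
    decays with distance from the source, plus sensor noise. This is a
    cartoon of the robot's measurements, not the lab's actual model."""
    amplitude = 1.0 / (1.0 + distance)
    return amplitude * np.sin(2 * np.pi * 40 * t) + 0.05 * rng.standard_normal(t.size)

# Four legs at different distances from a stationary 'prey'; the robot
# infers the prey's direction from which leg senses the strongest signal.
distances = [0.2, 0.8, 1.5, 2.0]
rms = [np.sqrt(np.mean(leg_signal(d) ** 2)) for d in distances]
closest_leg = int(np.argmin(distances))

assert int(np.argmax(rms)) == closest_leg
```

With a stationary prey there is no transient to detect, which is why the real robot (like the spider) has to actively perturb the web and compare the responses rather than passively listen.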
