quantum computing


Microsoft lays out its path to useful quantum computing


Its platform needs error correction that works with different hardware.

Some of the optical hardware needed to make Atom Computing’s machines work. Credit: Atom Computing

On Thursday, Microsoft’s Azure Quantum group announced that it has settled on a plan for getting error correction on quantum computers. While the company pursues its own hardware efforts, the Azure team is a platform provider that currently gives access to several distinct types of hardware qubits. So it has chosen a scheme that is suitable for several different quantum computing technologies (notably excluding its own). The company estimates that the system it has settled on can take hardware qubits with an error rate of about 1 in 1,000 and use them to build logical qubits where errors are instead 1 in 1 million.

While it’s describing the scheme in terms of mathematical proofs and simulations, it hasn’t shown that it works using actual hardware yet. But one of its partners, Atom Computing, is accompanying the announcement with a description of how its machine is capable of performing all the operations that will be needed.

Arbitrary connections

There are similarities and differences between what the company is talking about today and IBM’s recent update of its roadmap, which described another path to error-resistant quantum computing. In IBM’s case, it makes both the software stack that will perform the error correction and the hardware needed to implement it. It uses chip-based hardware, with the connections among qubits mediated by wiring that’s laid out when the chip is fabricated. Since error correction schemes require a very specific layout of connections among qubits, once IBM decides on a quantum error correction scheme, it can design chips with the wiring needed to implement that scheme.

Microsoft’s Azure, in contrast, provides its users with access to hardware from several different quantum computing companies, each based on different technology. Some of them, like Rigetti and Microsoft’s own planned processor, are similar to IBM’s in that they have a fixed layout during manufacturing, and so can only handle codes that are compatible with their wiring layout. But others, such as those provided by Quantinuum and Atom Computing, store their qubits in atoms that can be moved around and connected in arbitrary ways. Those arbitrary connections allow very different types of error correction schemes to be considered.

It can be helpful to think of this using an analogy to geometry. A chip is like a plane, where it’s easiest to form the connections needed for error correction among neighboring qubits; longer connections are possible, but not as easy. Things like trapped ions and atoms provide a higher-dimensional system where far more complicated patterns of connections are possible. (Again, this is an analogy. IBM is using three-dimensional wiring in its processing chips, while Atom Computing stores all its atoms in a single plane.)

Microsoft’s announcement is focused on the sorts of processors that can form the more complicated, arbitrary connections. And, well, it’s taking full advantage of that, building an error correction system with connections that form a four-dimensional hypercube. “We really have focused on the four-dimensional codes due to their amenability to current and near term hardware designs,” Microsoft’s Krysta Svore told Ars.

The code not only describes the layout of the qubits and their connections, but also the purpose of each hardware qubit. Some of them are used to hang on to the value of the logical qubit(s) stored in a single block of code. Others are used for what are called “weak measurements.” These measurements tell us something about the state of the ones that are holding on to the data—not enough to know their values (a measurement that would end the entanglement), but enough to tell if something has changed. The details of the measurement allow corrections to be made that restore the original value.

Microsoft’s error correction system is described in a preprint that the company recently released. It includes a family of related geometries, each of which provides different degrees of error correction, based on how many simultaneous errors they can identify and fix. The descriptions are about what you’d expect for complicated math and geometry—“Given a lattice Λ with an HNF L, the code subspace of the 4D geometric code C_Λ is spanned by the second homology H₂(T⁴_Λ, F₂) of the 4-torus T⁴_Λ”—but the gist is that all of them convert collections of physical qubits into six logical qubits that can be error corrected.

The more hardware qubits you add to host those six logical qubits, the greater error protection each of them gets. That becomes important because some more sophisticated algorithms will need more than the one-in-a-million error protection that Svore said Microsoft’s favored version will provide. That favorite is what’s called the Hadamard version, which bundles 96 hardware qubits to form six logical qubits, and has a distance of eight (distance being a measure of how many simultaneous errors it can tolerate). You can compare that with IBM’s announcement, which used 144 hardware qubits to host 12 logical qubits at a distance of 12 (so, more hardware, but more logical qubits and greater error resistance).
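
To put those figures side by side, error correction codes are often summarized with the shorthand [[n, k, d]]: n hardware qubits hosting k logical qubits at distance d. Here's a quick back-of-the-envelope comparison using only the numbers above (the encoding rate k/n is a crude metric, not a full comparison of the codes):

```python
# Back-of-the-envelope comparison using the [[n, k, d]] shorthand:
# n hardware qubits hosting k logical qubits at distance d.
codes = {
    "Microsoft 4D (Hadamard)": (96, 6, 8),
    "IBM bivariate bicycle": (144, 12, 12),
}

for name, (n, k, d) in codes.items():
    print(f"{name}: [[{n}, {k}, {d}]], "
          f"rate k/n = {k / n:.3f}, qubits per logical = {n // k}")

# Microsoft 4D (Hadamard): [[96, 6, 8]], rate k/n = 0.062, qubits per logical = 16
# IBM bivariate bicycle: [[144, 12, 12]], rate k/n = 0.083, qubits per logical = 12
```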

The other good stuff

On its own, a description of the geometry is not especially exciting. But Microsoft argues that this family of error correction codes has a couple of significant advantages. “All of these codes in this family are what we call single shot,” Svore said. “And that means that, with a very low constant number of rounds of getting information about the noise, one can decode and correct the errors. This is not true of all codes.”

Limiting the number of measurements needed to detect errors matters for a couple of reasons. For starters, measurements themselves can create errors, so making fewer of them leaves the system more robust. In addition, in things like neutral atom computers, the atoms have to be moved to specific locations where measurements take place, and the measurements heat them up so that they can’t be reused until cooled. Keeping the measurement count down can therefore be very important for the performance of the hardware.

The second advantage of this scheme, as described in the draft paper, is the fact that you can perform all the operations needed for quantum computing on the logical qubits these schemes host. Just like in regular computers, all the complicated calculations performed on a quantum computer are built up from a small number of simple logical operations. But not every possible logical operation works well with any given error correction scheme. So it can be non-trivial to show that an error correction scheme is compatible with enough of the small operations to enable universal quantum computation.

So, the paper describes how some logical operations can be performed relatively easily, while a few others require manipulations of the error correction scheme in order to work. (These manipulations have names like lattice surgery and magic state distillation, which are good signs that the field doesn’t take itself that seriously.)

So, in sum, Microsoft feels that it has identified an error correction scheme that is fairly compact, can be implemented efficiently on hardware that stores qubits in photons, atoms, or trapped ions, and enables universal computation. What it hasn’t done, however, is show that it actually works. And that’s because it simply doesn’t have the hardware right now. Azure is offering trapped ion machines from IonQ and Quantinuum, but these top out at 56 qubits—well below the 96 needed for its favored version of these 4D codes. The largest it has access to is a 100-qubit machine from a company called PASQAL, which barely fits the 96 qubits needed, leaving no room for error.

While it should be possible to test smaller versions of codes in the same family, the Azure team has already demonstrated its ability to work with error correction codes based on hypercubes, so it’s unclear whether there’s anything to gain from that approach.

More atoms

Instead, it appears to be waiting for another partner, Atom Computing, to field its next-generation machine, one it’s designing in partnership with Microsoft. “This first generation that we are building together between Atom Computing and Microsoft will include state-of-the-art quantum capabilities, will have 1,200 physical qubits,” Svore said. “And then the next upgrade of that machine will have upwards of 10,000. And so you’re looking at then being able to go to upwards of a hundred logical qubits with deeper and more reliable computation available.”

So, today’s announcement was accompanied by an update on progress from Atom Computing, focusing on a process called “mid-circuit measurement.” Normally, during quantum computing algorithms, you have to resist performing any measurements of the value of qubits until the entire calculation is complete. That’s because quantum calculations depend on things like entanglement and each qubit being in a superposition between its two values; measurements can cause all that to collapse, producing definitive values and ending entanglement.

Quantum error correction schemes, however, require that some of the hardware qubits undergo weak measurements multiple times while the computation is in progress. Those are quantum measurements taking place in the middle of a computation—midcircuit measurements, in other words. To show that its hardware will be up to the task that Microsoft expects of it, the company decided to demonstrate mid-circuit measurements on qubits implementing a simple error correction code.

The process highlights a couple of notable features that are distinctive to doing this with neutral atoms. To begin with, the atoms being used for error correction have to be moved to a location—the measurement zone—where they can be measured without disturbing anything else. Then, because the measurement typically heats an atom up slightly, the atoms have to be cooled back down afterward. Neither of these processes is perfect, and so sometimes an atom gets lost and needs to be replaced with one from a reservoir of spares. Finally, the atom’s value needs to be reset, and it has to be sent back to its place in the logical qubit.

Testing revealed that about 1 percent of the atoms get lost each cycle, but the system successfully replaces them. In fact, they set up a system where the entire collection of atoms is imaged during the measurement cycle, and any atom that goes missing is identified by an automated system and replaced.
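
A bit of arithmetic shows why that replacement system is essential rather than a nice-to-have. Assuming losses are independent at the roughly 1 percent rate reported here, and borrowing the 96-qubit code size and a 25-round run from earlier in this article as illustrative figures:

```python
# Probability that a register of atoms survives repeated measurement
# cycles fully intact, with no replacement. The ~1 percent per-cycle loss
# rate is from the article; register size and round count are illustrative
# values borrowed from elsewhere in this piece.
P_KEEP, N_ATOMS, ROUNDS = 0.99, 96, 25

p_one_atom = P_KEEP ** ROUNDS        # one atom survives every round
p_register = p_one_atom ** N_ATOMS   # all 96 atoms survive every round

print(f"single atom:   {p_one_atom:.3f}")  # ~0.778
print(f"full register: {p_register:.1e}")  # ~3e-11 -- spares are essential
```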

Overall, without all these systems in place, the fidelity of a qubit is about 98 percent in this hardware. With error correction turned on, even this simple logical qubit saw its fidelity rise to over 99.5 percent. All of which suggests that its next computer should be up to some significant tests of Microsoft’s error correction scheme.

Waiting for the lasers

The key questions are when that machine will be released, and when its successor, which should be capable of performing some real calculations, will follow. Those questions are challenging to answer because, more so than some other quantum computing technologies, neutral atom computing is dependent on something that’s not made by the people who build the computers: lasers. Everything about this system—holding atoms in place, moving them around, measuring, performing manipulations—is done with lasers. The lower the noise of the laser (in terms of things like frequency drift and energy fluctuations), the better the system will perform.

So, while Atom can explain its needs to its suppliers and work with them to get things done, it has less control over its fate than some other companies in this space.


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



Cybersecurity takes a big hit in new Trump executive order

Cybersecurity practitioners are voicing concerns over a recent executive order issued by the White House that guts requirements for securing the software the government uses, punishing people who compromise sensitive networks, and preparing new encryption schemes that will withstand attacks from quantum computers, along with other existing controls.

The executive order (EO), issued on June 6, reverses several key cybersecurity orders put in place by President Joe Biden, some as recently as a few days before his term ended in January. A statement that accompanied Donald Trump’s EO said the Biden directives “attempted to sneak problematic and distracting issues into cybersecurity policy” and amounted to “political football.”

Pro-business, anti-regulation

Specific orders Trump dropped or relaxed included ones mandating (1) federal agencies and contractors adopt products with quantum-safe encryption as they become available in the marketplace, (2) a stringent Secure Software Development Framework (SSDF) for software and services used by federal agencies and contractors, (3) the adoption of phishing-resistant regimens such as the WebAuthn standard for logging into networks used by contractors and agencies, (4) the implementation of new tools for securing Internet routing through the Border Gateway Protocol, and (5) the encouragement of digital forms of identity.

In many respects, executive orders are at least as much performative displays as they are vehicles for creating sound policy. Biden’s cybersecurity directives fell mostly into the second camp.

The provisions regarding the Secure Software Development Framework, for instance, were born out of the devastating consequences of the SolarWinds supply chain attack of 2020. During that event, hackers linked to the Russian government breached the network of SolarWinds, the maker of widely used network management software. The hackers went on to push a malicious update that distributed a backdoor to more than 18,000 customers, many of whom were contractors and agencies of the federal government.



IBM is now detailing what its first quantum compute system will look like


Company is moving past focus on qubits, shifting to functional compute units.

A rendering of what IBM expects will be needed to house a Starling quantum computer. Credit: IBM

On Tuesday, IBM released its plans for building a system that should push quantum computing into entirely new territory: a system that can perform useful calculations while catching and fixing errors, and that is utterly impossible to model using classical computing methods. The hardware, which will be called Starling, is expected to be able to perform 100 million operations without error on a collection of 200 logical qubits. And the company expects to have it available for use in 2029.

Perhaps just as significant, IBM is also committing to a detailed description of the intermediate steps to Starling. These include a number of processors that will be configured to host a collection of error-corrected qubits, essentially forming a functional compute unit. This marks a major transition for the company, as it involves moving away from talking about collections of individual hardware qubits and focusing instead on units of functional computational hardware. If all goes well, it should be possible to build Starling by chaining a sufficient number of these compute units together.

“We’re updating [our roadmap] now with a series of deliverables that are very precise,” IBM VP Jay Gambetta told Ars, “because we feel that we’ve now answered basically all the science questions associated with error correction and it’s becoming more of a path towards an engineering problem.”

New architectures

Error correction on quantum hardware involves entangling a group of qubits in a way that distributes one or more quantum bit values among them and includes additional qubits that can be used to check the state of the system. It can be helpful to think of these as data and measurement qubits. Performing weak quantum measurements on the measurement qubits produces what’s called “syndrome data,” which can be interpreted to determine whether anything about the data qubits has changed (indicating an error) and how to correct it.
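
For a concrete, if heavily simplified, picture of that split, here's the classical analog for a three-bit repetition code. Real syndrome extraction relies on weak quantum measurements of ancilla qubits, but the logic of locating an error without reading the data directly is the same:

```python
# Classical sketch of syndrome-based correction for a 3-bit repetition
# code: two parity checks (one per "measurement qubit") locate a single
# bit flip without ever reading the data values themselves.
def syndrome(data):
    return (data[0] ^ data[1], data[1] ^ data[2])

def correct(data):
    location = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(data))
    if location is not None:      # (0, 0) means no error was detected
        data[location] ^= 1
    return data

print(correct([0, 1, 0]))  # -> [0, 0, 0]: middle bit flagged and fixed
```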

There are lots of potential ways to arrange different combinations of data and measurement qubits for this to work, each referred to as a code. But, as a general rule, the more hardware qubits committed to the code, the more robust it will be to errors, and the more logical qubits can be distributed among those hardware qubits.

Some quantum hardware, like that based on trapped ions or neutral atoms, is relatively flexible when it comes to hosting error-correction codes. The hardware qubits can be moved around so that any two can be entangled, so it’s possible to adopt a huge range of configurations, albeit at the cost of the time spent moving atoms around. IBM’s technology is quite different. It relies on qubits made of superconducting electronics laid out on a chip, with entanglement mediated by wiring that runs between qubits. The layout of this wiring is set during the chip’s manufacture, and so the design of the chip commits it to a limited number of potential error-correction codes.

Unfortunately, this wiring can also enable cross-talk between neighboring qubits, causing them to lose their state. To avoid this, existing IBM processors have their qubits wired in what they term a “heavy hex” configuration, named for its hexagonal arrangements of connections among its qubits. This has worked well to keep the error rate of its hardware down, but it also poses a challenge, since IBM has decided to go with an error-correction code that’s incompatible with the heavy hex geometry.

A couple of years back, an IBM team described a compact error correction code called a low-density parity check (LDPC) code. This requires a square grid of nearest-neighbor connections among its qubits, as well as wiring to connect qubits that are relatively distant on the chip. To get its chips and error-correction scheme in sync, IBM has made two key advances. The first is in its chip packaging, which now uses several layers of wiring sitting above the hardware qubits to enable all of the connections needed for the LDPC code.

We’ll see that first in a processor called Loon that’s on the company’s developmental roadmap. “We’ve already demonstrated these three things: high connectivity, long-range couplers, and couplers that break the plane [of the chip] and connect to other qubits,” Gambetta said. “We have to combine them all as a single demonstration showing that all these parts of packaging can be done, and that’s what I want to achieve with Loon.” Loon will be made public later this year.


On the left, the simple layout of the connections in a current-generation Heron processor. At right, the complicated web of connections that will be present in Loon. Credit: IBM

The second advance IBM has made is to eliminate the crosstalk that the heavy hex geometry was used to minimize, so heavy hex will be going away. “We are releasing this year a bird for near term experiments that is a square array that has almost zero crosstalk,” Gambetta said, “and that is Nighthawk.” The more densely connected qubits cut the overhead needed to perform calculations by a factor of 15, Gambetta told Ars.

Nighthawk is a 2025 release on a parallel roadmap that you can think of as user-facing. Iterations on its basic design will be released annually through 2028, each enabling more operations without error (going from 5,000 gate operations this year to 15,000 in 2028). Each individual Nighthawk processor will host 120 hardware qubits, but 2026 will see three of them chained together and operating as a unit, providing 360 hardware qubits. That will be followed in 2027 by a machine with nine linked Nighthawk processors, boosting the hardware qubit count to over 1,000.

Riding the bicycle

The real future of IBM’s hardware, however, will be happening over on the developmental line of processors, where talk about hardware qubit counts will become increasingly irrelevant. In a technical document released today, IBM is describing the specific LDPC code it will be using, termed a bivariate bicycle code due to some cylindrical symmetries in its details that vaguely resemble bicycle wheels. The details of the connections matter less than the overall picture of what it takes to use this error code in practice.

IBM describes two implementations of this form of LDPC code. In the first, 144 hardware qubits are arranged so that they play host to 12 logical qubits and all of the measurement qubits needed to perform error checks. The standard measure of a code’s ability to catch and correct errors is called its distance, and in this case, the distance is 12. As an alternative, they also describe a code that uses 288 hardware qubits to host the same 12 logical qubits but boost the distance to 18, meaning it’s more resistant to errors. IBM will make one of these collections of logical qubits available as a Kookaburra processor in 2026, which will use them to enable stable quantum memory.

The follow-on will bundle these with a handful of additional qubits that can produce quantum states that are needed for some operations. Those, plus hardware needed for the quantum memory, form a single, functional computation unit, built on a single chip, that is capable of performing all the operations needed to implement any quantum algorithm.

That will appear with the Cockatoo chip, which will also enable multiple processing units to be linked on a single bus, allowing the logical qubit count to grow beyond 12. (The company says that one of the dozen logical qubits in each unit will be used to mediate entanglement with other units and so won’t be available for computation.) That will be followed by the first test versions of Starling, which will allow universal computations on a limited number of logical qubits spread across multiple chips.

Separately, IBM is releasing a document that describes a key component of the system that will run on classical computing hardware. Full error correction requires evaluating the syndrome data derived from the state of all the measurement qubits in order to determine the state of the logical qubits and whether any corrections need to be made. As the complexity of the logical qubits grows, the computational burden of evaluating the syndrome data grows with it. If this evaluation can’t be executed in real time, then it becomes impossible to perform error-corrected calculations.

To address this, IBM has developed a message-passing decoder that can perform parallel evaluations of the syndrome data. The system explores more of the solution space through a combination of randomizing the weight given to the memory of past solutions and handing any seemingly non-optimal solutions on to new instances for additional evaluation. The key thing is that IBM estimates this can be run in real time using FPGAs, ensuring that the system works.
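
IBM's actual decoder is specified in its paper; the sketch below only mirrors the structure described above (many parallel instances, randomized moves, and the best-scoring candidate kept) on a toy parity-check problem invented for illustration:

```python
import random

# Toy "parallel, randomized" decoder: each instance hunts for a low-weight
# error pattern that reproduces the observed syndrome; the best candidate
# across instances wins. Illustrative only -- not IBM's algorithm.
H = [[1, 1, 0, 0],   # toy parity-check matrix; each row is one check
     [0, 1, 1, 0],
     [0, 0, 1, 1]]

def syndrome_of(error):
    return tuple(sum(h * e for h, e in zip(row, error)) % 2 for row in H)

def decode(target, n_instances=64, seed=1):
    rng, best = random.Random(seed), None
    for _ in range(n_instances):
        guess = [rng.randint(0, 1) for _ in range(4)]
        for _ in range(20):                 # randomized local search
            if syndrome_of(guess) == target:
                break
            guess[rng.randrange(4)] ^= 1
        if syndrome_of(guess) == target:
            if best is None or sum(guess) < sum(best):
                best = guess                # keep the lowest-weight fix
    return best

print(decode((1, 0, 0)))  # -> [1, 0, 0, 0], the lowest-weight explanation
```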

A quantum architecture

There are a lot more details beyond those, as well. Gambetta described the linkage between each computational unit—IBM is calling it a Universal Bridge—which requires one microwave cable for each code distance of the logical qubits being linked. (In other words, a distance 12 code would need 12 microwave-carrying cables to connect each chip.) He also said that IBM is developing control hardware that can operate inside the refrigeration hardware, based on what they’re calling “cold CMOS,” which is capable of functioning at 4 Kelvin.

The company is also releasing renderings of what it expects Starling to look like: a series of dilution refrigerators, all connected by a single pipe that contains the Universal Bridge. “It’s an architecture now,” Gambetta said. “I have never put details in the roadmap that I didn’t feel we could hit, and now we’re putting a lot more details.”

The striking thing to me about this is that it marks a shift away from a focus on individual qubits, their connectivity, and their error rates. The hardware error rates are now good enough (4 × 10⁻⁴) for this to work, although Gambetta felt that a few more improvements should be expected. And connectivity will now be directed exclusively toward creating a functional computational unit.

That said, there’s still a lot of space beyond Starling on IBM’s roadmap. The 200 logical qubits it promises will be enough to handle some problems, but not enough to perform the complex algorithms needed to do things like break encryption. That will need to wait for something closer to Blue Jay, a 2033 system that IBM expects will have 2,000 logical qubits. And, as of right now, it’s the only thing listed beyond Starling.




Startup puts a logical qubit in a single piece of hardware

This time around, the company is showing that it can get an actual logical qubit into a variant of the same hardware. In the earlier version of its equipment, the resonator cavity had a single post and supported a single frequency. In the newer iteration, there were two posts and two frequencies. Each of those frequencies creates its own quantum resonator in the same cavity, with its own set of modes. “It’s this ensemble of photons inside this cavity that creates the logical qubit,” Lemyre told Ars.

The additional quantum information that can now be stored in the system enables it to identify more complex errors than the loss of a photon.

Catching, but not fixing errors

The company did two experiments with this new hardware. First, it ran multiple rounds of error detection on data stored in the logical qubit, essentially testing its ability to act like a quantum memory and retain the information stored there. The system corrected for photon loss, but left other errors uncorrected. These occurred at a rate of a bit over two percent each round, so by the time the system reached the 25th measurement, many instances had already encountered an error.

The second time through, the company repeated the process, discarding any instances in which an error occurred. In almost every instance, that meant the results were discarded long before they got through two dozen rounds of measurement. But at these later stages, none of the remaining instances were in an erroneous state. That indicates that correcting the detected errors—something the team didn’t try—would have left the remaining instances error-free.

“When we do this, we don’t have any errors left,” Lemyre said. “And this builds confidence into this approach, meaning that if we build the next generation of codes that is now able to correct these errors that were detected in this two-mode approach, we should have that flat line of no errors occurring over a long period of time.”



Windows 11’s most important new feature is post-quantum cryptography. Here’s why.

Microsoft is updating Windows 11 with a set of new encryption algorithms that can withstand future attacks from quantum computers in a move aimed at jump-starting what’s likely to be the most formidable and important technology transition in modern history.

Computers that are based on the physics of quantum mechanics don’t yet exist outside of sophisticated labs, but it’s well-established science that they eventually will. Instead of processing data in the binary states of zeros and ones, quantum computers run on qubits, which can occupy a mix of states all at once. This new capability promises to bring about new discoveries of unprecedented scale in a host of fields, including metallurgy, chemistry, drug discovery, and financial modeling.

Averting the cryptopocalypse

One of the most disruptive changes quantum computing will bring is the breaking of some of the most common forms of encryption, specifically, the RSA cryptosystem and those based on elliptic curves. These systems are the workhorses that banks, governments, and online services around the world have relied on for more than four decades to keep their most sensitive data confidential. RSA and elliptic curve encryption keys securing web connections would require millions of years to be cracked using today’s computers. A quantum computer could crack the same keys in a matter of hours or minutes.
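
The threat comes from Shor's algorithm, which reduces factoring to finding the period of modular exponentiation, the one step a quantum computer performs exponentially faster than any known classical method. The toy sketch below finds that period by brute force on a tiny modulus to show how the rest of the factoring recipe works; for real RSA moduli, the brute-force step is hopeless:

```python
from math import gcd

# Shor's recipe on a toy modulus: find the period r of a^x mod N, then
# read factors off gcd(a^(r/2) +/- 1, N). A quantum computer accelerates
# only the period-finding step; everything else is classical.
N, a = 15, 7

r = 1
while pow(a, r, N) != 1:   # brute-force period finding (the quantum step)
    r += 1

assert r % 2 == 0          # this choice of a happens to give an even period
p = gcd(pow(a, r // 2) - 1, N)
print(r, p, N // p)        # -> 4 3 5: the period, then the factors of 15
```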

At Microsoft’s BUILD 2025 conference on Monday, the company announced the addition of quantum-resistant algorithms to SymCrypt, the core cryptographic code library in Windows. The updated library is available in Build 27852 and higher versions of Windows 11. Additionally, Microsoft has updated SymCrypt-OpenSSL, its open source project that allows the widely used OpenSSL library to use SymCrypt for cryptographic operations.



Beyond qubits: Meet the qutrit (and ququart)

The world of computers is dominated by binary. Silicon transistors are either conducting or they’re not, and so we’ve developed a whole world of math and logical operations around those binary capabilities. And, for the most part, quantum computing has been developing along similar lines, using qubits that, when measured, will be found in one of two states.

In some cases, the use of binary values is a feature of the object being used to hold the qubit. For example, a technology called dual-rail qubits takes its value from which of two linked resonators holds a photon. But there are many other quantum objects that have access to far more than two states—think of something like all the possible energy states an electron could occupy when orbiting an atom. We can use things like this as qubits by only relying on the lowest two energy levels. But there’s nothing stopping us from using more than two.

In Wednesday’s issue of Nature, researchers describe creating qudits (short for quantum digits), the generic term for quantum systems that can hold more than two states. Using systems that can be in three or four possible states (qutrits and ququarts, respectively), they demonstrate the first error correction of a higher-order quantum memory.

Making ququarts

More complex qudits haven’t been as popular with the people who are developing quantum computing hardware for a number of reasons; one of them is simply that some hardware only has access to two possible states. In other cases, the energy differences between additional states become small and difficult to distinguish. Finally, some operations can be challenging to execute when you’re working with qudits that can hold multiple values—a completely different programming model than qubit operations might be required.

Still, there’s a strong case to be made that moving beyond qubits could be valuable: it lets us do a lot more with less hardware. Right now, all the major quantum computing efforts are hardware-constrained—we can’t build enough qubits and link them together for error correction to let us do useful calculations. But if we could fit more information in less hardware, it should, in theory, let us get to useful calculations sooner.
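
The arithmetic behind that argument is straightforward: a d-level qudit carries log2(d) qubits' worth of state space, so matching a given computation takes fewer physical carriers. A quick illustration (the 20-qubit target is an arbitrary example):

```python
from math import ceil, log2

# State-space capacity of a d-level qudit, measured in qubit equivalents,
# and how many carriers match an (arbitrary) 20-qubit computation.
TARGET_QUBITS = 20

for d, name in [(2, "qubits"), (3, "qutrits"), (4, "ququarts")]:
    per_carrier = log2(d)
    needed = ceil(TARGET_QUBITS / per_carrier)
    print(f"{name}: {per_carrier:.2f} qubits each -> {needed} needed")

# qubits: 1.00 qubits each -> 20 needed
# qutrits: 1.58 qubits each -> 13 needed
# ququarts: 2.00 qubits each -> 10 needed
```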



Quantum hardware may be a good match for AI

Quantum computers don’t have that sort of separation. While they could include some quantum memory, the data is generally housed directly in the qubits, while computation involves performing operations, called gates, directly on the qubits themselves. In fact, there has been a demonstration that, for supervised machine learning, where a system can learn to classify items after training on pre-classified data, a quantum system can outperform classical ones, even when the data being processed is housed on classical hardware.

This form of machine learning relies on what are called variational quantum circuits. These are built around a two-qubit gate operation that takes an additional factor, one that can be held on the classical side of the hardware and imparted to the qubits via the control signals that trigger the gate operation. You can think of this as analogous to the communications involved in a neural network, with the two-qubit gate operation equivalent to the passing of information between two artificial neurons and the factor analogous to the weight given to the signal.
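
To make the analogy concrete, here's a minimal simulation of one such parameterized two-qubit operation. The specific gate chosen (an RZZ rotation) is an assumption for illustration (the article doesn't name one), but it has the key property described: a classical number, theta, delivered via the control signals, tunes how the two qubits interact:

```python
import numpy as np

# A parameterized two-qubit gate: RZZ(theta) = exp(-i * theta/2 * Z (x) Z).
# theta lives on the classical side, like a neural network weight.
def rzz(theta):
    zz = np.array([1, -1, -1, 1])            # eigenvalues of Z tensor Z
    return np.diag(np.exp(-1j * theta / 2 * zz))

state = np.full(4, 0.5)                       # two qubits in the |+>|+> state
theta = 0.7                                   # the trainable "weight"
out = rzz(theta) @ state
print(np.round(out, 3))                       # entangled for theta not in {0, pi}
```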

That’s exactly the system that a team from the Honda Research Institute worked on in collaboration with a quantum software company called Blue Qubit.

Pixels to qubits

The focus of the new work was mostly on how to get data from the classical world into the quantum system for characterization. But the researchers ended up testing the results on two different quantum processors.

The problem they were testing is one of image classification. The raw material was from the Honda Scenes dataset, which has images taken from roughly 80 hours of driving in Northern California; the images are tagged with information about what’s in the scene. And the question the researchers wanted the machine learning to handle was a simple one: Is it snowing in the scene?



D-Wave quantum annealers solve problems classical algorithms struggle with


The latest claim of clear quantum supremacy centers on a useful problem.

Right now, quantum computers are small and error-prone compared to where they’ll likely be in a few years. Even within those limitations, however, there have been regular claims that the hardware can perform in ways that are impossible to match with classical computation (one of the more recent examples coming just last year). In most cases to date, however, those claims were quickly followed by some tuning and optimization of classical algorithms that boosted their performance, making them competitive once again.

Today, we have a new entry into the claims department—or rather a new claim by an old entry. D-Wave is a company that makes quantum annealers, specialized hardware that is most effective when applied to a class of optimization problems. The new work shows that the hardware can track the behavior of a quantum system called an Ising model far more efficiently than any of the current state-of-the-art classical algorithms.

Knowing what will likely come next, however, the team behind the work writes, “We hope and expect that our results will inspire novel numerical techniques for quantum simulation.”

Real physics vs. simulation

Most of the claims regarding quantum computing superiority have come from general-purpose quantum hardware, like that of IBM and Google. These machines can run a wide range of algorithms but have been limited by the frequency of errors in their qubits. Those errors also turned out to be the reason classical algorithms have often been able to catch up with the claims from the quantum side. They limit the size of the collection of qubits that can be entangled at once, allowing algorithms that focus on interactions among neighboring qubits to perform reasonable simulations of the hardware’s behavior.

In any case, most of these claims have involved quantum computers that weren’t solving any particular algorithm, but rather simply behaving like a quantum computer. Google’s claims, for example, are based around what are called “random quantum circuits,” which is exactly what it sounds like.

Off in its own corner is a company called D-Wave, which makes hardware that relies on quantum effects to perform calculations, but isn’t a general-purpose quantum computer. Instead, its collections of qubits, once configured and initialized, are left to find their way to a ground energy state, which will correspond to a solution to a problem. This approach, called quantum annealing, is best suited to solving problems that involve finding optimal solutions to complex scheduling problems.

D-Wave was likely the first company to experience the “we can outperform classical” claim being followed by an “oh no you can’t” from algorithm developers, and since then it has typically been far more circumspect. In the meantime, a number of companies have put D-Wave’s computers to use on problems that align with where the hardware is most effective.

But on Thursday, D-Wave will release a paper that will once again claim, as its title indicates, “beyond classical computation.” And it will be doing it on a problem that doesn’t involve random circuits.

You sing, Ising

The new paper describes using D-Wave’s hardware to compute the evolution over time of something called an Ising model. A simple version of this model is a two-dimensional grid of objects, each of which can be in two possible states. The state that any one of these objects occupies is influenced by the state of its neighbors. So, it’s easy to put an Ising model into an unstable state, after which values of the objects within it will flip until it reaches a low-energy, stable state. Since this is also a quantum system, however, random noise can sometimes flip bits, so the system will continue to evolve over time. You can also connect the objects into geometries that are far more complicated than a grid, allowing more complex behaviors.
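
For readers who want to see that relaxation happen, here's a minimal classical Metropolis simulation of the grid version, showing a random, unstable start settling toward a low-energy state while thermal noise still flips occasional values. (Grid size, temperature, and step count are arbitrary demo values, and D-Wave's hardware evolves a quantum version of this model rather than simulating it this way.)

```python
from math import exp
import random

# 2D Ising model on a 16x16 grid with periodic boundaries: start random,
# flip spins with the Metropolis rule, and watch order emerge.
L, T, STEPS = 16, 1.5, 200_000
rng = random.Random(0)
spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

def neighbor_sum(i, j):
    return (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
            + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])

for _ in range(STEPS):
    i, j = rng.randrange(L), rng.randrange(L)
    dE = 2 * spins[i][j] * neighbor_sum(i, j)     # energy cost of a flip
    if dE <= 0 or rng.random() < exp(-dE / T):    # accept downhill or by chance
        spins[i][j] *= -1

magnetization = abs(sum(map(sum, spins))) / L ** 2
print(f"|magnetization| = {magnetization:.2f}")   # approaches 1 as the grid orders
```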

Someone took great notes from a physics lecture on Ising models that explains their behavior and role in physics in more detail. But there are two things you need to know to understand this news. One is that Ising models don’t involve a quantum computer merely acting like an array of qubits—it’s a problem that people have actually tried to find solutions to. The second is that D-Wave’s hardware, which provides a well-connected collection of quantum devices that can flip between two values, is a great match for Ising models.

Back in 2023, D-Wave used its 5,000-qubit annealer to demonstrate that its output when performing Ising model evolution was best described using Schrödinger’s equation, a central way of describing the behavior of quantum systems. And, as quantum systems become increasingly complex, Schrödinger’s equation gets much, much harder to solve using classical hardware—the implication being that modeling the behavior of 5,000 of these qubits could quite possibly be beyond the capacity of classical algorithms.

Still, having been burned before by improvements to classical algorithms, the D-Wave team was very cautious about voicing that implication. As they write in their latest paper, “It remains important to establish that within the parametric range studied, despite the limited correlation length and finite experimental precision, approximate classical methods cannot match the solution quality of the [D-Wave hardware] in a reasonable amount of time.”

So it’s important that they now have a new paper that indicates that classical methods in fact cannot do that in a reasonable amount of time.

Testing alternatives

The team, which is primarily based at D-Wave but includes researchers from a handful of high-level physics institutions from around the world, focused on three different methods of simulating quantum systems on classical hardware. They were put up against a smaller version of what will be D-Wave’s Advantage 2 system, designed to have a higher qubit connectivity and longer coherence times than its current Advantage. The work essentially involved finding where the classical simulators bogged down as either the simulation went on for too long, or the complexity of the Ising model’s geometry got too high (all while showing that D-Wave’s hardware could perform the same calculation).

Three different classical approaches were tested. Two of them involved tensor networks: one called MPS, for matrix product states, and the second called projected entangled-pair states (PEPS). They also tried a neural network, as a number of these have been trained successfully to predict the output of Schrödinger’s equation for different systems.

These approaches were first tested on a simple 8×8 grid of objects rolled up into a cylinder, which increases the connectivity by eliminating two of the edges. And, for this simple system that evolved over a short period, the classical methods and the quantum hardware produced answers that were effectively indistinguishable.

Two of the classical algorithms, however, were relatively easy to eliminate from serious consideration. The neural network provided good results for short simulations but began to diverge rapidly once the system was allowed to evolve for longer times. And PEPS, which works by focusing on local entanglement, failed as entanglement spread to ever-larger systems. That left MPS as the classical representative as more complex geometries were run for longer times.

By identifying where MPS started to fail, the researchers could estimate the amount of classical hardware that would be needed to allow the algorithm to keep pace with the Advantage 2 hardware on the most complex systems. And, well, it’s not going to be realistic any time soon. “On the largest problems, MPS would take millions of years on the Frontier supercomputer per input to match [quantum hardware] quality,” they conclude. “Memory requirements would exceed its 700PB storage, and electricity requirements would exceed annual global consumption.” By contrast, it took a few minutes on D-Wave’s hardware.

Again, in the paper, the researchers acknowledge that this may lead to another round of optimizations that bring classical algorithms back into competition. And apparently those started as soon as a draft of this paper was placed on the arXiv. At a press conference happening as this report was being prepared, one of D-Wave’s scientists, Andrew King, noted that two preprints had already appeared on the arXiv describing improvements to classical algorithms.

While these allow classical simulations to reproduce more of the results demonstrated in the new paper, they don’t involve simulating the most complicated geometries, and they require shorter times and fewer total qubits. Nature talked to one of the people behind these algorithm improvements, who was optimistic that they could eventually replicate all of D-Wave’s results using non-quantum algorithms. D-Wave, obviously, is skeptical. And King said that a new, larger Advantage 2 test chip with over 4,000 qubits available had recently been calibrated, and he had already tested even larger versions of these same Ising models on it—ones that would be considerably harder for classical methods to catch up to.

In any case, the company is acting like things are settled. During the press conference describing the new results, people frequently referred to D-Wave having achieved quantum supremacy, and its CEO, Alan Baratz, in responding to skepticism sparked by the two draft manuscripts, said, “Our work should be celebrated as a significant milestone.”

Science, 2025. DOI: 10.1126/science.ado6285  (About DOIs).




Amazon uses quantum “cat states” with error correction


The company shows off a mix of error-resistant hardware and error correction.

Following up on Microsoft’s announcement of a qubit based on completely new physics, Amazon is publishing a paper describing a very different take on quantum computing hardware. The system mixes two different types of qubit hardware to improve the stability of the quantum information they hold. The idea is that one type of qubit is resistant to errors, while the second can be used for implementing an error-correction code that catches the problems that do happen.

While there have been more effective demonstrations of error correction in the past, a number of companies are betting that Amazon’s general approach is the best route to getting logical qubits that are capable of complex algorithms. So, in that sense, it’s an important proof of principle.

Herding cats

The basic idea behind Amazon’s approach is to use one type of qubit to hold data and a second to enable error correction. The data qubit is extremely resistant to one type of error, but prone to a second. Those errors are where the second type of qubit comes in; it’s used to run an error-correction code that’s effective at picking up the problems the data qubits are prone to. Combined, the two are hoped to allow error correction to be handled by far fewer hardware qubits.

In a standard computer, there’s really only one type of error to worry about: a bit that no longer holds the value it was set to. This is called a bit flip, since the value goes from either zero to one, or one to zero. As with most things in quantum computing, matters are considerably more complicated with qubits. Since they don’t hold binary values, but rather probabilities, you can’t just flip the value of the qubit. Instead, bit flips in quantum land involve inverting the probabilities—going from 60:40 to 40:60 or similar.

But bit flips aren’t the only problems that can occur. Qubits can also suffer from what are called phase flip errors. These have no equivalent in classical computers, but they can also keep quantum computers from operating as expected.
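
Both error types are easy to see on a single simulated qubit. Writing the state as the amplitude vector [a, b], a bit flip swaps the amplitudes (inverting the probabilities, as described above), while a phase flip leaves the probabilities alone and negates the relative sign, the part with no classical counterpart:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])    # bit flip
Z = np.array([[1, 0], [0, -1]])   # phase flip

state = np.array([np.sqrt(0.6), np.sqrt(0.4)])   # a 60:40 qubit
print(X @ state)   # amplitudes swapped: now a 40:60 qubit
print(Z @ state)   # same 60:40 probabilities, but |1>'s sign is flipped
```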

In the past, Amazon demonstrated qubits that made it trivially easy to detect when a bit flip error occurred. For the new work, they moved on to something different: a qubit that greatly reduces the probability of bit flip errors.

They do this by using what are called “cat qubits,” after the famed Schrödinger’s cat, which existed in two states at once. While most qubits are based on a single quantum object being placed in this sort of superposition of states, a cat qubit has a collection of objects in a single superposition. (Put differently, the superposition state is distributed across the collection of objects.) In the case of the cat qubits demonstrated so far by companies like Alice & Bob, the objects are photons, which are all held in a single resonator, and Amazon is using similar tech.

Cat qubits have a distinctive feature compared to other options: bit flips are improbable, and get even less probable as you pump more photons into the resonator. But this has a drawback: more photons mean that phase flips become more probable.

Flipping cats

Those phase flips are why a second set of qubits, called transmons, were brought in. (Transmons are a commonly used type of qubit based on a loop of superconducting wire linked to a microwave resonator, used by companies like IBM and Google.) These were used to create a chain of qubits, alternating between cat and transmon. This allowed the team to create a logical, error-corrected qubit using a simple error-correction code called a repetition code.


The layout of Amazon’s hardware. Data-holding cat qubits (blue) alternate with transmons (orange), which can be measured to detect errors. Credit: Putterman et al.

Here, each of the cat qubits starts off in the same state and is entangled with its neighboring transmons. This allowed the transmons to track what was going on in the cat qubits by performing what are called weak measurements. These don’t destroy the quantum state like a full measurement would but can allow the detection of changes in the neighboring cat qubits and provide the information needed to fix any errors.

So, the combination of the two means that almost all the errors that occur are phase flips, and the phase flips are detected and fixed.

In more typical error-correction schemes, you need enough qubits around to do measurements to identify both the location of an error and the nature of the error (phase or bit flip). Here, Amazon is assuming all errors are phase flips, and its team can identify the location of the flip based on which of the transmons detects an error, as shown by the red flags in the diagram above. It allows for a logical qubit that uses far fewer hardware qubits and measurements to get a given level of error correction.

The challenge of any error-correction setup is that each hardware qubit involved is error-prone. Adding too many into the error-correction system will mean that multiple errors are likely to occur simultaneously in a way that causes error correction to become impossible. Once the error rate of the hardware qubits gets low enough, however, adding additional qubits will bring the error rate down.

So, the key measurement done here compares a chain that has three cat qubits and two transmons to one that has five cat qubits and four transmons. These measurements showed that the longer chain had a lower error rate than the shorter one. This shows that the hardware is now at a state where error correction provides a benefit.
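
That's the textbook signature of being below the error-correction threshold. A common rule of thumb has the logical error rate scale roughly as (p/p_th)^((d+1)/2) for a distance-d code, where p is the hardware error rate and p_th the threshold. The numbers below are illustrative, not Amazon's measured values, but they show why lengthening the chain helps only once p drops below p_th:

```python
# Rule-of-thumb scaling of logical error with code distance d:
# roughly (p / p_th) ** ((d + 1) / 2). Illustrative numbers only.
P_TH = 1e-2   # assumed threshold, not a measured value

def logical_scale(p, d):
    return (p / P_TH) ** ((d + 1) / 2)

for p in (0.5e-2, 2e-2):              # one rate below threshold, one above
    s3, s5 = logical_scale(p, 3), logical_scale(p, 5)
    verdict = "longer chain wins" if s5 < s3 else "longer chain loses"
    print(f"p = {p:.3f}: d=3 -> {s3:.3f}, d=5 -> {s5:.3f} ({verdict})")
# (above threshold the figures exceed 1 -- read them as trends, not probabilities)
```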

The characterization of the system indicated a couple of major limits, though. Cat qubits make bit flips extremely unlikely, but not impossible. By focusing error correction only on phase flips, any bit flips that do occur inescapably trigger the failure of the entire logical, error-corrected qubit. “Achieving long logical bit-flip times is challenging because any single cat qubit bit flip event in any part of the repetition code directly causes a logical bit flip error,” the authors note. The other issue is that the transmons used for error correction still suffer from both bit and phase flips, which can also mess up the entire error-corrected qubit.

Where does this leave us?

There are a number of companies like Amazon that are betting that using a somehow less error-prone hardware qubit will allow them to get effective error correction using fewer total hardware qubits. If they’re correct, they’ll be able to build error-corrected quantum computers using far fewer qubits, and so potentially perform useful computation sooner. For them, this paper is an important validation of the idea. You can do a sort of mixed-mode error correction, with a robust hardware qubit paired with a compact error-correction code.

But beyond that, the messages are pretty mixed. The hardware still had to rely on less robust hardware qubits (the transmons) to do error correction, and the very low error rate was still not low enough to avoid having occasional bit flips. And, ultimately, the error rate improvements gained by increasing the size of the logical qubit aren’t on a trajectory that would get you a useful level of error correction without needing an unrealistically large number of hardware qubits.

In short, the underlying hardware isn’t currently good enough to enable any sort of complex calculation, and it would need radical improvements before it can be. And there’s not an obvious alternate route to effective error correction. The potential of this approach is still there, but it’s not obvious how we’re going to build hardware that lives up to that potential.

As for Amazon, the picture is even less clear, given that this is the second qubit technology that it has talked about publicly. It’s unclear whether the company is going to go all-in on this approach, or is still looking for a technology that it’s willing to commit to.

Nature, 2025. DOI: 10.1038/s41586-025-08642-7  (About DOIs).




Microsoft demonstrates working qubits based on exotic physics

Microsoft’s first entry into quantum hardware comes in the form of Majorana 1, a processor with eight of these qubits.

Given that some of its competitors have hardware that supports over 1,000 qubits, why does the company feel it can still be competitive? Nayak described three key features of the hardware that he feels will eventually give Microsoft an advantage.

The first has to do with the fundamental physics that governs the energy needed to break apart one of the Cooper pairs in the topological superconductor, which could destroy the information held in the qubit. There are a number of ways to potentially increase this energy, from lowering the temperature to making the indium arsenide wire longer. As things currently stand, Nayak said that small changes in any of these can lead to a large boost in the energy gap, making it relatively easy to boost the system’s stability.

Another key feature, he argued, is that the hardware is relatively small. He estimated that it should be possible to place a million qubits on a single chip. “Even if you put in margin for control structures and wiring and fan out, it’s still a few centimeters by a few centimeters,” Nayak said. “That was one of the guiding principles of our qubits.” So unlike some other technologies, the topological qubits won’t require anyone to figure out how to link separate processors into a single quantum system.

Finally, all the measurements that control the system run through the quantum dot, and controlling that is relatively simple. “Our qubits are voltage-controlled,” Nayak told Ars. “What we’re doing is just turning on and off coupling of quantum dots to qubits to topological nano wires. That’s a digital signal that we’re sending, and we can generate those digital signals with a cryogenic controller. So we actually put classical control down in the cold.”



Quantum teleportation used to distribute a calculation

The researchers showed that this setup allowed them to teleport a specific gate operation (controlled-Z), which can serve as the basis for any other two-qubit gate operation—any operation you might want to do can be built from a specific combination of these gates. After performing multiple rounds of these gates, the team found that the typical fidelity was in the area of 70 percent. But they also found that errors typically had nothing to do with the teleportation process and were instead the product of local operations at one of the two ends of the network. They suspect that using commercial hardware, which has far lower error rates, would improve things dramatically.

Finally, they performed a version of Grover’s algorithm, which can identify a single item from an unordered list far faster than a classical search. The size of the list is set by the number of available qubits; in this case, having only two qubits, the list maxed out at four items—small enough that a single query suffices. Still, it worked, again with a fidelity of about 70 percent.
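
That four-item case is small enough to simulate classically in a few lines. With N = 4, a single oracle query plus one "inversion about the mean" lands on the marked item with certainty; the 70 percent figure in the experiment reflects hardware noise, which this idealized simulation lacks:

```python
import numpy as np

# Ideal two-qubit Grover search: one query, one diffusion step, done.
N, marked = 4, 2
oracle = np.eye(N)
oracle[marked, marked] = -1                   # query: flip the target's sign

s = np.full(N, 0.5)                           # uniform superposition over 4 items
diffusion = 2 * np.outer(s, s) - np.eye(N)    # inversion about the mean

final = diffusion @ (oracle @ s)
print(np.round(final ** 2, 3))                # -> [0. 0. 1. 0.]: item 2, with certainty
```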

While the work was done with trapped ions, almost every type of qubit in development can be controlled with photons, so the general approach is hardware-agnostic. And, given the sophistication of our optical hardware, it should be possible to link multiple chips at various distances, all using hardware that doesn’t require the best vacuum or the lowest temperatures we can generate.

That said, the error rate of the teleportation steps may still be a problem, even if it was lower than the basic hardware rate in these experiments. The fidelity there was 97 percent, meaning an error rate of roughly 3 percent: worse than the native gate operations of most qubits, and high enough that we couldn’t execute too many of these gates before the probability of errors gets unacceptably high.

Still, our current hardware error rates started out far worse than they are today; successive rounds of improvements between generations of hardware have been the rule. Given that this is the first demonstration of teleported gates, we may have to wait before we can see if the error rates there follow a similar path downward.

Nature, 2025. DOI: 10.1038/s41586-024-08404-x  (About DOIs).



Researchers optimize simulations of molecules on quantum computers

The net result is a much faster operation involving far fewer gates. That’s important because errors in quantum hardware increase as a function of both time and the number of operations.

The researchers then used this approach to explore a chemical, Mn₄O₅Ca, that plays a key role in photosynthesis. They showed it’s possible to calculate what’s called the “spin ladder,” or the list of the lowest-energy states the electrons can occupy. The energy differences between these states correspond to the wavelengths of light they can absorb or emit, so this also defines the spectrum of the molecule.
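
The link between those energy differences and a spectrum is the Planck relation, λ = hc/ΔE. For instance (the 1 eV gap below is an arbitrary example, not a value from the paper):

```python
# Wavelength corresponding to an energy gap between two spin-ladder
# states, via lambda = h * c / dE. The gap value is a made-up example.
H = 6.62607015e-34      # Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s
EV = 1.602176634e-19    # joules per electron volt

dE = 1.0 * EV           # hypothetical 1 eV gap
wavelength_nm = H * C / dE * 1e9
print(f"{wavelength_nm:.0f} nm")   # ~1240 nm, in the near-infrared
```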

Faster, but not quite fast enough

We’re not quite ready to run this system on today’s quantum computers, as the error rates are still a bit too high. But because the operations needed to run this sort of algorithm can be done so efficiently, the error rates don’t have to come down very much before the system will become viable. The primary determinant of whether it will run into an error is how far down the time dimension you run the simulation, plus the number of measurements of the system you take over that time.

“The algorithm is especially promising for near-term devices having favorable resource requirements quantified by the number of snapshots (sample complexity) and maximum evolution time (coherence) required for accurate spectral computation,” the researchers wrote.

But the work also makes a couple of larger points. The first is that quantum computers are fundamentally unlike other forms of computation we’ve developed. They’re capable of running things that look like traditional algorithms, where operations are performed and a result is determined. But they’re also quantum systems that are growing in complexity with each new generation of hardware, which makes them great at simulating other quantum systems. And there are a number of hard problems involving quantum systems we’d like to solve.

In some ways, we may only be starting to scratch the surface of quantum computers’ potential. Up until quite recently, there were a lot of hypotheticals; it now appears we’re on the cusp of using one for some potentially useful computations. And that means more people will start thinking about clever ways we can solve problems with them—including cases like this, where the hardware would be used in ways its designers might not have even considered.

Nature Physics, 2025. DOI: 10.1038/s41567-024-02738-z  (About DOIs).
