Author name: Tim Belzer


Prepare to bid farewell to The Sandman with S2 trailer

There are some design changes from S1. “Design-wise, Dream has a new palace which symbolizes his intense desire to move on from the events of S1,” said Heinberg. “Which means his throne room has had a remodel. As has the outer lobby. We also explore a number of entirely new time periods, worlds, and realms. And all the designs—the sets, the costumes, the props, the VFX—have their roots in the comics.”

Naturally, Sturridge is returning as Morpheus/Dream, as are Kirby as Death; Mason Alexander Park as Desire; Donna Preston as Despair; and Christie as Lucifer. Judging by the trailer, we’ll also see the return of Stephen Fry’s Fiddler’s Green, Holbrook’s Corinthian, Mervyn Pumpkinhead (voiced by Mark Hamill), and The Fates (Dinita Gohil, Nina Wadia, and Souad Faress played Maiden, Mother, and Crone, respectively, in S1).

New cast members include Adrian Lester as Destiny, Esmé Creed-Miles as Delirium, and Barry Sloane as Destruction, aka The Prodigal, as well as Ruairi O’Connor as Orpheus, Freddie Fox as Loki, Clive Russell as Odin, Laurence O’Fuarain as Thor, Ann Skelly as Nuala, Douglas Booth as Cluracan, Jack Gleeson as Puck, Indya Moore as Wanda, and Steve Coogan as the voice of Barnabas, canine companion to Destruction.

The second season of The Sandman will premiere on Netflix in two parts. Volume 1 drops on July 13, 2025; Volume 2 starts streaming on July 25, 2025. The bonus episode, “Death: The High Cost of Living,” will air on July 31, 2025.


Everything we know about the 2026 Nissan Leaf

The first-generation Nissan Leaf was an incredible achievement for the company and for the industry: a mass-market EV that wasn’t priced out of reach, at a moment when the industry badly needed one.

Since then, things have stagnated. To say that the 2026 Leaf is the most important EV launch for Nissan since the original car would be an understatement. Nissan must get it right, because the competition is too good not to.

Starting things off, the car is available with two battery options: a 52 kWh base pack and a 75 kWh longer-range option. Each has an active thermal management system—a first for the Leaf—to address DC fast-charging concerns. Those batteries also deliver more range, with up to 303 miles (488 km) on the S+ model.

The 2026 Leaf is 3 inches (76 mm) shorter than the current hatchback, although the wheelbase is only 0.4 inches (10 mm) shorter. Credit: Nissan

The 52 kWh version makes 174 hp (130 kW), while the motor paired with the 75 kWh pack generates 215 hp (160 kW).

The Leaf adopts Nissan’s new 3-in-1 EV powertrain, which integrates the motor, inverter, and reducer. This shrinks the powertrain package by 10 percent, and Nissan claims it improves responsiveness and refinement.

Native NACS

Instead of a slow and clunky CHAdeMO connector, the Leaf rocks a Tesla-style NACS port for DC fast charging. Interestingly, the car also has an SAE J1772 connector for AC charging. The driver’s side fender has the J1772 plug, while the passenger side fender has the NACS port.

Confusingly, the NACS connector is only for DC fast charging. If you’re going to Level 2 charge, you must use the J1772 plug or a NACS connector with an adapter. It’s weird, but the car will make it obvious to owners if they plug into the wrong port.
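
That port-and-adapter logic is easier to see laid out as a decision table. Here is a minimal Python sketch based purely on the description above; the function name and return strings are illustrative, not anything from Nissan.

```python
def leaf_charge_port(charge_type, connector):
    """Toy decision table for the 2026 Leaf's two ports, as described above."""
    if charge_type == "dc_fast":
        # DC fast charging only works through the NACS port (passenger side).
        return "Use the NACS port on the passenger-side fender."
    if charge_type == "ac_level2":
        if connector == "j1772":
            # Native AC charging goes through the J1772 port (driver side).
            return "Use the J1772 port on the driver-side fender."
        if connector == "nacs":
            # A NACS Level 2 cable needs an adapter into the J1772 port.
            return "Use a NACS-to-J1772 adapter on the driver-side port."
    return "Unsupported combination."


print(leaf_charge_port("ac_level2", "nacs"))
```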

When connected to a DC fast charger that can deliver 150 kW, both battery sizes will charge from 10 to 80 percent in 35 minutes. While not class-leading, that wipes the floor with the old model. The Leaf also supports a higher peak charging rate than its bigger sibling, the Ariya.
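
For a rough sense of what those numbers imply, here is a back-of-the-envelope estimate of the average charging power. It assumes usable capacity equals the quoted pack sizes and ignores the taper of a real charging curve, so treat these as approximations rather than Nissan figures.

```python
# Rough average-power estimate for a 10-80% session in 35 minutes.
# Assumes usable capacity equals the quoted pack size and a flat charge
# curve; real charging tapers off as the pack fills.
packs_kwh = {"52 kWh pack": 52, "75 kWh pack": 75}
session_hours = 35 / 60

for name, capacity in packs_kwh.items():
    energy_added = capacity * 0.70            # 10% -> 80% state of charge
    avg_power = energy_added / session_hours  # in kW
    print(f"{name}: ~{energy_added:.1f} kWh added, ~{avg_power:.0f} kW average")
```

Both averages land comfortably under the 150 kW peak, which is consistent with the quoted session time.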


Paramount drops trailer for The Naked Gun reboot

Liam Neeson stars as Lt. Frank Drebin Jr. in The Naked Gun.

Thirty years after the last film in The Naked Gun crime-spoof comedy franchise, we’re finally getting a new installment, The Naked Gun, described as a “legacy sequel.” And it’s Liam Neeson stepping into Leslie Nielsen’s fumbling shoes, playing that character’s son. Judging by the official trailer, Neeson is up to the task, showcasing his screwball comedy chops.

(Some spoilers for the first three films in the franchise below.)

The original Naked Gun: From the Files of Police Squad! debuted in 1988, with Leslie Nielsen starring as Detective Frank Drebin, trying to foil an assassination attempt on Queen Elizabeth II during her visit to the US. It proved successful enough to launch two sequels. Naked Gun 2-1/2: The Smell of Fear (1991) found Drebin battling an evil plan to kidnap a prominent nuclear scientist. Naked Gun 33-1/3: The Final Insult (1994) found Drebin coming out of retirement and going undercover to take down a crime syndicate planning to blow up the Academy Awards.

The franchise rather lost steam after that, but by 2013, Paramount was planning a reboot starring Ed Helms as “Frank Drebin, no relation.” David Zucker, who produced the prior Naked Gun films and directed the first two, declined to be involved, feeling it could only be “inferior” to his originals. He was briefly involved in 2017 rewrites that recast Frank’s son as a secret agent rather than a policeman, but that film never materialized either. The project was revived again in 2021 by Seth MacFarlane (without Zucker’s involvement), and Neeson was cast as Frank Drebin Jr.—a police lieutenant in this incarnation.

In addition to Neeson, the film stars Paul Walter Hauser as Captain Ed Hocken, Jr.—Hauser will also appear as Mole Man in the forthcoming Fantastic Four: First Steps—and Pamela Anderson as a sultry femme fatale named Beth. The cast also includes Kevin Durand, Danny Huston, Liza Koshy, Cody Rhodes, CCH Pounder, Busta Rhymes, and Eddy Yu.


The “online monkey torture video” arrests just keep coming

So the group tried again. “Million tears” had been booted by its host, but the group reconstituted on another platform and renamed itself “the trail of trillion tears.” They reached out to another Indonesian videographer and asked for a more graphic version of the same video. But this version, more sadistic than the last, still didn’t satisfy. As one of the Americans allegedly said to another, “honey that’s not what you asked for. Thats the village idiot version. But I’m talking with someone about getting a good vo [videographer] to do it.”

Arrests continue

In 2021, someone leaked communications from the “million tears” group to animal rights organizations like Lady Freethinker and Action for Primates, which handed them over to authorities. Still, it took several years to arrest and prosecute the torture group’s leaders.

In 2024, one of these leaders—Ronald Bedra of Ohio—pled guilty to commissioning the videos and to mailing “a thumb drive containing 64 videos of monkey torture to a co-conspirator in Wisconsin.” His mother, in a sentencing letter to the judge, said that her son must “have been undergoing some mental crisis when he decided to create the website.” As a boy, he had loved all of the family pets, she said, even providing a funeral for a fish.

Bedra was sentenced late last year to 54 months in prison. According to letters from family members, he has also lost his job, his wife, and his kids.

In April 2025, two more alleged co-conspirators were indicted and subsequently arrested; their cases were unsealed only this week. Two other co-conspirators from this group still appear to be uncharged.

In May 2025, 11 other Americans were indicted for their participation in monkey torture groups, though they appear to come from a different network. This group allegedly “paid a minor in Indonesia to commit the requested acts on camera.”

As for the Indonesian side of this equation, arrests have been happening there, too. Following complaints from animal rights groups, police in Indonesia have arrested multiple videographers over the last two years.

Update: Showing the international nature of these torture groups, the Scottish Sun just ran a rather lurid piece about the “sadistic Scots mum jailed for helping to run a horrific global monkey torture network.” The 39-year-old was apparently caught after US authorities broke up another monkey torturing network based around outsourcing the torture to Indonesia.


These VA Tech scientists are building a better fog harp

Unlike standard fog harvesting technologies, “We’re trying to use clever geometric designs in place of chemistry,” Boreyko told Ars. “When I first came into this field, virtually everyone was using nets, but they were just trying to make more and more clever chemical coatings to put on the nets to try to reduce the clogging. We found that simply going from a net to a harp, with no chemicals or coatings whatsoever—just the change in geometry solved the clogging problem much better.”

Jimmy Kaindu inspects a new collecting prototype beside the original fog harp. Credit: Alex Parrish for Virginia Tech

For their scale prototypes in the lab, Boreyko’s team 3D printed their harp “strings” out of a weakly hydrophobic plastic. “But in general, the harp works fantastic with uncoated stainless steel wires and definitely doesn’t require any kind of fancy coating,” said Boreyko. And the hybrid harp can be scaled up with relative ease, just like classic nets. It just means stringing together a bunch of harps of smaller heights, meter by meter, to get the desired size. “There is no limit to how big this thing could be,” he said.

Scaling up the model is the next obvious step, along with testing larger prototypes outdoors. Boreyko would also like to test an electric version of the hybrid fog harp. “If you apply a voltage, it turns out you can catch even more water,” he said. “Because our hybrid’s non-clogging, you can have the best of both worlds: using an electric field to boost the harvesting amount in real-life systems and at the same time preventing clogging.”

While the hybrid fog harp is well-suited for harvesting water in any coastal region that receives a lot of fog, Boreyko also envisions other, less obvious potential applications for high-efficiency fog harvesters, such as roadways, highways, or airport landing strips that are prone to fog that can pose safety hazards. “There’s even industrial chemical supply manufacturers creating things like pressurized nitrogen gas,” he said. “The process cools the surrounding air into an ice fog that can drift across the street and wreak havoc on city blocks.”

Journal of Materials Chemistry A, 2025. DOI: 10.1039/d5ta02686e


Another one for the graveyard: Google to kill Instant Apps in December

But that was then, and this is now. Today, an increasing number of mobile apps are functionally identical to the mobile websites they are intended to replace, and developer uptake of Instant Apps was minimal. Even in 2017, loading an app instead of a website had limited utility. As a result, most of us probably only encountered Instant Apps a handful of times in all the years it was an option for developers.

To use the feature, which was delivered to virtually all Android devices by Google Play Services, developers had to create a special “instant” version of their app that was under 15MB. The additional legwork to get an app in front of a subset of new users meant this was always going to be a steep climb, and Google has long struggled to incentivize developers to adopt new features. Plus, there’s no way to cram in generative AI! So it’s not a shock to see Google retiring the feature.

This feature is currently listed in the collection of Google services in your phone settings as “Google Play Instant.” Unfortunately, there aren’t many examples still available if you’re curious about what Instant Apps were like—the Finnish publisher Ilta-Sanomat is one of the few still offering it. Make sure the settings toggle for Instant Apps is on if you want a little dose of nostalgia.


Google left months-old dark mode bug in Android 16, fix planned for next Pixel Drop

Google’s Pixel phones got a big update this week with the release of Android 16 and a batch of Pixel Drop features. Pixels now have enhanced security, new contact features, and improved button navigation. However, some of the most interesting features, like desktop windowing and Material 3 Expressive, are coming later. Another thing that’s coming later, it seems, is a fix for an annoying bug Google introduced a few months back.

Google broke the system dark mode schedule in its March Pixel update and did not address it in time for Android 16. The company confirms a fix is coming, though.

The system-level dark theme arrived in Android 10 to offer a less eye-searing option, which is particularly handy in dark environments. It took a while for even Google’s apps to fully adopt this feature, but support is solid five years later. Google even offers a scheduling feature to switch between light and dark mode at custom times or based on sunrise/sunset. However, the scheduling feature was busted in the March update.

Currently, if you manually toggle dark mode on or off, schedules stop working. The only way to get them back is to set up your schedule again and then never toggle dark mode. Google initially marked this as “intended behavior,” but a more recent bug report was accepted as a valid issue.


Why is digital sovereignty important right now?

Sovereignty has mattered since the invention of the nation state—defined by borders, laws, and taxes that apply within and without. While many have tried to define it, the core idea remains: nations or jurisdictions seek to stay in control, usually to the benefit of those within their borders.

Digital sovereignty is a relatively new concept, also difficult to define but straightforward to understand. Data and applications don’t understand borders unless they are specified in policy terms, as coded into the infrastructure.

The World Wide Web had no such restrictions at its inception. Communitarian groups such as the Electronic Frontier Foundation, service providers and hyperscalers, non-profits and businesses all embraced a model that suggested data would look after itself.

But data won’t look after itself, for several reasons. First, data is massively out of control. We generate more of it all the time, and for at least two or three decades (according to historical surveys I’ve run), most organizations haven’t fully understood their data assets. This creates inefficiency and risk—not least, widespread vulnerability to cyberattack.

Risk is probability times impact—and right now, the probabilities have shot up. Invasions, tariffs, political tensions, and more have brought new urgency. This time last year, the idea of switching off another country’s IT systems was not on the radar. Now we’re seeing it happen—including the U.S. government blocking access to services overseas.

Digital sovereignty isn’t just a European concern, though it is often framed as such. In South America for example, I am told that sovereignty is leading conversations with hyperscalers; in African countries, it is being stipulated in supplier agreements. Many jurisdictions are watching, assessing, and reviewing their stance on digital sovereignty.

As the adage goes: a crisis is a problem with no time left to solve it. Digital sovereignty was a problem in waiting—but now it’s urgent. It has gone from being an abstract ‘right to sovereignty’ to a clear and present issue in government thinking, corporate risk, and how we architect and operate our computer systems.

What does the digital sovereignty landscape look like today?

Much has changed since this time last year. Unknowns remain, but much of what was unclear then is now starting to solidify. Terminology is clearer; for example, we now talk about classification and localization rather than generic concepts.

We’re seeing a shift from theory to practice. Governments and organizations are putting policies in place that simply didn’t exist before. For example, some countries are seeing “in-country” as a primary goal, whereas others (the UK included) are adopting a risk-based approach based on trusted locales.

We’re also seeing a shift in risk priorities. From a risk standpoint, the classic triad of confidentiality, integrity, and availability is at the heart of the digital sovereignty conversation. Historically, the focus has been much more on confidentiality, driven by concerns about the US CLOUD Act: essentially, can foreign governments see my data?

This year, however, availability is rising in prominence due to geopolitics and very real concerns about data accessibility in third countries. Integrity is being talked about less from a sovereignty perspective, but it is no less important as a cybercrime target—ransomware and fraud being two clear and present risks.

Thinking more broadly, digital sovereignty is not just about data, or even intellectual property, but also the brain drain. Countries don’t want all their brightest young technologists leaving university only to end up in California or some other, more attractive country. They want to keep talent at home and innovate locally, to the benefit of their own GDP.

How are cloud providers responding?

Hyperscalers are playing catch-up, still looking for ways to satisfy the letter of the law whilst ignoring (in the French sense) its spirit. It’s not enough for Microsoft or AWS to say they will do everything they can to protect a jurisdiction’s data, if they are already legally obliged to do the opposite. Legislation, in this case US legislation, calls the shots—and we all know just how fragile this is right now.

We see hyperscaler progress where they offer technology to be locally managed by a third party, rather than by themselves. For example, Google’s partnership with Thales, or Microsoft’s with Orange, both in France (Microsoft has a similar arrangement in Germany). However, these are point solutions, not part of a general standard. Meanwhile, AWS’ recent announcement about creating a local entity doesn’t solve the problem of US overreach, which remains a core issue.

Non-hyperscaler providers and software vendors have an increasingly significant role to play: Oracle and HPE, for example, offer solutions that can be deployed and managed locally, while Broadcom/VMware and Red Hat provide technologies that locally situated private cloud providers can host. Digital sovereignty is thus a catalyst for a redistribution of “cloud spend” across a broader pool of players.

What can enterprise organizations do about it?

First, see digital sovereignty as a core element of data and application strategy. For a nation, sovereignty means having solid borders, control over IP, GDP, and so on. That’s the goal for corporations as well—control, self-determination, and resilience.

If sovereignty isn’t seen as an element of strategy, it gets pushed down into the implementation layer, leading to inefficient architectures and duplicated effort. Far better to decide up front what data, applications, and processes need to be treated as sovereign, and to define an architecture to support that.

This sets the scene for making informed provisioning decisions. Your organization may have made some big bets on key vendors or hyperscalers, but multi-platform thinking increasingly dominates: multiple public and private cloud providers, with integrated operations and management. Sovereign cloud becomes one element of a well-structured multi-platform architecture.

It is not cost-neutral to deliver on sovereignty, but the overall business value should be tangible. A sovereignty initiative should bring clear advantages, not just for itself, but through the benefits that come with better control, visibility, and efficiency.

Knowing where your data is, understanding which data matters, managing it efficiently so you’re not duplicating or fragmenting it across systems—these are valuable outcomes. In addition, ignoring these questions can lead to non-compliance or be outright illegal. Even if we don’t use terms like ‘sovereignty’, organizations need a handle on their information estate.

Organizations shouldn’t assume that everything cloud-based needs to be sovereign; they should build strategies and policies based on data classification, prioritization, and risk. Build that picture and you can solve for the highest-priority items first—the data with the strongest classification and greatest risk. That process alone takes care of 80–90% of the problem space, rather than turning sovereignty into yet another problem that solves nothing.
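
To make that concrete, here is a minimal sketch of such a first-pass prioritization in Python. The asset names, classification levels, and weights are invented for illustration; the only ideas taken from the text are scoring risk as probability times impact and working the list from the strongest classification and greatest risk downward.

```python
# Toy prioritization pass: score each data asset by classification and risk,
# then work the list from the top. All names and numbers are illustrative.
assets = [
    {"name": "customer PII store",   "classification": 3, "probability": 0.4, "impact": 9},
    {"name": "design IP repository", "classification": 3, "probability": 0.2, "impact": 8},
    {"name": "marketing analytics",  "classification": 1, "probability": 0.5, "impact": 3},
    {"name": "internal wiki",        "classification": 1, "probability": 0.3, "impact": 2},
]

for asset in assets:
    asset["risk"] = asset["probability"] * asset["impact"]       # risk = probability x impact
    asset["priority"] = asset["classification"] * asset["risk"]  # weight risk by classification

for asset in sorted(assets, key=lambda a: a["priority"], reverse=True):
    print(f'{asset["name"]:<22} risk={asset["risk"]:.1f} priority={asset["priority"]:.1f}')
```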

Where to start? Look after your own organization first

Sovereignty and systems thinking go hand in hand: it’s all about scope. In enterprise architecture or business design, the biggest mistake is boiling the ocean—trying to solve everything at once.

Instead, focus on your own sovereignty. Worry about your own organization, your own jurisdiction. Know where your own borders are. Understand who your customers are, and what their requirements are. For example, if you’re a manufacturer selling into specific countries—what do those countries require? Solve for that, not for everything else. Don’t try to plan for every possible future scenario.

Focus on what you have, what you’re responsible for, and what you need to address right now. Classify and prioritize your data assets based on real-world risk. Do that, and you’re already more than halfway toward solving digital sovereignty—with all the efficiency, control, and compliance benefits that come with it.

Digital sovereignty isn’t just regulatory, but strategic. Organizations that act now can reduce risk, improve operational clarity, and prepare for a future based on trust, compliance, and resilience.



After a series of tumors, woman’s odd-looking tongue explains everything

Breast cancer. Colon cancer. An enlarged thyroid gland. A family history of tumors and cancers as well. It wasn’t until the woman developed an annoying case of dry mouth that doctors put it all together. By then, she was in her 60s.

According to a new case study in JAMA Dermatology, the woman presented to a dermatology clinic in Spain after three months of oral unpleasantness. The doctors noted the cancers in her medical history, and when she opened wide, they immediately saw the problem: Her tongue was covered in little wart-like bumps that resembled a slippery, flesh-colored cobblestone path.

Such a cobblestone tongue is a telltale sign of a rare genetic condition called Cowden syndrome. It’s caused by inherited mutations that break a protein, called PTEN, leading to tumors and cancers.

PTEN, which stands for phosphatase and tensin homolog, generally helps keep cells from growing out of control. Specifically, PTEN deactivates a signaling lipid called PIP3 (phosphatidylinositol 3,4,5-trisphosphate), and that deactivation blocks a signaling pathway (the PI3K/AKT/mTOR pathway) involved in regulating cell growth, survival, and migration. When PTEN is broken, PIP3 activity ramps up, and tumors can grow unchecked.

Not-so-rare mutation

In Cowden syndrome, PTEN mutations lead to noncancerous tumors or masses called hamartomas, which can occur in any organ. But people with the syndrome are also at high risk of developing a slew of cancerous growths—most commonly cancers of the breast, thyroid, and uterus—over their lifetime. That’s why people diagnosed with the condition are advised to undergo intensive cancer screenings, including annual ultrasounds of the thyroid starting at age 7 and annual mammograms and MRIs (magnetic resonance imaging) starting at age 30 at the latest.


5 things in Trump’s budget that won’t make NASA great again

If signed into law as written, the White House’s proposal to slash nearly 25 percent from NASA’s budget would have some dire consequences.

It would cut the agency’s budget from $24.8 billion to $18.8 billion. Adjusted for inflation, this would be the smallest NASA budget since 1961, when the first American launched into space.

The proposed funding plan would halve NASA’s funding for robotic science missions and technology development next year, scale back research on the International Space Station, turn off spacecraft already exploring the Solar System, and cancel NASA’s Space Launch System rocket and Orion spacecraft after two more missions in favor of procuring lower-cost commercial transportation to the Moon and Mars.

The SLS rocket and Orion spacecraft have been targets for proponents of commercial spaceflight for several years. They are single-use, and their costs are exorbitant, with Moon missions on SLS and Orion projected to cost more than $4 billion per flight. That price raises questions about whether these vehicles will ever be able to support a lunar space station or Moon base where astronauts can routinely rotate in and out on long-term expeditions, like researchers do in Antarctica today.

Reusable rockets and spaceships offer a better long-term solution, but they won’t be ready to ferry people to the Moon for a while longer. The Trump administration proposes flying SLS and Orion two more times on NASA’s Artemis II and Artemis III missions, then retiring the vehicles. Artemis II’s rocket is currently being assembled at Kennedy Space Center in Florida for liftoff next year, carrying a crew of four around the far side of the Moon. Artemis III would follow with the first attempt to land humans on the Moon since 1972.

The cuts are far from law

Every part of Trump’s budget proposal for fiscal year 2026 remains tentative. Lawmakers in each house of Congress will write their own budget bills, which must go to the White House for Trump’s signature. A Senate bill released last week includes language that would restore funding for SLS and Orion to support the Artemis IV and Artemis V missions.


IBM is now detailing what its first quantum compute system will look like


Company is moving past focus on qubits, shifting to functional compute units.

A rendering of what IBM expects will be needed to house a Starling quantum computer. Credit: IBM

On Tuesday, IBM released its plans for building a system that should push quantum computing into entirely new territory: a system that can perform useful calculations while catching and fixing errors, and that would be utterly impossible to model using classical computing methods. The hardware, which will be called Starling, is expected to be able to perform 100 million operations without error on a collection of 200 logical qubits. And the company expects to have it available for use in 2029.

Perhaps just as significant, IBM is also committing to a detailed description of the intermediate steps to Starling. These include a number of processors that will be configured to host a collection of error-corrected qubits, essentially forming a functional compute unit. This marks a major transition for the company, as it involves moving away from talking about collections of individual hardware qubits and focusing instead on units of functional computational hardware. If all goes well, it should be possible to build Starling by chaining a sufficient number of these compute units together.

“We’re updating [our roadmap] now with a series of deliverables that are very precise,” IBM VP Jay Gambetta told Ars, “because we feel that we’ve now answered basically all the science questions associated with error correction and it’s becoming more of a path towards an engineering problem.”

New architectures

Error correction on quantum hardware involves entangling a group of qubits in a way that distributes one or more quantum bit values among them and includes additional qubits that can be used to check the state of the system. It can be helpful to think of these as data and measurement qubits. Performing weak quantum measurements on the measurement qubits produces what’s called “syndrome data,” which can be interpreted to determine whether anything about the data qubits has changed (indicating an error) and how to correct it.

There are lots of potential ways to arrange different combinations of data and measurement qubits for this to work, each referred to as a code. But, as a general rule, the more hardware qubits committed to the code, the more robust it will be to errors, and the more logical qubits can be distributed among them.
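
As a much simplified illustration of the bookkeeping (a classical three-bit repetition code that only catches bit flips; real quantum codes are far richer and also protect against phase errors), syndrome extraction and correction look like this:

```python
import numpy as np

# Classical toy analogue of syndrome-based error correction: a 3-bit
# repetition code with two parity checks. Real quantum codes use entangled
# measurement qubits, but the data/check bookkeeping is similar in spirit.
H = np.array([[1, 1, 0],    # check bit 0 against bit 1
              [0, 1, 1]])   # check bit 1 against bit 2

def syndrome(error):
    """Parity-check outcomes (the 'syndrome data') for a given error pattern."""
    return tuple(H @ error % 2)

# Which single-bit flip each syndrome points to (None means no error seen).
correction = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

error = np.array([0, 1, 0])              # a flip on the middle bit
s = syndrome(error)
print("syndrome:", s, "-> flip bit", correction[s])
```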

Some quantum hardware, like that based on trapped ions or neutral atoms, is relatively flexible when it comes to hosting error-correction codes. The hardware qubits can be moved around so that any two can be entangled, so it’s possible to adopt a huge range of configurations, albeit at the cost of the time spent moving atoms around. IBM’s technology is quite different. It relies on qubits made of superconducting electronics laid out on a chip, with entanglement mediated by wiring that runs between qubits. The layout of this wiring is set during the chip’s manufacture, and so the design of the chip commits it to a limited number of potential error-correction codes.

Unfortunately, this wiring can also enable cross-talk between neighboring qubits, causing them to lose their state. To avoid this, existing IBM processors have their qubits wired in what they term a “heavy hex” configuration, named for its hexagonal arrangements of connections among its qubits. This has worked well to keep the error rate of its hardware down, but it also poses a challenge, since IBM has decided to go with an error-correction code that’s incompatible with the heavy hex geometry.

A couple of years back, an IBM team described a compact low-density parity-check (LDPC) error-correction code. This requires a square grid of nearest-neighbor connections among its qubits, as well as wiring to connect qubits that are relatively distant on the chip. To get its chips and error-correction scheme in sync, IBM has made two key advances. The first is in its chip packaging, which now uses several layers of wiring sitting above the hardware qubits to enable all of the connections needed for the LDPC code.

We’ll see that first in a processor called Loon that’s on the company’s developmental roadmap. “We’ve already demonstrated these three things: high connectivity, long-range couplers, and couplers that break the plane [of the chip] and connect to other qubits,” Gambetta said. “We have to combine them all as a single demonstration showing that all these parts of packaging can be done, and that’s what I want to achieve with Loon.” Loon will be made public later this year.


On the left, the simple layout of the connections in a current-generation Heron processor. At right, the complicated web of connections that will be present in Loon. Credit: IBM

The second advance IBM has made is to eliminate the crosstalk that the heavy hex geometry was used to minimize, so heavy hex will be going away. “We are releasing this year a bird for near term experiments that is a square array that has almost zero crosstalk,” Gambetta said, “and that is Nighthawk.” The more densely connected qubits cut the overhead needed to perform calculations by a factor of 15, Gambetta told Ars.

Nighthawk is a 2025 release on a parallel roadmap that you can think of as user-facing. Iterations on its basic design will be released annually through 2028, each enabling more operations without error (going from 5,000 gate operations this year to 15,000 in 2028). Each individual Nighthawk processor will host 120 hardware qubits, but 2026 will see three of them chained together and operating as a unit, providing 360 hardware qubits. That will be followed in 2027 by a machine with nine linked Nighthawk processors, boosting the hardware qubit count to over 1,000.

Riding the bicycle

The real future of IBM’s hardware, however, will be happening over on the developmental line of processors, where talk about hardware qubit counts will become increasingly irrelevant. In a technical document released today, IBM is describing the specific LDPC code it will be using, termed a bivariate bicycle code due to some cylindrical symmetries in its details that vaguely resemble bicycle wheels. The details of the connections matter less than the overall picture of what it takes to use this error code in practice.

IBM describes two implementations of this form of LDPC code. In the first, 144 hardware qubits are arranged so that they play host to 12 logical qubits and all of the measurement qubits needed to perform error checks. The standard measure of a code’s ability to catch and correct errors is called its distance, and in this case, the distance is 12. As an alternative, they also describe a code that uses 288 hardware qubits to host the same 12 logical qubits but boost the distance to 18, meaning it’s more resistant to errors. IBM will make one of these collections of logical qubits available as a Kookaburra processor in 2026, which will use them to enable stable quantum memory.
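
Using only the counts quoted above, the trade-off between the two variants can be tabulated directly; the quick comparison below is a sanity check on those numbers, not anything from IBM’s technical document.

```python
# Compare the two LDPC variants described above using only the quoted counts:
# hardware qubits, logical qubits hosted, and code distance.
codes = [
    {"name": "144-qubit variant", "hardware": 144, "logical": 12, "distance": 12},
    {"name": "288-qubit variant", "hardware": 288, "logical": 12, "distance": 18},
]

for code in codes:
    per_logical = code["hardware"] / code["logical"]  # hardware qubits per logical qubit
    print(f'{code["name"]}: {per_logical:.0f} hardware qubits per logical qubit, '
          f'distance {code["distance"]}')
```

Doubling the hardware commitment buys a 50 percent increase in distance (12 to 18) without adding any logical qubits, which is the robustness-versus-overhead trade described earlier.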

The follow-on will bundle these with a handful of additional qubits that can produce quantum states that are needed for some operations. Those, plus hardware needed for the quantum memory, form a single, functional computation unit, built on a single chip, that is capable of performing all the operations needed to implement any quantum algorithm.

That will appear with the Cockatoo chip, which will also enable multiple processing units to be linked on a single bus, allowing the logical qubit count to grow beyond 12. (The company says that one of the dozen logical qubits in each unit will be used to mediate entanglement with other units and so won’t be available for computation.) That will be followed by the first test versions of Starling, which will allow universal computations on a limited number of logical qubits spread across multiple chips.

Separately, IBM is releasing a document that describes a key component of the system that will run on classical computing hardware. Full error correction requires evaluating the syndrome data derived from the state of all the measurement qubits in order to determine the state of the logical qubits and whether any corrections need to be made. As the complexity of the logical qubits grows, the computational burden of evaluating that data grows with it. If this evaluation can’t be executed in real time, then it becomes impossible to perform error-corrected calculations.

To address this, IBM has developed a message-passing decoder that can perform parallel evaluations of the syndrome data. The system explores more of the solution space through a combination of randomizing the weight given to the memory of past solutions and handing any seemingly non-optimal solutions on to new instances for additional evaluation. The key thing is that IBM estimates this can be run in real time using FPGAs, ensuring that the system works.
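
IBM hasn’t published that decoder as code here, but the general idea of throwing many randomized, parallel decoding attempts at the same syndrome and keeping the best result can be sketched with a toy brute-force stand-in. This is purely conceptual and is not IBM’s message-passing algorithm; the parity-check matrix is made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for parallel randomized decoding: many independent randomized
# guesses try to explain the observed syndrome, and the lowest-weight
# consistent error pattern wins. IBM's real decoder passes messages over the
# code's graph and runs on FPGAs; this only illustrates the idea of exploring
# more of the solution space with randomized, parallel attempts.
H = np.array([[1, 1, 0, 0],          # small, made-up parity-check matrix
              [0, 1, 1, 0],
              [0, 0, 1, 1]])
true_error = np.array([0, 0, 1, 0])  # the error we hope to recover
target = H @ true_error % 2          # observed syndrome

def random_attempt(n_flips):
    """One randomized guess: flip a few random bits, keep it if it matches."""
    guess = np.zeros(H.shape[1], dtype=int)
    guess[rng.choice(H.shape[1], size=n_flips, replace=False)] = 1
    return guess if np.array_equal(H @ guess % 2, target) else None

candidates = [g for n in (1, 2) for _ in range(200)
              if (g := random_attempt(n)) is not None]
best = min(candidates, key=lambda g: g.sum())
print("decoded error pattern:", best)
```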

A quantum architecture

There are a lot more details beyond those, as well. Gambetta described the linkage between each computational unit—IBM is calling it a Universal Bridge—which requires one microwave cable for each code distance of the logical qubits being linked. (In other words, a distance 12 code would need 12 microwave-carrying cables to connect each chip.) He also said that IBM is developing control hardware that can operate inside the refrigeration hardware, based on what they’re calling “cold CMOS,” which is capable of functioning at 4 Kelvin.

The company is also releasing renderings of what it expects Starling to look like: a series of dilution refrigerators, all connected by a single pipe that contains the Universal Bridge. “It’s an architecture now,” Gambetta said. “I have never put details in the roadmap that I didn’t feel we could hit, and now we’re putting a lot more details.”

The striking thing to me about this is that it marks a shift away from a focus on individual qubits, their connectivity, and their error rates. The hardware error rates are now good enough (4 × 10⁻⁴) for this to work, although Gambetta felt that a few more improvements should be expected. And connectivity will now be directed exclusively toward creating a functional computational unit.

That said, there’s still a lot of space beyond Starling on IBM’s roadmap. The 200 logical qubits it promises will be enough to handle some problems, but not enough to perform the complex algorithms needed to do things like break encryption. That will need to wait for something closer to Blue Jay, a 2033 system that IBM expects will have 2,000 logical qubits. And, as of right now, it’s the only thing listed beyond Starling.



Anti-vaccine advocate RFK Jr. fires entire CDC panel of vaccine advisors

“Most likely aim to serve the public interest as they understand it,” he wrote. “The problem is their immersion in a system of industry-aligned incentives and paradigms that enforce a narrow pro-industry orthodoxy.”

Kennedy, who is currently trying to shift the national attention to his idea of clean living and higher-quality foods, has a long history of advocating against vaccines, spreading misinformation and disinformation about the lifesaving shots. However, a clearer explanation of Kennedy’s war on vaccines can be found in his rejection of germ theory. In his 2021 book that vilifies infectious disease expert Anthony Fauci, he bemoaned germ theory as “the pharmaceutical paradigm that emphasized targeting particular germs with specific drugs rather than fortifying the immune system through healthy living, clean water, and good nutrition.”

As such, he rails against the “$1 trillion pharmaceutical industry pushing patented pills, powders, pricks, potions, and poisons.”

In Kennedy’s op-ed, he indicates that new ACIP members will be appointed who “won’t directly work for the vaccine industry. … will exercise independent judgment, refuse to serve as a rubber stamp, and foster a culture of critical inquiry.”

It’s unclear how the new members will be vetted and appointed and when the new committee will be assembled.

In a statement, the president of the American Medical Association, Bruce Scott, rebuked Kennedy’s firings, saying that ACIP “has been a trusted national source of science- and data-driven advice and guidance on the use of vaccines to prevent and control disease.” Today’s removal “undermines that trust and upends a transparent process that has saved countless lives,” he continued. “With an ongoing measles outbreak and routine child vaccination rates declining, this move will further fuel the spread of vaccine-preventable illnesses.”

This post has been updated to include a statement from the AMA. This story is breaking and may be updated further.
