Author name: Tim Belzer


Everything we know about the 2026 Nissan Leaf

The first-generation Nissan Leaf was a remarkable achievement for the company and for the industry: a mass-market EV that wasn’t priced out of reach, at a moment when the industry badly needed one.

Since then, things have stagnated. It’s hard to overstate how important the 2026 Leaf launch is for Nissan; it is arguably the company’s biggest EV moment since the original car. Nissan must get this one right, because the competition is too good to forgive a misstep.

Starting things off, the car is available with two battery options. There is a 52 kWh base pack and a 75 kWh longer-range option. Each option has an active thermal management system—a first for Leaf—to address DC fast-charging concerns. Those batteries also deliver more range, with up to 303 miles (488 km) on the S+ model.

The 2026 Leaf is 3 inches shorter (76 mm) than the current hatchback, although the wheelbase is only 0.4 inches (10 mm) shorter. Credit: Nissan

The 52-kWh version’s motor makes 174 hp (130 kW), while the 75-kWh version’s motor generates 215 hp (160 kW).

The Leaf adopts Nissan’s new 3-in-1 EV powertrain, which integrates the motor, inverter, and reducer into a single unit. Nissan says this shrinks the powertrain’s packaging by 10 percent and claims it improves responsiveness and refinement.

Native NACS

Instead of a slow and clunky CHAdeMO connector, the Leaf rocks a Tesla-style NACS port for DC fast charging. Interestingly, the car also has an SAE J1772 connector for AC charging. The driver’s side fender has the J1772 plug, while the passenger side fender has the NACS port.

Confusingly, the NACS connector is only for DC fast charging. If you’re going to Level 2 charge, you must use the J1772 plug or a NACS connector with an adapter. It’s weird, but the car will make it obvious to owners if they plug into the wrong connector.

When connected to a DC fast charger that can deliver 150 kW, both battery sizes will charge from 10 to 80 percent in 35 minutes. While not class-leading, it wipes the floor with the old model. It also supports a peak charging rate that is higher than its bigger sibling, the Ariya.
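As a rough sanity check on that claim, the implied average charging power is easy to work out. Here's a minimal sketch, treating the quoted pack sizes as usable capacity (an assumption; real usable figures may differ slightly):

```python
# Back-of-envelope check of the quoted 10-80% fast-charging claim.
def avg_charge_power_kw(pack_kwh, start=0.10, end=0.80, minutes=35):
    """Average power implied by charging from `start` to `end` state of
    charge in `minutes`, assuming `pack_kwh` is usable capacity."""
    energy_added = pack_kwh * (end - start)  # kWh delivered to the pack
    return energy_added / (minutes / 60)     # average kW over the session

for pack in (52, 75):
    print(f"{pack} kWh pack: {pack * 0.7:.1f} kWh added, "
          f"~{avg_charge_power_kw(pack):.0f} kW average")
```

By this estimate the 75 kWh pack averages roughly 90 kW over the session, comfortably below the 150 kW peak; that gap is normal, since charging power tapers as the battery fills.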


Paramount drops trailer for The Naked Gun reboot

Liam Neeson stars as Lt. Frank Drebin Jr. in The Naked Gun.

Thirty years after the last film in The Naked Gun crime-spoof comedy franchise, we’re finally getting a new installment, The Naked Gun, described as a “legacy sequel.” And it’s Liam Neeson stepping into Leslie Nielsen’s fumbling shoes, playing that character’s son. Judging by the official trailer, Neeson is up to the task, showcasing his screwball comedy chops.

(Some spoilers for the first three films in the franchise below.)

The original Naked Gun: From the Files of Police Squad! debuted in 1988, with Leslie Nielsen starring as Detective Frank Drebin, trying to foil an assassination attempt on Queen Elizabeth II during her visit to the US. It proved successful enough to launch two sequels. Naked Gun 2-1/2: The Smell of Fear (1991) found Drebin battling an evil plan to kidnap a prominent nuclear scientist. Naked Gun 33-1/3: The Final Insult (1994) found Drebin coming out of retirement and going undercover to take down a crime syndicate planning to blow up the Academy Awards.

The franchise rather lost steam after that, but by 2013, Paramount was planning a reboot starring Ed Helms as “Frank Drebin, no relation.” David Zucker, who produced the prior Naked Gun films and directed the first two, declined to be involved, feeling it could only be “inferior” to his originals. He was briefly involved in the 2017 rewrites, featuring Frank’s son as a secret agent rather than a policeman. That film never transpired either. The project was revived again in 2021 by Seth MacFarlane (without Zucker’s involvement), and Neeson was cast as Frank Drebin Jr.—a police lieutenant in this incarnation.

In addition to Neeson, the film stars Paul Walter Hauser as Captain Ed Hocken, Jr.—Hauser will also appear as Mole Man in the forthcoming Fantastic Four: First Steps—and Pamela Anderson as a sultry femme fatale named Beth. The cast also includes Kevin Durand, Danny Huston, Liza Koshy, Cody Rhodes, CCH Pounder, Busta Rhymes, and Eddy Yu.


The “online monkey torture video” arrests just keep coming

So the group tried again. “Million tears” had been booted by its host, but the group reconstituted on another platform and renamed itself “the trail of trillion tears.” They reached out to another Indonesian videographer and asked for a more graphic version of the same video. But this version, more sadistic than the last, still didn’t satisfy. As one of the Americans allegedly said to another, “honey that’s not what you asked for. Thats the village idiot version. But I’m talking with someone about getting a good vo [videographer] to do it.”

Arrests continue

In 2021, someone leaked communications from the “million tears” group to animal rights organizations like Lady Freethinker and Action for Primates, which passed the material to authorities. Still, it took several years to arrest and prosecute the torture group’s leaders.

In 2024, one of these leaders—Ronald Bedra of Ohio—pled guilty to commissioning the videos and to mailing “a thumb drive containing 64 videos of monkey torture to a co-conspirator in Wisconsin.” His mother, in a sentencing letter to the judge, said that her son must “have been undergoing some mental crisis when he decided to create the website.” As a boy, he had loved all of the family pets, she said, even providing a funeral for a fish.

Bedra was sentenced late last year to 54 months in prison. According to letters from family members, he has also lost his job, his wife, and his kids.

In April 2025, two more alleged co-conspirators were indicted and subsequently arrested; their cases were unsealed only this week. Two other co-conspirators from this group still appear to be uncharged.

In May 2025, 11 other Americans were indicted for their participation in monkey torture groups, though they appear to come from a different network. This group allegedly “paid a minor in Indonesia to commit the requested acts on camera.”

As for the Indonesian side of this equation, arrests have been happening there, too. Following complaints from animal rights groups, police in Indonesia have arrested multiple videographers over the last two years.

Update: Showing the international nature of these torture groups, the Scottish Sun just ran a rather lurid piece about the “sadistic Scots mum jailed for helping to run a horrific global monkey torture network.” The 39-year-old was apparently caught after US authorities broke up another monkey torturing network based around outsourcing the torture to Indonesia.


These VA Tech scientists are building a better fog harp

Unlike standard fog harvesting technologies, “We’re trying to use clever geometric designs in place of chemistry,” Boreyko told Ars. “When I first came into this field, virtually everyone was using nets, but they were just trying to make more and more clever chemical coatings to put on the nets to try to reduce the clogging. We found that simply going from a net to a harp, with no chemicals or coatings whatsoever—just the change in geometry solved the clogging problem much better.”


Jimmy Kaindu inspects a new collecting prototype beside the original fog harp. Credit: Alex Parrish for Virginia Tech

For their scale prototypes in the lab, Boreyko’s team 3D printed their harp “strings” out of a weakly hydrophobic plastic. “But in general, the harp works fantastic with uncoated stainless steel wires and definitely doesn’t require any kind of fancy coating,” said Boreyko. And the hybrid harp can be scaled up with relative ease, just like classic nets. It just means stringing together a bunch of harps of smaller heights, meter by meter, to get the desired size. “There is no limit to how big this thing could be,” he said.

Scaling up the model is the next obvious step, along with testing larger prototypes outdoors. Boreyko would also like to test an electric version of the hybrid fog harp. “If you apply a voltage, it turns out you can catch even more water,” he said. “Because our hybrid’s non-clogging, you can have the best of both worlds: using an electric field to boost the harvesting amount in real-life systems and at the same time preventing clogging.”

While the hybrid fog harp is well-suited for harvesting water in any coastal region that receives a lot of fog, Boreyko also envisions other, less obvious potential applications for high-efficiency fog harvesters, such as roadways, highways, or airport landing strips that are prone to fog that can pose safety hazards. “There’s even industrial chemical supply manufacturers creating things like pressurized nitrogen gas,” he said. “The process cools the surrounding air into an ice fog that can drift across the street and wreak havoc on city blocks.”

Journal of Materials Chemistry A, 2025. DOI: 10.1039/d5ta02686e  (About DOIs).


Another one for the graveyard: Google to kill Instant Apps in December

But that was then, and this is now. Today, an increasing number of mobile apps are functionally identical to the mobile websites they are intended to replace, and developer uptake of Instant Apps was minimal. Even in 2017, loading an app instead of a website had limited utility. As a result, most of us probably only encountered Instant Apps a handful of times in all the years it was an option for developers.

To use the feature, which was delivered to virtually all Android devices by Google Play Services, developers had to create a special “instant” version of their app that was under 15 MB. The additional legwork to get an app in front of a subset of new users meant this was always going to be a steep climb, and Google has long struggled to incentivize developers to adopt new features. Plus, there’s no way to cram in generative AI! So it’s not a shock to see Google retiring the feature.

This feature is currently listed in the collection of Google services in your phone settings as “Google Play Instant.” Unfortunately, there aren’t many examples still available if you’re curious about what Instant Apps were like—the Finnish publisher Ilta-Sanomat is one of the few still offering it. Make sure the settings toggle for Instant Apps is on if you want a little dose of nostalgia.


Google left months-old dark mode bug in Android 16, fix planned for next Pixel Drop

Google’s Pixel phones got a big update this week with the release of Android 16 and a batch of Pixel Drop features. Pixels now have enhanced security, new contact features, and improved button navigation. However, some of the most interesting features, like desktop windowing and Material 3 Expressive, are coming later. Another thing that’s coming later, it seems, is a fix for an annoying bug Google introduced a few months back.

Google broke the system dark mode schedule in its March Pixel update and did not address it in time for Android 16. The company confirms a fix is coming, though.

The system-level dark theme arrived in Android 10, offering a less eye-searing option that is particularly handy in dark environments. It took a while for even Google’s apps to fully adopt the feature, but support is solid five years later. Google even offers a scheduling feature to switch between light and dark mode at custom times or based on sunrise/sunset. However, that scheduling feature was busted in the March update.

Currently, if you manually toggle dark mode on or off, schedules stop working. The only way to get them back is to set up your schedule again and then never toggle dark mode. Google initially marked this as “intended behavior,” but a more recent bug report was accepted as a valid issue.


Why is digital sovereignty important right now?

Sovereignty has mattered since the invention of the nation state—defined by borders, laws, and taxes that apply within and without. While many have tried to define it, the core idea remains: nations or jurisdictions seek to stay in control, usually to the benefit of those within their borders.

Digital sovereignty is a relatively new concept, also difficult to define but straightforward to understand. Data and applications don’t understand borders unless those borders are specified in policy terms and coded into the infrastructure.

The World Wide Web had no such restrictions at its inception. Communitarian groups such as the Electronic Frontier Foundation, service providers and hyperscalers, non-profits and businesses all embraced a model that suggested data would look after itself.

But data won’t look after itself, for several reasons. First, data is massively out of control. We generate more of it all the time, and for at least two or three decades (according to historical surveys I’ve run), most organizations haven’t fully understood their data assets. This creates inefficiency and risk—not least, widespread vulnerability to cyberattack.

Risk is probability times impact—and right now, the probabilities have shot up. Invasions, tariffs, political tensions, and more have brought new urgency. This time last year, the idea of switching off another country’s IT systems was not on the radar. Now we’re seeing it happen—including the U.S. government blocking access to services overseas.
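The "probability times impact" formula is simple enough to put to work directly. Here's a toy sketch of ranking scenarios by that product; all scenario names, probabilities, and impact scores below are invented for illustration:

```python
# Toy illustration of "risk = probability x impact" for ranking scenarios.
# (probability in [0, 1], impact on a 1-10 scale; numbers are made up)
scenarios = {
    "provider blocks access overseas": (0.10, 9),
    "ransomware on core systems":      (0.30, 8),
    "minor data-residency breach":     (0.50, 3),
}

# Rank by the product: a likely-but-mild event can outrank a rare disaster.
ranked = sorted(scenarios.items(),
                key=lambda kv: kv[1][0] * kv[1][1],
                reverse=True)
for name, (p, impact) in ranked:
    print(f"{name}: risk score {p * impact:.2f}")
```

The point of the exercise is the one made above: when the probabilities shoot up, as they have with geopolitics, previously low-ranked scenarios jump to the top of the list.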

Digital sovereignty isn’t just a European concern, though it is often framed as such. In South America for example, I am told that sovereignty is leading conversations with hyperscalers; in African countries, it is being stipulated in supplier agreements. Many jurisdictions are watching, assessing, and reviewing their stance on digital sovereignty.

As the adage goes: a crisis is a problem with no time left to solve it. Digital sovereignty was a problem in waiting—but now it’s urgent. It’s gone from being an abstract ‘right to sovereignty’ to becoming a clear and present issue, in government thinking, corporate risk and how we architect and operate our computer systems.

What does the digital sovereignty landscape look like today?

Much has changed over the past year. Unknowns remain, but much of what was unclear this time last year is now starting to solidify. Terminology is clearer; we now talk about classification and localisation, for example, rather than generic concepts.

We’re seeing a shift from theory to practice. Governments and organizations are putting policies in place that simply didn’t exist before. For example, some countries are seeing “in-country” as a primary goal, whereas others (the UK included) are adopting a risk-based approach based on trusted locales.

We’re also seeing a shift in risk priorities. From a risk standpoint, the classic triad of confidentiality, integrity, and availability are at the heart of the digital sovereignty conversation. Historically, the focus has been much more on confidentiality, driven by concerns about the US Cloud Act: essentially, can foreign governments see my data?

This year however, availability is rising in prominence, due to geopolitics and very real concerns about data accessibility in third countries. Integrity is being talked about less from a sovereignty perspective, but is no less important as a cybercrime target—ransomware and fraud being two clear and present risks.

Thinking more broadly, digital sovereignty is not just about data, or even intellectual property, but also the brain drain. Countries don’t want all their brightest young technologists leaving university only to end up in California or some other, more attractive country. They want to keep talent at home and innovate locally, to the benefit of their own GDP.

How are cloud providers responding?

Hyperscalers are playing catch-up, still looking for ways to satisfy the letter of the law whilst ignoring (in the French sense) its spirit. It’s not enough for Microsoft or AWS to say they will do everything they can to protect a jurisdiction’s data, if they are already legally obliged to do the opposite. Legislation, in this case US legislation, calls the shots—and we all know just how fragile this is right now.

We see hyperscaler progress where they offer technology to be locally managed by a third party rather than by themselves. For example, Google’s partnership with Thales, or Microsoft with Orange, both in France (Microsoft has a similar arrangement in Germany). However, these are point solutions, not part of a general standard. Meanwhile, AWS’ recent announcement about creating a local entity doesn’t solve the problem of US overreach, which remains a core issue.

Non-hyperscaler providers and software vendors have an increasingly significant play: Oracle and HPE offer solutions that can be deployed and managed locally for example; Broadcom/VMware and Red Hat provide technologies that locally situated, private cloud providers can host. Digital sovereignty is thus a catalyst for a redistribution of “cloud spend” across a broader pool of players.

What can enterprise organizations do about it?

First, see digital sovereignty as a core element of data and application strategy. For a nation, sovereignty means having solid borders, control over IP, GDP, and so on. That’s the goal for corporations as well—control, self-determination, and resilience.

If sovereignty isn’t seen as an element of strategy, it gets pushed down into the implementation layer, leading to inefficient architectures and duplicated effort. Far better to decide up front what data, applications, and processes need to be treated as sovereign, and to define an architecture to support that.

This sets the scene for making informed provisioning decisions. Your organization may have made some big bets on key vendors or hyperscalers, but multi-platform thinking increasingly dominates: multiple public and private cloud providers, with integrated operations and management. Sovereign cloud becomes one element of a well-structured multi-platform architecture.

It is not cost-neutral to deliver on sovereignty, but the overall business value should be tangible. A sovereignty initiative should bring clear advantages, not just for itself, but through the benefits that come with better control, visibility, and efficiency.

Knowing where your data is, understanding which data matters, managing it efficiently so you’re not duplicating or fragmenting it across systems—these are valuable outcomes. In addition, ignoring these questions can lead to non-compliance or be outright illegal. Even if we don’t use terms like ‘sovereignty’, organizations need a handle on their information estate.

Organizations shouldn’t assume that everything cloud-based needs to be sovereign; instead, they should build strategies and policies based on data classification, prioritization, and risk. Build that picture and you can solve for the highest-priority items first: the data with the strongest classification and the greatest risk. That process alone takes care of 80–90% of the problem space, rather than making sovereignty yet another problem while solving nothing.
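That classification-and-risk triage can be sketched in a few lines. This is a minimal illustration rather than a real framework; the asset names, classification tiers, weights, and risk scores are all invented:

```python
# Sketch of classification-driven prioritization: work the backlog from the
# most sensitive, highest-risk data assets downward. All values are invented.
CLASS_WEIGHT = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

assets = [
    {"name": "marketing site", "classification": "public",     "risk": 1},
    {"name": "HR records",     "classification": "restricted", "risk": 8},
    {"name": "customer PII",   "classification": "restricted", "risk": 9},
    {"name": "internal wiki",  "classification": "internal",   "risk": 2},
]

def priority(asset):
    """Simple score: classification weight times real-world risk."""
    return CLASS_WEIGHT[asset["classification"]] * asset["risk"]

backlog = sorted(assets, key=priority, reverse=True)
print([a["name"] for a in backlog])  # highest-priority sovereignty work first
```

Even a crude scoring like this surfaces the handful of assets that account for most of the exposure, which is exactly the 80–90% effect described above.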

Where to start? Look after your own organization first

Sovereignty and systems thinking go hand in hand: it’s all about scope. In enterprise architecture or business design, the biggest mistake is boiling the ocean—trying to solve everything at once.

Instead, focus on your own sovereignty. Worry about your own organization, your own jurisdiction. Know where your own borders are. Understand who your customers are, and what their requirements are. For example, if you’re a manufacturer selling into specific countries—what do those countries require? Solve for that, not for everything else. Don’t try to plan for every possible future scenario.

Focus on what you have, what you’re responsible for, and what you need to address right now. Classify and prioritise your data assets based on real-world risk. Do that, and you’re already more than halfway toward solving digital sovereignty—with all the efficiency, control, and compliance benefits that come with it.

Digital sovereignty isn’t just regulatory, but strategic. Organizations that act now can reduce risk, improve operational clarity, and prepare for a future based on trust, compliance, and resilience.

The post Why is digital sovereignty important right now? appeared first on Gigaom.


after-a-series-of-tumors,-woman’s-odd-looking-tongue-explains-everything

After a series of tumors, woman’s odd-looking tongue explains everything

Breast cancer. Colon cancer. An enlarged thyroid gland. A family history of tumors and cancers as well. It wasn’t until the woman developed an annoying case of dry mouth that doctors put it all together. By then, she was in her 60s.

According to a new case study in JAMA Dermatology, the woman presented to a dermatology clinic in Spain after three months of oral unpleasantness. Doctors noted the cancers in her medical history, and when she opened wide, they immediately saw the problem: Her tongue was covered in little wart-like bumps that resembled a slippery, flesh-colored cobblestone path.

Such a cobblestone tongue is a telltale sign of a rare genetic condition called Cowden syndrome. It’s caused by inherited mutations that break a protein, called PTEN, leading to tumors and cancers.

PTEN, which stands for phosphatase and tensin homolog, generally helps keep cells from growing out of control. Specifically, PTEN deactivates a signaling lipid called PIP3 (phosphatidylinositol 3,4,5-trisphosphate), and that deactivation blocks a signaling pathway (the PI3K/AKT/mTOR pathway) involved in regulating cell growth, survival, and migration. When PTEN is broken, PIP3 activity ramps up, and tumors can grow unchecked.

Not-so-rare mutation

In Cowden syndrome, PTEN mutations lead to noncancerous tumors or masses called hamartomas, which can occur in any organ. But people with the syndrome are also at high risk of developing a slew of cancerous growths—most commonly cancers of the breast, thyroid, and uterus—over their lifetime. That’s why people diagnosed with the condition are advised to undergo intensive cancer screenings, including annual ultrasounds of the thyroid starting at age 7 and annual mammograms and MRIs (magnetic resonance imaging) starting at age 30 at the latest.


5 things in Trump’s budget that won’t make NASA great again

If signed into law as written, the White House’s proposal to slash nearly 25 percent from NASA’s budget would have some dire consequences.

It would cut the agency’s budget from $24.8 billion to $18.8 billion. Adjusted for inflation, this would be the smallest NASA budget since 1961, when the first American launched into space.

The proposed funding plan would halve NASA’s funding for robotic science missions and technology development next year, scale back research on the International Space Station, turn off spacecraft already exploring the Solar System, and cancel NASA’s Space Launch System rocket and Orion spacecraft after two more missions in favor of procuring lower-cost commercial transportation to the Moon and Mars.

The SLS rocket and Orion spacecraft have been targets for proponents of commercial spaceflight for several years. They are single-use, and their costs are exorbitant, with Moon missions on SLS and Orion projected to cost more than $4 billion per flight. That price raises questions about whether these vehicles will ever be able to support a lunar space station or Moon base where astronauts can routinely rotate in and out on long-term expeditions, like researchers do in Antarctica today.

Reusable rockets and spaceships offer a better long-term solution, but they won’t be ready to ferry people to the Moon for a while longer. The Trump administration proposes flying SLS and Orion two more times on NASA’s Artemis II and Artemis III missions, then retiring the vehicles. Artemis II’s rocket is currently being assembled at Kennedy Space Center in Florida for liftoff next year, carrying a crew of four around the far side of the Moon. Artemis III would follow with the first attempt to land humans on the Moon since 1972.

The cuts are far from law

Every part of Trump’s budget proposal for fiscal year 2026 remains tentative. Lawmakers in each house of Congress will write their own budget bills, which must go to the White House for Trump’s signature. A Senate bill released last week includes language that would claw back funding for SLS and Orion to support the Artemis IV and Artemis V missions.


IBM is now detailing what its first quantum compute system will look like


Company is moving past focus on qubits, shifting to functional compute units.

A rendering of what IBM expects will be needed to house a Starling quantum computer. Credit: IBM

On Tuesday, IBM released its plans for building a system that should push quantum computing into entirely new territory: a system that can perform useful calculations while catching and fixing errors, and that is utterly impossible to model using classical computing methods. The hardware, which will be called Starling, is expected to be able to perform 100 million operations without error on a collection of 200 logical qubits. And the company expects to have it available for use in 2029.

Perhaps just as significant, IBM is also committing to a detailed description of the intermediate steps to Starling. These include a number of processors that will be configured to host a collection of error-corrected qubits, essentially forming a functional compute unit. This marks a major transition for the company, as it involves moving away from talking about collections of individual hardware qubits and focusing instead on units of functional computational hardware. If all goes well, it should be possible to build Starling by chaining a sufficient number of these compute units together.

“We’re updating [our roadmap] now with a series of deliverables that are very precise,” IBM VP Jay Gambetta told Ars, “because we feel that we’ve now answered basically all the science questions associated with error correction and it’s becoming more of a path towards an engineering problem.”

New architectures

Error correction on quantum hardware involves entangling a group of qubits in a way that distributes one or more quantum bit values among them and includes additional qubits that can be used to check the state of the system. It can be helpful to think of these as data and measurement qubits. Performing weak quantum measurements on the measurement qubits produces what’s called “syndrome data,” which can be interpreted to determine whether anything about the data qubits has changed (indicating an error) and how to correct it.
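The data/measurement split is easiest to see in a toy example. The sketch below uses a classical three-bit repetition code, far simpler than anything IBM runs, to show how syndrome data locates an error without ever reading the encoded value directly:

```python
# Classical toy: syndrome extraction for a 3-bit repetition code.
# Much simpler than IBM's LDPC codes, but it shows the same idea:
# parity checks pinpoint an error without revealing the encoded value.
def syndrome(data):
    """Two parity checks (the 'measurement' results) over three data bits."""
    return (data[0] ^ data[1], data[1] ^ data[2])

def correct(data):
    """Use the syndrome to identify and flip the single erroneous bit."""
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(data))
    if flip is not None:
        data[flip] ^= 1
    return data

encoded = [1, 1, 1]      # a logical 1 spread across three data bits
encoded[2] ^= 1          # inject a single bit-flip error -> [1, 1, 0]
print(correct(encoded))  # -> [1, 1, 1]
```

On real quantum hardware the checks are weak entangling measurements rather than XORs, and the decoding is vastly harder, but the flow (measure checks, infer the error, apply a fix) is the same.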

There are lots of potential ways to arrange different combinations of data and measurement qubits for this to work, each referred to as a code. But, as a general rule, the more hardware qubits committed to the code, the more robust it will be to errors, and the more logical qubits can be distributed among its hardware qubits.

Some quantum hardware, like that based on trapped ions or neutral atoms, is relatively flexible when it comes to hosting error-correction codes. The hardware qubits can be moved around so that any two can be entangled, so it’s possible to adopt a huge range of configurations, albeit at the cost of the time spent moving atoms around. IBM’s technology is quite different. It relies on qubits made of superconducting electronics laid out on a chip, with entanglement mediated by wiring that runs between qubits. The layout of this wiring is set during the chip’s manufacture, and so the design of the chip commits it to a limited number of potential error-correction codes.

Unfortunately, this wiring can also enable cross-talk between neighboring qubits, causing them to lose their state. To avoid this, existing IBM processors have their qubits wired in what they term a “heavy hex” configuration, named for its hexagonal arrangements of connections among its qubits. This has worked well to keep the error rate of its hardware down, but it also poses a challenge, since IBM has decided to go with an error-correction code that’s incompatible with the heavy hex geometry.

A couple of years back, an IBM team described a compact error correction code called a low-density parity check (LDPC). This requires a square grid of nearest-neighbor connections among its qubits, as well as wiring to connect qubits that are relatively distant on the chip. To get its chips and error-correction scheme in sync, IBM has made two key advances. The first is in its chip packaging, which now uses several layers of wiring sitting above the hardware qubits to enable all of the connections needed for the LDPC code.

We’ll see that first in a processor called Loon that’s on the company’s developmental roadmap. “We’ve already demonstrated these three things: high connectivity, long-range couplers, and couplers that break the plane [of the chip] and connect to other qubits,” Gambetta said. “We have to combine them all as a single demonstration showing that all these parts of packaging can be done, and that’s what I want to achieve with Loon.” Loon will be made public later this year.


On the left, the simple layout of the connections in a current-generation Heron processor. At right, the complicated web of connections that will be present in Loon. Credit: IBM

The second advance IBM has made is to eliminate the crosstalk that the heavy hex geometry was used to minimize, so heavy hex will be going away. “We are releasing this year a bird for near term experiments that is a square array that has almost zero crosstalk,” Gambetta said, “and that is Nighthawk.” The more densely connected qubits cut the overhead needed to perform calculations by a factor of 15, Gambetta told Ars.

Nighthawk is a 2025 release on a parallel roadmap that you can think of as user-facing. Iterations on its basic design will be released annually through 2028, each enabling more operations without error (going from 5,000 gate operations this year to 15,000 in 2028). Each individual Nighthawk processor will host 120 hardware qubits, but 2026 will see three of them chained together and operating as a unit, providing 360 hardware qubits. That will be followed in 2027 by a machine with nine linked Nighthawk processors, boosting the hardware qubit number over 1,000.

Riding the bicycle

The real future of IBM’s hardware, however, will be happening over on the developmental line of processors, where talk about hardware qubit counts will become increasingly irrelevant. In a technical document released today, IBM is describing the specific LDPC code it will be using, termed a bivariate bicycle code due to some cylindrical symmetries in its details that vaguely resemble bicycle wheels. The details of the connections matter less than the overall picture of what it takes to use this error code in practice.

IBM describes two implementations of this form of LDPC code. In the first, 144 hardware qubits are arranged so that they play host to 12 logical qubits and all of the measurement qubits needed to perform error checks. The standard measure of a code’s ability to catch and correct errors is called its distance, and in this case, the distance is 12. As an alternative, they also describe a code that uses 288 hardware qubits to host the same 12 logical qubits but boost the distance to 18, meaning it’s more resistant to errors. IBM will make one of these collections of logical qubits available as a Kookaburra processor in 2026, which will use them to enable stable quantum memory.
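The trade-off between the two configurations can be made concrete with a little arithmetic. The sketch below uses only the figures reported above; the `[[n, k, d]]` framing (n hardware qubits, k logical qubits, distance d) is standard notation for quantum codes, but the helper function and names are illustrative, not any IBM API.

```python
# A minimal sketch, using only the figures reported in the article, of how
# the two bivariate bicycle configurations trade hardware for error distance.

def code_stats(n_hardware, k_logical, distance):
    """Summarize a quantum error-correcting code's overhead and resilience."""
    return {
        "hardware_qubits": n_hardware,
        "logical_qubits": k_logical,
        "distance": distance,
        # A distance-d code can correct up to floor((d - 1) / 2) errors.
        "correctable_errors": (distance - 1) // 2,
        "hardware_per_logical": n_hardware / k_logical,
    }

small = code_stats(144, 12, 12)   # the configuration planned for Kookaburra
large = code_stats(288, 12, 18)   # the higher-distance alternative

print(small["hardware_per_logical"])   # 12.0 hardware qubits per logical qubit
print(large["correctable_errors"])     # the distance-18 code corrects more errors
```

Doubling the hardware buys no extra logical qubits here; all of the additional overhead goes into raising the distance from 12 to 18, i.e., into tolerating more simultaneous errors.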

The follow-on will bundle these with a handful of additional qubits that can produce quantum states that are needed for some operations. Those, plus hardware needed for the quantum memory, form a single, functional computation unit, built on a single chip, that is capable of performing all the operations needed to implement any quantum algorithm.

That will appear with the Cockatoo chip, which will also enable multiple processing units to be linked on a single bus, allowing the logical qubit count to grow beyond 12. (The company says that one of the dozen logical qubits in each unit will be used to mediate entanglement with other units and so won’t be available for computation.) That will be followed by the first test versions of Starling, which will allow universal computations on a limited number of logical qubits spread across multiple chips.

Separately, IBM is releasing a document that describes a key component of the system that will run on classical computing hardware. Full error correction requires evaluating the syndrome data derived from the state of all the measurement qubits in order to determine the state of the logical qubits and whether any corrections need to be made. As the complexity of the logical qubits grows, the computational burden of evaluating that data grows with it. If this evaluation can't be executed in real time, error-corrected calculations become impossible.

To address this, IBM has developed a message-passing decoder that can perform parallel evaluations of the syndrome data. The system explores more of the solution space by randomizing the weight given to the memory of past solutions and by handing any seemingly non-optimal solutions to new instances for additional evaluation. Crucially, IBM estimates that this can be run in real time using FPGAs, ensuring that the system works.

A quantum architecture

There are a lot more details beyond those, as well. Gambetta described the linkage between each computational unit—IBM is calling it a Universal Bridge—which requires one microwave cable for each code distance of the logical qubits being linked. (In other words, a distance 12 code would need 12 microwave-carrying cables to connect each chip.) He also said that IBM is developing control hardware that can operate inside the refrigeration hardware, based on what they’re calling “cold CMOS,” which is capable of functioning at 4 Kelvin.

The company is also releasing renderings of what it expects Starling to look like: a series of dilution refrigerators, all connected by a single pipe that contains the Universal Bridge. “It’s an architecture now,” Gambetta said. “I have never put details in the roadmap that I didn’t feel we could hit, and now we’re putting a lot more details.”

The striking thing to me about this is that it marks a shift away from a focus on individual qubits, their connectivity, and their error rates. The hardware error rates are now good enough (4 × 10⁻⁴) for this to work, although Gambetta felt that a few more improvements should be expected. And connectivity will now be directed exclusively toward creating a functional computational unit.

That said, there’s still a lot of space beyond Starling on IBM’s roadmap. The 200 logical qubits it promises will be enough to handle some problems, but not enough to perform the complex algorithms needed to do things like break encryption. That will need to wait for something closer to Blue Jay, a 2033 system that IBM expects will have 2,000 logical qubits. And, as of right now, it’s the only thing listed beyond Starling.

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

IBM is now detailing what its first quantum compute system will look like

anti-vaccine-advocate-rfk-jr.-fires-entire-cdc-panel-of-vaccine-advisors

Anti-vaccine advocate RFK Jr. fires entire CDC panel of vaccine advisors

“Most likely aim to serve the public interest as they understand it,” he wrote. “The problem is their immersion in a system of industry-aligned incentives and paradigms that enforce a narrow pro-industry orthodoxy.”

Kennedy, who is currently trying to shift the national attention to his idea of clean living and higher-quality foods, has a long history of advocating against vaccines and spreading misinformation and disinformation about the lifesaving shots. However, a clearer explanation of Kennedy’s war on vaccines can be found in his rejection of germ theory. In his 2021 book vilifying infectious disease expert Anthony Fauci, he bemoaned germ theory as “the pharmaceutical paradigm that emphasized targeting particular germs with specific drugs rather than fortifying the immune system through healthy living, clean water, and good nutrition.”

As such, he rails against the “$1 trillion pharmaceutical industry pushing patented pills, powders, pricks, potions, and poisons.”

In Kennedy’s op-ed, he indicates that new ACIP members will be appointed who “won’t directly work for the vaccine industry. … will exercise independent judgment, refuse to serve as a rubber stamp, and foster a culture of critical inquiry.”

It’s unclear how the new members will be vetted and appointed and when the new committee will be assembled.

In a statement, the president of the American Medical Association, Bruce Scott, rebuked Kennedy’s firings, saying that ACIP “has been a trusted national source of science- and data-driven advice and guidance on the use of vaccines to prevent and control disease.” Today’s removal “undermines that trust and upends a transparent process that has saved countless lives,” he continued. “With an ongoing measles outbreak and routine child vaccination rates declining, this move will further fuel the spread of vaccine-preventable illnesses.”

This post has been updated to include a statement from the AMA. This story is breaking and may be updated further.

ex-fcc-chair-ajit-pai-is-now-a-wireless-lobbyist—and-enemy-of-cable-companies

Ex-FCC Chair Ajit Pai is now a wireless lobbyist—and enemy of cable companies


Pai’s return as CTIA lobbyist fuels industry-wide battle over spectrum rights.

Ajit Pai, former chairman of the Federal Communications Commission, during a Senate Commerce Committee hearing on Wednesday, April 9, 2025. Credit: Getty Images | Bloomberg

Ajit Pai is back on the telecom policy scene as chief lobbyist for the mobile industry, and he has quickly managed to anger a coalition that includes both cable companies and consumer advocates.

Pai was the Federal Communications Commission chairman during President Trump’s first term and then spent several years at private equity firm Searchlight Capital. He changed jobs in April, becoming the president and CEO of wireless industry lobby group CTIA. Shortly after, he visited the White House to discuss wireless industry priorities and had a meeting with Brendan Carr, the current FCC chairman who was part of Pai’s Republican majority at the FCC from 2017 to 2021.

Pai’s new job isn’t surprising. He was once a lawyer for Verizon, and it’s not uncommon for FCC chairs and commissioners to be lobbyists before or after terms in government.

Pai’s move to CTIA means he is now battling a variety of industry players and advocacy groups over the allocation of spectrum. As always, wireless companies AT&T, Verizon, and T-Mobile want more spectrum and the exclusive rights to use it. The fight puts Pai at odds with the cable industry that cheered his many deregulatory actions when he led the FCC.

Pai wrote a May 4 op-ed in The Wall Street Journal arguing that China is surging ahead of the US in 5G deployment and that “the US doesn’t even have enough licensed spectrum available to keep up with expected consumer demand.” He said that Congress must restore the FCC’s lapsed authority to auction spectrum licenses, and auction off “at least 600 megahertz of midband spectrum for future 5G services.”

“During the first Trump administration, the US was determined to lead the world in wireless innovation—and by 2021 it did,” Pai wrote. “But that urgency and sense of purpose have diminished. With Mr. Trump’s leadership, we can rediscover both.”

Pai’s op-ed drew a quick rebuke from a group called Spectrum for the Future, which alleged that Pai mangled the facts.

“Mr. Pai’s arguments are wrong on the facts—and wrong on how to accelerate America’s global wireless leadership,” the vaguely named group said in a May 8 press release that accused Pai of “stunning hypocrisy.” Spectrum for the Future said Pai is wrong about the existence of a spectrum shortage, wrong about how much money a spectrum auction could raise, and wrong about the cost of reallocating spectrum from the military to mobile companies.

“Mr. Pai attributes the US losing its lead in 5G availability to the FCC’s lapsed spectrum auction authority. He’d be more accurate to blame his own members’ failure to build out their networks,” the group said.

Big Cable finds allies

Pai’s op-ed said that auctioning 600 MHz “could raise as much as $200 billion” to support other US government priorities. Spectrum for the Future called this an “absurd claim” that “presumes that this auction of 600 MHz could approach the combined total ($233 billion) that has been raised by every prior spectrum auction (totaling nearly 6 GHz of bandwidth) in US history combined.”

The group also said Pai “completely ignores the immense cost to taxpayers to relocate incumbent military and intelligence systems out of the bands CTIA covets for its own use.” Spectrum for the Future didn’t mention that one of the previous auctions, for the 3.7–3.98 GHz band, netted over $81 billion in winning bids.

So who is behind Spectrum for the Future? The group’s website lists 18 members, including the biggest players in the cable industry. Comcast, Charter, Cox, and lobby group NCTA-The Internet & Television Association are all members of Spectrum for the Future. (Disclosure: The Advance/Newhouse Partnership, which owns 12 percent of Charter, is part of Advance Publications, which owns Ars Technica parent Condé Nast.)

When contacted by Ars, a CTIA spokesperson criticized cable companies for “fighting competition” and said the cable firms are being “disingenuous.” Charter and Cox declined to answer our questions about their involvement in Spectrum for the Future. Comcast and the NCTA didn’t respond to requests for comment.

The NCTA and big cable companies are no strangers to lobbying the FCC and Congress and could fight for CBRS entirely on their own. But as it happens, some consumer advocates who regularly oppose the cable industry on other issues are on cable’s side in this battle.

With Spectrum for the Future, the cable industry has allied not just with consumer advocates but also small wireless ISPs and operators of private networks that use spectrum the big mobile companies want for themselves. Another group that is part of the coalition represents schools and libraries that use spectrum to provide local services.

For cable, joining with consumer groups, small ISPs, and others in a broad coalition has an obvious advantage from a public relations standpoint. “This is a lot of different folks who are in it for their own reasons. Sometimes that’s a big advantage because it makes it more authentic,” said Harold Feld, senior VP of consumer advocacy group Public Knowledge, which is part of Spectrum for the Future.

In some cases, a big company will round up nonprofits to which it has donated to make a show of broad public support for one of the company’s regulatory priorities—like a needed merger approval. That’s not what happened here, according to Feld. While cable companies probably provided most of the funding for Spectrum for the Future, the other members are keenly interested in fighting the wireless lobby over spectrum access.

“There’s a difference between cable being a tentpole member and this being cable with a couple of friends on the side,” Feld told Ars. Cable companies “have the most to lose, they have the most initial resources. But all of these other guys who are in here, I’ve been on these calls, they’re pretty active. There are a lot of diverse interests in this, which sometimes makes it easier to lobby, sometimes makes it harder to lobby because you all want to talk about what’s important to you.”

Feld didn’t help write the group’s press release criticizing Pai but said the points made are “all things I agree with.”

The “everybody but Big Mobile” coalition

Public Knowledge and New America’s Open Technology Institute (OTI), another Spectrum for the Future member, are both longtime proponents of shared spectrum. OTI’s Wireless Future Project director, Michael Calabrese, told Ars that Spectrum for the Future is basically the “everybody but Big Mobile” wireless coalition and “a very broad but ad hoc coalition.”

While Public Knowledge and OTI advocate for shared spectrum in many frequency bands, Spectrum for the Future is primarily focused on one: the Citizens Broadband Radio Service (CBRS), which spans from 3550 MHz to 3700 MHz. The CBRS spectrum is used by the Department of Defense and shared with non-federal users.

CBRS users in the cable industry and beyond want to ensure that CBRS remains available to them and free of high-power mobile signals that would crowd out lower-power operations. They were disturbed by AT&T’s October 2024 proposal to move CBRS to the lower part of the 3 GHz band, which is also used by the Department of Defense, and auction existing CBRS frequencies to 5G wireless companies “for licensed, full-power use.”

The NCTA told the FCC in December that “AT&T’s proposal to reallocate the entire 3 GHz band is unwarranted, impracticable, and unworkable and is based on the false assertion that the CBRS band is underutilized.”

Big mobile companies want the CBRS spectrum because it is adjacent to frequencies that are already licensed to them. The Department of Defense seems to support AT&T’s idea, even though it would require moving some military operations and sharing the spectrum with non-federal users.

Pentagon plan similar to AT&T’s

In a May research note provided to Ars, New Street Research Policy Advisor Blair Levin reported some details of a Department of Defense proposal for several bands of spectrum, including CBRS. The White House asked the Department of Defense “to come up with a plan to enable allocation of mid-band exclusive-use spectrum,” and the Pentagon recently started circulating its initial proposal.

The Pentagon plan is apparently similar to AT&T’s, as it would reportedly move current CBRS licensees and users to the lower 3 GHz band to clear spectrum for auctions.

“It represents the first time we can think of where the government would change the license terms of one set of users to benefit a competitor of that first set of users… While the exclusive-use spectrum providers would see this as government exercising its eminent domain rights as it has traditionally done, CBRS users, particularly cable, would see this as the equivalent of a government exercis[ing] its eminent domain rights to condemn and tear down a Costco to give the land to a Walmart,” Levin wrote.

If the proposal is implemented, cable companies would likely sue the government “on the grounds that it violates their property rights” under the priority licenses they purchased to use CBRS, Levin wrote. Levin’s note said he doesn’t think this proposal is likely to be adopted, but it shows that “the game is afoot.”

CBRS is important to cable companies because they have increasingly focused on selling mobile service as another revenue source on top of their traditional TV and broadband businesses. Cable firms got into the mobile business by reselling network access from the likes of Verizon. They’ve been increasing the use of CBRS, reducing their reliance on the major mobile companies, although a recent Light Reading article indicates that cable’s progress with CBRS deployment has been slow.

Then-FCC Chairman Ajit Pai with FCC Commissioner Brendan Carr before the start of a Senate Commerce Committee hearing on Thursday, Aug. 16, 2018. Credit: Getty Images | Bill Clark

In its statement to Ars, CTIA said the cable industry “opposes full-power 5G access in the US at every opportunity” in CBRS and other spectrum bands. Cable companies are “fighting competition” from wireless operators “every chance they can,” CTIA said. “With accelerating losses in the marketplace, their advocacy is now more aggressive and disingenuous.”

The DoD plan that reportedly mirrors AT&T’s proposal seems to represent a significant change from the Biden-era Department of Defense’s stance. In September 2023, the department issued a report saying that sharing the 3.1 GHz band with non-federal users would be challenging and potentially cause interference, even if rules were in place to protect DoD operations.

“DoD is concerned about the high possibility that non-Federal users will not adhere to the established coordination conditions at all times; the impacts related to airborne systems, due to their range and speed; and required upgrades to multiple classes of ships,” the 2023 report said. We contacted the Department of Defense and did not receive a response.

Levin quoted Calabrese as saying the new plan “would pull the rug out from under more than 1,000 CBRS operators that have deployed more than 400,000 base stations. While they could, in theory, share DoD spectrum lower in the band, that spectrum will now be so congested it’s unclear how or when that could be implemented.”

Small ISP slams “AT&T and its cabal of telecom giants”

AT&T argues that CBRS spectrum is underutilized and should be repurposed for commercial mobile use because it “resides between two crucial, high-power, licensed 5G bands”—specifically 3.45–3.55 GHz and 3.7–3.98 GHz. It said its proposal would expand the CBRS band’s total size from 150 MHz to 200 MHz by relocating it to 3.1–3.3 GHz.

Keefe John, CEO of a Wisconsin-based wireless home Internet provider called Ethoplex, argued that “AT&T and its cabal of telecom giants” are “scheming to rip this resource from the hands of small operators and hand it over to their 5G empire. This is nothing less than a brazen theft of America’s digital future, and we must fight back with unrelenting resolve.”

John is vice chairperson of the Wireless Internet Service Providers Association (WISPA), which represents small ISPs and is a member of Spectrum for the Future. He wrote that CBRS is a “vital spectrum band that has become the lifeblood of rural connectivity” because small ISPs use it to deliver fixed wireless Internet service to underserved areas.

John called the AT&T proposal “a deliberate scheme to kneecap WISPs, whose equipment, painstakingly deployed, would be rendered obsolete in the lower band.” Instead of moving CBRS from one band to another, John said CBRS should stay on its current spectrum and expand into additional spectrum “to ensure small providers have a fighting chance.”

An AT&T spokesperson told Ars that “CBRS can coexist with incumbents in the lower 3 GHz band, and with such high demand for spectrum, it should. Thinking creatively about how to most efficiently use scarce spectrum to meet crucial needs is simply good public policy.”

AT&T said that an auction “would provide reimbursement for costs associated with” moving CBRS users to other spectrum and that “the Department of Defense has already stated that incumbents in the lower 3 GHz could share with low-power commercial uses.”

“Having a low-power use sandwiched between two high-power use cases is an inefficient use of spectrum that doesn’t make sense. Our proposal would fix that inefficiency,” AT&T said.

AT&T has previously said that under its proposal, CBRS priority license holders “would have the choice of relocating to the new CBRS band, accepting vouchers they can use toward bidding on new high-power licenses, or receiving a cash payment in exchange for the relinquishment of their priority rights.”

Democrat warns of threat to naval operations

Reallocating spectrum could require the Navy to move from the current CBRS band to the lower part of 3 GHz. US Senator Maria Cantwell (D-Wash.) sent a letter urging the Department of Defense to avoid major changes, saying the current sharing arrangement “allows the Navy to continue using high-power surveillance and targeting radars to protect vessels and our coasts, while also enabling commercial use of the band when and where the Navy does not need access.”

Moving CBRS users would “disrupt critical naval operations and homeland defense” and “undermine an innovative ecosystem of commercial wireless technology that will be extremely valuable for robotic manufacturing, precision agriculture, ubiquitous connectivity in large indoor spaces, and private wireless networks,” Cantwell wrote.

Cantwell said she is also concerned that “a substantial number of military radar systems that operate in the lower 3 GHz band” will be endangered by moving CBRS. She pointed out that the DoD’s September 2023 report said the 3.1 GHz range has “unique spectrum characteristics” that “provide long detection ranges, tracking accuracy, and discrimination capability required for DoD radar systems.” The spectrum “is low enough in the frequency range to maintain a high-power aperture capability in a transportable system” and “high enough in the frequency range that a sufficient angular accuracy can be maintained for a radar track function for a fire control capability,” the DoD report said.

Spectrum for the Future members

In addition to joining the cable industry in Spectrum for the Future, public interest groups are fighting for CBRS on their own. Public Knowledge and OTI teamed up with the American Library Association, the Benton Institute for Broadband & Society, the Schools Health & Libraries Broadband (SHLB) Coalition, and others in a November 2024 FCC filing that praised the pro-consumer virtues of CBRS.

“CBRS has been the most successful innovation in wireless technology in the last decade,” the groups said. They accused the big three mobile carriers of “seeking to cripple CBRS as a band that promotes not only innovation, but also competition.”

These advocacy groups are interested in helping cable companies and small home Internet providers compete against the big three mobile carriers because that opens new options for consumers. But the groups also point to many other use cases for CBRS, writing:

CBRS has encouraged the deployment of “open networks” designed to host users needing greater flexibility and control than that offered by traditional CMRS [Commercial Mobile Radio Services] providers, at higher power and with greater interference protection than possible using unlicensed spectrum. Manufacturing campuses (such as John Deere and Dow Chemical), transit hubs (Miami International Airport, Port of Los Angeles), supply chain and logistic centers (US Marine Corps), sporting arenas (Philadelphia’s Wells Fargo Center), school districts and libraries (Fresno Unified School District, New York Public Library) are all examples of a growing trend toward local spectrum access fueling purpose-built private LTE/5G networks for a wide variety of use cases.

The SHLB told Ars that “CBRS spectrum plays a critical role in helping anchor institutions like schools and libraries connect their communities, especially in rural and underserved areas where traditional broadband options may be limited. A number of our members rely on access to shared and unlicensed spectrum to deliver remote learning and essential digital services, often at low or no cost to the user.”

Spectrum for the Future’s members also include companies that sell services to help customers deploy CBRS networks, as well as entities like Miami International Airport that deploy their own CBRS-based private cellular networks. The NCTA featured Miami International Airport’s private network in a recent press release, saying that CBRS helped the airport “deliver more reliable connectivity for visitors while also powering a robust Internet of Things network to keep the airport running smoothly.”

Spectrum for the Future doesn’t list any staff on its website. Media requests are routed to a third-party public relations firm. An employee of the public relations firm declined to answer our questions about how Spectrum for the Future is structured and operated but said it is “a member-driven coalition with a wide range of active supporters and contributors, including innovators, anchor institutions, and technology companies.”

Spectrum for the Future appears to be organized by Salt Point Strategies, a public affairs consulting firm. Salt Point Spectrum Policy Analyst David Wright is described as Spectrum for the Future’s policy director in an FCC filing. We reached out to Wright and didn’t receive a response.

One Big Beautiful Bill is a battleground

Senate Commerce Committee Chairman Ted Cruz (R-Texas) at a hearing on Tuesday, January 28, 2025. Credit: Getty Images | Tom Williams

The Trump-backed “One Big Beautiful Bill,” approved by the House, is one area of interest for both sides of the CBRS debate. The bill would restore the FCC’s expired authority to auction spectrum and require new auctions. One question is whether the bill will simply require the FCC to auction a minimum amount of spectrum or if it will require specific bands to be auctioned.

WISPA provided us with a statement about the version that passed the House, saying the group is glad it “excludes the 5.9 GHz and 6 GHz bands from its call to auction off 600 megahertz of spectrum” but worried because the bill “does not exclude the widely used and previously auctioned Citizens Broadband Radio Service (CBRS) band from competitive bidding, leaving it vulnerable to sale and/or major disruption.”

WISPA said that “spectrum auctions are typically designed to favor large players” and “cut out small and rural providers who operate on the front lines of the digital divide.” WISPA said that over 60 percent of its members “use CBRS to deliver high-quality broadband to hard-to-serve and previously unserved Americans.”

On June 5, Sen. Ted Cruz (R-Texas) released the text of the Senate Commerce Committee proposal, which also does not exclude the 3550–3700 MHz band from potential auctions. Pai and AT&T issued statements praising Cruz’s bill.

Pai said that Cruz’s “bold approach answers President Trump’s call to keep all options on the table and provides the President with full flexibility to identify the right bands to meet surging consumer demand, safeguard our economic competitiveness, and protect national security.” AT&T said that “by renewing the FCC’s auction authority and creating a pipeline of mid-band spectrum, the Senate is taking a strong step toward meeting consumers’ insatiable demand for mobile data.”

The NCTA said it welcomed the plan to restore the FCC’s auction authority but urged lawmakers to “reject the predictable calls from large mobile carriers that seek to cripple competition and new services being offered over existing Wi-Fi and CBRS bands.”

Licensed, unlicensed, and in-between

Spectrum is generally made available on a licensed or unlicensed basis. Wireless carriers pay big bucks for licenses that grant them exclusive use of spectrum bands on which they deploy nationwide cellular networks. Unlicensed spectrum—like the bands used in Wi-Fi—can be used by anyone without a license as long as they follow rules that prevent interference with other users and services.

The FCC issued rules for the CBRS band in 2015 during the Obama administration, using a somewhat different kind of system. The FCC rules allow “for dynamic spectrum sharing in the 3.5 GHz band between the Department of Defense (DoD) and commercial spectrum users,” the National Telecommunications and Information Administration notes. “DoD users have protected, prioritized use of the spectrum. When the government isn’t using the airwaves, companies and the public can gain access through a tiered framework.”

Instead of a binary licensed-versus-unlicensed system, the FCC implemented a three-tiered system of access. Tier 1 is for incumbent users of the band, including federal users and fixed satellite service. Tier 1 users receive protection against harmful interference from Tier 2 and Tier 3 users.

Tier 2 of CBRS consists of Priority Access Licenses (PALs) that are distributed on a county-by-county basis through competitive bidding. Tier 2 users get interference protection from users of Tier 3, known as General Authorized Access (GAA), which is made available in a manner similar to unlicensed spectrum.

Tier 3 “is licensed-by-rule to permit open, flexible access to the band for the widest possible group of potential users,” the FCC says. Tier 3 users can operate throughout the 3550–3700 MHz band but “must not cause harmful interference to Incumbent Access users or Priority Access Licensees and must accept interference from these users. GAA users also have no expectation of interference protection from other GAA users.”
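The protection ordering across the three tiers can be captured in a few lines. This is a sketch of the priority logic described above, with illustrative names; it is not drawn from any real Spectrum Access System implementation.

```python
# A minimal sketch of the three-tier CBRS priority ordering: a lower tier
# number means stronger protection from interference.

TIERS = {
    1: "Incumbent Access (federal users, fixed satellite service)",
    2: "Priority Access License (PAL, auctioned by county)",
    3: "General Authorized Access (GAA, licensed-by-rule)",
}

def must_protect(transmitter_tier, other_tier):
    """A transmitter must avoid harmful interference to any higher tier."""
    return other_tier < transmitter_tier

assert must_protect(3, 1)        # GAA must protect incumbents
assert must_protect(3, 2)        # ...and PAL holders
assert not must_protect(3, 3)    # GAA gets no protection from other GAA
assert not must_protect(1, 2)    # incumbents yield to no one
```

The asymmetry at the bottom tier is the point of contention in the power-level debate: GAA users accept interference from above, but the framework assumes everyone in the band is operating at relatively low power.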

The public interest groups’ November 2024 filing with the FCC said the unique approach to spectrum sharing “allow[s] all would-be users to operate where doing so does not threaten harmful interference” and provides a happy medium between high-powered operations in exclusively licensed spectrum bands and low-powered operations in unlicensed spectrum.

CTIA wants the ability to send higher-power signals in the band, arguing that full-power wireless transmissions would help the US match the efforts of other countries “where this spectrum has been identified as central to 5G.” The public interest groups urged the FCC to reject the mobile industry proposal to increase power levels, saying it “would disrupt and diminish the expanding diversity of GAA users and use cases that represent the central purpose of CBRS’s innovative three-tier, low-power and coordinated sharing framework.”

Pai helped carriers as FCC chair

The FCC’s original plan for PALs during the Obama administration was to auction them off for individual census tracts, small areas containing between 1,200 and 8,000 people each. During President Trump’s first term, the Pai FCC granted a CTIA request to boost the size of license areas from census tracts to counties, making it harder for small companies to win at auction.

The FCC auctioned PALs in 2020, getting bids of nearly $4.6 billion from 228 bidders. The biggest winners were Verizon, Dish Network, Charter, Comcast, and Cox.

Although Verizon uses CBRS for parts of its network, that doesn’t mean it’s on the same side as the cable companies in the policy debate. Verizon urged the FCC to increase the allowed power levels in the band. Dish owner EchoStar also asked for power increases. Cable companies oppose raising the power levels, with the NCTA saying that doing so would “jeopardize the continued availability of the 3.5 GHz band for lower-power operations” and harm both federal and non-federal users.

As head of CTIA, one of Pai’s main jobs is to obtain more licensed spectrum for the exclusive use of AT&T, Verizon, T-Mobile, and other mobile companies that his group represents. Pai’s Wall Street Journal op-ed said that “traffic on wireless networks is expected to triple by 2029,” driven by “AI, 5G home broadband and other emerging technologies.” Pai cited a study commissioned by CTIA to argue that “wireless networks will be unable to meet a quarter of peak demand in as little as two years.”

Spectrum for the Future countered that Pai “omits that the overwhelming share of this traffic will travel over Wi-Fi, not cellular networks.” CTIA told Ars that “the Ericsson studies we use for traffic growth projections only consider demand over commercial networks using licensed spectrum.”

Spectrum for the Future pointed to statements made by the CEOs of wireless carriers that seem to contradict Pai’s warnings of a spectrum shortage:

Mr. Pai cites a CTIA-funded study to claim “wireless networks will be unable to meet a quarter of peak demand in as little as two years.” If that’s true, then why are his biggest members’ CEOs telling Wall Street the exact opposite?

Verizon’s CEO insists he’s sitting on “a generation of spectrum”—”years and years and years” of spectrum capacity still to deploy. The CEO of Verizon’s consumer group goes even further, insisting they have “almost unlimited spectrum.” T-Mobile agrees, bragging that it has “only deployed 60 percent of our mid-band spectrum on 5G,” leaving “lots of spectrum we haven’t put into the fight yet.”

Battle could last for years

Spectrum for the Future also scoffed at Pai’s comparison of the US to China. Pai’s op-ed said that China “has accelerated its efforts to dominate in wireless and will soon boast more than four times the amount of commercial midband spectrum than the US.” Pai added that “China isn’t only deploying 5G domestically. It’s exporting its spectrum policies, its equipment vendors (such as Huawei and ZTE), and its Communist Party-centric vision of innovation to the rest of the world.”

Spectrum for the Future responded that “China’s spectrum policy goes all-in on exclusive-license frameworks, such as 5G, because they limit spectrum access to just a small handful of regime-aligned telecom companies complicit in Beijing’s censorship regime… America’s global wireless leadership, by contrast, is fueled by spectrum innovations like unlicensed Wi-Fi and CBRS spectrum sharing, whose hardware markets are dominated by American and allied companies.”

Spectrum for the Future also said that Pai and CTIA “blasting China for ‘exporting its spectrum policies’—while asking the US to adopt the same approach—is stunning hypocrisy.”

CTIA’s statement to Ars disputed Spectrum for the Future’s description. “The system of auctioning spectrum licenses was pioneered in America but is not used in China. China does, however, allocate unlicensed spectrum in a similar manner to the United States,” CTIA told Ars.

The lobbying battle and potential legal war that has Pai and CTIA lined up against the “everybody but Big Mobile” wireless coalition could last throughout Trump’s second term. Levin’s research note about the DoD proposal said, “the path from adoption to auction to making the spectrum available to the winners of an auction is likely to be at least three years.” The fight could go on a lot longer if “current licensees object and litigate,” Levin wrote.

Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.