Author name: Tim Belzer


Amazon starts selling Hyundai cars, more brands next year

Fear not—there’s no one-click option, so no one should be in any danger of absent-mindedly buying a brand-new Palisade. Instead, there’s a “Begin Purchase” button, at which point you can choose to pay the entire amount or finance the purchase.

Here’s a huge difference from the traditional dealership experience: There’s no negotiation, no browbeating, no asking how much of a monthly payment you want to make, and no upselling of paint protection or the like. Everything can be done through Amazon with a few clicks, ending with scheduling a pickup time for the new car at the dealership. You can even trade in your existing car during the process. (I only tested the process so far, lest I accidentally end up with a brand-new Ioniq 5 N, which I still can’t charge at home.)

Amazon says it will add more brands next year, as well as leasing, and will also expand to more cities. For now, Amazon Autos is available in Atlanta, Austin, Baltimore, Beaumont-Port Arthur, Birmingham, Boston, Champaign/Springfield, Charlotte, Chicago, Cincinnati, Cleveland, Columbia, Columbus, Dallas, Denver, El Paso, Fond Du Lac, Ft. Myers/Naples, Harrisburg-Lancaster-Lebanon-York, Harrisonburg, Hartford, Houston, Indianapolis, Jacksonville, Los Angeles, Miami, Milwaukee, Minneapolis-St. Paul, Nashville, New York, Orlando, Philadelphia, Phoenix, Pittsburgh, Portland, Providence, Raleigh-Durham, Salt Lake City, San Antonio, San Diego, San Francisco, Seattle, Sheboygan, Springfield, St. Louis, Tampa, West Palm Beach, and Washington, DC.


Avian flu cases are on the upswing at big dairy farms


Rise in cases amplifies concerns about consolidation in agriculture.

Holstein dairy cows in a freestall barn. Credit: Getty

A handful of dairy farms sprawl across the valley floor, ringed by the spiky, copper-colored San Jacinto mountains. This is the very edge of California’s dairy country—and so far, the cows here are safe.

But everyone worries that the potentially lethal bird flu is on the way. “I hope not,” says Clemente Jimenez, as he fixes a hose at Pastime Lakes, a 1,500-head dairy farm. “It’s a lot of trouble.”

Further north and west, in the San Joaquin Valley—the heart of the state’s dairy industry—the H5N1 virus, commonly known as bird flu, has rippled through the massive herds that provide most of the country’s milk. Farmworkers have piled carcasses into black and white heaps. This week the state reported 19 new confirmed cases in cows and more than 240,000 in chickens. Another 50,000 cases were confirmed at a chicken breeding facility in Oklahoma.

Most worrying, though, is the spillover from livestock to humans. So far, 58 people in the United States have tested positive for bird flu. Fifty-six of them worked either on dairy or poultry farms where millions of birds had to be culled.

The Centers for Disease Control and Prevention confirmed that four of the cases in humans had no known connection to livestock, raising fears that the virus eventually could jump from one human to another, though that hasn’t happened yet. On Thursday, a study published in Science by researchers at The Scripps Research Institute said it would take only a single mutation in the H5N1 virus for it to attach itself to human receptor cells.

Large livestock facilities in states across the country, and especially in California, have become the epicenters of these cases, and some researchers say that’s no surprise: Putting thousands, even hundreds of thousands, of animals together in confined, cramped barns or corrals creates a petri dish for viruses to spread, especially between genetically similar and often stressed animals.

More drought and higher temperatures, fueled by climate change, supercharge those conditions.

“Animal production acts like a connectivity for the virus,” said Paula Ribeiro Prist, a conservation scientist with the EcoHealth Alliance, a not-for-profit group that focuses on research into pandemics. “If you have a lot of cattle being produced in more places, you have a higher chance of the virus spreading. When you have heat stress, they’re more vulnerable.”

So far, this bird flu outbreak has affected more than 112 million chickens, turkeys, and other poultry across the US since it was first detected at a turkey-producing facility in Indiana in February 2022. In March of this year, officials confirmed a case of the virus in a Texas dairy cow—the first evidence that the virus had jumped from one livestock species to another. Since then, 720 herds have been affected, most of them in California, where there have been nearly 500 recorded cases.

In the United States, a trend of consolidation in agriculture, particularly dairies, has seen more animals housed together on ever-larger farms as the number of small farms has rapidly shrunk. In 1987, half of the country’s dairy cows were in herds of 80 or fewer. Twenty years later, half the country’s cows were raised in herds of 1,300 or more. Today, 5,000-head dairies are common, especially in the arid West.

California had just over 21,000 dairy farms in 1950, producing 5.6 billion pounds of milk. Today, it has 1,100, producing around 41 billion pounds. Total US milk production has soared from about 116 billion pounds in 1950 to about 226 billion today.

“The pace of consolidation in dairy far exceeds the pace of consolidation seen in most of US agriculture,” a recent USDA report said.

Initially, researchers thought the virus was spreading through cows’ respiration, but recent research suggests it’s being transmitted through milking equipment and milk itself.

“It’s been the same strain in dairy cows… We don’t necessarily have multiple events of spillover,” said Meghan Davis, an associate professor of environmental health and engineering at Johns Hopkins Bloomberg School of Public Health. “Now it’s transmission from one cow to the next, often through milking equipment.”

It’s still unclear what caused that initial jump from wild birds, which are the natural reservoirs of the virus, to commercial poultry flocks and then to cows, but some research suggests that changing migration patterns caused by warmer weather are creating conditions conducive to the spreading of viruses. Some wild birds are migrating earlier than usual, hatching juvenile birds in new or different habitats.

“This is leading to a higher number of young that are naive to the virus,” Prist explained. “This makes the young birds more infectious—they have a higher chance of transmitting the virus because they don’t have antibodies protecting them.

“They’re going to different areas and they’re staying longer,” Prist added, “so they have higher contact with other animals, to the other native populations, that they have never had contact [with] before.”

That, researchers believe, could have initiated the spillover from wild birds to poultry, where it has become especially virulent. In wild birds, the virus tends to be a low pathogenic strain that occurs naturally, causing only minor symptoms in some birds.

“But when we introduce the virus to poultry operations where birds live in unsanitary and highly confined conditions, the virus is … able to spread through them like wildfire,” said Ben Rankin, a legal expert with the Center for Biological Diversity, an advocacy group. “There are so many more opportunities for the virus to mutate, to adapt to new kinds of hosts, and eventually, the virus spills back into the wild and this creates this cycle, or this loop, of intensification and increasing pathogenicity.”

Rankin pointed to an analysis that looked at 39 different viral outbreaks in birds from 1959 to 2015, where a low pathogenic avian influenza became a highly pathogenic one. Out of those, 37 were associated with commercial poultry operations. “So it’s a very clear relationship between the increasing pathogenicity of this virus and its relationship with industrial animal raising,” Rankin said.

Some researchers worry that large farms with multiple species are providing the optimal conditions for more species-to-species transfer. In North Carolina, the second-largest hog-producing state after Iowa, some farmers have started raising both chickens and hogs under contracts that require huge numbers of animals.

“So you’ve got co-location at a pretty substantial scale of herd size, on a single property,” said Chris Heaney, an associate professor of environmental health, engineering, epidemiology, and international health at the Bloomberg School of Public Health. “Another concern is seeing it jump into swine. That host, in particular, is uniquely well suited for those influenza viruses to re-assort and acquire properties that are very beneficial for taking up residence in humans.”

In late October, the USDA reported the first case of bird flu in a pig that lived on a small poultry and hog farm in Oregon.

Farmworker advocates say the number of cases in humans is likely underreported, largely because the immigrant and non-English speaking workforce on farms could be reluctant to seek help or may not be informed about taking precautions.

“What we’re dealing with is the lack of information from the top to the workers,” said Ana Schultz, a director with Project Protect Food Systems Workers.

In northern Colorado, home to dozens of large dairies, Schultz started to ask dairy workers in May if they were getting protective gear and whether anyone was falling ill. Many workers told her they were feeling flu-ish but didn’t go to the doctor for fear of losing a day of work or getting fired.

“I feel like there’s a lot more avian flu incidents, but no one knows about it because they don’t go to the doctor and they don’t get tested,” Schultz said. “In all the months that we’ve been doing outreach and taking protective gear and flyers, we haven’t had one single person tell us they’ve been to the doctor.”

This story originally appeared on Inside Climate News.

Georgina Gustin covers agriculture for Inside Climate News and has reported on the intersections of farming, food systems, and the environment for much of her journalism career. Her work has won numerous awards, including the John B. Oakes Award for Distinguished Environmental Journalism, and she was twice named the Glenn Cunningham Agricultural Journalist of the Year, once with ICN colleagues. She has worked as a reporter for The Day in New London, Conn., the St. Louis Post-Dispatch and CQ Roll Call, and her stories have appeared in The New York Times, Washington Post, and National Geographic’s The Plate, among others. She is a graduate of the Columbia University Graduate School of Journalism and the University of Colorado at Boulder.



We’ve got a lavish new trailer for Star Trek: Section 31

Michelle Yeoh stars in Star Trek: Section 31.

We’ve got a shiny new trailer for Star Trek: Section 31, the long-awaited spinoff film that brings back Michelle Yeoh’s magnificent Philippa Georgiou from Star Trek: Discovery. The film will explore the backstory of Georgiou’s evil Mirror Universe counterpart, a despotic emperor who murdered millions of her own people.

As previously reported, Yeoh’s stylishly acerbic Georgiou was eventually written out of Discovery, but fans took hope from rumors of a spinoff series featuring the character. That turned into a spinoff film, and we’ll take it. Miku Martineau plays a young Philippa Georgiou in the film. Meanwhile, Yeoh’s older Georgiou is tasked with protecting the United Federation of Planets as part of a black ops group called Section 31, while dealing with all the blood she’s spilled in her past.

Any hardcore Star Trek fan will tell you that Section 31 was first introduced as an urban legend of sorts in Star Trek: Deep Space Nine. Apparently Ira Steven Behr—who came up with the idea of a secret rogue organization within Starfleet doing shady things to protect the Federation—took inspiration from Commander Sisko’s comment in one episode about how “It’s easy to be a saint in paradise.” The name is taken from Starfleet Charter Article 14, Section 31, which allows Starfleet to take extraordinary measures in the face of extreme threats—including sabotage, assassination, and even biological warfare.


In a not-so-subtle signal to regulators, Blue Origin says New Glenn is ready

Blue Origin said Tuesday that the test payload for the first launch of its new rocket, New Glenn, is ready for liftoff. The company published an image of the “Blue Ring” pathfinder nestled up against one half of the rocket’s payload fairing.

“There is a growing demand to quickly move and position equipment and infrastructure in multiple orbits,” the company’s chief executive, Dave Limp, said on LinkedIn. “Blue Ring has advanced propulsion and communication capabilities for government and commercial customers to handle these maneuvers precisely and efficiently.”

Historically, Blue Origin has been tight-lipped about new products, but it is opening up more as it nears the debut of its flagship New Glenn rocket. This week’s announcement appears to serve a couple of purposes.

All Blue wants for Christmas is…

First of all, the relatively small payload contrasted with the size of the payload fairing highlights the greater volume the rocket offers over most conventional boosters. New Glenn’s payload fairing is 7 meters (23 feet) in diameter as opposed to the more conventional 5 meters (16.4 feet). It looks roomy inside.

Additionally, the company appears to be publicly signaling the Federal Aviation Administration and other regulatory agencies that it believes New Glenn is ready to fly, pending approval to conduct a hot fire test at Launch Complex-36, and then for a liftoff from Florida. This is a not-so-subtle message to regulators to please hurry up and complete the paperwork necessary for launch activities. It is not clear what is holding up the hot-fire and launch approval in this case, but it is often environmental issues or certification of a flight termination system.

Blue Origin’s release on Tuesday was carefully worded. The headline said New Glenn was “on track” for a launch this year and stated that the Blue Ring payload is “ready” for a launch this year. As yet there is no notional or public launch date. The hot-fire test has been delayed multiple times since the company put the rocket on its launch pad on Nov. 23. It had been targeting November for the test, and more recently, this past weekend.

After years of delays for the rocket, originally due to debut in 2020, Blue Origin founder Jeff Bezos hired a new chief executive to run the company a little more than a year ago. Limp, an executive from Amazon, was given the mandate to change Blue Origin’s slower-moving culture to be more nimble and urgent and was told to launch New Glenn by the end of 2024.


From Products to Customers: Delivering Business Transformation At Scale

Transformation is a journey, not a destination – so how do you transform at scale? GigaOm Field CTOs Darrel Kent and Whit Walters explore the nuances of business and digital transformation, sharing their thoughts on scaling businesses, value-driven growth, and leadership in a rapidly evolving world.

Whit: Darrel, transformation is such a well-used word these days—digital transformation, business transformation. It’s tough enough at a project level, but for enterprises looking to grow, where should they begin?

Darrel: You’re right. Transformation has become one of those overused buzzwords, but at its core, it’s about fundamental change. What is digital transformation? What is business transformation? It’s about translating those big concepts into value-based disciplines—disciplines that drive real impact.

Whit: That sounds compelling. Can you give us an example of what that looks like in practice – how does transformation relate to company growth?

Darrel: Sure. Think of a company aiming to grow from 1 billion, to 2 billion, to 5 billion in revenue. That’s not just a numbers game; it’s a journey of transformation. You can get to 1 billion by focusing on product excellence. But you won’t get to 2 billion on product alone – you need more. You need to rethink your approach to scaling—whether it’s through innovation, operations, or culture. Finance needs to invest strategically, sales needs to evolve, and leadership must align every decision with long-term goals.

Whit: It’s a fascinating shift. So, scaling isn’t just about selling more products?

Darrel: Exactly. Scaling requires a transformation in how you deliver value. For example, moving beyond transactional sales to consultative relationships. It’s about operational efficiency, customer experience, and innovation working together to create value at scale. I call these value-based disciplines.

Whit: Let’s break that down a bit more. You’ve mentioned product excellence, operational excellence, and customer excellence. How do these concepts build on each other?

Darrel: Great question. Product excellence is the foundation. When building a company, your product needs to solve a real problem and do it exceptionally well. That’s how you reach your first milestone—say, that 1-billion-dollar mark. But to scale beyond that, you can’t rely on product alone. This is where operational excellence comes in. It’s about streamlining your processes, reducing inefficiencies, and ensuring that every part of the organization is working in harmony.

Whit: And customer excellence? Where does that fit in?

Darrel: Customer excellence takes it to the next level beyond operational excellence. Once again, what gets you to 2 billion does not take you beyond that. You have to change again. It’s not just about creating a great product or running a smooth operation. It’s about truly understanding and anticipating your customers’ needs. Companies that master customer excellence create loyalty and advocacy. They don’t just react to customer feedback; they proactively shape the customer experience. This is where long-term growth happens, and it’s a hallmark of companies that scale successfully.

Whit: That makes so much sense. So, it’s a progression—starting with product, moving to operations, and finally centering everything around the customer?

Darrel: Exactly. Think of it as a ladder. Each step builds on the previous one. You need product excellence to get off the ground, operational excellence to scale efficiently, and customer excellence to ensure longevity and market leadership. And these aren’t isolated phases—they’re interconnected. A failure in one area can disrupt the whole system.

Whit: That’s a powerful perspective. What role does leadership play in this transformation?

Darrel: Leadership is everything. It starts with understanding that transformation isn’t optional—it’s survival. Leaders must champion change, align the organization’s culture with its strategy, and invest in the right areas. For example, what does the CFO prioritize? What technologies or processes does the COO implement? It all needs to work together.

Whit: Well said. What would you say to leaders who are hesitant to embark on such a daunting journey?

Darrel: I’d tell them this: Transformation isn’t just about surviving the present; it’s about thriving in the future. It’s what Simon Sinek refers to as ‘the long game’. Companies that embrace these principles—aligning value creation with their business strategy—will not only grow but will set the pace in their industries.

Whit: Do you have any final thoughts for organizations navigating their own transformations?

Darrel: Focus on value. Whether it’s your customers, employees, or stakeholders, every transformation effort should return to delivering value. And remember, it’s a journey. You don’t have to get it perfect overnight, but you do have to start.

Whit: Thank you, Darrel. Your insights are invaluable.


EV charging infrastructure isn’t just for road trippers

Although there’s been a whole lot of pessimism recently, electric vehicle sales continue to grow, even if less quickly than many hoped. That’s true in the commercial vehicle space as well—according to Cox Automotive, 87 percent of vehicle fleet operators expect to add EVs in the next five years, and more than half thought they were likely to buy EVs this year. And where and when to plug those EVs in to charge is a potential headache for fleet operators.

The good news is that charging infrastructure really is growing. It doesn’t always feel that way—the $7.5 billion allocated under the Bipartisan Infrastructure Law for charging infrastructure has to be disbursed via state departments of transportation, so the process there has been anything but rapid. But according to the Joint Office of Energy and Transportation, the total number of public charging plugs has doubled since 2020, to more than 144,000 Level 2 plugs, and the number of DC fast charger plugs is closing in on 49,000.

Plenty of things can throw off a planned timeline when building out a station with multiple chargers. Obviously, you need the funds to pay for it all—if those are to come from grants like the National Electric Vehicle Infrastructure program, for example, each state first had to develop its own funding plan, then open for submissions, and so on, before even approving a project.

Permitting can add plenty more delays, and then there’s the need to run sufficient power to a site. “The challenge is getting the power to the points that it needs to be used. The good thing is that the rollout for EV is not happening overnight, and it’s staged. So that does give some opportunity,” said Amber Putignano, market development leader at ABB Electrification.

For example, ABB has been working with Greenlane, a $650 million joint venture between Daimler Truck North America, NextEra Energy Resources, and BlackRock, as it builds out a series of charging corridors along freight routes, starting with a 280-mile (450 km) stretch of I-15 between Los Angeles and Las Vegas.


Reddit debuts AI-powered discussion search—but will users like it?

The company then went on to strike deals with major tech firms, including a $60 million agreement with Google in February 2024 and a partnership with OpenAI in May 2024 that integrated Reddit content into ChatGPT.

But Reddit users haven’t been entirely happy with the deals. In October 2024, London-based Redditors began posting false restaurant recommendations to manipulate search results and keep tourists away from their favorite spots. This coordinated effort to feed incorrect information into AI systems demonstrated how user communities might intentionally “poison” AI training data over time.

The potential for trouble

It’s tempting to lean heavily into generative AI while the technology is trendy, but the move could also present a challenge for the company. For example, Reddit’s AI-powered summaries could draw from inaccurate information featured on the site and provide incorrect answers, or they may draw inaccurate conclusions from correct information.

We will keep an eye on Reddit’s new AI-powered search tool to see if it resists the type of confabulation that we’ve seen with Google’s AI Overview, an AI summary bot that has been a critical failure so far.

Advance Publications, which owns Ars Technica parent Condé Nast, is the largest shareholder of Reddit.


Cable ISPs compare data caps to food menus: Don’t make us offer unlimited soup

“Commenters have clearly demonstrated how fees and overage charges, unclear information about data caps, and throttling or caps in the midst of public crises such as natural disasters negatively affect consumers, especially consumers in the lowest income brackets,” the filing said.

The groups said that “many low-income households have no choice but to be limited by data caps because lower priced plan tiers, the only ones they can afford, are typically capped.” Their filing urged the FCC to take action, arguing that federal law provides “ample rulemaking authority to regulate data caps as they are an unjustified, unreasonable business practice and unreasonably discriminate against low-income individuals.”

The filing quoted a December 2023 report by nonprofit news organization Capital B about broadband access problems faced by Black Americans in rural areas. The article described Internet users such as Gloria Simmons, who had lived in Devereux, Georgia, for over 50 years.

“But as a retiree on a fixed income, it’s too expensive, she says,” the Capital B report said. “She pays $60 a month for fixed wireless Internet with AT&T. But some months, if she goes over her data usage, it’s $10 for each additional 50 gigabytes of data. If it increases, she says she’ll cancel the service, despite its convenience.”

Free Press: “inequitable burden” for low-income users

Comments filed last month by advocacy group Free Press said that some ISPs don’t impose data caps because of competition from fiber-to-the-home (FTTH) and fixed wireless services. Charter doesn’t impose caps, and Comcast has avoided caps in the Northeast US where Verizon’s un-capped FiOS fiber-to-the-home service is widely deployed, Free Press said.

“ISPs like Cox and Comcast (outside of its northeast territory) continue to show that they want their customers to use as much data as possible, so long as they pay a monthly fee for unlimited data, and/or ‘upgrade’ their service with an expensive monthly equipment rental,” Free Press wrote. “Comcast’s continued use of cap-and-fee pricing is particularly egregious because it repeatedly gloats about how robust its network is relative to others in terms of handling heavy traffic volume, and it does not impose caps in the parts of its service area where it faces more robust competition from FTTH providers.”


Meet Hyperlight, Ars Technica’s new, even brighter “Light” mode

Like many sites, apps, and operating systems, Ars Technica has both “Light” and “Dark” visual styles. They look great! But even the “Light” mode has darker elements in it, and after our recent redesign, some Ars readers asked for an even lighter “Light” mode, one that would allow them to absolutely sear their own retinas with various shades of blinding white. (I kid, of course; for some readers, it’s a serious visual comfort issue.)

We’ve spent the last month working up a third visual style to give the people what they want. Behold the fully armed and operational “Hyperlight” mode, our new visual theme featuring a white background, light gray headline boxes, and black text. You can activate it right now from the visual style menu on the navigation bar at the top of the page.

In total, we now have four visual modes. Hyperlight is the brightest of these, while Day & Night is our rebranded “Light mode” and mixes light and dark elements. Dark is all dark backgrounds with light text. The fourth mode is System, which automatically switches between Day & Night and Dark modes based on your operating system setting. (System will not switch the site to Hyperlight.)


US businesses will lose $1B in one month if TikTok is banned, TikTok warns

The US is prepared to fight the injunction. In a letter, the US Justice Department argued that the court has already “definitively rejected petitioners’ constitutional claims” and no further briefing should be needed before rejecting the injunction.

If the court denies the injunction, TikTok plans to immediately ask SCOTUS for an injunction next. That’s part of the reason why TikTok wants the lower court to grant the injunction—out of respect for the higher court.

“Unless this Court grants interim relief, the Supreme Court will be forced to resolve an emergency injunction application on this weighty constitutional question in mere weeks (and over the holidays, no less),” TikTok argued.

The DOJ, however, argued that’s precisely why the court should quickly deny the injunction.

“An expedient decision by this Court denying petitioners’ motions, without awaiting the government’s response, would be appropriate to maximize the time available for the Supreme Court’s consideration of petitioners’ submissions,” the DOJ’s letter said.

TikTok has requested a decision on the injunction by December 16, and the government has agreed to file its response by Wednesday.

This is perhaps the most dire fight of TikTok’s life. The social media company has warned that not only would a US ban impact US TikTok users, but also “tens of millions” of users globally whose service could be interrupted if TikTok has to cut off US users. And once TikTok loses those users, there’s no telling if they’ll ever come back, even if TikTok wins a dragged-out court battle.

For TikTok users, an injunction granted at this stage would offer a glimmer of hope that TikTok may survive as a preferred platform for free speech and irreplaceable source of income. But for TikTok, the injunction would likely be a stepping stone, as the fastest path to securing its future increasingly seems to be appealing to Trump.

“It would not be in the interest of anyone—not the parties, the public, or the courts—to have emergency Supreme Court litigation over the Act’s constitutionality, only for the new Administration to halt its enforcement mere days or weeks later,” TikTok argued. “This Court should avoid that burdensome spectacle by granting an injunction that would allow Petitioners to seek further orderly review only if necessary.”


Google gets an error-corrected quantum bit to be stable for an hour


Using almost the entire chip for a logical qubit provides long-term stability.

Google’s new Willow chip is its first new generation of chips in about five years. Credit: Google

On Monday, Nature released a paper from Google’s quantum computing team that provides a key demonstration of the potential of quantum error correction. Thanks to an improved processor, Google’s team found that increasing the number of hardware qubits dedicated to an error-corrected logical qubit led to an exponential increase in performance. By the time the entire 105-qubit processor was dedicated to hosting a single error-corrected qubit, the system was stable for an average of an hour.
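That exponential relationship is the signature of below-threshold surface-code operation: each increase in code distance divides the logical error rate by a roughly constant factor. As a sketch (the symbols here are standard notation, not figures quoted in this article):

```latex
% Logical error per cycle at surface-code distance d, below threshold:
\varepsilon_d \;\approx\; \frac{A}{\Lambda^{(d+1)/2}}, \qquad \Lambda > 1
```

Each two-step increase in distance (3 to 5 to 7, requiring roughly 17, 49, and 97 qubits for the code itself) divides the logical error rate by another factor of Λ, which is how devoting nearly all of a 105-qubit chip to a single logical qubit can buy hour-scale stability.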

In fact, Google told Ars that errors on this single logical qubit were rare enough that it was difficult to study them. The work provides a significant validation that quantum error correction is likely to be capable of supporting the execution of complex algorithms that might require hours to execute.

A new fab

Google is making a number of announcements in association with the paper’s release (an earlier version of the paper has been up on the arXiv since August). One of those is that the company is committed enough to its quantum computing efforts that it has built its own fabrication facility for its superconducting processors.

“In the past, all the Sycamore devices that you’ve heard about were fabricated in a shared university clean room space next to graduate students and people doing kinds of crazy stuff,” Google’s Julian Kelly said. “And we’ve made this really significant investment in bringing this new facility online, hiring staff, filling it with tools, transferring their process over. And that enables us to have significantly more process control and dedicated tooling.”

That’s likely to be a critical step for the company, as the ability to fabricate smaller test devices can allow the exploration of lots of ideas on how to structure the hardware to limit the impact of noise. The first publicly announced product of this lab is the Willow processor, Google’s second design, which ups its qubit count to 105. Kelly said one of the changes that came with Willow actually involved making the individual pieces of the qubit larger, which makes them somewhat less susceptible to the influence of noise.

All of that led to a lower error rate, which was critical for the work done in the new paper. This was demonstrated by running Google’s favorite benchmark, one that it acknowledges is contrived in a way to make quantum computing look as good as possible. Still, people have figured out how to make algorithm improvements for classical computers that have kept them mostly competitive. But, with all the improvements, Google expects that the quantum hardware has moved firmly into the lead. “We think that the classical side will never outperform quantum in this benchmark because we’re now looking at something on our new chip that takes under five minutes, would take 10²⁵ years, which is way longer than the age of the Universe,” Kelly said.

Building logical qubits

The work focuses on the behavior of logical qubits, in which a collection of individual hardware qubits are grouped together in a way that enables errors to be detected and corrected. These are going to be essential for running any complex algorithms, since the hardware itself experiences errors often enough to make some inevitable during any complex calculations.

This creates a natural milestone. You can get better error correction by adding more hardware qubits to each logical qubit. If each of those hardware qubits produces errors at a sufficient rate, however, then you’ll experience errors faster than you can correct them. You need hardware qubits of sufficient quality before you start benefiting from larger logical qubits. Google’s earlier hardware had made it past that milestone, but only barely; adding more hardware qubits to each logical qubit made only a marginal improvement.

That’s no longer the case. Google’s processors lay their hardware qubits out on a square grid, with each qubit connected to its nearest neighbors (typically four, except at the edges of the grid). A specific error correction code structure, called the surface code, fits neatly into this grid, and you can build surface codes of different sizes by using progressively more of it. The size of the grid being used is measured by a term called distance; a larger distance means a bigger logical qubit, and thus better error correction.
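To give a sense of scale, the grid size grows quadratically with distance. The sketch below uses the textbook “rotated” surface-code layout (d² data qubits plus d² − 1 measurement qubits); these counts are a standard illustration, not figures taken from Google’s paper.

```python
# Sketch: physical-qubit cost of a distance-d surface code on a square
# grid, assuming the standard "rotated" surface-code layout.
def physical_qubits(d):
    """Total hardware qubits for a rotated surface code of distance d."""
    return d * d + (d * d - 1)  # d*d data qubits + (d*d - 1) measure qubits

for d in (3, 5, 7):
    print(f"distance {d}: {physical_qubits(d)} physical qubits")
```

Under this layout, a distance-7 code already needs 97 qubits, which illustrates why a single large logical qubit can consume nearly all of a 105-qubit chip.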

(In addition to a standard surface code, Google includes a few qubits that handle a phenomenon called “leakage,” where a qubit ends up in a higher-energy state, instead of the two low-energy states defined as zero and one.)

The key result is that going from a distance of three to a distance of five more than doubled the system’s ability to catch and correct errors, and going from a distance of five to a distance of seven doubled it again. This shows that the hardware qubits have reached a sufficient quality that putting more of them into a logical qubit has an exponential effect.

“As we increase the grid from three by three to five by five to seven by seven, the error rate is going down by a factor of two each time,” said Google’s Michael Newman. “And that’s that exponential error suppression that we want.”
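That factor-of-two suppression per distance step compounds quickly. Here’s a minimal sketch of the scaling Newman describes; the starting error rate is a made-up illustrative number, not a measured value from the paper.

```python
# Sketch: exponential error suppression as surface-code distance grows,
# assuming the factor-of-~2 suppression per distance step quoted above.
base_error = 3e-3   # hypothetical logical error rate at distance 3
suppression = 2.0   # error rate halves with each step (3 -> 5 -> 7 -> ...)

for step, distance in enumerate([3, 5, 7, 9, 11]):
    rate = base_error / suppression**step
    print(f"distance {distance:2d}: logical error rate ~ {rate:.2e}")
```

Four distance steps at a factor of two each already buy a 16x reduction in the logical error rate, which is the exponential payoff the team was after.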

Going big

The second thing they demonstrated is that, if you make the largest logical qubit the hardware can support, with a distance of 15, it’s possible to hang onto the quantum information for an average of an hour. This is striking because Google’s earlier work had found that its processors experience widespread simultaneous errors, roughly every 10 seconds, that the team ascribed to cosmic ray impacts. (IBM, however, has indicated it doesn’t see anything similar, so it’s not clear whether this diagnosis is correct.) This work shows that a sufficiently large error code can correct for these events, whatever their cause.

That said, these logical qubits don’t survive indefinitely; two rare failure modes remain. The first seems to be a localized, temporary increase in errors. The second, harder-to-handle problem involves a widespread spike in error detection affecting an area that includes roughly 30 qubits. At this point, however, Google has seen only six of these events, so the team told Ars that it’s difficult to really characterize them. “It’s so rare it actually starts to become a bit challenging to study because you have to gain a lot of statistics to even see those events at all,” said Kelly.

Beyond the relative durability of these logical qubits, the paper notes another advantage to going with larger code distances: it enhances the impact of further hardware improvements. Google estimates that at a distance of 15, improving hardware performance by a factor of two would drop errors in the logical qubit by a factor of 250. At a distance of 27, the same hardware improvement would lead to an improvement of over 10,000 in the logical qubit’s performance.
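Those estimates follow from the textbook below-threshold scaling, where the logical error rate goes roughly as (p/p_th)^((d+1)/2) for physical error rate p and distance d. The sketch below applies that standard relation; it is a back-of-the-envelope check, not Google’s actual model.

```python
# Sketch: why larger code distances amplify hardware gains.
# Below threshold, logical error ~ (p / p_th) ** ((d + 1) / 2), so
# improving the physical error rate p by a factor g improves the
# logical error rate by g ** ((d + 1) / 2). Textbook scaling, not
# figures lifted from the paper.
def leverage(distance, hardware_gain=2.0):
    """Logical-error improvement from a given physical-error improvement."""
    return hardware_gain ** ((distance + 1) / 2)

for d in (15, 27):
    print(f"distance {d}: ~{leverage(d):,.0f}x logical improvement "
          f"from a 2x hardware improvement")
```

A factor of 2 raised to the 8th power is 256, matching the roughly 250x figure at distance 15, and 2 to the 14th is 16,384, matching the “over 10,000” figure at distance 27.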

Note that none of this will ever get the error rate to zero. Instead, we just need to get the error rate to a level where an error is unlikely for a given calculation (more complex calculations will require a lower error rate). “It’s worth understanding that there’s always going to be some type of error floor and you just have to push it low enough to the point where it practically is irrelevant,” Kelly said. “So for example, we could get hit by an asteroid and the entire Earth could explode and that would be a correlated error that our quantum computer is not currently built to be robust to.”

Obviously, a lot of additional work will need to be done to both make logical qubits like this survive for even longer, and to ensure we have the hardware to host enough logical qubits to perform calculations. But the exponential improvements here, to Google, suggest that there’s nothing obvious standing in the way of that. “We woke up one morning and we kind of got these results and we were like, wow, this is going to work,” Newman said. “This is really it.”

Nature, 2024. DOI: 10.1038/s41586-024-08449-y  (About DOIs).


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

Google gets an error-corrected quantum bit to be stable for an hour Read More »

itch.io-platform-briefly-goes-down-due-to-“ai-driven”-anti-phishing-report

Itch.io platform briefly goes down due to “AI-driven” anti-phishing report

The itch.io domain was back up and running by 7 am Eastern, according to media reports, “after the registrant finally responded to our notice and took appropriate action to resolve the issue.” Users could still access the site throughout the outage by typing itch.io’s IP address directly into their browsers.

Too strong a shield?

BrandShield’s website describes it as a service that “detects and hunts online trademark infringement, counterfeit sales, and brand abuse across multiple platforms.” The company claims to have multiple Fortune 500 and FTSE100 companies on its client list.

In its own series of social media posts, BrandShield said its “AI-driven platform” had identified “an abuse of Funko… from an itch.io subdomain.” The takedown request it filed was focused on that subdomain, not the entirety of itch.io, BrandShield said.

“The temporary takedown of the website was a decision made by the service providers, not BrandShield or Funko.”

The whole affair highlights how the delicate web of domain registrars and DNS servers can remain a key failure point for web-based businesses. Back in May, we saw how the desyncing of a single DNS root server could cause problems across the entire Internet. And in 2012, the hacking collective Anonymous highlighted the potential for a coordinated attack to take down the entire DNS system.

Itch.io platform briefly goes down due to “AI-driven” anti-phishing report Read More »