Google isn’t alone in eyeballing nuclear power as an energy source for massive datacenters. In September, Ars reported on a plan from Microsoft that would restart the Three Mile Island nuclear power plant in Pennsylvania to fulfill some of its power needs. And the US government is getting in on the act as well, signing the bipartisan ADVANCE Act into law in July with the aim of jump-starting new nuclear power technology.
AI is driving demand for nuclear
In some ways, it would be an interesting twist if demand for training and running power-hungry AI models, which are often criticized as wasteful, ends up kick-starting a nuclear power renaissance that helps wean the US off fossil fuels and eventually reduces the impact of global climate change. These days, almost every Big Tech corporate position could be seen as an optics play designed to increase shareholder value, but this may be one of the rare times when the needs of giant corporations accidentally align with the needs of the planet.
Even from a cynical angle, the partnership between Google and Kairos Power represents a step toward the development of next-generation nuclear power as an ostensibly clean energy source (especially when compared to coal-fired power plants). As the world sees increasing energy demands, collaborations like this one, along with adopting solutions like solar and wind power, may play a key role in reducing greenhouse gas emissions.
Despite that potential upside, some experts are deeply skeptical of the Google-Kairos deal, suggesting that this recent rush to nuclear may result in Big Tech ownership of clean power generation. Dr. Sasha Luccioni, Climate and AI Lead at Hugging Face, wrote on X, “One step closer to a world of private nuclear power plants controlled by Big Tech to power the generative AI boom. Instead of rethinking the way we build and deploy these systems in the first place.”
After a US court ruled earlier this week that Google must open its Play Store to allow for third-party app stores and alternative payment options, Microsoft is moving quickly to slip through this slightly ajar door.
Sarah Bond, president of Xbox, posted on X (formerly Twitter) Thursday evening that the ruling “will allow more choice and flexibility.” “Our mission is to allow more players to play on more devices so we are thrilled to share that starting in November, players will be able to play and purchase Xbox games directly from the Xbox App on Android,” Bond wrote.
Because the court order requires Google to stop forcing apps to use its own billing system and to allow third-party app stores inside Google Play itself, Microsoft now intends to offer Xbox games directly through its app. Most games will likely not run natively on Android, but a revamped Xbox Android app could instead stream purchased or subscribed games directly to Android devices.
Until now, buying Xbox games (or almost any game) on a mobile device has typically involved either navigating to a web-based store in a browser—while avoiding attempts by the phone to open a store’s official app—or simply using a different device entirely to buy the game, then playing or streaming it on the phone.
Google called the DOJ’s move to extend search remedies to AI “radical” and an “overreach.”
The US Department of Justice finally proposed sweeping remedies to destroy Google’s search monopoly late yesterday, and, predictably, Google is not loving any of it.
On top of predictable asks—like potentially requiring Google to share search data with rivals, restricting distribution agreements with browsers like Firefox and device makers like Apple, and breaking off Chrome or Android—the DOJ proposed remedies to keep Google from blocking competition in “the evolving search industry.” And those extra steps threaten Google’s stake in the nascent AI search world.
This is only the first step in the remedies stage of litigation, but Google is already showing resistance to both expected and unexpected remedies that the DOJ proposed. In a blog post from Google’s vice president of regulatory affairs, Lee-Anne Mulholland, the company accused the DOJ of “overreach,” suggesting that proposed remedies are “radical” and “go far beyond the specific legal issues in this case.”
From here, discovery will proceed as the DOJ makes a case to broaden the scope of proposed remedies and Google raises its defense to keep remedies as narrowly tailored as possible. After that phase concludes, the DOJ will propose its final judgment on remedies in November, which must be fully revised by March 2025 for the court to then order remedies.
Even then, however, the trial is unlikely to conclude, as Google plans to appeal. In August, Mozilla’s spokesperson told Ars that the trial could drag on for years before any remedies are put in place.
In the meantime, Google plans to continue focusing on building out its search empire, Google’s president of global affairs, Kent Walker, said in August. This presumably includes innovations in AI search that the DOJ fears may further entrench Google’s dominant position.
Scrutiny of Google’s every move in the AI industry will likely only be heightened in that period. As Google has already begun seeking exclusive AI deals with companies like Apple, it risks appearing to engage in the same kinds of anti-competitive behavior in AI markets as the court has already condemned. And giving that impression could not only impact remedies ordered by the court, but also potentially weaken Google’s chances of winning on appeal, Lee Hepner, an antitrust attorney monitoring the trial for the American Economic Liberties Project, told Ars.
Ending Google’s monopoly starts with default deals
In its proposed remedy framework, the DOJ says that there’s still so much more to consider before landing on final remedies that it reserves “the right to add or remove potential proposed remedies.”
Through discovery, the DOJ said that it plans to continue engaging experts and stakeholders “to learn not just about the relevant markets themselves but also about adjacent markets as well as remedies from other jurisdictions that could affect or inform the optimal remedies in this action.
“To be effective, these remedies… must include some degree of flexibility because market developments are not always easy to predict and the mechanisms and incentives for circumvention are endless,” the DOJ said.
Ultimately, the DOJ said that any remedies sought should be “mutually reinforcing” and work to “unfetter” Google’s current monopoly in general search services and general text advertising markets. That effort would include removing barriers to competition—like distribution and revenue-sharing agreements—as well as denying Google monopoly profits and preventing Google from monopolizing “related markets in the future,” the DOJ said.
Any effort to undo Google’s monopoly starts with ending Google’s control over “the most popular distribution channels,” the DOJ said. At one point during the trial, for example, a witness accidentally blurted out that Apple gets a 36 percent cut from its Safari deal with Google. Lucrative default deals like that leave rivals with “little-to-no incentive to compete for users,” the DOJ said.
“Fully remedying these harms requires not only ending Google’s control of distribution today, but also ensuring Google cannot control the distribution of tomorrow,” the DOJ warned.
To dislodge this key peg propping up Google’s search monopoly, the DOJ floated options including ending Google’s default deals altogether, which would “limit or prohibit default agreements, preinstallation agreements, and other revenue-sharing arrangements related to search and search-related products, potentially with or without the use of a choice screen.”
A breakup could be necessary
Behavioral and structural remedies may also be needed, the DOJ proposed, to “prevent Google from using products such as Chrome, Play, and Android to advantage Google search and Google search-related products and features—including emerging search access points and features, such as artificial intelligence—over rivals or new entrants.” That could mean spinning off the Chrome browser or restricting Google from preinstalling its search engine as the default in Chrome or on Android devices.
In her blog post, Mulholland conceded that “this case is about a set of search distribution contracts” but claimed that “overbroad restrictions on distribution contracts” would create friction for Google users and “reduce revenue for companies like Mozilla” as well as Android smartphone makers.
Asked to comment on supposedly feared revenue losses, a Mozilla spokesperson told Ars, “[We are] closely monitoring the legal process and considering its potential impact on Mozilla and how we can positively influence the next steps. Mozilla has always championed competition and choice online, particularly in search. Firefox continues to offer a range of search options, and we remain committed to serving our users’ preferences while fostering a competitive market.”
Mulholland also warned that “splitting off” Chrome or Android from Google’s search business “would break them” and potentially “raise the cost of devices,” because “few companies would have the ability or incentive to keep them open source, or to invest in them at the same level we do.”
“We’ve invested billions of dollars in Chrome and Android,” Mulholland wrote. “Chrome is a secure, fast, and free browser and its open-source code provides the backbone for numerous competing browsers. Android is a secure, innovative, and free open-source operating system that has enabled vast choice in the smartphone market, helping to keep the cost of phones low for billions of people.”
Google has long argued that its investment in open source Chrome and Android projects benefits developers whose businesses and customers would be harmed if those efforts lost critical funding.
“Features like Chrome’s Safe Browsing, Android’s security features, and Play Protect benefit from information and signals from a range of Google products and our threat-detection expertise,” Mulholland wrote. “Severing Chrome and Android would jeopardize security and make patching security bugs harder.”
Hepner told Ars that Android could potentially thrive if broken off from Google, suggesting that through discovery, it will become clearer what would happen if either Google product was severed from the company.
“I think others would agree that Android is a company that is capable [of being] a standalone entity,” Hepner said. “It could be independently monetized through relationships with device manufacturers, web browsers, alternative Play Stores that are not under Google’s umbrella. And that if that were the case, what you would see is that Android and the operating system marketplace begins to evolve to meet the needs and demands of innovative products that are not being created just by Google. And you’ll see that dictating the evolution of the marketplace and fundamentally the flow of information across our society.”
Mulholland also claimed that sharing search data with rivals risked exposing users to privacy and security risks, but the DOJ vowed to be “mindful of potential user privacy concerns in the context of data sharing” while distinguishing “genuine privacy concerns” from “pretextual arguments” potentially misleading the court regarding alleged risks.
One possible way around privacy concerns, the DOJ suggested, would be prohibiting Google from collecting the kind of sensitive data that cannot be shared with rivals.
Finally, to stop Google from charging supracompetitive prices for ads, the DOJ is “evaluating remedies” like licensing or syndicating Google’s ad feed “independent of its search results.” Further, the DOJ may require more transparency, forcing Google to provide detailed “search query reports” featuring currently obscured “information related to its search text ads auction and ad monetization.”
Stakeholders were divided on whether the DOJ’s initial framework is appropriate.
Matt Schruers, the CEO of a trade association called the Computer & Communications Industry Association (which represents Big Tech companies like Google), criticized the DOJ’s “hodgepodge of structural and behavioral remedies” as going “far beyond” what’s needed to address harms.
“Any remedy should be narrowly tailored to address specific conduct, which in this case was a set of search distribution contracts,” Schruers said. “Instead, the proposed DOJ remedies would reshape numerous industries and products, which would harm consumers and innovation in these dynamic markets.”
But a senior vice president of public affairs for Google search rival DuckDuckGo, Kamyl Bazbaz, praised the DOJ’s framework as being “anchored to the court’s ruling” and appropriately broad.
“This proposal smartly takes aim at breaking Google’s illegal hold on the general search market now and ushers in a new era of enduring competition moving forward,” Bazbaz said. “The framework understands that no single remedy can undo Google’s illegal monopoly, it will require a range of behavioral and structural remedies to free the market.”
Bazbaz expects that “Google is going to use every resource at its disposal to discredit this proposal,” suggesting that “should be taken as a sign this framework can create real competition.”
AI deals could weaken Google’s appeal, expert says
Google appears particularly disturbed by the DOJ’s insistence that remedies must be forward-looking and prevent Google from leveraging its existing monopoly power “to feed artificial intelligence features.”
As Google sees it, the DOJ’s attempt to attack Google’s AI business “comes at a time when competition in how people find information is blooming, with all sorts of new entrants emerging and new technologies like AI transforming the industry.”
But the DOJ has warned that Google’s search monopoly potentially feeding AI features “is an emerging barrier to competition and risks further entrenching Google’s dominance.”
The DOJ has apparently been weighing some of the biggest complaints about Google’s AI training when mulling remedies. That includes listening to frustrated site owners who can’t afford to block Google from scraping data for AI training because the same exact crawler indexes their content in Google search results. Those site owners have “little choice” but to allow AI training or else sacrifice traffic from Google search, The Seattle Times reported.
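One nuance worth knowing (a real mechanism, though not mentioned in the passage): Google publishes a separate robots.txt token, Google-Extended, that lets a site opt out of Gemini model training while still letting Googlebot index it for search. The catch, and the bind described above, is that Google has said content surfaced through Search features such as AI Overviews is governed by ordinary Googlebot, so blocking AI use there still means giving up search traffic. A minimal robots.txt showing the split:

```
# Keep normal search indexing
User-agent: Googlebot
Allow: /

# Opt out of Gemini/Vertex AI model training (does not cover Search features)
User-agent: Google-Extended
Disallow: /
```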
Remedy options may come with consequences
Remedies in the search trial might change that. In its proposal, the DOJ said it’s considering remedies that would “prohibit Google from using contracts or other practices to undermine rivals’ access to web content and level the playing field by requiring Google to allow websites crawled for Google search to opt out of training or appearing in any Google-owned artificial-intelligence product or feature on Google search,” such as Google’s controversial AI summaries.
Hepner told Ars that “it’s not surprising at all” that remedies cover both search and AI because “at the core of Google’s monopoly power is its enormous scale and access to data.”
“The Justice Department is clearly thinking creatively,” Hepner said, noting that “the ability for content creators to opt out of having their material and work product used to train Google’s AI systems is an interesting approach to depriving Google of its immense scale.”
The DOJ is also eyeing controls on Google’s use of scale to power AI advertising technologies like Performance Max to end Google’s supracompetitive pricing on text ads for good.
It’s critical to think about the future, the DOJ argued in its framework, because “Google’s anticompetitive conduct resulted in interlocking and pernicious harms that present unprecedented complexities in a highly evolving set of markets”—not just in the markets where Google holds monopoly powers.
Google disagrees with this alleged “government overreach.”
“Hampering Google’s AI tools risks holding back American innovation at a critical moment,” Mulholland warned, claiming that AI is still new and “competition globally is fierce.”
“There are enormous risks to the government putting its thumb on the scale of this vital industry—skewing investment, distorting incentives, hobbling emerging business models—all at precisely the moment that we need to encourage investment, new business models, and American technological leadership,” Mulholland wrote.
Hepner told Ars that he thinks that the DOJ’s proposed remedies framework actually “meets the moment and matches the imperative to deprive Google of its monopoly hold on the search market, on search advertising, and potentially on future related markets.”
To ensure compliance with any remedies pursued, the DOJ also recommended “protections against circumvention and retaliation, including through novel paths to preserving dominance in the monopolized markets.”
That means Google might be required to “finance and report to a Court-appointed technical committee” charged with monitoring any Google missteps. The company may also have to agree to retain more records for longer—including chat messages that the company has been heavily criticized for deleting. And through this compliance monitoring, Google may also be prohibited from owning a large stake in any rivals.
If Google were ever found willfully non-compliant, the DOJ is considering a “range of provisions,” including risking more extreme structural or behavioral remedies or enduring extensions of compliance periods.
As the remedies stage continues through the spring, followed by Google’s prompt appeal, Hepner suggested that the DOJ could fight to start imposing remedies before the appeal concludes. Google would likely fight just as hard for any remedies to be delayed.
While the trial drags on, Hepner noted that Google already appears to be trying to strike another default deal with Apple, one that looks pretty similar to the controversial distribution deals at the heart of the search monopoly trial. In March, Apple started mulling using Google’s Gemini to exclusively power new AI features for the iPhone.
“This is basically the exact same anticompetitive behavior that they were found liable for,” Hepner told Ars, suggesting this could “weaken” Google’s defense both against the DOJ’s broad framework of proposed remedies and during the appeal.
“If Google is actually engaging in the same anti-competitive conduct [in] artificial intelligence markets that they were found liable for in the search market, the court’s not going to look kindly on that relative to an appeal,” Hepner said.
Back in 2019, Google made waves by claiming it had achieved what has been called “quantum supremacy”—the ability of a quantum computer to perform operations that would take a wildly impractical amount of time to simulate on standard computing hardware. That claim proved to be controversial, in that the operations were little more than a benchmark that involved getting the quantum computer to behave like a quantum computer; separately, improved ideas about how to perform the simulation on a supercomputer cut the time required down significantly.
But Google is back with a new exploration of the benchmark, described in a paper published in Nature on Wednesday. It uses the benchmark to identify what it calls a phase transition in the performance of its quantum processor and uses it to identify conditions where the processor can operate with low noise. Taking advantage of that, the researchers again show that, even when classical hardware is given every potential advantage, simulating the processor’s output would take a supercomputer roughly a dozen years.
Cross entropy benchmarking
The benchmark in question involves the performance of what are called quantum random circuits, which perform a set of operations on qubits and let the state of the system evolve over time, so that the output depends heavily on the stochastic nature of measurement outcomes in quantum mechanics. Each qubit will have a probability of producing one of two results, but unless that probability is one, there’s no way of knowing which of the results you’ll actually get. As a result, the output of the operations will be a string of truly random bits.
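To make the idea concrete, here is a minimal NumPy sketch of random circuit sampling. It is a toy with a handful of qubits and simple CZ entanglers, not Google’s actual benchmark circuit, and it is exactly the sort of classical simulation that becomes infeasible as qubit counts grow:

```python
# Toy random circuit sampling: random single-qubit rotations plus a
# brick-wall pattern of CZ entanglers, then sampling bitstrings.
import numpy as np

rng = np.random.default_rng(0)
n_qubits, n_cycles = 4, 8
dim = 2 ** n_qubits

def apply_1q(state, gate, q):
    # Reshape so qubit q is its own axis, contract with the gate, restore order.
    psi = state.reshape([2] * n_qubits)
    psi = np.moveaxis(np.tensordot(gate, psi, axes=([1], [q])), 0, q)
    return psi.reshape(dim)

def random_su2():
    # Haar-random single-qubit unitary via QR decomposition of a Gaussian matrix.
    m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(m)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

cz = np.diag([1, 1, 1, -1]).astype(complex)

def apply_2q(state, q1, q2):
    # Apply the two-qubit CZ gate to qubits q1 and q2.
    psi = state.reshape([2] * n_qubits)
    psi = np.moveaxis(psi, (q1, q2), (0, 1)).reshape(4, -1)
    psi = (cz @ psi).reshape([2, 2] + [2] * (n_qubits - 2))
    return np.moveaxis(psi, (0, 1), (q1, q2)).reshape(dim)

state = np.zeros(dim, dtype=complex)
state[0] = 1.0
for layer in range(n_cycles):
    for q in range(n_qubits):
        state = apply_1q(state, random_su2(), q)
    # Alternate the pairing each cycle so entanglement spreads across all qubits.
    for q in range(layer % 2, n_qubits - 1, 2):
        state = apply_2q(state, q, q + 1)

probs = np.abs(state) ** 2
samples = rng.choice(dim, size=5, p=probs)
print([format(s, f"0{n_qubits}b") for s in samples])  # random-looking bitstrings
```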
If enough qubits are involved in the operations, then it becomes increasingly difficult to simulate the performance of a quantum random circuit on classical hardware. That difficulty is what Google originally used to claim quantum supremacy.
The big challenge with running quantum random circuits on today’s hardware is the inevitability of errors. And there’s a specific approach, called cross-entropy benchmarking, that relates the performance of quantum random circuits to the overall fidelity of the hardware (meaning its ability to perform error-free operations).
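The article doesn’t give the formula, but the commonly used linear form of the cross-entropy benchmark (a standard result from the literature, not something stated in the paper coverage here) estimates fidelity from the ideal probabilities of the bitstrings the hardware actually produces:

$$F_{\mathrm{XEB}} \;=\; 2^{n}\,\big\langle p_{\mathrm{ideal}}(x_i)\big\rangle_i \;-\; 1$$

Here $n$ is the number of qubits and the average runs over the measured bitstrings $x_i$. A noiseless device scores close to 1, while a device emitting uniform random noise scores close to 0, so the benchmark directly tracks how much error the hardware injects.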
Google Principal Scientist Sergio Boixo likened performing quantum random circuits to a race between trying to build the circuit and errors that would destroy it. “In essence, this is a competition between quantum correlations spreading because you’re entangling, and random circuits entangle as fast as possible,” he told Ars. “We use two qubit gates that entangle as fast as possible. So it’s a competition between correlations or entanglement growing as fast as you want. On the other hand, noise is doing the opposite. Noise is killing correlations, it’s killing the growth of correlations. So these are the two tendencies.”
The focus of the paper is using the cross-entropy benchmark to explore the errors that occur on the company’s latest generation of Sycamore chip and using that to identify the transition point between situations where errors dominate and what the paper terms a “low noise regime,” where the probability of errors is minimized—where entanglement wins the race. The researchers likened this to a phase transition between two states.
Low noise performance
The researchers used a number of methods to identify the location of this phase transition, including numerical estimates of the system’s behavior and experiments using the Sycamore processor. Boixo explained that the transition point is related to the errors per cycle, with each cycle involving performing an operation on all of the qubits involved. So, the total number of qubits being used influences the location of the transition, since more qubits means more operations to perform. But so does the overall error rate on the processor.
If you want to operate in the low noise regime, then you have to limit the number of qubits involved (which has the side effect of making things easier to simulate on classical hardware). The only way to add more qubits is to lower the error rate. While the Sycamore processor itself had a well-understood minimal error rate, Google could artificially increase that error rate and then gradually lower it to explore Sycamore’s behavior at the transition point.
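One way to read that tradeoff (a common heuristic from the quantum supremacy literature, not the paper’s exact criterion) is that overall fidelity decays roughly exponentially with the total number of opportunities for error:

$$F \;\approx\; e^{-\epsilon\, n\, d}$$

where $\epsilon$ is the per-qubit, per-cycle error rate, $n$ is the number of qubits, and $d$ is the number of cycles. Keeping the product $\epsilon n d$ small enough is what defines the low noise regime: adding qubits pushes the system toward the transition unless $\epsilon$ drops to compensate.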
The low noise regime wasn’t error free; each operation still has the potential for error, and qubits will sometimes lose their state even when sitting around doing nothing. But this error rate could be estimated using the cross-entropy benchmark to explore the system’s overall fidelity. That wasn’t the case beyond the transition point, where errors occurred quickly enough that they would interrupt the entanglement process.
When this occurs, the result is often two separate, smaller entangled systems, each of which was subject to the Sycamore chip’s base error rates. The researchers simulated this by creating two distinct clusters of entangled qubits that could be entangled with each other by a single operation, allowing them to turn entanglement on and off at will. They showed that this behavior allowed a classical computer to spoof the overall behavior by breaking the computation up into two manageable chunks.
Ultimately, they used their characterization of the phase transition to identify the maximum number of qubits they could keep in the low noise regime given the Sycamore processor’s base error rate and then performed a million random circuits on them. While this is relatively easy to do on quantum hardware, even assuming that we could build a supercomputer without bandwidth constraints, simulating it would take roughly 10,000 years on an existing supercomputer (the Frontier system). Allowing all of the system’s storage to operate as secondary memory cut the estimate down to 12 years.
What does this tell us?
Boixo emphasized that the value of the work isn’t really based on the value of performing random quantum circuits. Truly random bit strings might be useful in some contexts, but he said that the real benefit here is a better understanding of the noise level that can be tolerated in quantum algorithms more generally. Since this benchmark is designed to make it as easy as possible for a quantum computer to outperform classical computation, a processor that can’t beat the best standard computers here has no hope of beating them on more complicated problems.
“Before you can do any other application, you need to win on this benchmark,” Boixo said. “If you are not winning on this benchmark, then you’re not winning on any other benchmark. This is the easiest thing for a noisy quantum computer compared to a supercomputer.”
Knowing how to identify this phase transition, he suggested, will also be helpful for anyone trying to run useful computations on today’s processors. “As we define the phase, it opens the possibility for finding applications in that phase on noisy quantum computers, where they will outperform classical computers,” Boixo said.
Implicit in this argument is an indication of why Google has focused on iterating on a single processor design even as many of its competitors have been pushing to increase qubit counts rapidly. If this benchmark indicates that you can’t get all of Sycamore’s qubits involved in the simplest low-noise regime calculation, then it’s not clear whether there’s a lot of value in increasing the qubit count. And the only way to change that is to lower the base error rate of the processor, so that’s where the company’s focus has been.
All of that, however, assumes that you hope to run useful calculations on today’s noisy hardware qubits. The alternative is to use error-corrected logical qubits, which will require major increases in qubit count. But Google has been seeing similar limitations due to Sycamore’s base error rate in tests that used it to host an error-corrected logical qubit, something we hope to return to in future coverage.
Thunderbird’s Android app, which is actually the K-9 Mail project reborn, is almost out. You can check it out a bit early in a beta that will feel pretty robust to most users.
Thunderbird, maintained by the Mozilla Foundation subsidiary MZLA, acquired the source code and naming rights to K-9 Mail, as announced in June 2022. The group also brought K-9 maintainer Christian Ketterer (or “cketti”) onto the project. Their initial goals, before a full rebrand into Thunderbird, involved importing Thunderbird’s automatic account setup, message filters, and mobile/desktop Thunderbird syncing.
At the tail end of 2023, however, Ketterer wrote on K-9’s blog that the punchlist of items before official Thunderbird-dom was taking longer than expected. But when it’s fully released, Thunderbird for Android will have those features. As such, beta testers are asked to check out a specific list of things to see if they work, including automatic setup, folder management, and K-9-to-Thunderbird transfer. The beta will not be “addressing longstanding issues,” Thunderbird’s blog post notes.
Launching Thunderbird for Android from K-9 Mail’s base makes a good deal of sense. Thunderbird’s desktop client has had a strange, disjointed life so far and is only just starting to regain a cohesive vision for what it wants to provide. For a long time now, K-9 Mail has been the Android email of choice for people who don’t want Gmail or Outlook, will not tolerate the default “Email” app on non-Google-blessed Android systems, and just want to see their messages.
“Picture a massive football stadium filled with fans month after month,” Reichenstein wrote to Ars. In that stadium, he writes:
5 percent (max) have a two-week trial ticket
2 percent have a yearly ticket
0.5 percent have a monthly ticket
0.5 percent are buying “all-time” tickets
But even if every lifetime ticket buyer showed up at once, that’s 10 percent of the stadium, Reichenstein said. Even without full visibility of every APK—”and what is happening in China at all,” he wrote—iA can assume 90 percent of users are “climbing over the fence.”
“Long story short, that’s how you can end up with 50,000 users and only 1,000 paying you,” Reichenstein wrote in the blog post.
Piracy doesn’t just mean lost revenue, Reichenstein wrote, but also increased demands for support, feature requests, and chances for bad ratings from people who never pay. And it builds over time. “You sell less apps through the [Play Store], but pirated users keep coming in because pirate sites don’t have such reviews. Reviews don’t matter much if the app is free.”
The iA numbers on macOS hint at a roughly 10 percent piracy rate. On iOS, it’s “not 0%,” but it’s “very, very hard to say what the numbers are”; there is also no “reset trick” or trials offered there.
A possible future unfreezing
Reichenstein wrote in the post and to Ars that sharing these kinds of numbers can invite critique from other app developers, both armchair and experienced. He’s seen that happening on Mastodon, Hacker News, and X (formerly Twitter). But “critical people are useful,” he noted, and he’s OK with people working backward to figure out how much iA might have made. (Google did not offer comment on aspects of iA’s post outside discussing Drive access policy.)
iA suggests that it might bring back Writer on Android, perhaps in a business-to-business scenario with direct payments. For now, it’s a slab of history, albeit far less valuable to the metaphorical Darth Vader that froze it.
A message highlighted above the thread warned YouTube users of “longer than normal wait times” for support requests, as YouTube continually asked for “patience” and turned off the comments.
“We are very sorry for this error on our part,” YouTube said.
Unable to leave comments, thousands of users mashed a button on the support thread, confirming that they had “the same question.” On Friday morning, 8,000 users had signaled despair, and as of this writing, the number had notched up to nearly 11,000.
YouTube has not confirmed how many users were removed, so that’s likely the best estimate we have for how many users were affected.
On Friday afternoon, YouTube did update the thread, confirming that “all channels incorrectly removed for Spam & Deceptive Practices have been fully reinstated!”
While YouTube claims that all channels are back online, not all of the mistakenly removed videos have been reinstated. Although most of the users impacted were reportedly non-creators, and therefore their livelihoods were likely not disrupted by the bug, at least one commenter complained, “my two most-viewed videos got deleted,” suggesting some account holders may highly value the videos still missing from their accounts.
“We’re working on reinstating the last few videos, thanks for bearing with us!” YouTube’s update said. “We know this was a frustrating experience, really appreciate your patience while we sort this out.”
It’s unclear if paid subscribers will be reimbursed for lost access to content.
YouTube did not respond to Ars’ request for comment.
But the rest of the AI world doesn’t march to the same beat, doing its own thing and churning out new AI models and research by the minute. Here’s a roundup of some other notable AI news from the past week.
Google Gemini updates
On Tuesday, Google announced updates to its Gemini model lineup, including the release of two new production-ready models that iterate on past releases: Gemini-1.5-Pro-002 and Gemini-1.5-Flash-002. The company reported improvements in overall quality, with notable gains in math, long-context handling, and vision tasks. Google claims a 7 percent increase in performance on the MMLU-Pro benchmark and a 20 percent improvement in math-related tasks. But as you may know if you’ve been reading Ars Technica for a while, AI benchmarks typically aren’t as useful as we would like them to be.
Along with model upgrades, Google introduced substantial price reductions for Gemini 1.5 Pro, cutting input token costs by 64 percent and output token costs by 52 percent for prompts under 128,000 tokens. As AI researcher Simon Willison noted on his blog, “For comparison, GPT-4o is currently $5/[million tokens] input and $15/m output and Claude 3.5 Sonnet is $3/m input and $15/m output. Gemini 1.5 Pro was already the cheapest of the frontier models and now it’s even cheaper.”
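As a quick sanity check on those percentages, here’s the arithmetic in Python. The pre-cut Gemini 1.5 Pro prices below are assumptions drawn from earlier public pricing, not figures stated in the passage; the rival prices come from Willison’s quote above:

```python
# Hypothetical cost comparison, $/million tokens, for prompts under 128K tokens.
gemini_old = {"input": 3.50, "output": 10.50}  # assumed earlier Gemini 1.5 Pro pricing
gemini_new = {
    "input": gemini_old["input"] * (1 - 0.64),   # 64 percent input-token cut
    "output": gemini_old["output"] * (1 - 0.52), # 52 percent output-token cut
}
rivals = {
    "GPT-4o": {"input": 5.00, "output": 15.00},           # per Willison's quote
    "Claude 3.5 Sonnet": {"input": 3.00, "output": 15.00},
}

print(f"Gemini 1.5 Pro (new): ${gemini_new['input']:.2f} in / ${gemini_new['output']:.2f} out")
for name, p in rivals.items():
    print(f"{name}: ${p['input']:.2f} in / ${p['output']:.2f} out")
# -> Gemini 1.5 Pro lands around $1.26 in / $5.04 out, the cheapest of the three
```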
Google also increased rate limits, with Gemini 1.5 Flash now supporting 2,000 requests per minute and Gemini 1.5 Pro handling 1,000 requests per minute. Google reports that the latest models offer twice the output speed and three times lower latency compared to previous versions. These changes may make it easier and more cost-effective for developers to build applications with Gemini than before.
Meta launches Llama 3.2
On Wednesday, Meta announced the release of Llama 3.2, a significant update to its open-weights AI model lineup that we have covered extensively in the past. The new release includes vision-capable large language models (LLMs) in 11B and 90B parameter sizes, as well as lightweight text-only models of 1B and 3B parameters designed for edge and mobile devices. Meta claims the vision models are competitive with leading closed-source models on image recognition and visual understanding tasks, while the smaller models reportedly outperform similar-sized competitors on various text-based tasks.
Willison ran some experiments with the smaller 3.2 models and reported impressive results given the models’ size. AI researcher Ethan Mollick showed off running Llama 3.2 on his iPhone using an app called PocketPal.
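For readers who want to try one of the small text models locally, a minimal sketch with Hugging Face Transformers might look like the following. The checkpoint id and the chat-style call are assumptions based on Meta’s usual release conventions, and the models are gated behind a license acceptance on Hugging Face:

```python
# A hedged sketch of running a small Llama 3.2 text model locally.
# Assumes the meta-llama/Llama-3.2-3B-Instruct checkpoint id and that you have
# accepted Meta's license and logged in (huggingface-cli login).
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-3B-Instruct",  # assumed checkpoint id
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires the accelerate package
)

messages = [{"role": "user", "content": "Summarize what an LLM is in one sentence."}]
output = generator(messages, max_new_tokens=64)
# Recent transformers versions return the whole conversation; the last message
# is the assistant's reply. The exact return shape can vary by version.
print(output[0]["generated_text"][-1]["content"])
```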
Meta also introduced the first official “Llama Stack” distributions, created to simplify development and deployment across different environments. As with previous releases, Meta is making the models available for free download, with license restrictions. The new models support long context windows of up to 128,000 tokens.
Google’s AlphaChip AI speeds up chip design
On Thursday, Google DeepMind announced what appears to be a significant advancement in AI-driven electronic chip design, AlphaChip. It began as a research project in 2020 and is now a reinforcement learning method for designing chip layouts. Google has reportedly used AlphaChip to create “superhuman chip layouts” in the last three generations of its Tensor Processing Units (TPUs), which are chips similar to GPUs designed to accelerate AI operations. Google claims AlphaChip can generate high-quality chip layouts in hours, compared to weeks or months of human effort. (Reportedly, Nvidia has also been using AI to help design its chips.)
Notably, Google also released a pre-trained checkpoint of AlphaChip on GitHub, sharing the model weights with the public. The company reported that AlphaChip’s impact has already extended beyond Google, with chip design companies like MediaTek adopting and building on the technology for their chips. According to Google, AlphaChip has sparked a new line of research in AI for chip design, potentially optimizing every stage of the chip design cycle from computer architecture to manufacturing.
That wasn’t everything that happened, but those are some major highlights. With the AI industry showing no signs of slowing down at the moment, we’ll see how next week goes.
Google wound down its defense in the US Department of Justice’s ad tech monopoly trial this week, following a week of testimony from witnesses that experts said seemed to lack credibility.
The tech giant started its defense by showing a widely mocked chart that Google executive Scott Sheffer called a “spaghetti football,” supposedly showing a fluid industry thriving thanks to Google’s ad tech platform but mostly just “confusing” everyone and possibly even helping to debunk its case, Open Markets Institute policy analyst Karina Montoya reported.
“The effect of this image might have backfired as it also made it evident that Google is ubiquitous in digital advertising,” Montoya reported. “During DOJ’s cross-examination, the spaghetti football was untangled to show only the ad tech products used specifically by publishers and advertisers on the open web.”
One witness, Marco Hardie, Google’s current head of industry, was even removed from the stand, his testimony deemed irrelevant by US District Judge Leonie Brinkema, Big Tech on Trial reported. Another, Google executive Scott Sheffer, gave testimony Brinkema considered “tainted,” Montoya reported. But perhaps the most heated exchange about a witness’ credibility came during the DOJ’s cross-examination of Mark Israel, the key expert that Google is relying on to challenge the DOJ’s market definition.
Google’s case depends largely on Brinkema agreeing that the DOJ’s market definition is too narrow, with an allegedly outdated focus on display ads on the open web, as opposed to a broader market including display ads appearing in apps or on social media. But experts monitoring the trial suggested that Brinkema may end up questioning Israel’s credibility after DOJ lawyer Aaron Teitelbaum’s aggressive cross-examination.
According to Big Tech on Trial, which posted the exchange on X (formerly Twitter), Teitelbaum’s line of questioning came across as a “striking and effective impeachment of Mark Israel’s credibility as a witness.”
During his testimony, Israel told Brinkema that Google’s share of the US display ads market is only 25 percent, minimizing Google’s alleged dominance while emphasizing that Google faced “intense competition” from other Big Tech companies like Amazon, Meta, and TikTok in this broader market, Open Markets Institute policy analyst Karina Montoya reported.
On cross-examination, Teitelbaum called Israel out as a “serial ‘expert’ for companies facing antitrust challenges” who “always finds that the companies ‘explained away’ market definition,” Big Tech on Trial posted on X. Teitelbaum even read out quotes from past cases “in which judges described” Israel’s “expert testimony as ‘not credible’ and having ‘misunderstood antitrust law.'”
Israel was also accused by past judges of rendering his opinions “based on false assumptions,” according to USvGoogleAds, a site run by the digital advertising watchdog Check My Ads with ad industry partners. And specifically for the Google ad tech case, Teitelbaum noted that Israel omitted ad spend data to seemingly manipulate one of his charts.
“Not a good look,” the watchdog’s site opined.
Perhaps most damaging, Teitelbaum asked Israel to confirm that “80 percent of his income comes from doing this sort of expert testimony,” suggesting that Israel seemingly depended on being paid by companies like JetBlue and Kroger-Albertsons—and even previously by Google during the search monopoly trial—to muddy the waters on market definition. Lee Hepner, an antitrust lawyer with the American Economic Liberties Project, posted on X that the DOJ’s antitrust chief, Jonathan Kanter, has grown wary of serial experts supposedly sowing distrust in the court system.
“Let me say this clearly—this will not end well,” Kanter said during a speech at a competition law conference this month. “Already we see a seeping distrust of expertise by the courts and by law enforcers.”
“Best witnesses money can buy”
In addition to experts and Google staffers backing up Google’s proposed findings of fact and conclusions of law, Google brought in Courtney Caldwell—the CEO of a small business that once received a grant from Google and appears in Google’s marketing materials—to back up claims that a DOJ win could harm small businesses, Big Tech on Trial reported.
Google’s direct examination of Caldwell was “basically just a Google ad,” Big Tech on Trial said, while Check My Ads’ site suggested that Google mostly just called upon “the best witnesses their money can buy, and it still did not get them very far.”
According to Big Tech on Trial, Google is using a “light touch” in its defense, refusing to go “pound for pound” to refute the DOJ’s case. Using this approach, Google can seemingly ignore any argument the DOJ raises that doesn’t fit into the picture Google wants Brinkema to accept of Google’s ad empire growing organically, rather than anti-competitively constructed with the intent to shut out rivals through mergers and acquisitions.
Where the DOJ wants the judge to see “a Google-only pipeline through the heart of the ad tech stack, denying non-Google rivals the same access,” Google argues that it has only “designed a set of products that work efficiently with each other and attract a valuable customer base.”
Evidence that Brinkema might find hard to ignore includes a 2008 statement from Google’s former president of display advertising, David Rosenblatt, confirming that it would “take an act of god” to get people to switch ad platforms because of extremely high switching costs. Rosenblatt also suggested in a 2009 presentation that Google acquiring DoubleClick for Publishers would make Google’s ad tech like the New York Stock Exchange, putting Google in a position to monitor every ad sale and doing for display ads “what Google did to search.” There’s also a 2010 email where now-YouTube CEO Neal Mohan recommended getting Google ahead in the display ad market by “parking” a rival with “the most traction.”
On Friday, testimony concluded abruptly after the DOJ only called one rebuttal witness, Big Tech on Trial posted on X. Brinkema is expected to hear closing arguments on November 25, Big Tech on Trial reported, and rule in December, Montoya reported.
As the US Department of Justice aims to break up Google’s alleged ad tech monopoly, experts say that remedies sought in the antitrust trial could potentially benefit not just advertisers and publishers but also everyone targeted by ads online.
So far, the DOJ has argued that through acquisitions, Google allegedly monopolizes the ad server market, taking a substantial cut of every online ad sale by tying together products on the buyer and seller sides. Locking publishers into using its seller-side platform to access its large advertiser demand, Google also allegedly shut out rivals by pushing advertisers into a corner, then making it hard for publishers to switch platforms.
This scheme also allegedly set Google up to charge higher “monopoly” fees, the DOJ argued, allegedly putting some publishers out of business and raising costs for advertisers.
But while the harms to publishers and advertisers have been outlined at length, there’s been less talk about the seemingly major consequences for consumers perhaps harmed by the alleged monopoly. Those harms include higher costs of goods, less privacy, and increasingly lower-quality ads that frequently bombard their screens with products nobody wants.
By overcharging by as much as 5 or 10 percent for online ads, Google allegedly placed a “Google tax” on the price of “everyday goods we buy,” Tech Oversight’s Sacha Haworth explained during a press briefing Thursday, where experts closely monitoring the trial shared insights.
“When it comes to lowering costs on families,” Haworth said, “Google has overcharged advertisers and publishers by nearly $2 billion. That’s just over the last four years. That has inflated the price of ads, it’s increased the cost of doing business, and, of course, these costs get passed down to us when we buy things online.”
But while it’s unclear if destroying Google’s alleged monopoly would pass on any savings to consumers, Elise Phillips, policy counsel focused on competition and privacy for Public Knowledge, outlined other benefits in the event of a DOJ win.
She suggested that Google’s conduct has diminished innovation, which has “negatively” affected “the quality, diversity, and even relevancy of the advertisements that consumers tend to see.”
Were Google’s ad tech to be broken up and behavioral remedies sought, more competition might mean that consumers have more control over how their personal data is used in targeted advertising, Phillips suggested, and ultimately lead to a future where everyone gets fed higher-quality ads.
That could happen if, instead of Google’s ad model dominating the Internet, less invasive ad targeting models could become more widely adopted, experts suggested. That could enhance privacy and make online ads less terrible after The New York Times declared a “junk ad epidemic” last year.
The thinking goes that if small businesses and publishers benefited from potentially reduced costs, increased revenues, and more options, consumers might start seeing a wider, higher-quality range of ads online, experts suggested.
Better ad models “are already out there,” Open Markets Institute policy analyst Karina Montoya said, such as “contextual advertising” that uses signals that, unlike Google’s targeting, don’t rely on “gigantic, massive data sets that collect every single thing that we do in all of our devices and that don’t ask for our consent.”
But any emerging ad models are seemingly “crushed and flattened by this current dominant business model that’s really arising” from Google’s tight grip on the ad tech markets that the DOJ is targeting, Montoya said. Those include markets “for publisher ad servers, advertiser ad networks, and the ad exchanges that connect the two,” Reuters reported.
At the furthest extreme, loosening Google’s grip on the online ad industry could even “revolutionize the Internet,” Haworth suggested.
One theory posits that if publishers’ revenues increased, consumers would also benefit from more information potentially becoming available on the open web—as less content potentially gets stuck behind paywalls as desperate publishers seek ways to make up for lost ad revenue.
Montoya—who also is a reporter for the Center for Journalism & Liberty, which monitors how media outlets can thrive in today’s digital economy—noted that publishers depending on reader funding through subscriptions or donations is not sustainable if society wants to “have an open and free market where everybody can access information that they deserve and have a right to access.” The DOJ argues that reducing Google’s control would leave publishers more financially stable, and Montoya hopes the public is starting to understand how that could benefit the open web.
“The trial is really allowing the public to see a full display of Google’s pattern of retaliatory behavior, really just to protect its monopoly power,” Montoya said. “This idea that innovation and ways to monetize journalistic content has to come only from Google is wrong and this is really their defense.”
Users of Fitbit’s iOS and Android apps have been reporting problems with the apps’ ability to sync and collect and display accurate data. Some have been complaining of such problems since at least April, and Fitbit has been working on addressing syncing issues since at least September 3. However, Google’s Fitbit hasn’t said when it expects the bugs to be totally resolved.
On September 3, Fitbit’s Status Dashboard updated to show a service disruption, pointing to an incident affecting the web API.
“Some users may notice data discrepancies or syncing issues between [third-party] apps and Fitbit. Our team is currently investigating the root cause of the issue,” the dashboard reads.
On September 3, Fitbit also released version 4.24 of its mobile apps. It’s unclear if the update is related to the problems. At least some of the complaints in this story started coming to light before September.
Owners of older and newer Fitbit devices have taken to the company’s online support forum to discuss software problems they’re reportedly having. There are several threads with dozens of pages’ worth of responses pointing to issues, like the app’s dashboard “deleting steps and not syncing properly,” the app recording steps but not distance traveled, the app seemingly showing inaccurate data, and other bugs.
When reached for comment about the complaints, a Google spokesperson told Ars Technica: “We’re aware of the issue and are working hard to get it resolved.”
Monthslong problems
Some of the complaints about the apps have seemingly gone on for months. Fitbit representatives have said online that the issues are being worked on.
For example, in an 11-page thread on Fitbit’s community forum, users say the app inaccurately claims that they’ve taken about the same number of steps per day for several days in a row. The thread began on April 10. On September 8, a Fitbit moderator said that Fitbit “is aware of the situation and is working on a solution to it.”
“We haven’t received any time frame yet, how long our team still needs to solve this. Hopefully it will be fixed soon,” the Fitbit moderator going by JuanFitbit said.
In another thread, started on July 3, a Charge 5 user claimed that their iOS app is tracking steps but not kilometers traveled. On September 18, JuanFitbit posted in the thread: “We still haven’t received an update on how long this will take. But our team has this problem as one of their priorities to solve.”
“Insanely annoying”
As expected, the ongoing bugs and broken features have left users frustrated and hungry for a solution.
“This is insanely annoying,” a forum user going by MonkeyPants wrote on September 11. “The app has constant syncing issues especially with the One.”
On Fitbit’s forum, a user called DustyStone claimed they are having problems with the app’s dashboard losing steps and not syncing properly. They said this happened with both an old Fitbit One and newly purchased Inspire 3:
It looks that Google just somehow screwed up the app. Worse yet, nothing has changed in weeks. Google is a tier 1 tech company. But their response to this issue and the deletion of the web based Fitbit platform shows that may no longer be the case.
Similarly, MBWaldo said they are “not sure how serious the fitbit team is about resolving” the app problems while lamenting the lack of an online dashboard, like countless other users we’ve seen.
“Very frustrating!!!!,” MBWaldo wrote. “I have been experiencing this for several days now. I have deleted app and reinstalled it, I have unpaired and re-paired the ONE and looked for app updates in the app store – NADA. And of course the dashboard is no longer available at fitbit.com.”
Some app problems fixed
Based on Fitbit’s forums, it seems that at least some recently reported software problems have been fixed.
For example, some customers recently noted that a problem with the apps’ “Exercise days” tiles not loading properly has been fixed. Some people have also said that they’re no longer experiencing a problem where the app was listing calorie counts for days in the future.
One only needs to go back to the recent Sonos app debacle for a reminder of the importance of ensuring that software changes won’t hurt the experience of already-purchased hardware. A company’s bad app and slow response to issues can ruin otherwise functioning hardware and discourage future purchases.
Although this is different from the Charge 5’s battery problems that were suspected to be caused by a firmware update—Google denied this was the case but didn’t provide an alternative explanation—it’s an improvement to see Google at least acknowledge the app problems. But killing features combined with a broken app experience won’t help the wearables brand’s already shaky reputation. Fixes are reportedly in the works, but for some, it may be too little, too late.
On Tuesday, Google announced plans to implement content authentication technology across its products to help users distinguish between human-created and AI-generated images. Over the coming months, the tech giant will integrate the Coalition for Content Provenance and Authenticity (C2PA) standard, a system designed to track the origin and editing history of digital content, into its search, ads, and potentially YouTube services. However, it’s an open question whether a technological solution can address the ancient social issue of trust in recorded media produced by strangers.
A group of tech companies created the C2PA system beginning in 2019 in an attempt to combat misleading, realistic synthetic media online. As AI-generated content becomes more prevalent and realistic, experts have worried that it may be difficult for users to determine the authenticity of images they encounter. The C2PA standard creates a digital trail for content, backed by an online signing authority, that includes metadata information about where images originate and how they’ve been modified.
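The actual C2PA format is a signed, certificate-backed binary manifest embedded in the file, but the underlying idea is simple enough to sketch. Below is a deliberately simplified toy in Python—not the real C2PA schema or signing flow—that hashes the asset, records each edit as an assertion, and signs the manifest so tampering becomes detectable:

```python
# Toy provenance manifest illustrating the C2PA idea only; the real standard
# uses JUMBF-embedded manifests signed with X.509 certificates, not this scheme.
import hashlib, hmac, json

SIGNING_KEY = b"demo-key-not-a-real-certificate"  # stand-in for a signing cert

def sign(manifest: dict) -> str:
    payload = json.dumps(manifest, sort_keys=True).encode()
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

image_bytes = b"...raw image data..."
manifest = {
    "asset_sha256": hashlib.sha256(image_bytes).hexdigest(),
    "claim_generator": "ExampleCamera/1.0",  # hypothetical producing tool
    "assertions": [
        {"action": "created", "when": "2024-09-17T10:00:00Z"},
        {"action": "edited", "tool": "ExampleEditor", "what": "crop"},
    ],
}
signature = sign(manifest)

# Verification: recompute the signature and the asset hash; any mismatch means
# the manifest or the image was altered after signing.
assert hmac.compare_digest(signature, sign(manifest))
assert manifest["asset_sha256"] == hashlib.sha256(image_bytes).hexdigest()
print("provenance chain verifies")
```

Stripping the manifest entirely defeats this kind of check, which is exactly the weakness discussed later in this piece.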
Google will incorporate this C2PA standard into its search results, allowing users to see if an image was created or edited using AI tools. The tech giant’s “About this image” feature in Google Search, Lens, and Circle to Search will display this information when available.
In a blog post, Laurie Richardson, Google’s vice president of trust and safety, acknowledged the complexities of establishing content provenance across platforms. She stated, “Establishing and signaling content provenance remains a complex challenge, with a range of considerations based on the product or service. And while we know there’s no silver bullet solution for all content online, working with others in the industry is critical to create sustainable and interoperable solutions.”
The company plans to use the C2PA’s latest technical standard, version 2.1, which reportedly offers improved security against tampering attacks. Its use will extend beyond search since Google intends to incorporate C2PA metadata into its ad systems as a way to “enforce key policies.” YouTube may also see integration of C2PA information for camera-captured content in the future.
Google says the new initiative aligns with its other efforts toward AI transparency, including the development of SynthID, an embedded watermarking technology created by Google DeepMind.
Widespread C2PA efficacy remains a dream
Despite a history that reaches back at least five years, content provenance technology like C2PA still faces a steep road to usefulness. The standard is entirely voluntary, and key authenticating metadata can easily be stripped from images once added.
AI image generators would need to support the standard for C2PA information to be included in each generated file, which will likely preclude open source image synthesis models like Flux. So perhaps, in practice, more “authentic,” camera-authored media will be labeled with C2PA than AI-generated images.
Beyond that, maintaining the metadata requires a complete toolchain that supports C2PA every step along the way, including at the source and any software used to edit or retouch the images. Currently, only a handful of camera manufacturers, such as Leica, support the C2PA standard. Nikon and Canon have pledged to adopt it, but The Verge reports that there’s still uncertainty about whether Apple and Google will implement C2PA support in their smartphone devices.
Adobe’s Photoshop and Lightroom can add and maintain C2PA data, but many other popular editing tools do not yet offer the capability. It only takes one non-compliant image editor in the chain to break the full usefulness of C2PA. And the general lack of standardized viewing methods for C2PA data across online platforms presents another obstacle to making the standard useful for everyday users.
C2PA could arguably be seen as one technological response to current trust issues around fake images. In that sense, C2PA may become one of many tools used to authenticate content by determining whether the information came from a credible source—if the C2PA metadata is preserved—but it is unlikely to be a complete solution to AI-generated misinformation on its own.