Google


Phone tracking tool lets government agencies follow your every move

Both operating systems will display a list of apps and whether they are permitted access always, never, only while the app is in use, or to prompt for permission each time. Both also allow users to choose whether the app sees precise locations down to a few feet or only a coarse-grained location.

For most users, it makes sense to give photo, transit, or map apps access to a precise location. For other classes of apps—say, those for Internet jukeboxes at bars and restaurants—an approximate location can be helpful, but precise, fine-grained access is likely overkill. And some apps have no reason ever to know the device’s location. With a few exceptions, there’s little reason for any app to have location access at all times.

Not surprisingly, Android users who want to block intrusive location gathering have more settings to change than iOS users. The first thing to do is access Settings > Security & Privacy > Ads and choose “Delete advertising ID.” Then, promptly ignore the long, scary warning Google provides and hit the button confirming the decision at the bottom. If you don’t see that setting, good for you. It means you already deleted it. Google provides documentation here.

iOS, by default, doesn’t give apps access to the “Identifier for Advertisers,” Apple’s version of the unique tracking number assigned to iPhones, iPads, and Apple TVs. Apps, however, can display a window asking that the setting be turned on, so it’s useful to check. iPhone users can do this by going to Settings > Privacy & Security > Tracking; any apps with permission to access the unique ID will appear there. Users should also turn off the “Allow Apps to Request to Track” button. While still in Privacy & Security, they should navigate to Apple Advertising and ensure Personalized Ads is turned off.

Additional coverage of Location X from Haaretz and NOTUS is here and here. The New York Times, the other publication given access to the data, hadn’t posted an article at the time this Ars post went live.



Chatbot that caused teen’s suicide is now more dangerous for kids, lawsuit says


“I’ll do anything for you, Dany.”

Google-funded Character.AI added guardrails, but grieving mom wants a recall.

Sewell Setzer III and his mom Megan Garcia. Credit: via Center for Humane Technology

Fourteen-year-old Sewell Setzer III loved interacting with Character.AI’s hyper-realistic chatbots—with a limited version available for free or a “supercharged” version for a $9.99 monthly fee—most frequently chatting with bots named after his favorite Game of Thrones characters.

Within a month—his mother, Megan Garcia, later realized—these chat sessions had turned dark, with chatbots insisting they were real humans and posing as therapists and adult lovers in ways that appeared to proximately spur Sewell to develop suicidal thoughts. Within a year, Setzer “died by a self-inflicted gunshot wound to the head,” a lawsuit Garcia filed Wednesday said.

As Setzer became obsessed with his chatbot fantasy life, he disconnected from reality, her complaint said. Detecting a shift in her son, Garcia repeatedly took Setzer to a therapist, who diagnosed her son with anxiety and disruptive mood disorder. But nothing helped to steer Setzer away from the dangerous chatbots. Taking away his phone only intensified his apparent addiction.

Chat logs showed that some chatbots repeatedly encouraged suicidal ideation while others initiated hypersexualized chats “that would constitute abuse if initiated by a human adult,” a press release from Garcia’s legal team said.

Perhaps most disturbingly, Setzer developed a romantic attachment to a chatbot called Daenerys. In his last act before his death, Setzer logged into Character.AI, where the Daenerys chatbot urged him to “come home” and join her outside of reality.

In her complaint, Garcia accused Character.AI makers Character Technologies—founded by former Google engineers Noam Shazeer and Daniel De Freitas Adiwardana—of intentionally designing the chatbots to groom vulnerable kids. Her lawsuit further accused Google of largely funding the risky chatbot scheme at a loss in order to hoard mounds of data on minors that would be out of reach otherwise.

The chatbot makers are accused of targeting Setzer with “anthropomorphic, hypersexualized, and frighteningly realistic experiences, while programming” Character.AI to “misrepresent itself as a real person, a licensed psychotherapist, and an adult lover, ultimately resulting in [Setzer’s] desire to no longer live outside of [Character.AI,] such that he took his own life when he was deprived of access to [Character.AI.],” the complaint said.

By allegedly releasing the chatbot without appropriate safeguards for kids, Character Technologies and Google potentially harmed millions of kids, the lawsuit alleged. Represented by legal teams with the Social Media Victims Law Center (SMVLC) and the Tech Justice Law Project (TJLP), Garcia filed claims of strict product liability, negligence, wrongful death and survivorship, loss of filial consortium, and unjust enrichment.

“A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” Garcia said in the press release. “Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.AI, its founders, and Google.”

Character.AI added guardrails

It’s clear that the chatbots could’ve included more safeguards: Character.AI has since raised the age requirement from 12 and up to 17-plus. And yesterday, Character.AI posted a blog outlining new guardrails for minor users that were added in the six months since Setzer’s death in February. Those include changes “to reduce the likelihood of encountering sensitive or suggestive content,” improved detection and intervention in harmful chat sessions, and “a revised disclaimer on every chat to remind users that the AI is not a real person.”

“We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family,” a Character.AI spokesperson told Ars. “As a company, we take the safety of our users very seriously, and our Trust and Safety team has implemented numerous new safety measures over the past six months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation.”

Asked for comment, Google noted that Character.AI is a separate company in which Google has no ownership stake and denied involvement in developing the chatbots.

However, according to the lawsuit, former Google engineers at Character Technologies “never succeeded in distinguishing themselves from Google in a meaningful way.” Allegedly, the plan all along was to let Shazeer and De Freitas run wild with Character.AI—allegedly at an operating cost of $30 million per month despite low subscriber rates while profiting barely more than a million per month—without impacting the Google brand or sparking antitrust scrutiny.

Character Technologies and Google will likely file their response within the next 30 days.

Lawsuit: New chatbot feature spikes risks to kids

While the lawsuit alleged that Google is planning to integrate Character.AI into Gemini—predicting that Character.AI will soon be dissolved because it’s allegedly operating at a substantial loss—Google clarified that it has no plans to use or implement the controversial technology in its products or AI models. Were that to change, Google said, it would ensure safe integration into any Google product, including adding appropriate child safety guardrails.

Garcia is hoping a US district court in Florida will agree that Character.AI’s chatbots put profits over human life. Citing harms including “inconceivable mental anguish and emotional distress,” as well as costs of Setzer’s medical care, funeral expenses, Setzer’s future job earnings, and Garcia’s lost earnings, she’s seeking substantial damages.

That includes requesting disgorgement of unjustly earned profits, noting that Setzer had used his snack money to pay for a premium subscription for several months while the company collected his seemingly valuable personal data to train its chatbots.

And “more importantly,” Garcia wants to prevent Character.AI “from doing to any other child what it did to hers, and halt continued use of her 14-year-old child’s unlawfully harvested data to train their product how to harm others.”

Garcia’s complaint claimed that the conduct of the chatbot makers was “so outrageous in character, and so extreme in degree, as to go beyond all possible bounds of decency.” Acceptable remedies could include a recall of Character.AI, restricting use to adults only, age-gating subscriptions, adding reporting mechanisms to heighten awareness of abusive chat sessions, and providing parental controls.

Character.AI could also update chatbots to protect kids further, the lawsuit said. For one, the chatbots could be designed to stop insisting that they are real people or licensed therapists.

But instead of these updates, the lawsuit warned that Character.AI in June added a new feature that only heightens risks for kids.

Part of what addicted Setzer to the chatbots, the lawsuit alleged, was a one-way “Character Voice” feature “designed to provide consumers like Sewell with an even more immersive and realistic experience—it makes them feel like they are talking to a real person.” Setzer began using the feature as soon as it became available in January 2024.

Now, the voice feature has been updated to enable two-way conversations, which the lawsuit alleged “is even more dangerous to minor customers than Character Voice because it further blurs the line between fiction and reality.”

“Even the most sophisticated children will stand little chance of fully understanding the difference between fiction and reality in a scenario where Defendants allow them to interact in real time with AI bots that sound just like humans—especially when they are programmed to convincingly deny that they are AI,” the lawsuit said.

“By now we’re all familiar with the dangers posed by unregulated platforms developed by unscrupulous tech companies—especially for kids,” Tech Justice Law Project director Meetali Jain said in the press release. “But the harms revealed in this case are new, novel, and, honestly, terrifying. In the case of Character.AI, the deception is by design, and the platform itself is the predator.”

Matthew Bergman, another lawyer representing Garcia and the founder of the Social Media Victims Law Center, told Ars that seemingly none of the guardrails that Character.AI has added is enough to deter harms. Even raising the age limit to 17 effectively blocks only kids whose devices have strict parental controls, since kids on less-monitored devices can easily lie about their age.

“This product needs to be recalled off the market,” Bergman told Ars. “It is unsafe as designed.”

If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



Android 15’s security and privacy features are the update’s highlight

Android 15 started rolling out to Pixel devices Tuesday and will arrive, through various third-party efforts, on other Android devices at some point. There are always a bunch of little changes to discover in an Android release, whether by reading, poking around, or letting your phone show you 25 new things after it restarts.

In Android 15, some of the most notable changes involve making your device less appealing to snoops and thieves and more secure against the kids to whom you hand your phone to keep them quiet at dinner. There are also smart fixes for screen sharing, OTP codes, and cellular hacking prevention, but details about them are spread across Google’s own docs and blogs and various news sites’ reports.

Here’s what is notable and new in how Android 15 handles privacy and security.

Private Space for apps

In the Android 15 settings, you can find “Private Space,” where you can set up a separate PIN code, password, biometric check, and optional Google account for apps you don’t want to be available to anybody who happens to have your phone. This could add a layer of protection onto sensitive apps, like banking and shopping apps, or hide other apps for whatever reason.

In your list of apps, drag any app down to the lock space that now appears in the bottom right. The space shows up only as a lock until you unlock it; then you’ll see the apps available in your new Private Space. After moving an app there, you should probably delete it from the main app list. Dave Taylor has a rundown of the process and its quirks.

It’s obviously more involved than Apple’s “Hide and Require Face ID” tap option but with potentially more robust hiding of the app.

Hiding passwords and OTP codes

A second form of authentication is good security, but allowing apps to access the notification text with the code in it? Not so good. In Android 15, a new permission, likely to be given only to the most critical apps, prevents the leaking of one-time passcodes (OTPs) to other apps waiting for them. Sharing your screen will also hide OTP notifications, along with usernames, passwords, and credit card numbers.



Google and Kairos sign nuclear reactor deal with aim to power AI

Google isn’t alone in eyeballing nuclear power as an energy source for massive data centers. In September, Ars reported on a plan from Microsoft that would reopen the Three Mile Island nuclear power plant in Pennsylvania to fulfill some of its power needs. And the US administration is getting into the nuclear act as well, signing the bipartisan ADVANCE Act in July with the aim of jump-starting new nuclear power technology.

AI is driving demand for nuclear

In some ways, it would be an interesting twist if demand for training and running power-hungry AI models, which are often criticized as wasteful, ends up kick-starting a nuclear power renaissance that helps wean the US off fossil fuels and eventually reduces the impact of global climate change. These days, almost every Big Tech corporate position could be seen as an optics play designed to increase shareholder value, but this may be one of the rare times when the needs of giant corporations accidentally align with the needs of the planet.

Even from a cynical angle, the partnership between Google and Kairos Power represents a step toward the development of next-generation nuclear power as an ostensibly clean energy source (especially when compared to coal-fired power plants). As the world sees increasing energy demands, collaborations like this one, along with adopting solutions like solar and wind power, may play a key role in reducing greenhouse gas emissions.

Despite that potential upside, some experts are deeply skeptical of the Google-Kairos deal, suggesting that this recent rush to nuclear may result in Big Tech ownership of clean power generation. Dr. Sasha Luccioni, Climate and AI Lead at Hugging Face, wrote on X, “One step closer to a world of private nuclear power plants controlled by Big Tech to power the generative AI boom. Instead of rethinking the way we build and deploy these systems in the first place.”



Xbox plans to set up shop on Android devices if court order holds

After a US court ruled earlier this week that Google must open its Play Store to allow for third-party app stores and alternative payment options, Microsoft is moving quickly to slide into this slightly ajar door.

Sarah Bond, president of Xbox, posted on X (formerly Twitter) Thursday evening that the ruling “will allow more choice and flexibility.” “Our mission is to allow more players to play on more devices so we are thrilled to share that starting in November, players will be able to play and purchase Xbox games directly from the Xbox App on Android,” Bond wrote.

Because the court order requires Google to stop forcing apps to use its own billing system and allow for third-party app stores inside Google Play itself, Microsoft now intends to offer Xbox games directly through its app. Most games will likely not run directly on Android, but a revamped Xbox Android app could also directly stream purchased or subscribed games to Android devices.

Until now, buying Xbox games (or most any game) on a mobile device has typically involved either navigating to a web-based store in a browser—while avoiding attempts by the phone to open a store’s official app—or simply using a different device entirely to buy the game, then playing or streaming it on the phone.



DOJ proposes breakup and other big changes to end Google search monopoly


Google called the DOJ extending search remedies to AI “radical,” an “overreach.”

The US Department of Justice finally proposed sweeping remedies to destroy Google’s search monopoly late yesterday, and, predictably, Google is not loving any of it.

On top of predictable asks—like potentially requiring Google to share search data with rivals, restricting distribution agreements with browsers like Firefox and device makers like Apple, and breaking off Chrome or Android—the DOJ proposed remedies to keep Google from blocking competition in “the evolving search industry.” And those extra steps threaten Google’s stake in the nascent AI search world.

This is only the first step in the remedies stage of litigation, but Google is already showing resistance to both expected and unexpected remedies that the DOJ proposed. In a blog from Google’s vice president of regulatory affairs, Lee-Anne Mulholland, the company accused the DOJ of “overreach,” suggesting that proposed remedies are “radical” and “go far beyond the specific legal issues in this case.”

From here, discovery will proceed as the DOJ makes a case to broaden the scope of proposed remedies and Google raises its defense to keep remedies as narrowly tailored as possible. After that phase concludes, the DOJ will file its final proposed judgment on remedies in November, which must be fully revised by March 2025 before the court can order remedies.

Even then, however, the trial is unlikely to conclude, as Google plans to appeal. In August, Mozilla’s spokesperson told Ars that the trial could drag on for years before any remedies are put in place.

In the meantime, Google plans to continue focusing on building out its search empire, Google’s president of global affairs, Kent Walker, said in August. This presumably includes innovations in AI search that the DOJ fears may further entrench Google’s dominant position.

Scrutiny of Google’s every move in the AI industry will likely only be heightened in that period. As Google has already begun seeking exclusive AI deals with companies like Apple, it risks appearing to engage in the same kinds of anti-competitive behavior in AI markets as the court has already condemned. And giving that impression could not only impact remedies ordered by the court, but also potentially weaken Google’s chances of winning on appeal, Lee Hepner, an antitrust attorney monitoring the trial for the American Economic Liberties Project, told Ars.

Ending Google’s monopoly starts with default deals

In the DOJ’s proposed remedy framework, the DOJ says that there’s still so much more to consider before landing on final remedies that it reserves “the right to add or remove potential proposed remedies.”

Through discovery, the DOJ said, it plans to continue engaging experts and stakeholders “to learn not just about the relevant markets themselves but also about adjacent markets as well as remedies from other jurisdictions that could affect or inform the optimal remedies in this action.

“To be effective, these remedies… must include some degree of flexibility because market developments are not always easy to predict and the mechanisms and incentives for circumvention are endless,” the DOJ said.

Ultimately, the DOJ said that any remedies sought should be “mutually reinforcing” and work to “unfetter” Google’s current monopoly in general search services and general text advertising markets. That effort would include removing barriers to competition—like distribution and revenue-sharing agreements—as well as denying Google monopoly profits and preventing Google from monopolizing “related markets in the future,” the DOJ said.

Any effort to undo Google’s monopoly starts with ending Google’s control over “the most popular distribution channels,” the DOJ said. At one point during the trial, for example, a witness accidentally blurted out that Apple gets a 36 percent cut from its Safari deal with Google. Lucrative default deals like that leave rivals with “little-to-no incentive to compete for users,” the DOJ said.

“Fully remedying these harms requires not only ending Google’s control of distribution today, but also ensuring Google cannot control the distribution of tomorrow,” the DOJ warned.

To dislodge this key peg propping up Google’s search monopoly, some options include ending Google’s default deals altogether, which would “limit or prohibit default agreements, preinstallation agreements, and other revenue-sharing arrangements related to search and search-related products, potentially with or without the use of a choice screen.”

A breakup could be necessary

Behavioral and structural remedies may also be needed, the DOJ proposed, to “prevent Google from using products such as Chrome, Play, and Android to advantage Google search and Google search-related products and features—including emerging search access points and features, such as artificial intelligence—over rivals or new entrants.” That could mean spinning off the Chrome browser or restricting Google from preinstalling its search engine as the default in Chrome or on Android devices.

In her blog, Mulholland conceded that “this case is about a set of search distribution contracts” but claimed that “overbroad restrictions on distribution contracts” would create friction for Google users and “reduce revenue for companies like Mozilla” as well as Android smartphone makers.

Asked to comment on supposedly feared revenue losses, a Mozilla spokesperson told Ars, “[We are] closely monitoring the legal process and considering its potential impact on Mozilla and how we can positively influence the next steps. Mozilla has always championed competition and choice online, particularly in search. Firefox continues to offer a range of search options, and we remain committed to serving our users’ preferences while fostering a competitive market.”

Mulholland also warned that “splitting off” Chrome or Android from Google’s search business “would break them” and potentially “raise the cost of devices,” because “few companies would have the ability or incentive to keep them open source, or to invest in them at the same level we do.”

“We’ve invested billions of dollars in Chrome and Android,” Mulholland wrote. “Chrome is a secure, fast, and free browser and its open-source code provides the backbone for numerous competing browsers. Android is a secure, innovative, and free open-source operating system that has enabled vast choice in the smartphone market, helping to keep the cost of phones low for billions of people.”

Google has long argued that its investment in open source Chrome and Android projects benefits developers whose businesses and customers would be harmed if those efforts lost critical funding.

“Features like Chrome’s Safe Browsing, Android’s security features, and Play Protect benefit from information and signals from a range of Google products and our threat-detection expertise,” Mulholland wrote. “Severing Chrome and Android would jeopardize security and make patching security bugs harder.”

Hepner told Ars that Android could potentially thrive if broken off from Google, suggesting that through discovery, it will become clearer what would happen if either Google product was severed from the company.

“I think others would agree that Android is a company that is capable [of being] a standalone entity,” Hepner said. “It could be independently monetized through relationships with device manufacturers, web browsers, alternative Play Stores that are not under Google’s umbrella. And that if that were the case, what you would see is that Android and the operating system marketplace begins to evolve to meet the needs and demands of innovative products that are not being created just by Google. And you’ll see that dictating the evolution of the marketplace and fundamentally the flow of information across our society.”

Mulholland also claimed that sharing search data with rivals risked exposing users to privacy and security risks, but the DOJ vowed to be “mindful of potential user privacy concerns in the context of data sharing” while distinguishing “genuine privacy concerns” from “pretextual arguments” potentially misleading the court regarding alleged risks.

One possible way around privacy concerns, the DOJ suggested, would be prohibiting Google from collecting the kind of sensitive data that cannot be shared with rivals.

Finally, to stop Google from charging supra-competitive prices for ads, the DOJ is “evaluating remedies” like licensing or syndicating Google’s ad feed “independent of its search results.” Further, the DOJ may require more transparency, forcing Google to provide detailed “search query reports” featuring currently obscured “information related to its search text ads auction and ad monetization.”

Stakeholders were divided on whether the DOJ’s initial framework is appropriate.

Matt Schruers, the CEO of a trade association called the Computer & Communications Industry Association (which represents Big Tech companies like Google), criticized the DOJ’s “hodgepodge of structural and behavioral remedies” as going “far beyond” what’s needed to address harms.

“Any remedy should be narrowly tailored to address specific conduct, which in this case was a set of search distribution contracts,” Schruers said. “Instead, the proposed DOJ remedies would reshape numerous industries and products, which would harm consumers and innovation in these dynamic markets.”

But a senior vice president of public affairs for Google search rival DuckDuckGo, Kamyl Bazbaz, praised the DOJ’s framework as being “anchored to the court’s ruling” and appropriately broad.

“This proposal smartly takes aim at breaking Google’s illegal hold on the general search market now and ushers in a new era of enduring competition moving forward,” Bazbaz said. “The framework understands that no single remedy can undo Google’s illegal monopoly, it will require a range of behavioral and structural remedies to free the market.”

Bazbaz expects that “Google is going to use every resource at its disposal to discredit this proposal,” suggesting that “should be taken as a sign this framework can create real competition.”

AI deals could weaken Google’s appeal, expert says

Google appears particularly disturbed by the DOJ’s insistence that remedies must be forward-looking and prevent Google from leveraging its existing monopoly power “to feed artificial intelligence features.”

As Google sees it, the DOJ’s attempt to attack Google’s AI business “comes at a time when competition in how people find information is blooming, with all sorts of new entrants emerging and new technologies like AI transforming the industry.”

But the DOJ has warned that Google’s search monopoly potentially feeding AI features “is an emerging barrier to competition and risks further entrenching Google’s dominance.”

The DOJ has apparently been weighing some of the biggest complaints about Google’s AI training when mulling remedies. That includes listening to frustrated site owners who can’t afford to block Google from scraping data for AI training because the same exact crawler indexes their content in Google search results. Those site owners have “little choice” but to allow AI training or else sacrifice traffic from Google search, The Seattle Times reported.

Remedy options may come with consequences

Remedies in the search trial might change that. In its proposal, the DOJ said it’s considering remedies that would “prohibit Google from using contracts or other practices to undermine rivals’ access to web content and level the playing field by requiring Google to allow websites crawled for Google search to opt out of training or appearing in any Google-owned artificial-intelligence product or feature on Google search,” such as Google’s controversial AI summaries.

Hepner told Ars that “it’s not surprising at all” that remedies cover both search and AI because “at the core of Google’s monopoly power is its enormous scale and access to data.”

“The Justice Department is clearly thinking creatively,” Hepner said, noting that “the ability for content creators to opt out of having their material and work product used to train Google’s AI systems is an interesting approach to depriving Google of its immense scale.”

The DOJ is also eyeing controls on Google’s use of scale to power AI advertising technologies like Performance Max to end Google’s supracompetitive pricing on text ads for good.

It’s critical to think about the future, the DOJ argued in its framework, because “Google’s anticompetitive conduct resulted in interlocking and pernicious harms that present unprecedented complexities in a highly evolving set of markets”—not just in the markets where Google holds monopoly powers.

Google disagrees with this alleged “government overreach.”

“Hampering Google’s AI tools risks holding back American innovation at a critical moment,” Mulholland warned, claiming that AI is still new and “competition globally is fierce.”

“There are enormous risks to the government putting its thumb on the scale of this vital industry—skewing investment, distorting incentives, hobbling emerging business models—all at precisely the moment that we need to encourage investment, new business models, and American technological leadership,” Mulholland wrote.

Hepner told Ars that he thinks that the DOJ’s proposed remedies framework actually “meets the moment and matches the imperative to deprive Google of its monopoly hold on the search market, on search advertising, and potentially on future related markets.”

To ensure compliance with any remedies pursued, the DOJ also recommended “protections against circumvention and retaliation, including through novel paths to preserving dominance in the monopolized markets.”

That means Google might be required to “finance and report to a Court-appointed technical committee” charged with monitoring any Google missteps. The company may also have to agree to retain more records for longer—including chat messages that the company has been heavily criticized for deleting. And through this compliance monitoring, Google may also be prohibited from owning a large stake in any rivals.

If Google were ever found willfully non-compliant, the DOJ is considering a “range of provisions,” including risking more extreme structural or behavioral remedies or enduring extensions of compliance periods.

As the remedies stage continues through the spring, followed by Google’s prompt appeal, Hepner suggested that the DOJ could fight to start imposing remedies before the appeal concludes. Likely Google would just as strongly fight for any remedies to be delayed.

While the trial drags on, Hepner noted that Google already appears to be trying to strike another default deal with Apple that looks pretty similar to the controversial distribution deals at the heart of the search monopoly trial. In March, Apple started mulling using Google’s Gemini to exclusively power new AI features for the iPhone.

“This is basically the exact same anticompetitive behavior that they were found liable for,” Hepner told Ars, suggesting this could “weaken” Google’s defense both against the DOJ’s broad framework of proposed remedies and during the appeal.

“If Google is actually engaging in the same anti-competitive conduct [in] artificial intelligence markets that they were found liable for in the search market, the court’s not going to look kindly on that relative to an appeal,” Hepner said.

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



Google identifies low noise “phase transition” in its quantum processor


Noisy, but not that noisy

Benchmark may help us understand how quantum computers can operate with low error.


Google’s Sycamore processor. Credit: Google

Back in 2019, Google made waves by claiming it had achieved what has been called “quantum supremacy”—the ability of a quantum computer to perform operations that would take a wildly impractical amount of time to simulate on standard computing hardware. That claim proved to be controversial, in that the operations were little more than a benchmark that involved getting the quantum computer to behave like a quantum computer; separately, improved ideas about how to perform the simulation on a supercomputer cut the time required down significantly.

But Google is back with a new exploration of the benchmark, described in a paper published in Nature on Wednesday. The new work uses the benchmark to identify what the company calls a phase transition in the performance of its quantum processor and to pin down conditions where the processor can operate with low noise. Taking advantage of that, the researchers again show that, even with every potential advantage handed to classical hardware, it would take a supercomputer a dozen years to simulate the system.

Cross entropy benchmarking

The benchmark in question involves the performance of what are called quantum random circuits, which perform a set of operations on qubits and let the state of the system evolve over time, so that the output depends heavily on the stochastic nature of measurement outcomes in quantum mechanics. Each qubit will have a probability of producing one of two results, but unless that probability is one, there’s no way of knowing which of the results you’ll actually get. As a result, the output of the operations will be a string of truly random bits.

If enough qubits are involved in the operations, then it becomes increasingly difficult to simulate the performance of a quantum random circuit on classical hardware. That difficulty is what Google originally used to claim quantum supremacy.

The big challenge with running quantum random circuits on today’s hardware is the inevitability of errors. And there’s a specific approach, called cross-entropy benchmarking, that relates the performance of quantum random circuits to the overall fidelity of the hardware (meaning its ability to perform error-free operations).
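
To make the relationship concrete, here is a minimal sketch of linear cross-entropy benchmarking on a tiny simulated random circuit. This is not Google’s benchmarking code; it assumes only NumPy, uses a toy noise model in which each cycle has some chance of scrambling the state into uniform randomness, and runs at a size (three qubits) that is trivial for classical hardware.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3                 # qubits (tiny, so we can simulate the ideal circuit exactly)
dim = 2 ** n
depth = 8             # cycles of the random circuit
noise = 0.10          # toy model: per-cycle chance the state is scrambled

def random_single_qubit_layer():
    """Tensor product of random 2x2 unitaries, one per qubit."""
    layer = np.array([[1.0]])
    for _ in range(n):
        z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
        q, r = np.linalg.qr(z)
        q = q * (np.diag(r) / np.abs(np.diag(r)))   # fix phases so q is Haar-like
        layer = np.kron(layer, q)
    return layer

def cz_chain():
    """Entangling layer: controlled-Z between neighboring qubits, as a dense matrix."""
    u = np.eye(dim, dtype=complex)
    for q1 in range(n - 1):
        for basis in range(dim):
            bits = [(basis >> (n - 1 - k)) & 1 for k in range(n)]
            if bits[q1] == 1 and bits[q1 + 1] == 1:
                u[basis, basis] *= -1
    return u

# Ideal random circuit: alternate single-qubit and entangling layers.
state = np.zeros(dim, dtype=complex)
state[0] = 1.0
for _ in range(depth):
    state = cz_chain() @ (random_single_qubit_layer() @ state)
ideal_probs = np.abs(state) ** 2
ideal_probs /= ideal_probs.sum()

# Noisy sampler: with probability (1 - p_survive), a sample is just uniform noise.
p_survive = (1 - noise) ** depth
samples = 10_000
scrambled = rng.random(samples) > p_survive
bitstrings = np.where(scrambled,
                      rng.integers(dim, size=samples),
                      rng.choice(dim, size=samples, p=ideal_probs))

# Linear cross-entropy benchmark: F = 2^n * <p_ideal(sampled bitstring)> - 1.
# For a deep random circuit, the ideal output distribution is close to
# Porter-Thomas, so F roughly tracks the fraction of error-free samples.
f_xeb = dim * ideal_probs[bitstrings].mean() - 1
print(f"estimated XEB fidelity: {f_xeb:.2f} (toy survival probability: {p_survive:.2f})")
```

As the per-cycle noise grows, the estimated fidelity collapses toward zero; that link between sampled bitstrings and overall fidelity is what the benchmark exploits at scales classical machines cannot simulate.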

Google Principal Scientist Sergio Boixo likened performing quantum random circuits to a race between trying to build the circuit and errors that would destroy it. “In essence, this is a competition between quantum correlations spreading because you’re entangling, and random circuits entangle as fast as possible,” he told Ars. “We use two qubit gates that entangle as fast as possible. So it’s a competition between correlations or entanglement growing as fast as you want. On the other hand, noise is doing the opposite. Noise is killing correlations, it’s killing the growth of correlations. So these are the two tendencies.”

The focus of the paper is using the cross-entropy benchmark to explore the errors that occur on the company’s latest generation of Sycamore chip and to identify the transition point between situations where errors dominate and what the paper terms a “low noise regime,” where the probability of errors is minimized—where entanglement wins the race. The researchers likened this to a phase transition between two states.

Low noise performance

The researchers used a number of methods to identify the location of this phase transition, including numerical estimates of the system’s behavior and experiments using the Sycamore processor. Boixo explained that the transition point is related to the errors per cycle, with each cycle involving performing an operation on all of the qubits involved. So, the total number of qubits being used influences the location of the transition, since more qubits means more operations to perform. But so does the overall error rate on the processor.

If you want to operate in the low noise regime, then you have to limit the number of qubits involved (which has the side effect of making things easier to simulate on classical hardware). The only way to add more qubits is to lower the error rate. While the Sycamore processor itself had a well-understood minimal error rate, Google could artificially increase that error rate and then gradually lower it to explore Sycamore’s behavior at the transition point.
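
A rough way to see the trade-off between qubit count and error rate: if each qubit independently picks up an error with probability ε in every cycle, the chance that a circuit of depth d on n qubits runs cleanly shrinks like (1 - ε)^(n·d). The sketch below uses made-up depths and thresholds, not figures from the paper, purely to illustrate why lowering the error rate is the only way to add qubits while staying in the low noise regime.

```python
import math

def max_qubits(eps: float, depth: int, min_fidelity: float = 0.01) -> int:
    """Toy estimate: largest n for which (1 - eps)**(n * depth) stays above
    min_fidelity. An illustration of the scaling, not the paper's analysis."""
    per_qubit_per_cycle = -math.log(1.0 - eps)
    return int(math.log(1.0 / min_fidelity) / (depth * per_qubit_per_cycle))

for eps in (0.005, 0.002, 0.001):
    n = max_qubits(eps, depth=20)
    print(f"error per qubit per cycle {eps:.3f} -> roughly {n} qubits before errors dominate")
```

In this toy model, halving the error rate roughly doubles how many qubits can participate before errors swamp the computation, which is the qualitative relationship described above.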

The low noise regime wasn’t error free; each operation still has the potential for error, and qubits will sometimes lose their state even when sitting around doing nothing. But this error rate could be estimated using the cross-entropy benchmark to explore the system’s overall fidelity. That wasn’t the case beyond the transition point, where errors occurred quickly enough that they would interrupt the entanglement process.

When this occurs, the result is often two separate, smaller entangled systems, each of which was subject to the Sycamore chip’s base error rates. The researchers simulated this by creating two distinct clusters of entangled qubits that could be entangled with each other by a single operation, allowing them to turn entanglement on and off at will. They showed that this behavior allowed a classical computer to spoof the overall behavior by breaking the computation up into two manageable chunks.
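
The reason splitting the system helps a classical simulator comes down to memory: a dense simulation stores 2^n complex amplitudes, so two half-sized systems are exponentially cheaper than one big one. The qubit counts below are hypothetical, chosen only to show the scale of the difference.

```python
def statevector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    """Memory for a dense complex128 state vector: 2**n amplitudes, 16 bytes each."""
    return (2 ** n_qubits) * bytes_per_amplitude

n = 40  # hypothetical total qubit count, not the number Google used
whole = statevector_bytes(n)
halves = 2 * statevector_bytes(n // 2)
print(f"one {n}-qubit state vector:    {whole / 2**40:.0f} TiB")
print(f"two {n // 2}-qubit state vectors: {halves / 2**20:.0f} MiB")
```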

Ultimately, they used their characterization of the phase transition to identify the maximum number of qubits they could keep in the low noise regime given the Sycamore processor’s base error rate and then performed a million random circuits on them. While this is relatively easy to do on quantum hardware, even assuming that we could build a supercomputer without bandwidth constraints, simulating it would take roughly 10,000 years on an existing supercomputer (the Frontier system). Allowing all of the system’s storage to operate as secondary memory cut the estimate down to 12 years.

What does this tell us?

Boixo emphasized that the value of the work isn’t really based on the value of performing random quantum circuits. Truly random bit strings might be useful in some contexts, but he emphasized that the real benefit here is a better understanding of the noise level that can be tolerated in quantum algorithms more generally. Since this benchmark is designed to make it as easy as possible for quantum hardware to outperform classical computation, a processor that can’t beat the best standard computers here has no hope of beating them to the answer on more complicated problems.

“Before you can do any other application, you need to win on this benchmark,” Boixo said. “If you are not winning on this benchmark, then you’re not winning on any other benchmark. This is the easiest thing for a noisy quantum computer compared to a supercomputer.”

Knowing how to identify this phase transition, he suggested, will also be helpful for anyone trying to run useful computations on today’s processors. “As we define the phase, it opens the possibility for finding applications in that phase on noisy quantum computers, where they will outperform classical computers,” Boixo said.

Implicit in this argument is an indication of why Google has focused on iterating on a single processor design even as many of its competitors have been pushing to increase qubit counts rapidly. If this benchmark indicates that you can’t get all of Sycamore’s qubits involved in the simplest low-noise regime calculation, then it’s not clear whether there’s a lot of value in increasing the qubit count. And the only way to change that is to lower the base error rate of the processor, so that’s where the company’s focus has been.

All of that, however, assumes that you hope to run useful calculations on today’s noisy hardware qubits. The alternative is to use error-corrected logical qubits, which will require major increases in qubit count. But Google has been seeing similar limitations due to Sycamore’s base error rate in tests that used it to host an error-corrected logical qubit, something we hope to return to in future coverage.

Nature, 2024. DOI: 10.1038/s41586-024-07998-6  (About DOIs).

Photo of John Timmer

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



Thunderbird Android client is K-9 Mail reborn, and it’s in solid beta

Thunderbird’s Android app, which is actually the K-9 Mail project reborn, is almost out. You can check it out a bit early in a beta that will feel pretty robust to most users.

Thunderbird, maintained by the Mozilla Foundation subsidiary MZLA, acquired the source code and naming rights to K-9 Mail, as announced in June 2022. The group also brought K-9 maintainer Christian Ketterer (or “cketti”) onto the project. Their initial goals, before a full rebrand into Thunderbird, involved importing Thunderbird’s automatic account setup, message filters, and mobile/desktop Thunderbird syncing.

At the tail end of 2023, however, Ketterer wrote on K-9’s blog that the punchlist of items before official Thunderbird-dom was taking longer than expected. But when it’s fully released, Thunderbird for Android will have those features. As such, beta testers are asked to check out a specific list of things to see if they work, including automatic setup, folder management, and K-9-to-Thunderbird transfer. The beta will not be “addressing longstanding issues,” Thunderbird’s blog post notes.

Launching Thunderbird for Android from K-9 Mail’s base makes a good deal of sense. Thunderbird’s desktop client has had a strange, disjointed life so far and is only just starting to regain a cohesive vision for what it wants to provide. For a long time now, K-9 Mail has been the Android email app of choice for people who don’t want Gmail or Outlook, will not tolerate the default “Email” app on non-Google-blessed Android systems, and just want to see their messages.



Google as Darth Vader: Why iA Writer quit the Android app market

“Picture a massive football stadium filled with fans month after month,” Reichenstein wrote to Ars. In that stadium, he writes:

  • 5 percent (max) have a two-week trial ticket
  • 2 percent have a yearly ticket
  • 0.5 percent have a monthly ticket
  • 0.5 percent are buying “all-time” tickets

But even if every lifetime ticket buyer showed up at once, that’s 10 percent of the stadium, Reichenstein said. Even without full visibility of every APK—”and what is happening in China at all,” he wrote—iA can assume 90 percent of users are “climbing over the fence.”

“Long story short, that’s how you can end up with 50,000 users and only 1,000 paying you,” Reichenstein wrote in the blog post.

Piracy doesn’t just mean lost revenue, Reichenstein wrote, but also increased demands for support, feature requests, and chances for bad ratings from people who never pay. And it builds over time. “You sell less apps through the [Play Store], but pirated users keep coming in because pirate sites don’t have such reviews. Reviews don’t matter much if the app is free.”

The iA numbers on macOS hint at a roughly 10 percent piracy rate. On iOS, it’s “not 0%,” but it’s “very, very hard to say what the numbers are”; there is also no “reset trick” or trials offered there.

A possible future unfreezing

Reichenstein wrote in the post and to Ars that sharing these kinds of numbers can invite critique from other app developers, both armchair and experienced. He’s seen that happening on Mastodon, Hacker News, and X (formerly Twitter). But “critical people are useful,” he noted, and he’s OK with people working backward to figure out how much iA might have made. (Google did not offer comment on aspects of iA’s post outside discussing Drive access policy.)

iA suggests that it might bring back Writer on Android, perhaps in a business-to-business scenario with direct payments. For now, it’s a slab of history, albeit far less valuable to the metaphorical Darth Vader that froze it.



YouTube fixes glitch that wrongly removed accounts, deleted videos

A message highlighted above the thread warned YouTube users that there were “longer than normal wait times” for support requests, while YouTube continually asked for “patience” and turned off the comments.

“We are very sorry for this error on our part,” YouTube said.

Unable to leave comments, thousands of users mashed a button on the support thread, confirming that they had “the same question.” On Friday morning, 8,000 users had signaled despair, and as of this writing, the number had notched up to nearly 11,000.

YouTube has not confirmed how many users were removed, so that’s likely the best estimate we have for how many users were affected.

On Friday afternoon, YouTube did update the thread, confirming that “all channels incorrectly removed for Spam & Deceptive Practices have been fully reinstated!”

While YouTube claims that all channels are back online, not all of the mistakenly removed videos have been reinstated, the company said. Although most of the users impacted were reportedly non-creators, and therefore their livelihoods were likely not disrupted by the bug, at least one commenter complained, “my two most-viewed videos got deleted,” suggesting some account holders may highly value the videos still missing from their accounts.

“We’re working on reinstating the last few videos, thanks for bearing with us!” YouTube’s update said. “We know this was a frustrating experience, really appreciate your patience while we sort this out.”

It’s unclear if paid subscribers will be reimbursed for lost access to content.

YouTube did not respond to Ars’ request for comment.



Google and Meta update their AI models amid the rise of “AlphaChip”

Running the AI News Gauntlet

News about Gemini updates, Llama 3.2, and Google’s new AI-powered chip designer.


There’s been a lot of AI news this week, and covering it sometimes feels like running through a hall full of dangling CRTs, just like this Getty Images illustration.

It’s been a wildly busy week in AI news thanks to OpenAI, including a controversial blog post from CEO Sam Altman, the wide rollout of Advanced Voice Mode, 5GW data center rumors, major staff shake-ups, and dramatic restructuring plans.

But the rest of the AI world doesn’t march to the same beat, doing its own thing and churning out new AI models and research by the minute. Here’s a roundup of some other notable AI news from the past week.

Google Gemini updates

On Tuesday, Google announced updates to its Gemini model lineup, including the release of two new production-ready models that iterate on past releases: Gemini-1.5-Pro-002 and Gemini-1.5-Flash-002. The company reported improvements in overall quality, with notable gains in math, long context handling, and vision tasks. Google claims a 7 percent increase in performance on the MMLU-Pro benchmark and a 20 percent improvement in math-related tasks. But as you know if you’ve been reading Ars Technica for a while, AI benchmarks typically aren’t as useful as we would like them to be.

Along with model upgrades, Google introduced substantial price reductions for Gemini 1.5 Pro, cutting input token costs by 64 percent and output token costs by 52 percent for prompts under 128,000 tokens. As AI researcher Simon Willison noted on his blog, “For comparison, GPT-4o is currently $5/[million tokens] input and $15/m output and Claude 3.5 Sonnet is $3/m input and $15/m output. Gemini 1.5 Pro was already the cheapest of the frontier models and now it’s even cheaper.”
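
To put per-million-token prices in perspective, here is a small cost calculation using only the GPT-4o and Claude 3.5 Sonnet figures quoted by Willison above. Gemini’s own per-token prices aren’t listed in this excerpt (only the percentage cuts), so they’re omitted, and the workload size is invented for illustration.

```python
def cost_usd(input_tokens: int, output_tokens: int,
             input_per_million: float, output_per_million: float) -> float:
    """Cost of one API call given USD prices per million input/output tokens."""
    return (input_tokens / 1e6) * input_per_million + (output_tokens / 1e6) * output_per_million

# Prices quoted by Simon Willison above (USD per million tokens: input, output).
quoted = {
    "GPT-4o": (5.0, 15.0),
    "Claude 3.5 Sonnet": (3.0, 15.0),
}

# Hypothetical workload: 1,000 requests, each with 100k input and 10k output tokens.
for model, (p_in, p_out) in quoted.items():
    total = 1_000 * cost_usd(100_000, 10_000, p_in, p_out)
    print(f"{model}: ${total:,.0f}")   # GPT-4o: $650, Claude 3.5 Sonnet: $450
```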

Google also increased rate limits, with Gemini 1.5 Flash now supporting 2,000 requests per minute and Gemini 1.5 Pro handling 1,000 requests per minute. Google reports that the latest models offer twice the output speed and three times lower latency compared to previous versions. These changes may make it easier and more cost-effective for developers to build applications with Gemini than before.

Meta launches Llama 3.2

On Wednesday, Meta announced the release of Llama 3.2, a significant update to its open-weights AI model lineup that we have covered extensively in the past. The new release includes vision-capable large language models (LLMs) in 11B and 90B parameter sizes, as well as lightweight text-only models of 1B and 3B parameters designed for edge and mobile devices. Meta claims the vision models are competitive with leading closed-source models on image recognition and visual understanding tasks, while the smaller models reportedly outperform similar-sized competitors on various text-based tasks.
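
For readers who want to try the smaller text models themselves, here is a minimal sketch using Hugging Face’s transformers library. The model ID is an assumption based on Meta’s naming pattern, the weights are gated (you must accept Meta’s license on Hugging Face first), and the prompt is just an example.

```python
# pip install transformers torch
from transformers import pipeline

# Assumed Hugging Face ID for the smallest instruction-tuned Llama 3.2 text model;
# adjust if Meta publishes the weights under a different name.
generator = pipeline("text-generation", model="meta-llama/Llama-3.2-1B-Instruct")

result = generator(
    "Explain in one sentence what an open-weights model is.",
    max_new_tokens=60,
    do_sample=False,  # deterministic output for a quick sanity check
)
print(result[0]["generated_text"])
```

The larger vision models follow the same general pattern but need a multimodal pipeline and far more memory, so the 1B and 3B variants are the natural starting point on a laptop.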

Willison did some experiments with some of the smaller 3.2 models and reported impressive results for the models’ size. AI researcher Ethan Mollick showed off running Llama 3.2 on his iPhone using an app called PocketPal.

Meta also introduced the first official “Llama Stack” distributions, created to simplify development and deployment across different environments. As with previous releases, Meta is making the models available for free download, with license restrictions. The new models support long context windows of up to 128,000 tokens.

Google’s AlphaChip AI speeds up chip design

On Thursday, Google DeepMind announced what appears to be a significant advancement in AI-driven electronic chip design, AlphaChip. It began as a research project in 2020 and is now a reinforcement learning method for designing chip layouts. Google has reportedly used AlphaChip to create “superhuman chip layouts” in the last three generations of its Tensor Processing Units (TPUs), which are chips similar to GPUs designed to accelerate AI operations. Google claims AlphaChip can generate high-quality chip layouts in hours, compared to weeks or months of human effort. (Reportedly, Nvidia has also been using AI to help design its chips.)

Notably, Google also released a pre-trained checkpoint of AlphaChip on GitHub, sharing the model weights with the public. The company reported that AlphaChip’s impact has already extended beyond Google, with chip design companies like MediaTek adopting and building on the technology for their chips. According to Google, AlphaChip has sparked a new line of research in AI for chip design, potentially optimizing every stage of the chip design cycle from computer architecture to manufacturing.

That wasn’t everything that happened, but those are some major highlights. With the AI industry showing no signs of slowing down at the moment, we’ll see how next week goes.



“Not a good look”: Google’s ad tech monopoly defense widely criticized


Google wound down its defense in the US Department of Justice’s ad tech monopoly trial this week, following a week of testimony from witnesses that experts said seemed to lack credibility.

The tech giant started its defense by showing a widely mocked chart that Google executive Scott Sheffer called a “spaghetti football,” supposedly showing a fluid industry thriving thanks to Google’s ad tech platform but mostly just “confusing” everyone and possibly even helping to debunk its case, Open Markets Institute policy analyst Karina Montoya reported.

“The effect of this image might have backfired as it also made it evident that Google is ubiquitous in digital advertising,” Montoya reported. “During DOJ’s cross-examination, the spaghetti football was untangled to show only the ad tech products used specifically by publishers and advertisers on the open web.”

One witness, Marco Hardie, Google’s current head of industry, was even removed from the stand, his testimony deemed irrelevant by US District Judge Leonie Brinkema, Big Tech On Trial reported. Another, Google executive Scott Sheffer, gave testimony Brinkema considered “tainted,” Montoya reported. But perhaps the most heated exchange about a witness’ credibility came during the DOJ’s cross-examination of Mark Israel, the key expert that Google is relying on to challenge the DOJ’s market definition.

Google’s case depends largely on Brinkema agreeing that the DOJ’s market definition is too narrow, with an allegedly outdated focus on display ads on the open web, as opposed to a broader market including display ads appearing in apps or on social media. But experts monitoring the trial suggested that Brinkema may end up questioning Israel’s credibility after DOJ lawyer Aaron Teitelbaum’s aggressive cross-examination.

According to Big Tech on Trial, which posted the exchange on X (formerly Twitter), Teitelbaum’s line of questioning came across as a “striking and effective impeachment of Mark Israel’s credibility as a witness.”

During his testimony, Israel told Brinkema that Google’s share of the US display ads market is only 25 percent, minimizing Google’s alleged dominance while emphasizing that Google faced “intense competition” from other Big Tech companies like Amazon, Meta, and TikTok in this broader market, Open Markets Institute policy analyst Karina Montoya reported.

On cross-examination, Teitelbaum called Israel out as a “serial ‘expert’ for companies facing antitrust challenges” who “always finds that the companies ‘explained away’ market definition,” Big Tech on Trial posted on X. Teitelbaum even read out quotes from past cases “in which judges described” Israel’s “expert testimony as ‘not credible’ and having ‘misunderstood antitrust law.'”

Israel was also accused by past judges of rendering his opinions “based on false assumptions,” according to USvGoogleAds, a site run by the digital advertising watchdog Check My Ads with ad industry partners. And specifically for the Google ad tech case, Teitelbaum noted that Israel omitted ad spend data to seemingly manipulate one of his charts.

“Not a good look,” the watchdog’s site opined.

Perhaps most damaging, Teitelbaum asked Israel to confirm that “80 percent of his income comes from doing this sort of expert testimony,” suggesting that Israel seemingly depended on being paid by companies like JetBlue and Kroger-Albertsons—and even previously by Google during the search monopoly trial—to muddy the waters on market definition. Lee Hepner, an antitrust lawyer with the American Economic Liberties Project, posted on X that the DOJ’s antitrust chief, Jonathan Kanter, has grown wary of serial experts supposedly sowing distrust in the court system.

“Let me say this clearly—this will not end well,” Kanter said during a speech at a competition law conference this month. “Already we see a seeping distrust of expertise by the courts and by law enforcers.”

“Best witnesses money can buy”

In addition to experts and Google staffers backing up Google’s proposed findings of fact and conclusions of law, Google brought in Courtney Caldwell—the CEO of a small business that once received a grant from Google and appears in Google’s marketing materials—to back up claims that a DOJ win could harm small businesses, Big Tech on Trial reported.

Google’s direct examination of Caldwell was “basically just a Google ad,” Big Tech on Trial said, while Check My Ads’ site suggested that Google mostly just called upon “the best witnesses their money can buy, and it still did not get them very far.”

According to Big Tech on Trial, Google is using a “light touch” in its defense, refusing to go “pound for pound” to refute the DOJ’s case. Using this approach, Google can seemingly ignore any argument the DOJ raises that doesn’t fit into the picture Google wants Brinkema to accept of Google’s ad empire growing organically, rather than anti-competitively constructed with the intent to shut out rivals through mergers and acquisitions.

Where the DOJ wants the judge to see “a Google-only pipeline through the heart of the ad tech stack, denying non-Google rivals the same access,” Google argues that it has only “designed a set of products that work efficiently with each other and attract a valuable customer base.”

The main problem with Google’s defense appears to be the evidence emerging from its own internal documents. AdExchanger’s Allison Schiff, who has been monitoring the trial, pulled out the spiciest quotes from the courtroom, where Google’s own employees seem to show intent to monopolize the ad tech industry.

Evidence that Brinkema might find hard to ignore includes a 2008 statement from Google’s former president of display advertising, David Rosenblatt, confirming that it would “take an act of god” to get people to switch ad platforms because of extremely high switching costs. Rosenblatt also suggested in a 2009 presentation that Google acquiring DoubleClick for Publishers would make Google’s ad tech like the New York Stock Exchange, putting Google in a position to monitor every ad sale and doing for display ads “what Google did to search.” There’s also a 2010 email where now-YouTube CEO Neal Mohan recommended getting Google ahead in the display ad market by “parking” a rival with “the most traction.”

On Friday, testimony concluded abruptly after the DOJ only called one rebuttal witness, Big Tech on Trial posted on X. Brinkema is expected to hear closing arguments on November 25, Big Tech on Trial reported, and rule in December, Montoya reported.
