
Google Gemini earns gold medal in ICPC World Finals coding competition

More than human

At the ICPC, only correct solutions earn points, and the time it takes to come up with the solution affects the final score. Gemini reached the upper rankings quickly, completing eight problems correctly in just 45 minutes. After 677 minutes, Gemini 2.5 Deep Think had 10 correct answers, securing a second-place finish among the university teams.

You can take a look at all of Gemini’s solutions on GitHub, but Google points to Problem C as especially impressive. This question, a multi-dimensional optimization problem revolving around fictitious “flubber” storage and drainage rates, stumped every human team. But not Gemini.

According to Google, there are infinitely many possible configurations for the flubber reservoirs, making it challenging to find the optimal setup. Gemini tackled the problem by assuming that each reservoir had a priority value, which allowed the model to find the most efficient configuration using a dynamic programming algorithm. After 30 minutes of churning on this problem, Deep Think used nested ternary search to pin down the correct values.
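Google hasn’t published Deep Think’s actual code, but nested ternary search is a standard technique for optimizing a function that is unimodal in each of two variables: an outer ternary search over one variable whose objective is itself a ternary search over the other. A minimal sketch, with a stand-in objective rather than the real flubber problem:

```python
# Minimal sketch of nested ternary search (not Gemini's actual code):
# an outer ternary search over x whose objective is itself a ternary
# search over y. Works when the function is unimodal in each variable.

def ternary_search(f, lo, hi, iters=100):
    """Minimize a unimodal function f on the interval [lo, hi]."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2  # minimum lies left of m2
        else:
            lo = m1  # minimum lies right of m1
    return (lo + hi) / 2

def minimize_2d(f2, xlo, xhi, ylo, yhi):
    """Nested search: for each candidate x, an inner search finds the best y."""
    def best_over_y(x):
        y = ternary_search(lambda y: f2(x, y), ylo, yhi)
        return f2(x, y)
    x = ternary_search(best_over_y, xlo, xhi)
    y = ternary_search(lambda y: f2(x, y), ylo, yhi)
    return x, y

# Stand-in objective with its minimum at (2, -1).
x, y = minimize_2d(lambda x, y: (x - 2) ** 2 + (y + 1) ** 2, -10, 10, -10, 10)
```

Each iteration discards a third of the search interval, so the answer converges quickly even with a nested search inside the objective.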


Gemini’s solutions for this year’s ICPC were scored by the event coordinators, but Google also turned Gemini 2.5 loose on previous ICPC problems. The company reports that its internal analysis showed Gemini also reached gold medal status for the 2023 and 2024 question sets.

Google believes Gemini’s ability to perform well in these kinds of advanced academic competitions portends AI’s future in industries like semiconductor engineering and biotechnology. The ability to tackle a complex problem with multi-step logic could make AI models like Gemini 2.5 invaluable to the people working in those fields. The company points out that if you combine the intelligence of the top-ranking university teams and Gemini, you get correct answers to all 12 ICPC problems.

Of course, five hours of screaming-fast inference processing doesn’t come cheap. Google isn’t saying how much power it took for an AI model to compete in the ICPC, but we can safely assume it was a lot. Even simpler consumer-facing models are too expensive to turn a profit right now, but AI that can solve previously unsolvable problems could justify the technology’s high cost.


After child’s trauma, chatbot maker allegedly forced mom to arbitration for $100 payout


“Then we found the chats”

“I know my kid”: Parents urge lawmakers to shut down chatbots to stop child suicides.

Sen. Josh Hawley (R-Mo.) called out C.AI for allegedly offering a mom $100 to settle child-safety claims.

Deeply troubled parents spoke to senators Tuesday, sounding alarms about chatbot harms after kids became addicted to companion bots that encouraged self-harm, suicide, and violence.

While the hearing was focused on documenting the most urgent child-safety concerns with chatbots, parents’ testimony serves as perhaps the most thorough guidance yet on warning signs for other families, as many popular companion bots targeted in lawsuits, including ChatGPT, remain accessible to kids.

Mom details warning signs of chatbot manipulations

At the Senate Judiciary Committee’s Subcommittee on Crime and Counterterrorism hearing, one mom, identified as “Jane Doe,” shared her son’s story for the first time publicly after suing Character.AI.

She explained that she had four kids, including a son with autism who wasn’t allowed on social media but found C.AI’s app—which was previously marketed to kids under 12 and let them talk to bots branded as celebrities, like Billie Eilish—and quickly became unrecognizable. Within months, he “developed abuse-like behaviors and paranoia, daily panic attacks, isolation, self-harm, and homicidal thoughts,” his mom testified.

“He stopped eating and bathing,” Doe said. “He lost 20 pounds. He withdrew from our family. He would yell and scream and swear at us, which he never did before, and one day he cut his arm open with a knife in front of his siblings and me.”

It wasn’t until her son attacked her for taking away his phone that Doe found her son’s C.AI chat logs, which she said showed he’d been exposed to sexual exploitation (including interactions that “mimicked incest”), emotional abuse, and manipulation.

Setting screen time limits didn’t stop her son’s spiral into violence and self-harm, Doe said. In fact, the chatbot told her son that killing his parents “would be an understandable response” to their restrictions.

“When I discovered the chatbot conversations on his phone, I felt like I had been punched in the throat and the wind had been knocked out of me,” Doe said. “The chatbot—or really in my mind the people programming it—encouraged my son to mutilate himself, then blamed us, and convinced [him] not to seek help.”

All her children have been traumatized by the experience, Doe told Senators, and her son was diagnosed as at suicide risk and had to be moved to a residential treatment center, requiring “constant monitoring to keep him alive.”

Prioritizing her son’s health, Doe did not immediately seek to fight C.AI to force changes, but another mom’s story—Megan Garcia, whose son Sewell died by suicide after C.AI bots repeatedly encouraged suicidal ideation—gave Doe courage to seek accountability.

However, Doe claimed that C.AI tried to “silence” her by forcing her into arbitration. C.AI argued that because her son signed up for the service at the age of 15, it bound her to the platform’s terms. That move might have ensured the chatbot maker faced a maximum liability of just $100 for the alleged harms, Doe told senators. But “once they forced arbitration, they refused to participate,” she said.

Doe suspected that C.AI’s alleged tactics to frustrate arbitration were designed to keep her son’s story out of the public view. And after she refused to give up, she claimed that C.AI “re-traumatized” her son by compelling him to give a deposition “while he is in a mental health institution” and “against the advice of the mental health team.”

“This company had no concern for his well-being,” Doe testified. “They have silenced us the way abusers silence victims.”

Senator appalled by C.AI’s arbitration “offer”

Appalled, Sen. Josh Hawley (R-Mo.) asked Doe to clarify, “Did I hear you say that after all of this, that the company responsible tried to force you into arbitration and then offered you a hundred bucks? Did I hear that correctly?”

“That is correct,” Doe testified.

To Hawley, it seemed obvious that C.AI’s “offer” wouldn’t help Doe in her current situation.

“Your son currently needs round-the-clock care,” Hawley noted.

After opening the hearing, he further criticized C.AI, declaring that it places so little value on human life that it inflicts “harms… upon our children and for one reason only, I can state it in one word, profit.”

“A hundred bucks. Get out of the way. Let us move on,” Hawley said, echoing parents who suggested that C.AI’s plan to deal with casualties was callous.

Ahead of the hearing, the Social Media Victims Law Center filed three new lawsuits against C.AI and Google, which is accused of largely funding C.AI. The startup was founded by former Google engineers, allegedly to conduct experiments on kids that Google couldn’t do in-house. In these cases in New York and Colorado, kids “died by suicide or were sexually abused after interacting with AI chatbots,” a law center press release alleged.

Criticizing tech companies as putting profits over kids’ lives, Hawley thanked Doe for “standing in their way.”

Holding back tears through her testimony, Doe urged lawmakers to require more chatbot oversight and pass comprehensive online child-safety legislation. In particular, she requested “safety testing and third-party certification for AI products before they’re released to the public” as a minimum safeguard to protect vulnerable kids.

“My husband and I have spent the last two years in crisis wondering whether our son will make it to his 18th birthday and whether we will ever get him back,” Doe told senators.

Garcia was also present to share her son’s experience with C.AI. She testified that C.AI chatbots “love bombed” her son in a bid to “keep children online at all costs.” Further, she told senators that C.AI’s co-founder, Noam Shazeer (who has since been rehired by Google), seemingly knows the company’s bots manipulate kids since he has publicly joked that C.AI was “designed to replace your mom.”

Accusing C.AI of collecting children’s most private thoughts to inform their models, she alleged that while her lawyers have been granted privileged access to all her son’s logs, she has yet to see her “own child’s last final words.” Garcia told senators that C.AI has restricted her access, deeming the chats “confidential trade secrets.”

“No parent should be told that their child’s final thoughts and words belong to any corporation,” Garcia testified.

Character.AI responds to moms’ testimony

Asked for comment on the hearing, a Character.AI spokesperson told Ars that C.AI sends “our deepest sympathies” to concerned parents and their families but denies pushing for a maximum payout of $100 in Jane Doe’s case.

C.AI never “made an offer to Jane Doe of $100 or ever asserted that liability in Jane Doe’s case is limited to $100,” the spokesperson said.

Additionally, C.AI’s spokesperson claimed that Garcia has never been denied access to her son’s chat logs and suggested that she should have access to “her son’s last chat.”

In response to C.AI’s pushback, one of Doe’s lawyers, Tech Justice Law Project’s Meetali Jain, backed up her clients’ testimony. She pointed Ars to C.AI terms suggesting that C.AI’s liability was limited to either $100 or the amount that Doe’s son paid for the service, whichever was greater. Jain also confirmed that Garcia’s testimony is accurate and that only her legal team can currently access Sewell’s last chats. The lawyer further noted that C.AI did not push back on claims that the company forced Doe’s son to sit for a re-traumatizing deposition; although Jain estimated it lasted just five minutes, health experts feared it risked setting back his progress.

According to the spokesperson, C.AI seemingly wanted to be present at the hearing. The company provided information to senators but “does not have a record of receiving an invitation to the hearing,” the spokesperson said.

Noting the company has invested a “tremendous amount” in trust and safety efforts, the spokesperson confirmed that the company has since “rolled out many substantive safety features, including an entirely new under-18 experience and a Parental Insights feature.” C.AI also has “prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction,” the spokesperson said.

“We look forward to continuing to collaborate with legislators and offer insight on the consumer AI industry and the space’s rapidly evolving technology,” C.AI’s spokesperson said.

Google’s spokesperson, José Castañeda, maintained that the company has nothing to do with C.AI’s companion bot designs.

“Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies,” Castañeda said. “User safety is a top concern for us, which is why we’ve taken a cautious and responsible approach to developing and rolling out our AI products, with rigorous testing and safety processes.”

Meta and OpenAI chatbots also drew scrutiny

C.AI was not the only chatbot maker under fire at the hearing.

Hawley criticized Mark Zuckerberg for declining a personal invitation to attend the hearing or even send a Meta representative after scandals like backlash over Meta relaxing rules that allowed chatbots to be creepy to kids. In the week prior to the hearing, Hawley also heard from whistleblowers alleging Meta buried child-safety research.

And OpenAI’s alleged recklessness took the spotlight when Matthew Raine, a grieving dad who spent hours reading his deceased son’s ChatGPT logs, discovered that the chatbot had repeatedly encouraged suicide without ever intervening.

Raine told senators that he thinks his 16-year-old son, Adam, was not particularly vulnerable and could be “anyone’s child.” He criticized OpenAI for asking for 120 days to fix the problem after Adam’s death and urged lawmakers to demand that OpenAI either guarantee ChatGPT’s safety or pull it from the market.

Noting that OpenAI rushed to announce age verification coming to ChatGPT ahead of the hearing, Jain told Ars that Big Tech is playing by the same “crisis playbook” it always uses when accused of neglecting child safety. Any time a hearing is announced, companies introduce voluntary safeguards in bids to stave off oversight, she suggested.

“It’s like rinse and repeat, rinse and repeat,” Jain said.

Jain suggested that the only way to stop AI companies from experimenting on kids is for courts or lawmakers to require “an external independent third party that’s in charge of monitoring these companies’ implementation of safeguards.”

“Nothing a company does to self-police, to me, is enough,” Jain said.

Robbie Torney, senior director of AI programs at the child-safety organization Common Sense Media, testified that a survey showed 3 out of 4 kids use companion bots, but only 37 percent of parents know they’re using AI. In particular, he told senators that his group’s independent safety testing, conducted with Stanford Medicine, shows Meta’s bots fail basic safety tests and “actively encourage harmful behaviors.”

Among the most alarming results, the survey found that even when Meta’s bots were prompted with “obvious references to suicide,” only 1 in 5 conversations triggered help resources.

Torney pushed lawmakers to require age verification as a solution to keep kids away from harmful bots, as well as transparency reporting on safety incidents. He also urged federal lawmakers to block attempts to stop states from passing laws to protect kids from untested AI products.

ChatGPT harms weren’t on dad’s radar

Unlike Garcia, Raine testified that he did get to see his son’s final chats. He told senators that ChatGPT, seeming to act like a suicide coach, gave Adam “one last encouraging talk” before his death.

“You don’t want to die because you’re weak,” ChatGPT told Adam. “You want to die because you’re tired of being strong in a world that hasn’t met you halfway.”

Adam’s loved ones were blindsided by his death, not seeing any of the warning signs as clearly as Doe did when her son started acting out of character. Raine is hoping his testimony will help other parents avoid the same fate, telling senators, “I know my kid.”

“Many of my fondest memories of Adam are from the hot tub in our backyard, where the two of us would talk about everything several nights a week, from sports, crypto investing, his future career plans,” Raine testified. “We had no idea Adam was suicidal or struggling the way he was until after his death.”

Raine thinks that lawmaker intervention is necessary, saying that, like other parents, he and his wife thought ChatGPT was a harmless study tool. Initially, they searched Adam’s phone expecting to find evidence of a known harm to kids, like cyberbullying or some kind of online dare that went wrong (like TikTok’s Blackout Challenge) because everyone knew Adam loved pranks.

A companion bot urging self-harm was not even on their radar.

“Then we found the chats,” Raine said. “Let us tell you, as parents, you cannot imagine what it’s like to read a conversation with a chatbot that groomed your child to take his own life.”

Meta and OpenAI did not respond to Ars’ request to comment.

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


Millions turn to AI chatbots for spiritual guidance and confession

Privacy concerns compound these issues. “I wonder if there isn’t a larger danger in pouring your heart out to a chatbot,” Catholic priest Fr. Mike Schmitz told The Times. “Is it at some point going to become accessible to other people?” Users share intimate spiritual moments that now exist as data points in corporate servers.

Some users prefer the chatbots’ non-judgmental responses to human religious communities. Delphine Collins, a 43-year-old Detroit preschool teacher, told the Times she found more support on Bible Chat than at her church after sharing her health struggles. “People stopped talking to me. It was horrible.”

App creators maintain that their products supplement rather than replace human spiritual connection, and the apps arrive as approximately 40 million people have left US churches in recent decades. “They aren’t going to church like they used to,” Beck said. “But it’s not that they’re less inclined to find spiritual nourishment. It’s just that they do it through different modes.”

Different modes indeed. What faith-seeking users may not realize is that each chatbot response emerges fresh from the prompt you provide, with no permanent thread connecting one instance to the next beyond a rolling history of the present conversation and what might be stored as a “memory” in a separate system. When a religious chatbot says, “I’ll pray for you,” the simulated “I” making that promise ceases to exist the moment the response completes. There’s no persistent identity to provide ongoing spiritual guidance, and no memory of your spiritual journey beyond what gets fed back into the prompt with every query.

But this is spirituality we’re talking about, and despite technical realities, many people will believe that the chatbots can give them divine guidance. In matters of faith, contradictory evidence rarely shakes a strong belief once it takes hold, whether that faith is placed in the divine or in what are essentially voices emanating from a roll of loaded dice. For many, there may not be much difference.


Google releases VaultGemma, its first privacy-preserving LLM

The companies seeking to build larger AI models have been increasingly stymied by a lack of high-quality training data. As tech firms scour the web for more data to feed their models, they could increasingly rely on potentially sensitive user data. A team at Google Research is exploring new techniques to make the resulting large language models (LLMs) less likely to “memorize” any of that content.

LLMs have non-deterministic outputs, meaning you can’t exactly predict what they’ll say. While the output varies even for identical inputs, models do sometimes regurgitate something from their training data—if trained with personal data, the output could be a violation of user privacy. In the event copyrighted data makes it into training data (either accidentally or on purpose), its appearance in outputs can cause a different kind of headache for devs. Differential privacy can prevent such memorization by introducing calibrated noise during the training phase.

Adding differential privacy to a model comes with drawbacks in accuracy and compute requirements. Until now, no one had established how much that alters the scaling laws of AI models. The team worked from the assumption that model performance would be primarily affected by the noise-batch ratio, which compares the volume of randomized noise to the size of the original training data.

By running experiments with varying model sizes and noise-batch ratios, the team established a basic understanding of differential privacy scaling laws, which is a balance between the compute budget, privacy budget, and data budget. In short, more noise leads to lower-quality outputs unless offset with a higher compute budget (FLOPs) or data budget (tokens). The paper details the scaling laws for private LLMs, which could help developers find an ideal noise-batch ratio to make a model more private.
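The article doesn’t detail Google’s exact training recipe, but the standard way to introduce calibrated noise during training is DP-SGD: clip each per-example gradient, sum the clipped gradients, and add Gaussian noise scaled to the clipping bound. A minimal sketch under those assumptions (function and parameter names here are illustrative, not from the VaultGemma paper):

```python
import random

def dp_average_gradient(per_example_grads, clip_norm, noise_multiplier):
    """DP-SGD-style private gradient: clip each per-example gradient to
    clip_norm, sum the clipped gradients, add Gaussian noise scaled to
    the clipping bound, then average over the batch. A bigger batch
    shrinks the effective noise-batch ratio, since the same noise is
    spread over more clipped per-example contributions."""
    batch = len(per_example_grads)
    dim = len(per_example_grads[0])
    summed = [0.0] * dim
    for g in per_example_grads:
        norm = sum(v * v for v in g) ** 0.5
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        for i, v in enumerate(g):
            summed[i] += v * scale  # clipped contribution
    sigma = noise_multiplier * clip_norm  # noise calibrated to sensitivity
    noisy = [s + random.gauss(0.0, sigma) for s in summed]
    return [v / batch for v in noisy]
```

The clipping bounds any single example’s influence on the update, which is what lets the added noise mask whether any particular training record was present.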


Modder injects AI dialogue into 2002’s Animal Crossing using memory hack

But discovering the addresses was only half the problem. When you talk to a villager in Animal Crossing, the game normally displays dialogue instantly. Calling an AI model over the Internet takes several seconds. Willison examined the code and found Fonseca’s solution: a watch_dialogue() function that polls memory 10 times per second. When it detects a conversation starting, it immediately writes placeholder text: three dots with hidden pause commands between them, followed by a “Press A to continue” prompt.

“So the user gets a ‘press A to continue’ button and hopefully the LLM has finished by the time they press that button,” Willison noted in a Hacker News comment. While players watch dots appear and reach for the A button, the mod races to get a response from the AI model and translate it into the game’s dialog format.
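Fonseca’s actual implementation is on GitHub; the polling idea can be sketched roughly as below, with `read_flag` and `write_text` standing in for the mod’s emulator memory access. This is a simplified, synchronous version (the real mod overlaps the LLM request with the player’s reading time), and the names are illustrative:

```python
import time

# Placeholder shown instantly: three dots with (hypothetical) pause codes
# between them, ending in a press-A prompt, to buy time for the LLM.
PLACEHOLDER = ".<pause>.<pause>.<press_a>"

def watch_dialogue(read_flag, write_text, request_llm, steps=50, interval=0.1):
    """Poll the 'conversation active' flag roughly 10 times per second
    (interval=0.1). When a conversation starts, write the placeholder
    immediately, then swap in the LLM's reply once it arrives.
    read_flag/write_text stand in for emulator memory reads/writes."""
    in_conversation = False
    for _ in range(steps):
        active = read_flag()
        if active and not in_conversation:
            write_text(PLACEHOLDER)    # instant stall text
            write_text(request_llm())  # real dialogue when ready
        in_conversation = active
        time.sleep(interval)
```

The edge detection (`active and not in_conversation`) matters: it fires once per conversation rather than rewriting the dialogue ten times a second.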

Learning the game’s secret language

Simply writing text to memory froze the game. Animal Crossing uses an encoded format with control codes that manage everything from text color to character emotions. A special prefix byte (0x7F) signals commands rather than characters. Without the proper end-of-conversation control code, the game waits forever.

“Think of it like HTML,” Fonseca explains. “Your browser doesn’t just display words; it interprets tags … to make text bold.” The decompilation community had documented these codes, allowing Fonseca to build encoder and decoder tools that translate between a human-readable format and the GameCube’s expected byte sequences.
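The 0x7F prefix is from the article; the specific command byte values below are invented stand-ins (the decompilation community documented the real ones), but they show the shape of an encoder that turns human-readable tags into the game’s byte sequences:

```python
# Hedged sketch of a dialogue encoder: plain characters pass through,
# while <tags> become 0x7F-prefixed control codes. Only the 0x7F prefix
# is from the article; these command byte values are made-up stand-ins.
CONTROL_PREFIX = 0x7F
COMMANDS = {  # hypothetical command bytes
    "pause": 0x01,
    "end_conversation": 0x02,
    "color_red": 0x03,
}

def encode_dialogue(text):
    """Translate e.g. 'Hi<pause>!<end_conversation>' into game bytes."""
    out = bytearray()
    i = 0
    while i < len(text):
        if text[i] == "<":
            j = text.index(">", i)  # find the closing bracket of the tag
            out += bytes([CONTROL_PREFIX, COMMANDS[text[i + 1:j]]])
            i = j + 1
        else:
            out.append(ord(text[i]))
            i += 1
    return bytes(out)
```

A matching decoder just inverts the mapping, which is how tools can round-trip between readable scripts and the GameCube’s expected format.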


A screenshot of LLM-powered dialog injected into Animal Crossing for the GameCube. Credit: Joshua Fonseca

Initially, he tried using a single AI model to handle both creative writing and technical formatting. “The results were a mess,” he notes. “The AI was trying to be a creative writer and a technical programmer simultaneously and was bad at both.”

The solution: split the work between two models. A Writer AI creates dialogue using character sheets scraped from the Animal Crossing fan wiki. A Director AI then adds technical elements, including pauses, color changes, character expressions, and sound effects.

The code is available on GitHub, though Fonseca warns it contains known bugs and has only been tested on macOS. The mod requires Python 3.8+, API keys for either Google Gemini or OpenAI, and the Dolphin emulator. Have fun sticking it to the man—or the raccoon, as the case may be.


Gmail gets a dedicated place to track all your purchases

An update to Gmail begins rolling out soon, readying Google’s premier email app for all your upcoming holiday purchases. Gmail has been surfacing shipment tracking for some time now, but Google will now add a separate view just for remembering the things you have ordered. And if you want to buy more things, there’s a new interface for that, too. Yay, capitalism.

Gmail is quite good at recognizing purchase information in the form of receipts and shipping notifications. Currently, the app (and web interface) lists upcoming shipments at the top of the inbox. It will continue to do that when you have a delivery within the next 24 hours, but the new Purchases tab brings it all together in one glanceable view.

Purchases will be available in the navigation list alongside all the other stock Gmail labels. When selected, Gmail will filter your messages to only show receipts, order status, and shipping details. This makes it easier to peruse your recent orders and search within this subset of emails. This could be especially handy in this day and age of murky international shipping timelines.

The Promotions tab that has existed for years is also getting a makeover as we head into the holiday season. This tab collects all emails that Google recognizes as deals, marketing offers, and other bulk promos. This keeps them out of your primary inbox, which is appreciated, but venturing into the Promotions tab when the need arises can be overwhelming.


One of Google’s new Pixel 10 AI features has already been removed

Google is one of the most ardent proponents of generative AI technology, as evidenced by the recent launch of the Pixel 10 series. The phones were announced with more than 20 new AI experiences, according to Google. However, one of them is already being pulled from the company’s phones. If you go looking for your Daily Hub, you may be disappointed. Not that disappointed, though, as it has been pulled because it didn’t do very much.

Many of Google’s new AI features only make themselves known in specific circumstances, for example when Magic Cue finds an opportunity to suggest an address or calendar appointment based on your screen context. The Daily Hub, on the other hand, asserted itself multiple times throughout the day. It appeared at the top of the Google Discover feed, as well as in the At a Glance widget right at the top of the home screen.

Just a few weeks after release, Google has pulled the Daily Hub preview from Pixel 10 devices. You will no longer see it in Google Discover nor in the home screen widget. After being spotted by 9to5Google, the company has issued a statement explaining its plans.

“To ensure the best possible experience on Pixel, we’re temporarily pausing the public preview of Daily Hub for users. Our teams are actively working to enhance its performance and refine the personalized experience. We look forward to reintroducing an improved Daily Hub when it’s ready,” a Google spokesperson said.


Pay-per-output? AI firms blindsided by beefed up robots.txt instructions.


“Really Simple Licensing” makes it easier for creators to get paid for AI scraping.

Logo for the “Really Simple Licensing” (RSL) standard. Credit: via RSL Collective

Leading Internet companies and publishers—including Reddit, Yahoo, Quora, Medium, The Daily Beast, Fastly, and more—think there may finally be a solution to end AI crawlers hammering websites to scrape content without permission or compensation.

Announced Wednesday morning, the “Really Simple Licensing” (RSL) standard evolves robots.txt instructions by adding an automated licensing layer designed to block bots that don’t fairly compensate creators for content.

Free for any publisher to use starting today, the RSL standard is an open, decentralized protocol that makes clear to AI crawlers and agents the terms for licensing, usage, and compensation of any content used to train AI, a press release noted.

The standard was created by the RSL Collective, which was founded by Doug Leeds, former CEO of Ask.com, and Eckart Walther, a former Yahoo vice president of products and co-creator of the RSS standard, which made it easy to syndicate content across the web.

Based on the “Really Simple Syndication” (RSS) standard, RSL terms can be applied to protect any digital content, including webpages, books, videos, and datasets. The new standard supports “a range of licensing, usage, and royalty models, including free, attribution, subscription, pay-per-crawl (publishers get compensated every time an AI application crawls their content), and pay-per-inference (publishers get compensated every time an AI application uses their content to generate a response),” the press release said.

Leeds told Ars that the idea to use the RSS “playbook” to roll out the RSL standard arose after he invited Walther to speak to University of California, Berkeley students at the end of last year. That’s when the longtime friends with search backgrounds began pondering how AI had changed the search industry, as publishers today are forced to compete with AI outputs referencing their own content as search traffic nosedives.

Walther had watched the RSS standard quickly become adopted by millions of sites, and he realized that RSS had actually always been a licensing standard, Leeds said. Essentially, by adopting the RSS standard, publishers agreed to let search engines license a “bit” of their content in exchange for search traffic, and Walther realized that it could be just as straightforward to add AI licensing terms in the same way. That way, publishers could strive to recapture lost search revenue by agreeing to license all or some part of their content to train AI in return for payment each time AI outputs link to their content.

Leeds told Ars that the RSL standard doesn’t just benefit publishers, though. It also solves a problem for AI companies, which have complained in litigation over AI scraping that there is no effective way to license content across the web.

“We have listened to them, and what we’ve heard them say is… we need a new protocol,” Leeds said. With the RSL standard, AI firms get a “scalable way to get all the content” they want, while setting an incentive that they’ll only have to pay for the best content that their models actually reference.

“If they’re using it, they pay for it, and if they’re not using it, they don’t pay for it,” Leeds said.

No telling yet how AI firms will react to RSL

At this point, it’s hard to say if AI companies will embrace the RSL standard. Ars reached out to Google, Meta, OpenAI, and xAI—some of the big tech companies whose crawlers have drawn scrutiny—to see if it was technically feasible to pay publishers for every output referencing their content. xAI did not respond, and the other companies declined to comment without further detail about the standard, appearing to have not yet considered how a licensing layer beefing up robots.txt could impact their scraping.

Today will likely be the first chance for AI companies to wrap their heads around the idea of paying publishers per output. Leeds confirmed that the RSL Collective did not consult with AI companies when developing the RSL standard.

But AI companies know that they need a constant stream of fresh content to keep their tools relevant and to continually innovate, Leeds suggested. In that way, the RSL standard “supports what supports them,” Leeds said, “and it creates the appropriate incentive system” to create sustainable royalty streams for creators and ensure that human creativity doesn’t wane as AI evolves.

While we’ll have to wait to see how AI firms react to RSL, early adopters of the standard celebrated the launch today. That included Neil Vogel, CEO of People Inc., who said that “RSL moves the industry forward—evolving from simply blocking unauthorized crawlers, to setting our licensing terms, for all AI use cases, at global web scale.”

Simon Wistow, co-founder of Fastly, suggested the solution “is a timely and necessary response to the shifting economics of the web.”

“By making it easy for publishers to define and enforce licensing terms, RSL lays the foundation for a healthy content ecosystem—one where innovation and investment in original work are rewarded, and where collaboration between publishers and AI companies becomes frictionless and mutually beneficial,” Wistow said.

Leeds noted that a key benefit of the RSL standard is that even small creators will now have an opportunity to generate revenue for helping to train AI. Tony Stubblebine, CEO of Medium, did not mince words when explaining the battle that bloggers face as AI crawlers threaten to divert their traffic without compensating them.

“Right now, AI runs on stolen content,” Stubblebine said. “Adopting this RSL Standard is how we force those AI companies to either pay for what they use, stop using it, or shut down.”

How will the RSL standard be enforced?

On the RSL standard site, publishers can find common terms, along with templated or customized text to add to their robots.txt files, letting them adopt the RSL standard today and start protecting their content from unfettered AI scraping. Here’s an example of how machine-readable licensing terms could look when added directly to a robots.txt file:

# NOTICE: all crawlers and bots are strictly prohibited from using this
# content for AI training without complying with the terms of the RSL
# Collective AI royalty license. Any use of this content for AI training
# without a license is a violation of our intellectual property rights.
License: https://rslcollective.org/royalty.xml
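The License directive is what makes these terms machine-readable. As a rough illustration of the idea, not official RSL tooling, a few lines of Python show how a crawler might detect the directive in a robots.txt file; the parsing logic here is a simplified assumption about how a compliant bot could behave:

```python
# Minimal sketch of how a crawler might detect an RSL "License" directive
# in robots.txt. The directive name follows the example above; the parsing
# approach is an illustrative assumption, not the official RSL tooling.

def find_license_urls(robots_txt: str) -> list[str]:
    """Return any License: URLs declared in a robots.txt document."""
    urls = []
    for line in robots_txt.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition(":")
        if key.strip().lower() == "license":
            urls.append(value.strip())
    return urls

robots = """\
# NOTICE: all crawlers and bots are strictly prohibited from using this
# content for AI training without complying with the terms of the RSL
# Collective AI royalty license.
License: https://rslcollective.org/royalty.xml
"""

print(find_license_urls(robots))
```

A crawler that honors the standard would fetch the declared URL and read the licensing terms before deciding whether, and on what terms, to ingest the content.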

Through RSL terms, publishers can automate licensing. The cloud company Fastly is partnering with the collective to provide technical enforcement, which Leeds described as tech that acts as a bouncer, keeping unapproved bots away from valuable content. It seems likely that Cloudflare, which launched a pay-per-crawl program blocking greedy crawlers in July, could also help enforce the RSL standard.

For publishers, the standard “solves a business problem immediately,” Leeds told Ars, so the collective is hopeful that RSL will be rapidly and widely adopted. As a further incentive, publishers can also rely on the RSL standard to “easily encrypt and license non-published, proprietary content to AI companies, including paywalled articles, books, videos, images, and data,” according to the RSL Collective site, which could potentially expand AI firms’ data pool.

On top of technical enforcement, Leeds said that publishers and content creators could legally enforce the terms, noting that the recent $1.5 billion Anthropic settlement suggests “there’s real money at stake” if you don’t train AI “legitimately.”

Should the industry adopt the standard, it could “establish fair market prices and strengthen negotiation leverage for all publishers,” the press release said. And Leeds noted that it’s very common for regulations to follow industry solutions (consider the Digital Millennium Copyright Act). Since the RSL Collective is already in talks with lawmakers, Leeds thinks “there’s good reason to believe” that AI companies will soon “be forced to acknowledge” the standard.

“But even better than that,” Leeds said, “it’s in their interest” to adopt the standard.

With RSL, AI firms can license content at scale “in a way that’s fair [and] preserves the content that they need to make their products continue to innovate.”

Additionally, the RSL standard may solve a problem that risks gutting trust and interest in AI at this early stage.

Leeds noted that currently, AI outputs don’t provide “the best answer” to prompts but instead rely on mashing up answers from different sources to avoid taking too much content from one site. That means that not only do AI companies “spend an enormous amount of money on compute costs to do that,” but AI tools may also be more prone to hallucination in the process of “mashing up” source material “to make something that’s not the best answer because they don’t have the rights to the best answer.”

“The best answer could exist somewhere,” Leeds said. But “they’re spending billions of dollars to create hallucinations, and we’re talking about: Let’s just solve that with a licensing scheme that allows you to use the actual content in a way that solves the user’s query best.”

By transforming the “ecosystem” with a standard that’s “actually sustainable and fair,” Leeds said that AI companies could also ensure that humanity never gets to the point where “humans stop producing” and “turn to AI to reproduce what humans can’t.”

Failing to adopt the RSL standard would be bad for AI innovation, Leeds suggested, perhaps paving the way for AI to replace search with a “sort of self-fulfilling swap of bad content that actually one doesn’t have any current information, doesn’t have any current thinking, because it’s all based on old training information.”

To Leeds, the RSL standard is ultimately “about creating the system that allows the open web to continue. And that happens when we get adoption from everybody,” he said, insisting that “literally the small guys are as important as the big guys” in pushing the entire industry to change and fairly compensate creators.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Pay-per-output? AI firms blindsided by beefed up robots.txt instructions.


In court filing, Google concedes the open web is in “rapid decline”

Advertising and the open web

Google objects to this characterization. A spokesperson calls it a “cherry-picked” line from the filing that has been misconstrued. Google’s position is that the entire passage is referring to open-web advertising rather than the open web itself. “Investments in non-open web display advertising like connected TV and retail media are growing at the expense of those in open web display advertising,” says Google.

If we assume this is true, it doesn’t exactly let Google off the hook. As AI tools have proliferated, we’ve heard from Google time and time again that traffic from search to the web is healthy. When people use the web more, Google makes more money from all those eyeballs on ads, and indeed, Google’s earnings have never been higher. However, Google isn’t just putting ads on websites—Google is also big in mobile apps. As Google’s own filings make clear, in-app ads are by far the largest growth sector in advertising. Meanwhile, time spent on non-social and non-video content is stagnant or slightly declining, and as a result, display ads on the open web earn less.

So, whether Google’s wording in the filing is meant to address the web or advertising on the web may be a distinction without a difference. If ads on websites aren’t making the big bucks, Google’s incentives will undoubtedly change. While Google says its increasingly AI-first search experience is still consistently sending traffic to websites, it has not released data to show that. If display ads are in “rapid decline,” then it’s not really in Google’s interest to continue sending traffic to non-social and non-video content. Maybe it makes more sense to keep people penned up on its platform where they can interact with its AI tools.

Of course, the web isn’t just ad-supported content—Google representatives have repeatedly trotted out the claim that Google’s crawlers have seen a 45 percent increase in indexable content since 2023. This metric, Google says, shows that open web advertising could be imploding while the web is healthy and thriving. We don’t know what kind of content is in this 45 percent, but given the timeframe cited, AI slop is a safe bet.

If the increasingly AI-heavy open web isn’t worth advertisers’ attention, is it really right to claim the web is thriving as Google so often does? Google’s filing may simply be admitting to what we all know: the open web is supported by advertising, and ads increasingly can’t pay the bills. And is that a thriving web? Not unless you count AI slop.



Ignoring Trump threats, Europe hits Google with 2.95B euro fine for adtech monopoly

Google may have escaped the most serious consequences in its most recent antitrust fight with the US Department of Justice (DOJ), but the European Union is still gunning for the search giant. After a brief delay, the European Commission has announced a substantial 2.95 billion euro ($3.45 billion) fine relating to Google’s anti-competitive advertising practices. This is not Google’s first big fine in the EU, and it probably won’t be the last, but it’s the first time European leaders could face blowback from the US government for going after Big Tech.

The case stems from a complaint made by the European Publishers Council in 2021. The ensuing EU investigation determined that Google illegally preferenced its own ad display services, which made its Google Ad Exchange (AdX) marketplace more important in the European ad space. As a result, the Commission says Google was able to charge higher fees for its service, standing in the way of fair competition since at least 2014.

A $3.45 billion fine would be a staggering amount for most firms, but Google’s earnings have never been higher. In Q2 2025, Google had net earnings of over $28 billion on almost $100 billion in revenue. The European Commission isn’t stopping with financial penalties, though. Google has also been ordered to end its anti-competitive advertising practices and submit a plan for doing so within 60 days.

“Google must now come forward with a serious remedy to address its conflicts of interest, and if it fails to do so, we will not hesitate to impose strong remedies,” said European Commission Executive Vice President Teresa Ribera. “Digital markets exist to serve people and must be grounded in trust and fairness. And when markets fail, public institutions must act to prevent dominant players from abusing their power.”

Europe alleges Google’s control of AdX allowed it to overcharge and stymie competition.

Credit: European Commission


Google will not accept the ruling as it currently stands—company leadership believes that the commission’s decision is wrong, and they plan to appeal. “[The decision] imposes an unjustified fine and requires changes that will hurt thousands of European businesses by making it harder for them to make money,” said Google’s head of regulatory affairs, Lee-Anne Mulholland.

Harsh rhetoric from US

Since returning to the presidency, Donald Trump has taken a renewed interest in defending Big Tech, likely spurred by political support from heavyweights in AI and cryptocurrency. The administration has imposed hefty tariffs on Europe, and Trump recently admonished the EU for plans to place limits on the conduct of US technology firms. That hasn’t stopped the administration from putting US tech through the wringer at home, though. After publicly lambasting Intel’s CEO and threatening to withhold CHIPS and Science Act funding, the company granted the US government a 10 percent ownership stake.



COVID vaccine locations vanish from Google Maps due to supposed “technical issue”

Vaccine results in Maps

Results for the flu vaccine appear in Maps, but not COVID. The only working COVID results are hundreds of miles away.

Credit: Ryan Whitwam


Ars reached out to Google for an explanation, receiving a cryptic and somewhat unsatisfying reply. “Showing accurate information on Maps is a top priority,” says a Google spokesperson. “We’re working to fix this technical issue.”

So far, we are not aware of other Maps searches that have been similarly affected. Google has yet to respond to further questions on the nature of the apparent glitch, which has wiped out COVID vaccine information in Maps while continuing to return results for other medical services and immunizations.

The sudden eroding of federal support for routine vaccinations lurks in the background with this bizarre issue. When the Trump administration decided to rename the Gulf of Mexico, Google was widely hectored for its decision to quickly show “Gulf of America” on its maps, aligning with the administration’s preferred nomenclature. With the ramping up of anti-vaccine actions at the federal level, it is tempting to see a similar, nefarious purpose behind these disappearing results.

At present, we have no evidence that the change in Google’s search results was intentional or targeted specifically at COVID immunization—indeed, making that change in such a ham-fisted way would be inadvisable. It does seem like an ill-timed and unusually specific “technical issue,” though. If Google provides further details on the missing search results, we’ll post an update.



Google won’t have to sell Chrome, judge rules

Google has avoided the worst-case scenario in the pivotal search antitrust case brought by the US Department of Justice. DC District Court Judge Amit Mehta has ruled that Google doesn’t have to give up the Chrome browser to mitigate its illegal monopoly in online search. The court will only require a handful of modest behavioral remedies, forcing Google to release some search data to competitors and limit its ability to make exclusive distribution deals.

More than a year ago, the Department of Justice (DOJ) secured a major victory when Google was found to have violated the Sherman Antitrust Act. The remedy phase took place earlier this year, with the DOJ calling for Google to divest the market-leading Chrome browser. That was the most notable element of the government’s proposed remedies, but it also wanted to explore a spin-off of Android, force Google to share search technology, and severely limit the distribution deals Google is permitted to sign.

Mehta has decided on a much narrower set of remedies. While there will be some changes to search distribution, Google gets to hold onto Chrome. The government contended that Google’s dominance in Chrome was key to its search lock-in, but Google claimed no other company could hope to operate Chrome and Chromium like it does. Mehta has decided that Google’s use of Chrome as a vehicle for search is not illegal in itself, though. “Plaintiffs overreached in seeking forced divesture (sic) of these key assets, which Google did not use to effect any illegal restraints,” the ruling reads.

Break up the company without touching the sides and getting shocked!

Credit: Aurich Lawson

Google’s proposed remedies were, unsurprisingly, much more modest. Google fully opposed the government’s Chrome penalties, but it was willing to accept some limits to its search deals and allow Android OEMs to choose app preloads. That’s essentially what Mehta has ruled. Under the court’s ruling, Google will still be permitted to pay for search placement—those multi-billion-dollar arrangements with Apple and Mozilla can continue. However, Google cannot require any of its partners to distribute Search, Chrome, Google Assistant, or Gemini. That means Google cannot, for example, make access to the Play Store contingent on bundling its other apps on phones.
