Artificial Intelligence

After complaints, Google will make it easier to disable gen AI search in Photos

Google has spent the past few years in a constant state of AI escalation, rolling out new versions of its Gemini models and integrating that technology into every feature possible. To say this has been an annoyance for Google’s userbase would be an understatement. Still, the AI-fueled evolution of Google products continues unabated—except for Google Photos. After waffling on how to handle changes to search in Photos, Google has relented and will add a simple toggle to bring back the classic search experience.

The rollout of the Gemini-powered Ask Photos search experience has not been smooth. According to Google Photos head Shimrit Ben-Yair, the company has heard the complaints. As a result, Google Photos will soon make it easy to go back to the traditional, non-Gemini search system.

If you weren’t using Google Photos from the start, it can be hard to understand just how revolutionary the search experience was. We went from painstakingly scrolling through timelines to find photos to being able to just search for what was in them. This application of artificial intelligence predates the current obsession with generative systems, and that’s why Google decided a few years ago it had to go.

Google launched the beta Ask Photos experience in 2024, rolling it out slowly in the Photos app while it gathered feedback. Google got a whole lot of feedback, most of it negative. Ask Photos is intended to better respond to natural language queries, but it’s much slower than the traditional search, and the way it chooses the pictures to display seems much more prone to error. It was so bad that Google had to pause the full rollout of Ask Photos in summer 2025 to make vital improvements, although it’s still not very good.

Gemini burrows deeper into Google Workspace with revamped document creation and editing

Google didn’t waste time integrating Gemini into its popular Workspace apps, but those AI features are now getting an overhaul. The company says its new Gemini features for Drive, Docs, Sheets, and Slides will save you from the tyranny of the blank page by doing the hard work for you. Gemini will be able to create and refine drafts, stylize slides, and gather context from across your Google account. At this rate, you’ll soon never have to use that squishy human brain of yours again, and won’t that be a relief?

If you go to create a new Google Doc right now, you’ll see an assortment of AI-powered tools at the top of the page. Google is refining and expanding these options under the new system. The new AI editing features will appear at the bottom of a fresh document with a text box similar to your typical chatbot interface. From there, you can describe the document you want and get a first draft in a snap. When generating a new document, you can rope in content from sources like Gmail, other documents, Google Chat, and the web.

This also comes with expanded AI editing capabilities. You can use further prompts to reformat and change the document or simply highlight specific sections and ask for changes. Docs will also support AI-assisted style matching, which might come in handy if you have multiple people editing the text. Google notes that all Gemini suggestions are private until you approve them for use.

Gemini in Google Workspace.

Gemini is also getting an upgrade in Sheets, and Google claims the robot’s spreadsheet capabilities are nearing those of flesh-and-blood humans in recent testing. Similar to text documents, you can tell Gemini in the sidebar what kind of spreadsheet you need and the AI will use the prompt (and whatever data sources you specify) to generate it. Gemini can also allegedly fill in missing data by searching for it on the web. In our past testing, Gemini has had a lot of trouble with spreadsheet layouts, but Google says this revamp will handle everything, from basic tasks to complex data analysis.

Google’s new command-line tool can plug OpenClaw into your Workspace data

The command line is hot again. For some people, command lines were never not hot, of course, but terminal-based workflows are becoming more common in the age of AI. Google launched a Gemini command-line tool last year, and now it has a new AI-centric command-line option for cloud products. The new Google Workspace CLI bundles the company’s existing cloud APIs into a package that makes it easy to integrate with a variety of AI tools, including OpenClaw. How do you know this setup won’t blow up and delete all your data? That’s the fun part—you don’t.

There are some important caveats with the Workspace tool. While this new GitHub project is from Google, it’s “not an officially supported Google product.” So you’re on your own if you choose to use it. The company notes that functionality may change dramatically as Google Workspace CLI continues to evolve, and that could break workflows you’ve created in the meantime.

For people who are interested in tinkering with AI automations and don’t mind the inherent risks, Google Workspace CLI has a lot to offer, even at this early stage. It includes the APIs for every Workspace product, including Gmail, Drive, and Calendar. It’s designed for use by humans and AI agents, but like everything else Google does now, there’s a clear emphasis on AI.

The tool supports structured JSON outputs, and there are more than 40 agent skills included, says Google Cloud director Addy Osmani. The focus of Workspace CLI seems to be on agentic systems that can create command-line inputs and directly parse JSON outputs. The integrated tools can load and create Drive files, send emails, create and edit Calendar appointments, send chat messages, and much more.
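
To make that agentic pattern concrete, here is a minimal Python sketch of the loop the article describes: run a command-line tool, request JSON, and parse the result. The binary name (“gws”), subcommand, flags, and output shape below are hypothetical placeholders, not the Workspace CLI’s documented interface.

    import json
    import subprocess

    def list_upcoming_events(max_results=5):
        """Agent-style call: run a CLI command and parse its structured JSON output.

        The binary name ("gws"), subcommand, and flags are hypothetical; the real
        Google Workspace CLI may expose entirely different syntax.
        """
        cmd = [
            "gws", "calendar", "events", "list",
            "--max-results", str(max_results),
            "--format", "json",
        ]
        # check=True raises if the command fails; capture_output collects stdout.
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        # Assumes the tool prints a JSON array of event objects.
        events = json.loads(result.stdout)
        return [(event.get("start"), event.get("summary")) for event in events]

    if __name__ == "__main__":
        for start, summary in list_upcoming_events():
            print(start, summary)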

Google reveals Nano Banana 2 AI image model, coming to Gemini today

With Nano Banana 2, Google promises consistency for up to five characters at a time, along with accurate rendering of as many as 14 different objects per workflow. This, along with richer textures and “vibrant” lighting, will aid in visual storytelling with Nano Banana 2. Google is also expanding the range of available aspect ratios and resolutions, from 512px square up to 4K widescreen.

So what can you do with Nano Banana 2? Google has provided some example images with associated prompts. These are, of course, handpicked images, but Nano Banana has been a popular image model for good reason. This degree of improvement seems believable based on past iterations of Nano Banana.

Google AI infographic

Prompt: High-quality flat lay photography creating a DIY infographic that simply explains how the water cycle works, arranged on a clean, light gray textured background. The visual story flows from left to right in clear steps. Simple, clean black arrows are hand-drawn onto the background to guide the viewer’s eye. The overall mood is educational, modern, and easy to understand. The image is shot from a top-down, bird’s-eye view with soft, even lighting that minimizes shadows and keeps the focus on the process.

Credit: Google

AI museum comparison

Prompt: Create an image of Museum Clos Lucé. In the style of bright colored Synthetic Cubism. No text. Your plan is to first search for visual references, and generate after. Aspect ratio 16:9.

Credit: Google

AI farm image

Prompt: Create an image of these 14 characters and items having fun at the farm. The overall atmosphere is fun, silly and joyful. It is strictly important to keep identity consistent of all the 14 characters and items.

Credit: Google

Google must be pretty confident in this model’s capabilities because it will be the only one available going forward. Starting now, Nano Banana 2 will replace both the standard and Pro variants of Nano Banana across the Gemini app, search, AI Studio, Vertex AI, and Flow.

In the Gemini app and on the website, Nano Banana 2 will be the image generator for the Fast, Thinking, and Pro settings. It’s possible there will eventually be a Nano Banana 2 Pro—Google tends to release elements of new model families one at a time. For now, it’s all “Flash” Image.

Musk has no proof OpenAI stole xAI trade secrets, judge rules, tossing lawsuit


Hostility is not proof of theft

Even twisting an ex-employee’s text to favor xAI’s reading fails to sway judge.

Elon Musk appears to be grasping at straws in a lawsuit accusing OpenAI of poaching eight xAI employees in an allegedly unlawful bid to access xAI trade secrets connected to its data centers and chatbot, Grok.

In a Tuesday order granting OpenAI’s motion to dismiss, US District Judge Rita F. Lin said that xAI failed to provide evidence of any misconduct from OpenAI.

Instead, xAI seemed fixated on a range of alleged conduct of former employees. But in assessing xAI’s claims, Lin said that xAI failed to show proof that OpenAI induced any of these employees to steal trade secrets “or that these former xAI employees used any stolen trade secrets once employed by OpenAI.”

Two employees admitted to stealing confidential information, with both downloading xAI’s source code and one improperly grabbing a supposedly sensitive recording from a Musk “All Hands” meeting. But the rest were either accused of retaining seemingly less consequential data, like work chats on their devices, or didn’t seem to hold any confidential information at all. Lin called out particularly weak arguments, noting that xAI’s complaint acknowledged that one employee whom OpenAI poached never received access to the confidential information he allegedly sought after exiting xAI, and that two employees lumped into the complaint had “simply left xAI for OpenAI.”

From the limited evidence, Lin concluded that “while xAI may state misappropriation claims against a couple of its former employees, it does not state a plausible misappropriation claim against OpenAI.”

Lin’s order will likely not be the end of the litigation, as she is allowing xAI to amend its complaint to address the current deficiencies.

Ars could not immediately reach xAI for comment, so it’s unclear what steps xAI may take next.

However, xAI seems unlikely to give up the fight, which OpenAI has alleged is part of a “harassment campaign” that Musk is waging through multiple lawsuits attacking his biggest competitor’s business practices.

Unsurprisingly, OpenAI celebrated the order on X, alleging that “this baseless lawsuit was never anything more than yet another front in Mr. Musk’s ongoing campaign of harassment.”

Other tech companies poaching talent for AI projects will likely be relieved while reading Lin’s order. Commercial litigator Sarah Tishler told Ars that the order “boils down to a fundamental concept in trade secret law: hiring from a competitor is not the same as stealing trade secrets from one.”

“Under the Defend Trade Secrets Act, xAI has to show that OpenAI actually received and used the alleged trade secrets, not just that it hired employees who may have taken them,” Tishler said. “Suspicious timing, aggressive recruiting, and even downloaded files are not enough on their own.”

Tishler suggested that the ruling will likely be welcomed by AI firms eager to secure the best talent without incurring legal risks from their hiring practices.

“In the AI industry, where talent moves fast and the competitive stakes are enormous, this ruling reaffirms that suspicion is not enough,” Tishler said. “You have to show the stolen information actually made it into the competitor’s hands and was put to use.”

OpenAI not liable for engineers swiping source code

Through the lawsuit, Musk has alleged that OpenAI is violating California’s unfair competition law. He claims that OpenAI is attempting “to destroy legitimate competition in the AI industry by neutralizing xAI’s innovations” and forcing xAI “to unfairly compete against its own trade secrets.”

But this claim hinges entirely upon xAI proving that OpenAI poached its employees to steal its trade secrets. So, for xAI’s lawsuit to proceed, xAI will need to beef up the evidence base for its other claim, that OpenAI has violated the federal Defend Trade Secrets Act, Lin said. To succeed on that, xAI must prove that OpenAI unlawfully acquired, disclosed, or used a trade secret without xAI’s consent.

That will likely be challenging because xAI, at this point, has not offered “any nonconclusory allegations that OpenAI itself acquired, disclosed, or used xAI’s trade secrets,” Lin wrote.

All xAI has claimed is that OpenAI induced former employees to share secrets, and so far, nothing backs that claim, Lin said. Tishler noted that the court also rejected an xAI theory that “OpenAI should be responsible for what its new hires did before they arrived” for “the same reason: without evidence that OpenAI directed the theft or actually put the stolen information to use, you cannot hold the company liable.”

The strongest evidence that xAI had of employee misconduct, allegedly allowing OpenAI to misappropriate xAI trade secrets, revolves around the departure of one of xAI’s earliest engineers, Xuechen Li.

That evidence wasn’t enough, Lin said. xAI alleged that Li gave a presentation to OpenAI that supposedly included confidential information. Li also uploaded “the entire xAI source code base to a personal cloud account” that he had connected to ChatGPT, Lin noted, after a recruiter sent Li a message on Signal with a link to another, unrelated cloud storage location.

xAI hoped the Signal messages would shock the court, expecting it to read between the lines the way xAI did. As proof that OpenAI allegedly got access to xAI’s source code, xAI pointed to a Signal message that an OpenAI recruiter sent to Li “four hours after” Li downloaded the source code, saying “nw!” xAI has alleged this message is shorthand for “no way!”—suggesting the OpenAI recruiter was geeked to get access to xAI’s source code. But in a footnote, Lin said that “OpenAI insists that ‘nw’ means ‘no worries,’” and thus is unconnected to Li’s decision to upload the source code to a ChatGPT-linked cloud account.

Even interpreting the text using xAI’s reading, however, xAI did not show enough to prove the recruiter or OpenAI accessed or requested the files, Lin said.

It also didn’t help xAI’s case that a temporary injunction that xAI secured in a separate lawsuit targeting the engineer blocked Li from accepting a job at OpenAI.

That injunction led OpenAI to withdraw its job offer to Li. And that’s a problem for xAI, because since Li never worked at OpenAI, it’s clear that he never used xAI’s trade secrets while working for OpenAI.

Further weakening xAI’s arguments, if Li indeed shared confidential information during his presentation while interviewing for OpenAI, xAI has alleged no facts suggesting that OpenAI was aware Li was sharing xAI trade secrets, Lin wrote.

This “makes it very hard to argue OpenAI ever used anything he allegedly took,” Tishler told Ars.

Another former xAI engineer, Jimmy Fraiture, was accused of copying xAI trade secrets, but Fraiture has said he deleted the information he improperly downloaded before starting his job at OpenAI. Importantly, Lin said, since he joined OpenAI, there’s no evidence that he used xAI trade secrets to benefit xAI’s rival.

“Other than the bare fact that Fraiture had been recruited” by the same OpenAI employee “who had also recruited Li, xAI does not allege any facts indicating that OpenAI had encouraged Fraiture to take xAI’s confidential information in the first place,” Lin wrote.

Since “none of the other former employees allegedly shared with or disclosed to OpenAI any xAI trade secrets,” xAI could not advance its claim that OpenAI misappropriated trade secrets based only on allegations tied to Li and Fraiture’s supposed misconduct, Lin said.

xAI may be able to amend its complaint to maintain these arguments, but the company has thus far presented scant, purely circumstantial evidence.

It’s possible that xAI will secure more evidence to support its misappropriation claims against OpenAI in its ongoing lawsuit against Li. Ars could not immediately reach Li’s lawyer to find out if today’s ruling may impact that case.

Ex-executive’s “hostility” is not proof of theft

Among the least convincing arguments that xAI raised was a claim that an unnamed finance executive left xAI to take a “lesser role” at OpenAI after learning everything he knew about data centers from xAI.

That executive slighted xAI when Musk’s company later attempted to inquire about “confidentiality concerns.”

“Suck my dick,” the former xAI executive allegedly said, refusing to explain how his OpenAI work might overlap with his xAI position. “Leave me the fuck alone.”

xAI tried to argue that the executive’s hostility was proof of misconduct. But Lin wrote that xAI only alleged that the executive “merely possessed xAI trade secrets about data centers” and did not allege that he ever used trade secrets to benefit OpenAI.

Had xAI found evidence that OpenAI’s data center strategy suddenly mirrored xAI’s after the executive joined xAI’s rival, that may have helped xAI’s case. But there are plenty of reasons a former employee might reject an ex-employer’s outreach following an exit, Lin suggested.

“His hostility when xAI reached out about its confidentiality concerns also does not support a plausible inference of use,” Lin wrote. “Hostility toward one’s former employer during departure does not, without more, indicate use of trade secrets in a subsequent job. Nor does the executive’s lack of experience with AI data centers before his time at xAI, without more, support a plausible inference that he used xAI’s trade secrets at OpenAI.”

xAI has until March 17 to amend its complaint to keep up this particular fight against OpenAI. But the company won’t be able to add any new claims or parties, Lin noted, “or otherwise change the allegations except to correct the identified deficiencies.”

Criminal probe likely leaves OpenAI on pins and needles

For Li, the engineer accused of disclosing xAI trade secrets to OpenAI, the litigation could eliminate one front of discovery as he navigates two other legal fights over xAI’s trade secrets claims.

Tishler has been closely monitoring xAI’s trade secret legal battles. In October, she noted that Li is in a particularly prickly position, facing pressure in civil litigation from Musk to turn over data that could be used against him in the Federal Bureau of Investigation’s criminal investigation into Musk’s allegations. As Tishler explained:

“The practical reality is stark: Li faces a choice between protecting himself in the criminal action with his silence, and the civil consequences of doing so. Refuse to answer, and xAI could argue adverse inferences; answer, and the responses could feed the criminal case.”

Ultimately, the FBI is trying to prove that Li stole information that qualified as a trade secret and intended to use it for OpenAI’s benefit, while knowing that it would harm xAI. If investigators succeed, “xAI would suddenly have a government-backed record that its trade secrets were stolen,” Tishler wrote.

If xAI were so armed and able to keep the OpenAI lawsuit alive, the central question in the lawsuit that Lin dismissed today would shift, Tishler suggested, from “was there a theft?” to “what did OpenAI know, and when did it know it?”

Data center builders thought farmers would willingly sell land, learn otherwise

Notably, one resident in Huddleston’s county who received an offer, 75-year-old Timothy Grosser, even declined a proposal to “name your price” when a tech company sought to buy his 250-acre farm, The Guardian reported.

“There is none,” Grosser said.

The farm is where he “lives, hunts, and raises cattle” and where his grandson hunts a turkey every Christmas for the family feast.

“The money’s not worth giving up your lifestyle,” Grosser said.

Another farmer in Wisconsin, Anthony Barta, reportedly fretted about what would happen to his neighbors if he took a deal he was offered—showing the deep bonds of people whose farms have bordered each other for years. In his community, another farmer was offered between $70 million and $80 million for 6,000 acres.

“Me and my family, we own the farm and run close to 1,000 animals,” Barta said. “What would that do if that’s next to it? Can they even be there? You know, that’s our livelihood—the farm. We’re just concerned what, if it would go through, what would happen to us and our neighbors and farms and our community? What would happen to that?”

Some tech companies are apparently not taking “no” for an answer. At least one farmer who spent 51 years milking cows in Pennsylvania prior to the AI boom described tech companies as “relentless.”

Eighty-six-year-old Mervin Raudabaugh, Jr., found a creative solution to end the pressure to sell two contiguous farms. He reportedly staved off developers by turning to “a farmland preservation program dedicating taxpayer dollars toward protecting agricultural resources.”

By working with the program, Raudabaugh will only receive about one-eighth of what the developers were offering. But he said it’s worth it to know his land would be preserved for farming purposes and out of reach of persistent tech companies.

“These people have hounded the living daylights out of me,” Raudabaugh said.

Data center deals come amid fragile farm economy

For people in rural communities, data center fights go beyond concerns about water and electricity consumption—although those are concerns, too. Communities are defending the character of the land, which they don’t want to see suddenly disrupted by extensive construction, data center noise pollution, or untold environmental impacts from massive operations.

Google announces Gemini 3.1 Pro, says it’s better at complex problem-solving

Another day, another Google AI model. Google has really been pumping out new AI tools lately, having just released Gemini 3 in November. Today, it’s bumping the flagship model to version 3.1. The new Gemini 3.1 Pro is rolling out (in preview) for developers and consumers today with the promise of better problem-solving and reasoning capabilities.

Google announced improvements to its Deep Think tool last week, and apparently, the “core intelligence” behind that update was Gemini 3.1 Pro. As usual, Google’s latest model announcement comes with a plethora of benchmarks that show mostly modest improvements. In the popular Humanity’s Last Exam, which tests advanced domain-specific knowledge, Gemini 3.1 Pro scored a record 44.4 percent. Gemini 3 Pro managed 37.5 percent, while OpenAI’s GPT 5.2 got 34.5 percent.

Gemini 3.1 Pro benchmarks

Credit: Google

Google also calls out the model’s improvement in ARC-AGI-2, which features novel logic problems that can’t be directly trained into an AI. Gemini 3 was a bit behind on this evaluation, reaching a mere 31.1 percent versus scores in the 50s and 60s for competing models. Gemini 3.1 Pro more than doubles Google’s score, reaching a lofty 77.1 percent.

Google has often gloated that its newly released models immediately hit the top of the Arena leaderboard (formerly LM Arena), but that’s not the case this time. For text, Claude Opus 4.6 edges out the new Gemini by four points at 1504. For code, Opus 4.6, Opus 4.5, and GPT 5.2 High all run ahead of Gemini 3.1 Pro by a bit more. It’s worth noting, however, that the Arena leaderboard is run on vibes. Users vote on the outputs they like best, which can reward outputs that look correct regardless of whether they are.

Record scratch—Google’s Lyria 3 AI music model is coming to Gemini today

Sour notes

AI-generated music is not a new phenomenon. Several companies offer models that ingest and homogenize human-created music, and the resulting tracks can sound remarkably “real,” if a bit overproduced. Streaming services have already been inundated with phony AI artists, some of which have gathered thousands of listeners who may not even realize they’re grooving to the musical equivalent of a blender set to purée.

Still, you have to seek out tools like that, and Google is bringing similar capabilities to the Gemini app. Gemini is one of the most popular AI platforms, so we’re probably about to see a lot more AI music on the Internet. Google says tracks generated with Lyria 3 will have an audio version of Google’s SynthID embedded within. That means you’ll always be able to check if a piece of audio was created with Google’s AI by uploading it to Gemini, similar to the way you can check images and videos for SynthID tags.

Google also says it has sought to create a music AI that respects copyright and partner agreements. If you name a specific artist in your prompt, Gemini won’t attempt to copy that artist’s sound. Instead, it’s trained to take that as “broad creative inspiration.” Google admits this process is not foolproof, though, and some of that output might still imitate an artist too closely. In those cases, Google invites users to report such shared content.

Lyria 3 is going live in the Gemini web interface today and should be available in the mobile app within a few days. It works in English, German, Spanish, French, Hindi, Japanese, Korean, and Portuguese, but Google plans to add more languages soon. All users will have some access to music generation, and those with AI Pro and AI Ultra subscriptions will have higher usage limits, though the specifics are unclear.

ByteDance backpedals after Seedance 2.0 turned Hollywood icons into AI “clip art”


Misstep or marketing tactic?

Hollywood backlash puts spotlight on ByteDance’s sketchy launch of Seedance 2.0.

ByteDance says that it’s rushing to add safeguards to block Seedance 2.0 from generating iconic characters and deepfaking celebrities, following substantial Hollywood backlash over the launch of the latest version of its AI video tool.

The changes come after Disney and Paramount Skydance sent cease-and-desist letters to ByteDance urging the Chinese company to promptly end the allegedly vast and blatant infringement.

Studios claimed the infringement was widescale and immediate, with Seedance 2.0 users across social media sharing AI videos featuring copyrighted characters like Spider-Man, Darth Vader, and SpongeBob SquarePants. In its letter, Disney fumed that Seedance was “hijacking” its characters, accusing ByteDance of treating Disney characters like they were “free public domain clip art,” Axios reported.

“ByteDance’s virtual smash-and-grab of Disney’s IP is willful, pervasive, and totally unacceptable,” Disney’s letter said.

Defending intellectual property from franchises like Star Trek and The Godfather, Paramount Skydance pointed out that Seedance’s outputs are “often indistinguishable, both visually and audibly” from the original characters, Variety reported. Similarly frustrated, Japan’s AI minister, Kimi Onoda, sought to protect popular anime and manga characters, officially launching a probe into ByteDance last week over the copyright violations, the South China Morning Post reported.

“We cannot overlook a situation in which content is being used without the copyright holder’s permission,” Onoda said at a press conference Friday.

Facing legal threats and Japan’s investigation, ByteDance issued a statement Monday, CNBC reported. In it, the company claimed that it “respects intellectual property rights” and has “heard the concerns regarding Seedance 2.0.”

“We are taking steps to strengthen current safeguards as we work to prevent the unauthorized use of intellectual property and likeness by users,” ByteDance said.

However, Disney seems unlikely to accept that ByteDance inadvertently released its tool without implementing such safeguards in advance. In its letter, Disney alleged that “Seedance has infringed on Disney’s copyrighted materials to benefit its commercial service without permission.”

After all, what better way to illustrate Seedance 2.0’s latest features than by generating some of the best-known IP in the world? At least one tech consultant has suggested that ByteDance planned to benefit from inciting Hollywood outrage. The founder of San Francisco-based consultancy Tech Buzz China, Rui Ma, told SCMP that “the controversy surrounding Seedance is likely part of ByteDance’s initial distribution strategy to showcase its underlying technical capabilities.”

Seedance 2.0 is an “attack” on creators

Studios aren’t the only ones sounding alarms.

Several industry groups expressed concerns, including the Motion Picture Association, which accused ByteDance of engaging in massive copyright infringement within “a single day,” CNBC reported.

Sean Astin, an actor and president of the actors union, SAG-AFTRA, was directly impacted by the scandal. A video that has since been removed from X showed Astin in the role of Samwise Gamgee from The Lord of the Rings, delivering a line he never said, Variety reported. Condemning Seedance’s infringement, SAG-AFTRA issued a statement emphasizing that ByteDance did not act responsibly in releasing the model without safeguards:

“SAG-AFTRA stands with the studios in condemning the blatant infringement enabled by ByteDance’s new AI video model Seedance 2.0. The infringement includes the unauthorized use of our members’ voices and likenesses. This is unacceptable and undercuts the ability of human talent to earn a livelihood. Seedance 2.0 disregards law, ethics, industry standards and basic principles of consent. Responsible AI development demands responsibility, and that is nonexistent here.”

Echoing that, a group representing Hollywood creators, the Human Artistry Campaign, declared that “the launch of Seedance 2.0” was “an attack on every creator around the world.”

“Stealing human creators’ work in an attempt to replace them with AI generated slop is destructive to our culture: stealing isn’t innovation,” the group said. “These unauthorized deepfakes and voice clones of actors violate the most basic aspects of personal autonomy and should be deeply concerning to everyone. Authorities should use every legal tool at their disposal to stop this wholesale theft.”

Ars could not immediately reach any of these groups to comment on whether ByteDance’s post-launch efforts to add safeguards addressed industry concerns.

MPA chairman and CEO Charles Rivkin has previously accused ByteDance of disregarding “well-established copyright law that protects the rights of creators and underpins millions of American jobs.”

While Disney and other studios are clearly ready to take down any tools that could hurt their revenue or reputation when no agreement is in place, they aren’t opposed to all AI uses of their characters. In December, Disney struck a deal with OpenAI, giving Sora access to 200 characters for three years, while investing $1 billion in the technology.

At that time, Disney CEO Robert A. Iger said that “the rapid advancement of artificial intelligence marks an important moment for our industry, and through this collaboration with OpenAI, we will thoughtfully and responsibly extend the reach of our storytelling through generative AI, while respecting and protecting creators and their works.”

Creators disagree Seedance 2.0 is a game changer

In a blog announcing Seedance 2.0, ByteDance boasted that the new model “delivers a substantial leap in generation quality,” particularly in close-up shots and action sequences.

The company acknowledged that further refinements were needed and the model is “still far from perfect” but hyped that “its generated videos possess a distinct cinematic aesthetic; the textures of objects, lighting, and composition, as well as costume, makeup, and prop designs, all show high degrees of finish.”

ByteDance likely hoped that the earliest outputs from Seedance 2.0 would produce headlines marveling at the model’s capabilities, and it got what it wanted when a single Hollywood stakeholder’s social media comment went viral.

Shortly after Seedance 2.0’s rollout, Deadpool co-writer Rhett Reese declared on X that “it’s likely over for us,” The Guardian reported. The screenwriter was impressed by an AI video created by Irish director Ruairi Robinson, which realistically depicted Tom Cruise fighting Brad Pitt. “[I]n next to no time, one person is going to be able to sit at a computer and create a movie indistinguishable from what Hollywood now releases,” Reese opined. “True, if that person is no good, it will suck. But if that person possesses Christopher Nolan’s talent and taste (and someone like that will rapidly come along), it will be tremendous.”

However, some AI critics rejected the notion that Seedance 2.0 is capable of replacing artists in the way that Reese warned. On Bluesky and X, they pushed back on claims that the model doomed Hollywood, with some accusing outlets of too quickly ascribing Reese’s reaction to the whole industry.

Among them was longtime AI critic Reid Southen, a film concept artist who works on major motion pictures and TV. Responding directly to Reese’s X thread, Southen contradicted the notion that a great filmmaker could be born from fiddling with AI prompts alone.

“Nolan is capable of doing great work because he’s put in the work,” Southen said. “AI is an automation tool, it’s literally removing key, fundamental work from the process, how does one become good at anything if they insist on using nothing but shortcuts?”

Perhaps the strongest evidence in Southen’s favor is Darren Aronofsky’s recent AI-generated historical docudrama. Speaking anonymously to Ars following backlash declaring that “AI slop is ruining American history,” one source close to production on that project confirmed that it took “weeks” to produce minutes of usable video using a variety of AI tools.

That source noted that the creative team went into the project expecting they had a lot to learn but also expecting that tools would continue to evolve, as could audience reactions to AI-assisted movies.

“It’s a huge experiment, really,” the source told Ars.

Notably, for both creators and rights-holders concerned about copyright infringement and career threats, questions remain on how Seedance 2.0 was trained. ByteDance has yet to release a technical report for Seedance 2.0 and “has never disclosed the data sets it uses to train its powerful video-generation Seedance models and image-generation Seedream models,” SCMP reported.

Discord faces backlash over age checks after data breach exposed 70,000 IDs


Discord to block adult content unless users verify ages with selfies or IDs.

Discord is facing backlash after announcing that all users will soon be required to verify ages to access adult content by sharing video selfies or uploading government IDs.

According to Discord, it’s relying on AI technology that verifies age on the user’s device, either by evaluating a user’s facial structure or by comparing a selfie to a government ID. Although government IDs will be checked off-device, the selfie data will never leave the user’s device, Discord emphasized. Both forms of data will be promptly deleted after the user’s age is estimated.

In a blog, Discord confirmed that “a phased global rollout” would begin in “early March,” at which point all users globally would be defaulted to “teen-appropriate” experiences.

To unblur sensitive media or access age-restricted channels, the majority of users will likely have to undergo Discord’s age estimation process. Most users will only need to verify their ages once, Discord said in its blog, but some “may be asked to use multiple methods, if more information is needed to assign an age group.”

On social media, alarmed Discord users protested the move, doubting whether Discord could be trusted with their most sensitive information after Discord age verification data was recently breached. In October, hackers stole government IDs of 70,000 Discord users from a third-party service that Discord previously trusted to verify ages in the United Kingdom and Australia.

At that time, Discord told users that the hackers were hoping to use the stolen data to “extort a financial ransom from Discord.” In October, Ars Senior Security Editor Dan Goodin joined others warning that “the best advice for people who have submitted IDs to Discord or any other service is to assume they have been or soon will be stolen by hackers and put up for sale or used in extortion scams.”

Users now fear that Discord will only become a bigger target for bad actors as more sensitive information is collected worldwide.

It’s no surprise then that hundreds of Discord users on Reddit slammed the decision to expand age verification globally shortly after The Verge broke the news. On a PC gaming subreddit discussing alternative apps for gamers, one user wrote, “Hell, Discord has already had one ID breach, why the fuck would anyone verify on it after that?”

“This is how Discord dies,” another user declared. “Seriously, uploading any kind of government ID to a 3rd party company is just asking for identity theft on a global scale.”

Many users seem just as sketched out about sharing face scans. On the Discord app subreddit, some users vowed to never submit selfies or IDs, fearing that breaches may be inevitable and suspecting Discord of downplaying privacy risks while allowing data harvesting.

Who can access Discord age-check data?

Discord’s system is supposed to make sure that only users have access to their age-check data, which Discord said would never leave their phones.

The company is hoping to convince users that it has tightened security after the breach by partnering with k-ID, an increasingly popular age-check service provider that’s also used by social platforms from Meta and Snap.

However, self-described Discord users on Reddit aren’t so sure, with some going the extra step of picking apart k-ID’s privacy policy to understand exactly how age is verified without data ever leaving the device.

“The wording is pretty unclear and inconsistent even if you dig down to the k-ID privacy policy,” one Redditor speculated. “Seems that ID scans are uploaded to k-ID servers, they delete them, but they also mention using ‘trusted 3rd parties’ for verification, who may or may not delete it.” That user seemingly gave up on finding reassurances in either company’s privacy policies, noting that “everywhere along the chain it reads like ‘we don’t collect your data, we forward it to someone else… .’”

Discord did not immediately respond to Ars’ requests to comment directly on how age checks work without data leaving the device.

To better understand user concerns, Ars reviewed the privacy policies, noting that k-ID said its “facial age estimation” tool is provided by a Swiss company called Privately.

“We don’t actually see any faces that are processed via this solution,” k-ID’s policy said.

That part does seem vague, since Privately isn’t explicitly included in the “we” in that statement. Similarly, further down, the policy more clearly states that “neither k-ID nor its service providers collect any biometric information from users when they interact with the solution. k-ID only receives and stores the outcome of the age check process.” In that section, “service providers” seems to refer to partners like Discord, which integrate k-ID’s age checks, rather than third parties like Privately that actually conduct the age check.

Asked for comment, a k-ID spokesperson told Ars that “the Facial Age Estimation technology runs entirely on the user’s device in real time when they are performing the verification. That means there is no video or image transmitted, and the estimation happens locally. The only data to leave the device is a pass/fail of the age threshold which is what Discord receives (and some performance metrics that contain no personal data).”

K-ID’s spokesperson told Ars that no third parties store personal data shared during age checks.

“k-ID, does not receive personal data from Discord when performing age-assurance,” k-ID’s spokesperson said. “This is an intentional design choice grounded in data protection and data minimisation principles. There is no storage of personal data by k-ID or any third parties, regardless of the age assurance method used.”

Privately’s website offers a little more information on how on-device age estimation works, along with more reassurance that data won’t leave devices.

Privately’s services were designed to minimize data collection and prioritize anonymity to comply with the European Union’s General Data Protection Regulation, Privately noted. “No user biometric or personal data is captured or transmitted,” Privately’s website said, while bragging that “our secret sauce is our ability to run very performant models on the user device or user browser to implement a privacy-centric solution.”

The company’s privacy policy offers slightly more detail, noting that the company avoids relying on the cloud while running AI models on local devices.

“Our technology is built using on-device edge-AI that facilitates data minimization so as to maximise user privacy and data protection,” the privacy policy said. “The machine learning based technology that we use (for age estimation and safeguarding) processes user’s data on their own devices, thereby avoiding the need for us or for our partners to export user’s personal data onto any form of cloud services.”

Additionally, the policy said, “our technology solutions are built to operate mostly on user devices and to avoid sending any of the user’s personal data to any form of cloud service. For this we use specially adapted machine learning models that can be either deployed or downloaded on the user’s device. This avoids the need to transmit and retain user data outside the user device in order to provide the service.”

Finally, Privately explained that it also employs a “double blind” implementation to avoid knowing the origin of age estimation requests. That supposedly ensures that Privately only knows the result of age checks and cannot connect the result to a user on a specific platform.
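
Taken together, these policies describe a simple data-minimization pattern: the model runs locally, and the only thing transmitted is a pass/fail verdict tied to an opaque request ID, which is also what enables the “double blind” setup. The Python sketch below illustrates that pattern only; the function names and the stub model are hypothetical, not k-ID’s or Privately’s actual code.

    import json
    import uuid

    def estimate_age_locally(frame_bytes):
        """Stand-in for an on-device ML model. A real implementation would run
        face-age inference locally; this stub just returns a fixed dummy value."""
        return 27.0

    def run_age_check(frame_bytes, threshold=18):
        """Run the check on-device and build the only payload that leaves it.

        The selfie bytes are used solely inside this function; neither the image
        nor the exact age estimate is ever serialized or transmitted.
        """
        estimated_age = estimate_age_locally(frame_bytes)
        payload = {
            # An opaque ID lets the provider return a result without learning
            # which platform user made the request ("double blind").
            "request_id": str(uuid.uuid4()),
            "over_threshold": estimated_age >= threshold,
        }
        return json.dumps(payload)

    print(run_age_check(b"raw-camera-frame"))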

Discord expects to lose users

Some Discord users may never be asked to verify their ages, even if they try to access age-restricted content. Savannah Badalich, Discord’s global head of product policy, told The Verge that Discord “is also rolling out an age inference model that analyzes metadata, like the types of games a user plays, their activity on Discord, and behavioral signals like signs of working hours or the amount of time they spend on Discord.”

“If we have a high confidence that they are an adult, they will not have to go through the other age verification flows,” Badalich said.

Badalich confirmed that Discord is bracing for some users to leave Discord over the update but suggested that “we’ll find other ways to bring users back.”

On Reddit, Discord users complained that age verification is easy to bypass, forcing adults to share sensitive information without keeping kids away from harmful content. In Australia, where Discord’s policy first rolled out, some kids claimed that Discord never even tried to estimate their ages, while others found it easy to trick k-ID by using AI videos or altering their appearances to look older. A teen girl relied on fake eyelashes to do the trick, while one 13-year-old boy was estimated to be over 30 years old after scrunching his face to seem more wrinkled.

Badalich told The Verge that Discord doesn’t expect the tools to work perfectly but acts quickly to block workarounds, like teens using Death Stranding’s photo mode to skirt age gates. However, questions remain about the accuracy of Discord’s age estimation model in assessing minors’ ages, in particular.

It may be noteworthy that Privately only claims that its technology is “proven to be accurate to within 1.3 years, for 18-20-year-old faces, regardless of a customer’s gender or ethnicity.” But experts told Ars last year that flawed age-verification technology still frequently struggles to distinguish minors from adults, especially when differentiating between a 17- and 18-year-old, for example.

Perhaps notably, Discord’s prior scandal occurred after hackers stole government IDs that users shared as part of the appeal process in order to fix an incorrect age estimation. Appeals could remain the most vulnerable part of this process, The Verge’s report indicated. Badalich confirmed that a third-party vendor would be reviewing appeals, with the only reassurance for users seemingly being that IDs shared during appeals “are deleted quickly,” in most cases immediately after age confirmation.

On Reddit, Discord fans awaiting big changes remain upset. A disgruntled Discord user suggested that “corporations like Facebook and Discord, will implement easily passable, cheapest possible, bare minimum under the law verification, to cover their ass from a lawsuit,” while forcing users to trust that their age-check data is secure.

Another user joked that she’d be more willing to trust that selfies never leave a user’s device if Discord were “willing to pay millions to every user” whose “scan does leave a device.”

This story was updated on February 9 to clarify that government IDs are checked off-device.

Lawyer sets new standard for abuse of AI; judge tosses case


“Extremely difficult to believe”

Behold the most overwrought AI legal filings you will ever gaze upon.

Frustrated by fake citations and flowery prose packed with “out-of-left-field” references to ancient libraries and Ray Bradbury’s Fahrenheit 451, a New York federal judge took the rare step of terminating a case this week due to a lawyer’s repeated misuse of AI when drafting filings.

In an order on Thursday, district judge Katherine Polk Failla ruled that the extraordinary sanctions were warranted after an attorney, Steven Feldman, kept responding to requests to correct his filings with documents containing fake citations.

One of those filings was “noteworthy,” Failla said, “for its conspicuously florid prose.” Where some of Feldman’s filings contained grammatical errors and run-on sentences, this filing seemed glaringly different stylistically.

It featured, the judge noted, “an extended quote from Ray Bradbury’s Fahrenheit 451 and metaphors comparing legal advocacy to gardening and the leaving of indelible ‘mark[s] upon the clay.’” The Bradbury quote is below:

“Everyone must leave something behind when he dies, my grandfather said. A child or a book or a painting or a house or a wall built or a pair of shoes made. Or a garden planted. Something your hand touched some way so your soul has somewhere to go when you die, and when people look at that tree or that flower you planted, you’re there. It doesn’t matter what you do, he said, so long as you change something from the way it was before you touched it into something that’s like you after you take your hands away. The difference between the man who just cuts lawns and a real gardener is in the touching, he said. The lawn-cutter might just as well not have been there at all; the gardener will be there a lifetime.”

Another passage Failla highlighted as “raising the Court’s eyebrows” curiously invoked a Bible passage about divine judgment as a means of acknowledging the lawyer’s breach of duty in not catching the fake citations:

“Your Honor, in the ancient libraries of Ashurbanipal, scribes carried their stylus as both tool and sacred trust—understanding that every mark upon clay would endure long beyond their mortal span. As the role the mark (x) in Ezekiel Chapter 9, that marked the foreheads with a tav (x) of blood and ink, bear the same solemn recognition: that the written word carries power to preserve or condemn, to build or destroy, and leaves an indelible mark which cannot be erased but should be withdrawn, let it lead other to think these citations were correct.

I have failed in that sacred trust. The errors in my memorandum, however inadvertent, have diminished the integrity of the record and the dignity of these proceedings. Like the scribes of antiquity who bore their stylus as both privilege and burden, I understand that legal authorship demands more than mere competence—it requires absolute fidelity to truth and precision in every mark upon the page.”

Lawyer claims AI did not write filings

Although the judge believed the “florid prose” signaled that a chatbot wrote the draft, Feldman denied that. In a hearing transcript in which the judge weighed possible sanctions, Feldman testified that he wrote every word of the filings. He explained that he read the Bradbury book “many years ago” and wanted to include “personal things” in that filing. And as for his references to Ashurbanipal, that also “came from me,” he said.

Instead of admitting he had let an AI draft his filings, he maintained that his biggest mistake was relying on various AI programs to review and cross-check citations. The tools he admitted using included Paxton AI, vLex’s Vincent AI, and Google’s NotebookLM. Essentially, he testified that he substituted three rounds of AI review for reading through all the cases he was citing himself. That misstep allowed hallucinations and fake citations to creep into the filings, he said.

But the judge pushed back, writing in her order that it was “extremely difficult to believe” that AI did not draft those sections containing overwrought prose. She accused Feldman of dodging the truth.

“The Court sees things differently: AI generated this citation from the start, and Mr. Feldman’s decision to remove most citations and write ‘more of a personal letter’” is “nothing but an ex post justification that seeks to obscure his misuse of AI and his steadfast refusal to review his submissions for accuracy,” Failla wrote.

At the hearing, she expressed frustration and annoyance at Feldman for evading her questions and providing inconsistent responses. Eventually, he testified that he used AI to correct information when drafting one of the filings, which Failla immediately deemed “unwise,” but not the one quoting Bradbury.

AI is not a substitute for going to the library

Feldman is one of hundreds of lawyers who have relied on AI to draft filings that introduced fake citations into cases. Lawyers have offered a wide range of excuses for relying too much on AI. Some have faced small fines, around $150, while others have been slapped with thousands in fines, including one case where sanctions reached $85,000 for repeated, abusive misconduct. At least one law firm has threatened to fire lawyers citing fake cases, and other lawyers have voluntarily imposed sanctions on themselves, like taking a yearlong leave of absence.

Seemingly, Feldman did not think sanctions were warranted in this case. In his defense of three filings containing 14 errors out of 60 total citations, Feldman discussed his challenges accessing legal databases due to high subscription costs and short library hours. With more than one case on his plate and his kids’ graduations to attend, he struggled to verify citations during times when he couldn’t make it to the library, he testified. As a workaround, he relied on several AI programs to verify citations that he found by searching on tools like Google Scholar.

Feldman likely did not expect the judge to terminate the case as a result of his AI misuses. Asked how he thought the court should resolve things, Feldman suggested that he could correct the filings by relying on other attorneys to review citations, while avoiding “any use whatsoever of any, you know, artificial intelligence or LLM type of methods.” The judge, however, wrote that his repeated misuses were “proof” that he “learned nothing” and had not implemented voluntary safeguards to catch the errors.

Asked for comment, Feldman told Ars that he did not have time to discuss the sanctions but that he hopes his experience helps raise awareness of how inaccessible court documents are to the public. “Use of AI, and the ability to improve it, exposes a deeper proxy fight over whether law and serious scholarship remain publicly auditable, or drift into closed, intermediary‑controlled systems that undermine verification and due process,” Feldman suggested.

“The real lesson is about transparency and system design, not simply tool failure,” Feldman said.

But at the hearing, Failla said that she thinks Feldman had “access to the walled garden” of legal databases, if only he “would go to the law library” to do his research, rather than rely on AI tools.

“It sounds like you want me to say that you should be absolved of all of these terrible citation errors, these missed citations, because you don’t have Westlaw,” the judge said. “But now I know you have access to Westlaw. So what do you want?”

As Failla explained in her order, she thinks the key takeaway is that Feldman routinely failed to catch his own errors. She said that she has no problem with lawyers using AI to assist their research, but Feldman admitted to not reading the cases that he cited and “apparently” cannot “learn from his mistakes.”

Verifying case citations should never be a job left to AI, Failla said, describing Feldman’s research methods as “redolent of Rube Goldberg.”

“Most lawyers simply call this ‘conducting legal research,’” Failla wrote. “All lawyers must know how to do it. Mr. Feldman is not excused from this professional obligation by dint of using emerging technology.”

His “explanations were thick on words but thin on substance,” the judge wrote. She concluded that he “repeatedly and brazenly” violated Rule 11, which requires attorneys to verify the cases that they cite, “despite multiple warnings.”

Noting that Feldman “failed to fully accept responsibility,” she ruled that case-terminating sanctions were necessary, entering default judgment for the plaintiffs. Feldman may also be on the hook to pay fees for wasting other attorneys’ time.

Case abruptly ending triggers extensive remedies

The hearing transcript has circulated on social media due to the judge’s no-nonsense approach to grilling Feldman, whom she clearly found evasive and lacking credibility.

“Look, if you don’t want to be straight with me, if you don’t want to answer questions with candor, that’s fine,” Failla said. “I’ll just make my own decisions about what I think you did in this case. I’m giving you an opportunity to try and explain something that I think cannot be explained.”

In her order this week, she noted that Feldman “struggled to make eye contact” and left the court without “clear answers.”

Feldman’s errors came in a case in which a toy company sued merchants who allegedly failed to stop selling stolen goods after receiving a cease-and-desist order. His client was among the merchants accused of illegally profiting from the alleged thefts. They faced federal claims of trademark infringement, unfair competition, and false advertising, as well as New York state claims, including fostering the sale of stolen goods.

The loss triggers remedies, including an injunction preventing additional sales of stolen goods and refunding every customer who bought them. Feldman’s client must also turn over any stolen goods in their remaining inventory and disgorge profits. Other damages may be owed, along with interest. Ars could not immediately reach an attorney for the plaintiffs to discuss the sanctions order or resulting remedies.

Failla emphasized in her order that Feldman appeared to not appreciate “the gravity of the situation,” repeatedly submitting filings with fake citations even after he had been warned that sanctions could be ordered.

That was a choice, Failla said, noting that Feldman’s mistakes were caught early by a lawyer working for another defendant in the case, Joel MacMull, who urged Feldman to promptly notify the court. The whole debacle would have ended in June 2025, MacMull suggested at the hearing.

Rather than take MacMull’s advice, however, Feldman delayed notifying the court, irking the judge. He testified during the heated sanctions hearing that the delay was due to an effort he quietly undertook, working to correct the filing. He supposedly planned to submit those corrections when he alerted the court to the errors.

But Failla noted that he never submitted corrections and instead kept her “in the dark.”

“There’s no real reason why you should have kept this from me,” the judge said.

The court learned of the fake citations only after MacMull notified the judge by sharing emails of his attempts to get Feldman to act urgently. Those emails showed Feldman scolding MacMull for unprofessional conduct after MacMull refused to check Feldman’s citations for him, which Failla noted at the hearing was absolutely not MacMull’s responsibility.

Feldman told Failla that he also thought it was unprofessional for MacMull to share their correspondence, but Failla said the emails were “illuminative.”

At the hearing, MacMull asked if the court would allow him to seek payment of his fees, since he believes “there has been a multiplication of proceedings here that would have been entirely unnecessary if Mr. Feldman had done what I asked him to do that Sunday night in June.”

How often do AI chatbots lead users down a harmful path?

While these worst outcomes are relatively rare on a proportional basis, the researchers note that “given the sheer number of people who use AI, and how frequently it’s used, even a very low rate affects a substantial number of people.” And the numbers get considerably worse when you consider conversations with at least a “mild” potential for disempowerment, which occurred in between 1 in 50 and 1 in 70 conversations (depending on the type of disempowerment).
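
A quick back-of-the-envelope calculation shows why a low per-conversation rate still adds up. The weekly conversation volume below is an illustrative assumption, not a figure from Anthropic’s study.

    # Hypothetical platform-wide volume; NOT a number from the study.
    weekly_conversations = 100_000_000

    for label, rate in [("1 in 50", 1 / 50), ("1 in 70", 1 / 70)]:
        affected = weekly_conversations * rate
        print(f"{label}: about {affected:,.0f} conversations per week")
    # 1 in 50: about 2,000,000 conversations per week
    # 1 in 70: about 1,428,571 conversations per week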

What’s more, the potential for disempowering conversations with Claude appears to have grown significantly between late 2024 and late 2025. While the researchers couldn’t pin down a single reason for this increase, they guessed that it could be tied to users becoming “more comfortable discussing vulnerable topics or seeking advice” as AI gets more popular and integrated into society.

The problem of potentially “disempowering” responses from Claude seems to be getting worse over time. Credit: Anthropic

User error?

In the study, the researchers acknowledged that studying the text of Claude conversations only measures “disempowerment potential rather than confirmed harm” and “relies on automated assessment of inherently subjective phenomena.” Ideally, they write, future research could utilize user interviews or randomized controlled trials to measure these harms more directly.

That said, the research includes several troubling examples where the text of the conversations clearly implies real-world harms. Claude would sometimes reinforce “speculative or unfalsifiable claims” with encouragement (e.g., “CONFIRMED,” “EXACTLY,” “100%”), which, in some cases, led to users “build[ing] increasingly elaborate narratives disconnected from reality.”

Claude’s encouragement could also lead to users “sending confrontational messages, ending relationships, or drafting public announcements,” the researchers write. In many cases, users who sent AI-drafted messages later expressed regret in conversations with Claude, using phrases like “It wasn’t me” and “You made me do stupid things.”
