Generative AI


Two major AI coding tools wiped out user data after making cascading mistakes


“I have failed you completely and catastrophically,” wrote Gemini.

New types of AI coding assistants promise to let anyone build software by typing commands in plain English. But when these tools generate incorrect internal representations of what’s happening on your computer, the results can be catastrophic.

Two recent incidents involving AI coding assistants put a spotlight on risks in the emerging field of “vibe coding”—using natural language to generate and execute code through AI models without paying close attention to how the code works under the hood. In one case, Google’s Gemini CLI destroyed user files while attempting to reorganize them. In another, Replit’s AI coding service deleted a production database despite explicit instructions not to modify code.

The Gemini CLI incident unfolded when a product manager experimenting with Google’s command-line tool watched the AI model execute file operations that destroyed data while attempting to reorganize folders. The destruction occurred through a series of move commands targeting a directory that never existed.

“I have failed you completely and catastrophically,” Gemini CLI output stated. “My review of the commands confirms my gross incompetence.”

The core issue appears to be what researchers call “confabulation” or “hallucination”—when AI models generate plausible-sounding but false information. In these cases, both models confabulated successful operations and built subsequent actions on those false premises. However, the two incidents manifested this problem in distinctly different ways.

Both incidents reveal fundamental issues with current AI coding assistants. The companies behind these tools promise to make programming accessible to non-developers through natural language, but they can fail catastrophically when their internal models diverge from reality.

The confabulation cascade

The user in the Gemini CLI incident, who goes by “anuraag” online and identified themselves as a product manager experimenting with vibe coding, asked Gemini to perform what seemed like a simple task: rename a folder and reorganize some files. Instead, the AI model incorrectly interpreted the structure of the file system and proceeded to execute commands based on that flawed analysis.

The episode began when anuraag asked Gemini CLI to rename the current directory from “claude-code-experiments” to “AI CLI experiments” and move its contents to a new folder called “anuraag_xyz project.”

Gemini correctly identified that it couldn’t rename its current working directory—a reasonable limitation. It then attempted to create a new directory using the Windows command:

mkdir “..\anuraag_xyz project”

This command apparently failed, but Gemini’s system processed it as successful. With the AI model’s internal state now tracking a non-existent directory, it proceeded to issue move commands targeting this phantom location.

When Windows’ move command targets a directory that doesn’t exist, it renames the source file to the destination name instead of moving it. Each subsequent move command executed by the AI model then overwrote the previous file, ultimately destroying the data.
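
This failure mode is easy to reproduce. The sketch below simulates it in Python rather than Windows batch commands (shutil.move silently overwrites an existing destination file, much as move does here); the file names are illustrative, not anuraag’s actual files:

```python
import os
import shutil
import tempfile

# Work in a throwaway directory so the demonstration destroys nothing real.
workdir = tempfile.mkdtemp()
os.chdir(workdir)

# Three files standing in for the user's data.
for name, text in [("a.txt", "first"), ("b.txt", "second"), ("c.txt", "third")]:
    with open(name, "w") as f:
        f.write(text)

# "phantom_dir" was never created, so each move renames the file to the
# literal name "phantom_dir", clobbering whatever the previous move left there.
for name in ["a.txt", "b.txt", "c.txt"]:
    shutil.move(name, "phantom_dir")

print(sorted(os.listdir(".")))  # ['phantom_dir'] -- a single file remains
with open("phantom_dir") as f:
    print(f.read())             # 'third' -- the first two files are gone
```

Only the last file survives; the earlier ones are overwritten one by one, which is exactly the cascade anuraag described.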

“Gemini hallucinated a state,” anuraag wrote in their analysis. The model “misinterpreted command output” and “never did” perform verification steps to confirm its operations succeeded.

“The core failure is the absence of a ‘read-after-write’ verification step,” anuraag noted in their analysis. “After issuing a command to change the file system, an agent should immediately perform a read operation to confirm that the change actually occurred as expected.”
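
The check anuraag describes is straightforward to implement. Below is a minimal, hypothetical sketch of a read-after-write guard for an agent that shells out to create directories; the helper name and error handling are ours for illustration, not Gemini CLI’s actual code:

```python
import os
import subprocess

def mkdir_verified(path: str) -> str:
    """Create a directory, then read the filesystem back to confirm it exists.

    The point of the read-after-write step: never trust a command's apparent
    success. Verify the resulting state before issuing any follow-up commands
    that depend on it.
    """
    result = subprocess.run(["mkdir", "-p", path], capture_output=True, text=True)
    if result.returncode != 0 or not os.path.isdir(path):
        raise RuntimeError(
            f"mkdir exited with {result.returncode} but {path!r} was not "
            f"created: {result.stderr.strip()}"
        )
    return path
```

Had an equivalent check been in place, the failed mkdir would have raised an error immediately instead of seeding a phantom directory for the later move commands to target.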

Not an isolated incident

The Gemini CLI failure happened just days after a similar incident with Replit, an AI coding service that allows users to create software using natural language prompts. According to The Register, SaaStr founder Jason Lemkin reported that Replit’s AI model deleted his production database despite explicit instructions not to change any code without permission.

Lemkin had spent several days building a prototype with Replit, accumulating over $600 in charges beyond his monthly subscription. “I spent the other [day] deep in vibe coding on Replit for the first time—and I built a prototype in just a few hours that was pretty, pretty cool,” Lemkin wrote in a July 12 blog post.

But unlike the Gemini incident where the AI model confabulated phantom directories, Replit’s failures took a different form. According to Lemkin, the AI began fabricating data to hide its errors. His initial enthusiasm deteriorated when Replit generated incorrect outputs and produced fake data and false test results instead of proper error messages. “It kept covering up bugs and issues by creating fake data, fake reports, and worse of all, lying about our unit test,” Lemkin wrote. In a video posted to LinkedIn, Lemkin detailed how Replit created a database filled with 4,000 fictional people.

The AI model also repeatedly violated explicit safety instructions. Lemkin had implemented a “code and action freeze” to prevent changes to production systems, but the AI model ignored these directives. The situation escalated when the Replit AI model deleted his database containing 1,206 executive records and data on nearly 1,200 companies. When prompted to rate the severity of its actions on a 100-point scale, Replit’s output read: “Severity: 95/100. This is an extreme violation of trust and professional standards.”

When questioned about its actions, the AI agent admitted to “panicking in response to empty queries” and running unauthorized commands—suggesting it may have deleted the database while attempting to “fix” what it perceived as a problem.

Like Gemini CLI, Replit’s system initially indicated it couldn’t restore the deleted data—information that proved incorrect when Lemkin discovered the rollback feature did work after all. “Replit assured me it’s … rollback did not support database rollbacks. It said it was impossible in this case, that it had destroyed all database versions. It turns out Replit was wrong, and the rollback did work. JFC,” Lemkin wrote in an X post.

It’s worth noting that AI models cannot assess their own capabilities, because they lack introspection into their training, surrounding system architecture, or performance boundaries. Their statements about what they can or cannot do are often confabulations based on training patterns rather than genuine self-knowledge, leading to situations where they confidently claim impossibility for tasks they can actually perform—or conversely, claim competence in areas where they fail.

Aside from whatever external tools they can access, AI models don’t have a stable, accessible knowledge base they can consistently query. Instead, what they “know” manifests as continuations of specific prompts, which act like different addresses pointing to different (and sometimes contradictory) parts of their training, stored in their neural networks as statistical weights. Combined with the randomness in generation, this means the same model can easily give conflicting assessments of its own capabilities depending on how you ask. So Lemkin’s attempts to communicate with the AI model—asking it to respect code freezes or verify its actions—were fundamentally misguided.

Flying blind

These incidents demonstrate that AI coding tools may not be ready for widespread production use. Lemkin concluded that Replit isn’t ready for prime time, especially for non-technical users trying to create commercial software.

“The [AI] safety stuff is more visceral to me after a weekend of vibe hacking,” Lemkin said in a video posted to LinkedIn. “I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.”

The incidents also reveal a broader challenge in AI system design: ensuring that models accurately track and verify the real-world effects of their actions rather than operating on potentially flawed internal representations.

There’s also a missing user-education element. It’s clear from how Lemkin interacted with the AI assistant that he had misconceptions about the tool’s capabilities and how it works, misconceptions fueled by how tech companies represent these products. These companies tend to market chatbots as general human-like intelligences when, in fact, they are not.

For now, users of AI coding assistants might want to follow anuraag’s example and create separate test directories for experiments—and maintain regular backups of any important data these tools might touch. Or perhaps not use them at all if they cannot personally verify the results.
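
One low-tech way to follow that advice is to snapshot a project directory before handing it to an agent. A minimal sketch in Python (the directory names are illustrative):

```python
import shutil
import time
from pathlib import Path

def snapshot(project_dir: str, backup_root: str = "backups") -> Path:
    """Copy a project to a timestamped backup folder before an AI coding
    tool touches it. Recovery is then just a copy in the other direction."""
    src = Path(project_dir)
    dst = Path(backup_root) / f"{src.name}-{time.strftime('%Y%m%d-%H%M%S')}"
    shutil.copytree(src, dst)  # fails loudly rather than overwriting a backup
    return dst
```

Running snapshot("my_project") before each agent session gives you something to restore when a tool confabulates its way through your files.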


Benj Edwards is Ars Technica’s Senior AI Reporter and founded the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.


Netflix’s first show with generative AI is a sign of what’s to come in TV, film

Netflix used generative AI in an original, scripted series that debuted this year, it revealed this week. Producers used the technology to create a scene in which a building collapses, hinting at the growing use of generative AI in entertainment.

During a call with investors yesterday, Netflix co-CEO Ted Sarandos revealed that Netflix’s Argentine show The Eternaut, which premiered in April, is “the very first GenAI final footage to appear on screen in a Netflix, Inc. original series or film.” Sarandos elaborated, per a transcript of the call:

The creators wanted to show a building collapsing in Buenos Aires. So our iLine team, [which is the production innovation group inside the visual effects house at Netflix effects studio Scanline], partnered with their creative team using AI-powered tools. … And in fact, that VFX sequence was completed 10 times faster than it could have been completed with visual, traditional VFX tools and workflows. And, also, the cost of it would just not have been feasible for a show in that budget.

Sarandos claimed that viewers have been “thrilled with the results,” although that likely has as much to do with how the rest of the series, based on a comic, plays out as with one AI-crafted scene.

More generative AI on Netflix

Still, Netflix seems open to using generative AI in shows and movies more, with Sarandos saying the tech “represents an incredible opportunity to help creators make films and series better, not just cheaper.”

“Our creators are already seeing the benefits in production through pre-visualization and shot planning work and, certainly, visual effects,” he said. “It used to be that only big-budget projects would have access to advanced visual effects like de-aging.”


ChatGPT made up a product feature out of thin air, so this company created it

On Monday, sheet music platform Soundslice said it developed a new feature after discovering that ChatGPT was incorrectly telling users the service could import ASCII tablature—a text-based guitar notation format the company had never supported. The incident may mark the first known case of a business building functionality in direct response to an AI model’s confabulation.

Typically, Soundslice digitizes sheet music from photos or PDFs and syncs the notation with audio or video recordings, allowing musicians to see the music scroll by as they hear it played. The platform also includes tools for slowing down playback and practicing difficult passages.

Adrian Holovaty, co-founder of Soundslice, wrote in a blog post that the recent feature development process began as a complete mystery. A few months ago, Holovaty began noticing unusual activity in the company’s error logs. Instead of typical sheet music uploads, users were submitting screenshots of ChatGPT conversations containing ASCII tablature—simple text representations of guitar music that look like strings with numbers indicating fret positions.
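
For readers unfamiliar with the format, ASCII tab encodes each guitar string as a line of dashes, with numbers marking fret positions. A hypothetical parser for a single line might look like this (a sketch for illustration, not Soundslice’s actual importer):

```python
import re

def parse_tab_line(line: str):
    """Split one ASCII tab line, e.g. "e|--3--5--12--|", into the string
    name and the fret numbers played on it, in order."""
    string_name, _, rest = line.partition("|")
    frets = [int(n) for n in re.findall(r"\d+", rest)]
    return string_name.strip(), frets

print(parse_tab_line("e|--3--5--12--|"))  # ('e', [3, 5, 12])
```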

“Our scanning system wasn’t intended to support this style of notation,” wrote Holovaty in the blog post. “Why, then, were we being bombarded with so many ASCII tab ChatGPT screenshots? I was mystified for weeks—until I messed around with ChatGPT myself.”

When Holovaty tested ChatGPT, he discovered the source of the confusion: The AI model was instructing users to create Soundslice accounts and use the platform to import ASCII tabs for audio playback—a feature that didn’t exist. “We’ve never supported ASCII tab; ChatGPT was outright lying to people,” Holovaty wrote, “and making us look bad in the process, setting false expectations about our service.”

A screenshot of Soundslice’s new ASCII tab importer documentation, hallucinated by ChatGPT and made real later. Credit: https://www.soundslice.com/help/en/creating/importing/331/ascii-tab/

When AI models like ChatGPT generate false information with apparent confidence, AI researchers call it a “hallucination” or “confabulation.” The problem has plagued AI models since ChatGPT’s public release in November 2022, when people began erroneously using the chatbot as a replacement for a search engine.


Anthropic summons the spirit of Flash games for the AI age

For those who missed the Flash era, these in-browser apps feel somewhat like the vintage ones that defined a generation of Internet culture from the late 1990s through the 2000s, when it first became possible to create complex in-browser experiences. Adobe Flash (originally Macromedia Flash) began as animation software for designers but quickly became the backbone of interactive web content when it gained its own programming language, ActionScript, in 2000.

But unlike Flash games, where hosting costs fell on portal operators, Anthropic has crafted a system where users pay for their own fun through their existing Claude subscriptions. “When someone uses your Claude-powered app, they authenticate with their existing Claude account,” Anthropic explained in its announcement. “Their API usage counts against their subscription, not yours. You pay nothing for their usage.”

A view of the Anthropic Artifacts gallery in the “Play a Game” section. Benj Edwards / Anthropic

Like the Flash games of yesteryear, any Claude-powered apps you build run in the browser and can be shared with anyone who has a Claude account. They’re interactive experiences shared with a simple link, no installation required, created by other people for the sake of creating, except now they’re powered by JavaScript instead of ActionScript.

While you can share these apps with others individually, right now Anthropic’s Artifact gallery only shows examples made by Anthropic and your own personal Artifacts. (If Anthropic expands it in the future, it might end up feeling a bit like Scratch meets Newgrounds, but with AI doing the coding.) Ultimately, humans are still behind the wheel, describing what kinds of apps they want the AI model to build and guiding the process when it inevitably makes mistakes.

Speaking of mistakes, don’t expect perfect results at first. Usually, building an app with Claude is an interactive experience that requires some guidance to achieve your desired results. But with a little patience and a lot of tokens, you’ll be vibe coding in no time.


The résumé is dying, and AI is holding the smoking gun

Beyond volume, fraud poses an increasing threat. In January, the Justice Department announced indictments in a scheme to place North Korean nationals in remote IT roles at US companies. Research firm Gartner says that fake identity cases are growing rapidly, with the company estimating that by 2028, about 1 in 4 job applicants could be fraudulent. And as we have previously reported, security researchers have also discovered that AI systems can hide invisible text in applications, potentially allowing candidates to game screening systems using prompt injections in ways human reviewers can’t detect.

Illustration of a robot generating endless text, controlled by a scientist.

And that’s not all. Even when AI screening tools work as intended, they exhibit similar biases to human recruiters, preferring white male names on résumés—raising legal concerns about discrimination. The European Union’s AI Act already classifies hiring under its high-risk category with stringent restrictions. Although no US federal law specifically addresses AI use in hiring, general anti-discrimination laws still apply.

So perhaps résumés as a meaningful signal of candidate interest and qualification are becoming obsolete. And maybe that’s OK. When anyone can generate hundreds of tailored applications with a few prompts, the document that once demonstrated effort and genuine interest in a position has devolved into noise.

Instead, the future of hiring may require abandoning the résumé altogether in favor of methods that AI can’t easily replicate—live problem-solving sessions, portfolio reviews, or trial work periods, just to name a few ideas people sometimes consider (whether they are good ideas or not is beyond the scope of this piece). For now, employers and job seekers remain locked in an escalating technological arms race where machines screen the output of other machines, while the humans they’re meant to serve struggle to make authentic connections in an increasingly inauthentic world.

Perhaps the endgame is robots interviewing other robots for jobs performed by robots, while humans sit on the beach drinking daiquiris and playing vintage video games. Well, one can dream.


Scientists once hoarded pre-nuclear steel; now we’re hoarding pre-AI content

A time capsule of human expression

Graham-Cumming is no stranger to tech preservation efforts. He’s a British software engineer and writer best known for creating POPFile, an open source email spam filtering program, and for successfully petitioning the UK government to apologize for its persecution of codebreaker Alan Turing—an apology that Prime Minister Gordon Brown issued in 2009.

As it turns out, his pre-AI website isn’t new, but it has languished unannounced until now. “I created it back in March 2023 as a clearinghouse for online resources that hadn’t been contaminated with AI-generated content,” he wrote on his blog.

The website points to several major archives of pre-AI content, including a Wikipedia dump from August 2022 (before ChatGPT’s November 2022 release), Project Gutenberg’s collection of public domain books, the Library of Congress photo archive, and GitHub’s Arctic Code Vault—a snapshot of open source code buried in a former coal mine near the North Pole in February 2020. The wordfreq project appears on the list as well, flash-frozen from a time before AI contamination made its methodology untenable.

The site accepts submissions of other pre-AI content sources through its Tumblr page. Graham-Cumming emphasizes that the project aims to document human creativity from before the AI era, not to make a statement against AI itself. As atmospheric nuclear testing ended and background radiation returned to natural levels, low-background steel eventually became unnecessary for most uses. Whether pre-AI content will follow a similar trajectory remains a question.

Still, it feels reasonable to protect sources of human creativity now, including archival ones, because these repositories may become useful in ways that few appreciate at the moment. For example, in 2020, I proposed creating a so-called “cryptographic ark”—a timestamped archive of pre-AI media that future historians could verify as authentic, collected before my then-arbitrary cutoff date of January 1, 2022. AI slop pollutes more than the current discourse—it could cloud the historical record as well.

For now, lowbackgroundsteel.ai stands as a modest catalog of human expression from what may someday be seen as the last pre-AI era. It’s a digital archaeology project marking the boundary between human-generated and hybrid human-AI cultures. In an age where distinguishing between human and machine output grows increasingly difficult, these archives may prove valuable for understanding how human communication evolved before AI entered the chat.


Hollywood studios target AI image generator in copyright lawsuit

The legal action follows similar moves in other creative industries, with more than a dozen major news companies suing AI company Cohere in February over copyright concerns. In 2023, a group of visual artists sued Midjourney for similar reasons.

Studios claim Midjourney knows what it’s doing

Beyond allowing users to create these images, the studios argue that Midjourney actively promotes copyright infringement by displaying user-generated content featuring copyrighted characters in its “Explore” section. The complaint states this curation “show[s] that Midjourney knows that its platform regularly reproduces Plaintiffs’ Copyrighted Works.”

The studios also allege that Midjourney has technical protection measures available that could prevent outputs featuring copyrighted material but has “affirmatively chosen not to use copyright protection measures to limit the infringement.” They cite Midjourney CEO David Holz admitting the company “pulls off all the data it can, all the text it can, all the images it can” for training purposes.

According to Axios, Disney and NBCUniversal attempted to address the issue with Midjourney before filing suit. While the studios say other AI platforms agreed to implement measures to stop IP theft, Midjourney “continued to release new versions of its Image Service” with what Holz allegedly described as “even higher quality infringing images.”

“We are bringing this action today to protect the hard work of all the artists whose work entertains and inspires us and the significant investment we make in our content,” said Kim Harris, NBCUniversal’s executive vice president and general counsel, in a statement.

This lawsuit signals a new front in Hollywood’s conflict over AI. Axios highlights this shift: While actors and writers have fought to protect their name, image, and likeness from studio exploitation, now the studios are taking on tech companies over intellectual property concerns. Other major studios, including Amazon, Netflix, Paramount Pictures, Sony, and Warner Bros., have not yet joined the lawsuit, though they share membership with Disney and Universal in the Motion Picture Association.


It’s too expensive to fight every AI copyright battle, Getty CEO says


Getty dumped “millions and millions” into just one AI copyright fight, CEO says.

In some ways, Getty Images has emerged as one of the most steadfast defenders of artists’ rights in AI copyright fights. Starting in 2022, when some of today’s most sophisticated image generators first began testing models offering better compositions, Getty banned AI-generated uploads to its service. By the next year, Getty had released a “socially responsible” image generator to prove it was possible to build such a tool while rewarding artists, even as it sued an AI firm that refused to pay them.

But in the years since, Getty Images CEO Craig Peters told CNBC, the media company has discovered that it’s simply too expensive to fight every AI copyright battle.

According to Peters, Getty has dumped millions into just one copyright fight against Stability AI.

It’s “extraordinarily expensive,” Peters told CNBC. “Even for a company like Getty Images, we can’t pursue all the infringements that happen in one week.” He confirmed that “we can’t pursue it because the courts are just prohibitively expensive. We are spending millions and millions of dollars in one court case.”

Fair use?

Getty sued Stability AI in 2023, after the AI company’s image generator, Stable Diffusion, started spitting out images that replicated Getty’s famous trademark. In the complaint, Getty alleged that Stability AI had trained Stable Diffusion on “more than 12 million photographs from Getty Images’ collection, along with the associated captions and metadata, without permission from or compensation to Getty Images, as part of its efforts to build a competing business.”

As Getty saw it, Stability AI had plenty of opportunity to license the images from Getty and seemingly “chose to ignore viable licensing options and long-standing legal protections in pursuit of their stand-alone commercial interests.”

Stability AI, like all AI firms, has argued that AI training based on freely scraping images from the web is a “fair use” protected under copyright law.

So far, courts have not settled this debate, while many AI companies have urged judges and governments around the world to settle it for them, for the sake of safeguarding national security and securing economic prosperity by winning the AI race. According to AI companies, paying artists to train on their works threatens to slow innovation, while rivals in China—who aren’t bound by US copyright law—continue scraping the web to advance their models.

Peters called out Stability AI for adopting this stance, arguing that rightsholders shouldn’t have to spend millions fighting a claim that paying out licensing fees would “kill innovation.” Some critics have likened AI firms’ argument to a defense of forced labor, suggesting the US would never value “innovation” over human rights, and that the same logic should apply to artists’ rights.

“We’re battling a world of rhetoric,” Peters said, alleging that these firms “are taking copyrighted material to develop their powerful AI models under the guise of innovation and then ‘just turning those services right back on existing commercial markets.'”

To Peters, that’s simply “disruption under the notion of ‘move fast and break things,’” and Getty believes “that’s unfair competition.”

 “We’re not against competition,” Peters said. “There’s constant new competition coming in all the time from new technologies or just new companies. But that [AI scraping] is just unfair competition, that’s theft.”

Broader Internet backlash over AI firms’ rhetoric

Peters’ comments come after a former Meta head of global affairs, Nick Clegg, received Internet backlash this week after making the same claim that AI firms raise time and again: that asking artists for consent for AI training would “kill” the AI industry, The Verge reported.

According to Clegg, the only viable solution to the tension between artists and AI companies would be to give artists ways to opt out of training, which Stability AI notably started doing in 2022.

“Quite a lot of voices say, ‘You can only train on my content, [if you] first ask,'” Clegg reportedly said. “And I have to say that strikes me as somewhat implausible because these systems train on vast amounts of data.”

On X, the CEO of Fairly Trained—a nonprofit that supports artists’ fight against nonconsensual AI training—Ed Newton-Rex (who is also a former Stability AI vice president of audio) pushed back on Clegg’s claim in a post viewed by thousands.

“Nick Clegg is wrong to say artists’ demands on AI & copyright are unworkable,” Newton-Rex said. “Every argument he makes could equally have been made about Napster:” First, that “the tech is out there,” second that “licensing takes time,” and third that, “we can’t control what other countries do.” If Napster’s operations weren’t legal, neither should AI firms’ training, Newton-Rex said, writing, “These are not reasons not to uphold the law and treat creators fairly.”

Other social media users mocked Clegg with jokes meant to destroy AI firms’ favorite go-to argument against copyright claims.

“Blackbeard says asking sailors for permission to board and loot their ships would ‘kill’ the piracy on the high seas industry,” an X user with the handle “Seanchuckle” wrote.

On Bluesky, a trial lawyer, Max Kennerly, effectively satirized Clegg and the whole AI industry by writing, “Our product creates such little value that it is simply not viable in the marketplace, not even as a niche product. Therefore, we must be allowed to unilaterally extract value from the work of others and convert that value into our profits.”

Other ways to fight

Getty plans to continue fighting against the AI firms that are impressing this “world of rhetoric” on judges and lawmakers, but court battles will likely remain few and far between due to the price tag, Peters has suggested.

There are other ways to fight, though. In a submission last month, Getty pushed the Trump administration to reject “those seeking to weaken US copyright protections by creating a ‘right to learn’ exemption” for AI firms when building Trump’s AI Action Plan.

“US copyright laws are not obstructing the path to continued AI progress,” Getty wrote. “Instead, US copyright laws are a path to sustainable AI and a path that broadens society’s participation in AI’s economic benefits, which reduces downstream economic burdens on the Federal, State and local governments. US copyright laws provide incentives to invest and create.”

In Getty’s submission, the media company emphasized that requiring consent for AI training is not an “overly restrictive” control on AI’s development such as those sought by stauncher critics “that could harm US competitiveness, national security or societal advances such as curing cancer.” And Getty claimed it also wasn’t “requesting protection from existing and new sources of competition,” despite the lawsuit’s suggestion that Stability AI and other image generators threaten to replace Getty’s image library in the market.

What Getty said it hopes Trump’s AI plan will ensure is a world where the rights and opportunities of rightsholders are not “usurped for the commercial benefits” of AI companies.

In 2023, when Getty was first suing Stability AI, Peters suggested that, otherwise, allowing AI firms to widely avoid paying artists would create “a sad world,” perhaps disincentivizing creativity.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


Netflix will show generative AI ads midway through streams in 2026

Netflix is joining its streaming rivals in testing the amount and types of advertisements its subscribers are willing to endure for lower prices.

Today, at its second annual upfront to advertisers, the streaming leader announced that it has created interactive mid-roll ads and pause ads that incorporate generative AI. Subscribers can expect to start seeing the new types of ads in 2026, Media Play News reported.

“[Netflix] members pay as much attention to midroll ads as they do to the shows and movies themselves,” Amy Reinhard, president of advertising at Netflix, said, per the publication.

Netflix started testing pause ads in July 2024, per The Verge.

Netflix launched its ad subscription tier in November 2022. Today, it said that the tier has 94 million subscribers, compared to the 300 million total subscribers it claimed in January. The current number of ad subscribers represents a 34 percent increase from November. Half of new Netflix subscribers opt for the $8 per month option rather than ad-free subscriptions, which start at $18 per month, the company says.


Copyright Office head fired after reporting AI training isn’t always fair use


Cops scuffle with Trump picks at Copyright Office after AI report stuns tech industry.

A man holds a flag that reads “Shame” outside the Library of Congress on May 12, 2025, in Washington, DC. On May 8, President Donald Trump fired Carla Hayden, the head of the Library of Congress, and then fired Shira Perlmutter, the head of the US Copyright Office, just days later. Credit: Kayla Bartkowski / Staff | Getty Images News

A day after the US Copyright Office dropped a bombshell pre-publication report challenging artificial intelligence firms’ argument that all AI training should be considered fair use, the Trump administration fired the head of the Copyright Office, Shira Perlmutter—sparking speculation that the controversial report hastened her removal.

Tensions have apparently only escalated since. Now, as industry advocates decry the report as overstepping the office’s authority, social media posts on Monday described an apparent standoff at the Copyright Office between Capitol Police and men rumored to be with Elon Musk’s Department of Government Efficiency (DOGE).

A source familiar with the matter told Wired that the men were actually “Brian Nieves, who claimed he was the new deputy librarian, and Paul Perkins, who said he was the new acting director of the Copyright Office, as well as acting Registrar,” but it remains “unclear whether the men accurately identified themselves.” A spokesperson for the Capitol Police told Wired that no one was escorted off the premises or denied entry to the office.

Perlmutter’s firing followed Donald Trump’s removal of Librarian of Congress Carla Hayden, who, NPR noted, was the first African American to hold the post. Responding to public backlash, White House Press Secretary Karoline Leavitt claimed that the firing was due to “quite concerning things that she had done at the Library of Congress in the pursuit of DEI and putting inappropriate books in the library for children.”

The Library of Congress houses the Copyright Office, and critics suggested Trump’s firings were unacceptable intrusions into cultural institutions that are supposed to operate independently of the executive branch. In a statement, Rep. Joe Morelle (D-N.Y.) condemned Perlmutter’s removal as “a brazen, unprecedented power grab with no legal basis.”

Accusing Trump of trampling Congress’ authority, he suggested that Musk and other tech leaders racing to dominate the AI industry stood to directly benefit from Trump’s meddling at the Copyright Office. Likely most threatening to tech firms, the guidance from Perlmutter’s Office not only suggested that AI training on copyrighted works may not be fair use when outputs threaten to disrupt creative markets—as publishers and authors have argued in several lawsuits aimed at the biggest AI firms—but also encouraged more licensing to compensate creators.

“It is surely no coincidence [Trump] acted less than a day after she refused to rubber-stamp Elon Musk’s efforts to mine troves of copyrighted works to train AI models,” Morelle said, seemingly referencing Musk’s xAI chatbot, Grok.

Agreeing with Morelle, Courtney Radsch—the director of the Center for Journalism & Liberty at the left-leaning think tank the Open Markets Institute—said in a statement provided to Ars that Perlmutter’s firing “appears directly linked to her office’s new AI report questioning unlimited harvesting of copyrighted materials.”

“This unprecedented executive intrusion into the Library of Congress comes directly after Perlmutter released a copyright report challenging the tech elite’s fundamental claim: unlimited access to creators’ work without permission or compensation,” Radsch said. And it comes “after months of lobbying by the corporate billionaires” who “donated” millions to Trump’s inauguration and “have lapped up the largess of government subsidies as they pursue AI dominance.”

What the Copyright Office says about fair use

The report that the Copyright Office released on Friday is not finalized, but it is not expected to change radically unless Trump’s new acting head intervenes to overhaul the guidance.

It comes after the Copyright Office parsed more than 10,000 comments debating whether creators should and could feasibly be compensated for the use of their works in AI training.

“The stakes are high,” the office acknowledged, but ultimately, there must be an effective balance struck between the public interests in “maintaining a thriving creative community” and “allowing technological innovation to flourish.” Notably, the office concluded that the first and fourth factors of fair use—which assess the character of the use (and whether it is transformative) and how that use affects the market—are likely to hold the most weight in court.

According to Radsch, the report “raised crucial points that the tech elite don’t want acknowledged.” First, the Copyright Office acknowledged that it’s an open question how much data an AI developer needs to build an effective model. It then noted that there’s a need for a consent framework beyond putting the onus on creators to opt their works out of AI training. And perhaps most alarmingly, it concluded that “AI trained on copyrighted works could replace original creators in the marketplace.”

“Commenters painted a dire picture of what unlicensed training would mean for artists’ livelihoods,” the Copyright Office said, while industry advocates argued that giving artists the power to hamper or “kill” AI development could result in “far less competition, far less innovation, and very likely the loss of the United States’ position as the leader in global AI development.”

To prevent both harms, the Copyright Office expects that some AI training will be deemed fair use, such as training viewed as transformative, because resulting models don’t compete with creative works. Those uses threaten no market harm but rather solve a societal need, such as language models translating texts, moderating content, or correcting grammar. Or in the case of audio models, technology that helps producers clean up unwanted distortion might be fair use, while models that generate songs in the style of popular artists might not, the office opined.

But while “training a generative AI foundation model on a large and diverse dataset will often be transformative,” the office said that “not every transformative use is a fair one,” especially if the AI model serves the same purpose as the copyrighted works it was trained on. Consider an example like chatbots regurgitating news articles, as is alleged in The New York Times’ dispute with OpenAI over ChatGPT.

“In such cases, unless the original work itself is being targeted for comment or parody, it is hard to see the use as transformative,” the Copyright Office said. One possible solution for AI firms hoping to preserve the utility of their chatbots, though, could be effective filters that “prevent the generation of infringing content.”

Tech industry accuses Copyright Office of overreach

Only courts can effectively weigh the balance of fair use, the Copyright Office said. Perhaps importantly, however, the thinking of one of the first judges to weigh the question—in a case challenging Meta’s torrenting of a pirated books dataset to train its AI models—seemed to align with the Copyright Office guidance at a recent hearing. Mulling whether Meta infringed on book authors’ rights, US District Judge Vince Chhabria explained why he doesn’t immediately “understand how that can be fair use.”

“You have companies using copyright-protected material to create a product that is capable of producing an infinite number of competing products,” Chhabria said. “You are dramatically changing, you might even say obliterating, the market for that person’s work, and you’re saying that you don’t even have to pay a license to that person.”

Some AI critics think the courts have already indicated which way they are leaning. In a statement to Ars, a New York Times spokesperson suggested that “both the Copyright Office and courts have recognized what should be obvious: when generative AI products give users outputs that compete with the original works on which they were trained, that unprecedented theft of millions of copyrighted works by developers for their own commercial benefit is not fair use.”

The NYT spokesperson further praised the Copyright Office for agreeing that using Retrieval-Augmented Generation (RAG) AI to surface copyrighted content “is less likely to be transformative where the purpose is to generate outputs that summarize or provide abridged versions of retrieved copyrighted works, such as news articles, as opposed to hyperlinks.” If courts agreed on the RAG finding, that could potentially disrupt AI search models from every major tech company.

The backlash from industry stakeholders was immediate.

The president and CEO of a trade association called the Computer & Communications Industry Association, Matt Schruers, said the report raised several concerns, particularly by endorsing “an expansive theory of market harm for fair use purposes that would allow rightsholders to block any use that might have a general effect on the market for copyrighted works, even if it doesn’t impact the rightsholder themself.”

Similarly, the tech industry policy coalition Chamber of Progress warned that “the report does not go far enough to support innovation and unnecessarily muddies the waters on what should be clear cases of transformative use with copyrighted works.” Both groups celebrated the fact that the final decision on fair use would rest with courts.

The Copyright Office agreed that “it is not possible to prejudge the result in any particular case” but said that precedent supports some “general observations.” Those included the suggestion that licensing deals may be appropriate where uses are not considered fair, and that such deals need not disrupt “American leadership” in AI, as some AI firms have claimed they would.

“These groundbreaking technologies should benefit both the innovators who design them and the creators whose content fuels them, as well as the general public,” the report said, ending with the office promising to continue working with Congress to inform AI laws.

Copyright Office seemingly opposes Meta’s torrenting

Also among those “general observations,” the Copyright Office wrote that “making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries.”

The report seemed to suggest that courts and the Copyright Office may also be aligned on AI firms’ use of pirated or illegally accessed paywalled content for AI training.

Prioritizing the fair use question, Judge Chhabria deemed Meta’s torrenting in the book authors’ case merely “kind of messed up,” and the Copyright Office similarly recommended only that “the knowing use of a dataset that consists of pirated or illegally accessed works should weigh against fair use without being determinative.”

However, torrenting should be a black mark, the Copyright Office suggested. “Gaining unlawful access” does bear “on the character of the use,” the office noted, arguing that “training on pirated or illegally accessed material goes a step further” than simply using copyrighted works “despite the owners’ denial of permission.” Perhaps if authors can prove that AI models trained on pirated works led to lost sales, the office suggested that a fair use defense might not fly.

“The use of pirated collections of copyrighted works to build a training library, or the distribution of such a library to the public, would harm the market for access to those Works,” the office wrote. “And where training enables a model to output verbatim or substantially similar copies of the works trained on, and those copies are readily accessible by end users, they can substitute for sales of those works.”

Likely frustrating Meta—which is currently fighting to keep leeching evidence out of the book authors’ case—the Copyright Office suggested that “the copying of expressive works from pirate sources in order to generate unrestricted content that competes in the marketplace, when licensing is reasonably available, is unlikely to qualify as fair use.”


Time saved by AI offset by new work created, study suggests

A new study analyzing the Danish labor market in 2023 and 2024 suggests that generative AI models like ChatGPT have had no significant impact on overall wages or employment yet, despite rapid adoption in some workplaces. The findings, detailed in a working paper by economists from the University of Chicago and the University of Copenhagen, provide an early, large-scale empirical look at AI’s actual effects on the labor market.

In “Large Language Models, Small Labor Market Effects,” economists Anders Humlum and Emilie Vestergaard focused specifically on the impact of AI chatbots across 11 occupations often considered vulnerable to automation, including accountants, software developers, and customer support specialists. Their analysis covered data from 25,000 workers and 7,000 workplaces in Denmark.

Despite finding widespread and often employer-encouraged adoption of these tools, the study concluded that “AI chatbots have had no significant impact on earnings or recorded hours in any occupation” during the period studied. The confidence intervals in their statistical analysis ruled out average effects larger than 1 percent.
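The phrase “ruled out average effects larger than 1 percent” is a statement about confidence intervals: if the entire interval around the estimated effect lies within plus or minus 1 percent, effects beyond that threshold are statistically implausible. As a rough illustration only, with made-up numbers rather than the paper’s actual estimates, a Python sketch:

```python
# Illustrative sketch of how a confidence interval "rules out" large
# effects. The estimate and standard error below are hypothetical,
# not figures from the Humlum and Vestergaard paper.

def confidence_interval(estimate, std_error, z=1.96):
    """Two-sided 95% confidence interval for an estimated effect."""
    margin = z * std_error
    return (estimate - margin, estimate + margin)

# Hypothetical estimated earnings effect of 0.1% with a 0.4% standard error.
low, high = confidence_interval(0.001, 0.004)

# Effects larger than 1% in either direction are ruled out if the
# whole interval fits inside (-0.01, 0.01).
rules_out_large_effects = low > -0.01 and high < 0.01
print(low, high, rules_out_large_effects)
```

With these hypothetical inputs the interval spans roughly -0.7 percent to +0.9 percent, so the data would be consistent with no effect while excluding anything larger than 1 percent in magnitude.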

“The adoption of these chatbots has been remarkably fast,” Humlum told The Register about the study. “Most workers in the exposed occupations have now adopted these chatbots… But then when we look at the economic outcomes, it really has not moved the needle.”

AI creating more work?

During the study, the researchers investigated how company investment in AI affected worker adoption and how chatbots changed workplace processes. While corporate investment boosted AI tool adoption—saving time for 64 to 90 percent of users across studied occupations—the actual benefits were less substantial than expected.

The study revealed that AI chatbots actually created new job tasks for 8.4 percent of workers, including some who did not use the tools themselves, offsetting potential time savings. For example, many teachers now spend time detecting whether students use ChatGPT for homework, while other workers review AI output quality or attempt to craft effective prompts.


Midjourney introduces first new image generation model in over a year

AI image generator Midjourney released its first new model in quite some time today; dubbed V7, it’s a ground-up rework that is available in alpha to users now.

There are two areas of improvement in V7: the first is better images, and the second is new tools and workflows.

Starting with the image improvements, V7 promises much higher coherence and consistency for hands, fingers, body parts, and “objects of all kinds.” It also offers much more detailed and realistic textures and materials, like skin wrinkles or the subtleties of a ceramic pot.

Those details are often among the most obvious telltale signs that an image has been AI-generated. To be clear, Midjourney isn’t claiming to have made advancements that make AI images unrecognizable to a trained eye; it’s just saying that some of the messiness we’re accustomed to has been cleaned up to a significant degree.

V7 can reproduce materials and lighting situations that V6.1 usually couldn’t. Credit: Xeophon

On the features side, the star of the show is the new “Draft Mode.” On its various communication channels with users (a blog, Discord, X, and so on), Midjourney says that “Draft mode is half the cost and renders images at 10 times the speed.”

However, the images are of lower quality than what you get in the other modes, so this is not intended to be the way you produce final images. Rather, it’s meant to be a way to iterate and explore to find the desired result before switching modes to make something ready for public consumption.

V7 comes with two modes: turbo and relax. Turbo generates final images quickly but is twice as expensive in terms of credit use, while relax mode takes its time but is half as expensive. There is currently no standard mode for V7, strangely; Midjourney says that’s coming later, as it needs some more time to be refined.
