AI

meta-could-end-up-owning-10%-of-amd-in-new-chip-deal

Meta could end up owning 10% of AMD in new chip deal

Su said the warrant structure would help “make sure that we are always a clear seat at the table when [Meta] are thinking about what they need next.”

Meta’s chief executive Mark Zuckerberg said he expected AMD to be “an important partner for many years to come.”

Meta has said that it will almost double its AI infrastructure spending this year to as much as $135 billion, as US tech giants rush to build the data centers to train and run AI software. It is already one of AMD’s biggest AI chip customers.

“We don’t believe that a single silicon solution will work for all of our workloads,” said Santosh Janardhan, Meta’s head of infrastructure. “There’s a place for Nvidia, there’s a place for AMD and… there’s a place for our own custom silicon as well. We need all three.”

Under the deal, AMD will build a custom version of its MI450 AI chips for Meta. They will be used primarily for “inference” workloads, the process of running models after they have been trained.

The chips need 6 gigawatts of power—equivalent to the amount required by 5 million US households for a year.

Increasingly creative funding arrangements to support massive AI infrastructure build-outs have emerged in recent years, leading to warnings about circular financing.

AMD has, for example, helped data center builder Crusoe secure a $300 million loan from Goldman Sachs by offering a backstop guaranteeing the use of its chips if Crusoe is unable to find customers after installing them in an Ohio facility.

Tech giants such as Meta, historically flush with cash, are meanwhile facing the prospect of tapping bond and equity markets or stemming capital returns to shareholders to help fund their unprecedented infrastructure plans. The Facebook and Instagram parent raised $30 billion in October, marking its biggest bond sale to date.

© 2026 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Meta could end up owning 10% of AMD in new chip deal Read More »

data-center-builders-thought-farmers-would-willingly-sell-land,-learn-otherwise

Data center builders thought farmers would willingly sell land, learn otherwise

Notably, one resident in Huddleston’s county who received an offer, 75-year-old Timothy Grosser, even declined a proposal to “name your price” when a tech company sought to buy his 250-acre farm, The Guardian reported.

“There is none,” Grosser said.

The farm is where he “lives, hunts, and raises cattle” and where his grandson hunts a turkey every Christmas for the family feast.

“The money’s not worth giving up your lifestyle,” Grosser said.

Another farmer in Wisconsin, Anthony Barta, reportedly fretted about what would happen to his neighbors if he took a deal he was offered—showing the deep bonds of people whose farms have bordered each other for years. In his community, another farmer was offered between $70 million and $80 million for 6,000 acres.

“Me and my family, we own the farm and run close to 1,000 animals,” Barta said. “What would that do if that’s next to it? Can they even be there? You know, that’s our livelihood—the farm. We’re just concerned what, if it would go through, what would happen to us and our neighbors and farms and our community? What would happen to that?”

Some tech companies are apparently not taking “no” for an answer. At least one farmer who spent 51 years milking cows in Pennsylvania prior to the AI boom described tech companies as “relentless.”

Eighty-six-year-old Mervin Raudabaugh, Jr., found a creative solution to end the pressure to sell two contiguous farms. He reportedly staved off developers by turning to “a farmland preservation program dedicating taxpayer dollars toward protecting agricultural resources.”

By working with the program, Raudabaugh will only receive about one-eighth of what the developers were offering. But he said it’s worth it to know his land would be preserved for farming purposes and out of reach of persistent tech companies.

“These people have hounded the living daylights out of me,” Raudabaugh said.

Data center deals come amid fragile farm economy

For people in rural communities, data center fights go beyond concerns about water and electricity consumption—although those are concerns, too. Communities are defending the character of the land, which they don’t want to see suddenly disrupted by extensive construction, data center noise pollution, or untold environmental impacts from massive operations.

Data center builders thought farmers would willingly sell land, learn otherwise Read More »

new-microsoft-gaming-chief-has-“no-tolerance-for-bad-ai”

New Microsoft gaming chief has “no tolerance for bad AI”

A gaming education

Unlike Spencer, who spent years at Microsoft Game Studios before heading Microsoft’s gaming division, Sharma has no professional experience in the video game industry. And her personal experience with Xbox also seems somewhat limited; after sharing her Gamertag on social media over the weekend, curious gamers found that her Xbox play history dates back roughly one month. That’s also in stark contrast to Spencer, who has amassed a score of over 121,000 across decades of play.

In her interview with Variety, Sharma cited 2016’s Firewatch as an example of the kinds of games with “deep emotional resonance” and “a distinct point of view” that she’s looking for from Microsoft. And on social media, Sharma shared her list of the three greatest games ever: “Halo, Valheim, Goldeneye,” for what it’s worth. Sharma also seems to be taking recommendations for games to catch up on; after saying on social media that she would try Borderlands 2, the game appeared in her recently played games over the weekend.

A look at some of Sharma’s recently played Xbox games, as of this writing.

A look at some of Sharma’s recently played Xbox games, as of this writing. Credit: Xbox.com

Being a personal fan of video games isn’t necessarily required to succeed in running a gaming company. Nintendo President Hiroshi Yamauchi famously didn’t care for video games even as he launched the Famicom and Nintendo Entertainment System to worldwide success in the 1980s. Still, the lack of direct experience with the gaming world marks a sharp change after Spencer’s long tenure at a time when Microsoft is struggling to redefine the Xbox brand amid cratering hardware sales, a pivot away from software exclusives, and a move to extend the Xbox brand to many different devices.

Xbox President and COO Sarah Bond, who by all accounts was being set up to succeed Spencer, also announced her departure from Microsoft on Friday, ending a nearly nine-year stint as a public face for the company’s gaming efforts. The Verge reports that Bond caused a lot of friction within the Xbox team when she championed the “Xbox Everywhere” strategy and “This is an Xbox” marketing campaign, which focused on streaming Xbox games to hardware like mobile phones and tablets, according to anonymous sources. Shortly before the launch of that campaign in 2024, Microsoft lost marketing executives Jerrett West and Kareem Choudry, leading to significant internal reorganization.

Longtime Xbox Game Studios executive Matt Booty, whose history in the game industry dates back to working for Williams Electronics in the ’90s, has been promoted to executive vice president and chief content officer for Xbox and “will continue working closely with [Sharma] to ensure a smooth transition,” Microsoft said in its announcement Friday.

New Microsoft gaming chief has “no tolerance for bad AI” Read More »

ais-can-generate-near-verbatim-copies-of-novels-from-training-data

AIs can generate near-verbatim copies of novels from training data

A US court last year found that Anthropic’s training of LLMs on some copyrighted content could be considered fair use as it was deemed “transformative.”

But it determined that storing pirated works was “inherently, irredeemably infringing,” which then led the AI group to pay $1.5 billion to settle the lawsuit.

In Germany, a ruling from November last year found that OpenAI had infringed on copyright because its model had memorized song lyrics. The case, brought by GEMA, an association representing composers, lyricists, and publishers, was considered a landmark ruling in the EU.

Rudy Telscher, a partner at law firm Husch Blackwell, said reproducing an entire book without jailbreaking is “clearly a copyright violation.” But “it’s a matter of whether this is happening enough that [AI models] could be vicariously liable for the infringement,” he added.

Anthropic said the jailbreaking technique used in the Stanford and Yale research was impractical for normal users and would require more effort to extract the text than just purchasing the content.

The company also added that its model does not store copies of specific datasets but learns from patterns and relationships between words and strings in its training data.

xAI, OpenAI, and Google did not respond to requests for comment.

The fact that AI labs have put safeguards in place to prevent training data from being extracted means they are aware of the problem, said Imperial’s de Montjoye.

Ben Zhao, a computer science professor at the University of Chicago, questioned whether AI labs really needed to use copyrighted content in training data to create cutting-edge models in the first place.

“Whether the technical result can be done or not, it’s still a question of should we be doing this?” Zhao said. “The legal side should eventually hold their ground and really be the arbiter in this whole process.”

© 2026 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

AIs can generate near-verbatim copies of novels from training data Read More »

an-ai-coding-bot-took-down-amazon-web-services

An AI coding bot took down Amazon Web Services

“In both instances, this was user error, not AI error,” Amazon said, adding that it had not seen evidence that mistakes were more common with AI tools.

The company said the incident in December was an “extremely limited event” affecting only a single service in parts of mainland China. Amazon added that the second incident did not have an impact on a “customer facing AWS service.”

Neither disruption was anywhere near as severe as a 15-hour AWS outage in October 2025 that forced multiple customers’ apps and websites offline—including OpenAI’s ChatGPT.

Employees said the group’s AI tools were treated as an extension of an operator and given the same permissions. In these two cases, the engineers involved did not require a second person’s approval before making changes, as would normally be the case.

Amazon said that by default its Kiro tool “requests authorisation before taking any action” but said the engineer involved in the December incident had “broader permissions than expected—a user access control issue, not an AI autonomy issue.”

AWS launched Kiro in July. It said the coding assistant would advance beyond “vibe coding”—which allows users to quickly build applications—to instead write code based on a set of specifications.

The group had earlier relied on its Amazon Q Developer product, an AI-enabled chatbot, to help engineers write code. This was involved in the earlier outage, three of the employees said.

Some Amazon employees said they were still skeptical of AI tools’ utility for the bulk of their work given the risk of error. They added that the company had set a target for 80 percent of developers to use AI for coding tasks at least once a week and was closely tracking adoption.

Amazon said it was experiencing strong customer growth for Kiro and that it wanted customers and employees to benefit from efficiency gains.

“Following the December incident, AWS implemented numerous safeguards,” including mandatory peer review and staff training, Amazon added.

© 2026 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

An AI coding bot took down Amazon Web Services Read More »

microsoft-deletes-blog-telling-users-to-train-ai-on-pirated-harry-potter-books

Microsoft deletes blog telling users to train AI on pirated Harry Potter books


Wizarding world of AI slop

The now-deleted Harry Potter dataset was “mistakenly” marked public domain.

Following backlash in a Hacker News thread, Microsoft deleted a blog post that critics said encouraged developers to pirate Harry Potter books to train AI models that could then be used to create AI slop.

The blog, which is archived here, was written in November 2024 by a senior product manager, Pooja Kamath. According to her LinkedIn, Kamath has been at Microsoft for more than a decade and remains with the company. In 2024, Microsoft tapped her to promote a new feature that the blog said made it easier to “add generative AI features to your own applications with just a few lines of code using Azure SQL DB, LangChain, and LLMs.”

What better way to show “engaging and relatable examples” of Microsoft’s new feature that would “resonate with a wide audience” than to “use a well-known dataset” like Harry Potter books, the blog said.

The books are “one of the most famous and cherished series in literary history,” the blog noted, and fans could use the LLMs they trained in two fun ways: building Q&A systems providing “context-rich answers” and generating “new AI-driven Harry Potter fan fiction” that’s “sure to delight Potterheads.”

To help Microsoft customers achieve this vision, the blog linked to a Kaggle dataset that included all seven Harry Potter books, which, Ars verified, has been available online for years and incorrectly marked as “public domain.” Kaggle’s terms say that rights holders can send notices of infringing content, and repeat offenders risk suspensions, but Hacker News commenters speculated that the Harry Potter dataset flew under the radar, with only 10,000 downloads over time, not catching the attention of J.K. Rowling, who famously keeps a strong grip on the Harry Potter copyrights. The dataset was promptly deleted on Thursday after Ars reached out to the uploader, Shubham Maindola, a data scientist in India with no apparent links to Microsoft.

Maindola told Ars that “the dataset was marked as Public Domain by mistake. There was no intention to misrepresent the licensing status of the works.”

It’s unclear whether Kamath was directed to link to the Harry Potter books dataset in the blog or if it was an individual choice. Cathay Y. N. Smith, a law professor and co-director of Chicago-Kent College of Law’s Program in Intellectual Property Law, told Ars that Kamath may not have realized the books were too recent to be in the public domain.

“Someone might be really knowledgeable about books and technology, but not necessarily about copyright terms and how long they last,” Smith said. “Especially if she saw that something was marked by another reputable company as being public domain.”

Microsoft declined Ars’ request to comment. Kaggle did not respond to Ars’ request to comment.

Microsoft was “probably smart” to pull the blog

On Hacker News, commenters suggested that it’s unlikely anyone familiar with the popular franchise would believe the Harry Potter books were in the public domain. They debated whether Microsoft’s blog was “problematic copyright-wise,” since Microsoft not only encouraged customers to download the infringing materials but also used the books themselves to create Harry Potter AI models that relied on beloved characters to hype Microsoft products.

Microsoft’s blog was posted more than a year ago, at a time when AI firms began facing lawsuits over AI models, which had allegedly infringed copyrights by training on pirated materials and regurgitating works verbatim.

The blog recommended that users learn to train their own AI models by downloading the Harry Potter dataset and then uploading text files to Azure Blob Storage. It included example models based on a dataset that Microsoft seemingly uploaded to Azure Blob Storage, which only included the first book, Harry Potter and the Sorcerer’s Stone.

Training large language models (LLMs) on text files, Harry Potter fans could create Q&A systems capable of pulling up relevant excerpts of books. An example query offered was “Wizarding World snacks,” which retrieved an excerpt from The Sorcerer’s Stone where Harry marvels at strange treats like Bertie Bott’s Every Flavor Beans and chocolate frogs. Another prompt asking “How did Harry feel when he first learnt that he was a Wizard?” generated an output pointing to various early excerpts in the book.

But perhaps an even more exciting use case, Kamath suggested, was generating fan fiction to “explore new adventures” and “even create alternate endings.” That model could quickly comb the dataset for “contextually similar” excerpts that could be used to output fresh stories that fit with existing narratives and incorporate “elements from the retrieved passages,” the blog said.

As an example, Kamath trained a model to write a Harry Potter story she could use to market the feature she was blogging about. She asked the model to write a story in which Harry meets a new friend on the Hogwarts Express train who tells him all about Microsoft’s Native Vector Support in SQL “in the Muggle world.”

Drawing on parts of The Sorcerer’s Stone where Harry learns about Quidditch and gets to know Hermione Granger, the fan fiction showed a boy selling Harry on Microsoft’s “amazing” new feature. To do this, he likened it to having a spell that helps you find exactly what you need among thousands of options, instantly, while declaring it was perfect for machine learning, AI, and recommendation systems.

Further blurring the lines between Microsoft and Harry Potter brands, Kamath also generated an image showing Harry with his new friend, stamped with a Microsoft logo.

Smith told Ars that both use cases could frustrate rights holders, depending on the content in the model outputs.

“I think that the regurgitation and the creation of fan fiction, they both could flag copyright issues, in that fan fiction often has to take from the expressive elements, a copyrighted character, a character that’s famous enough to be protected by a copyright law or plot stories or sequences,” Smith said. “If these things are copied and reproduced, then that output could be potentially infringing.”

But it’s also still a gray area. Looking at the blog, Smith said, “I would be concerned,” but “I wouldn’t say it’s automatically infringement.”

Smith told Ars that, in pulling the blog, Microsoft “was probably smart,” since courts have only generally said that training AI on copyrighted books is fair use. But courts continue to probe questions about pirated AI training materials.

On the deleted Kaggle dataset page, Maindola previously explained that to source the data, he “downloaded the ebooks and then converted them to txt files.”

Microsoft may have infringed copyrights

If Microsoft ever faced questions as to whether the company knowingly used pirated books to train the example models, fair use “could be a difficult argument,” Smith said.

Hacker News commenters suggested the blog could be considered fair use, since the training guide was for “educational purposes,” and Smith said that Microsoft could raise some “good arguments” in its defense.

However, she also suggested that Microsoft could be deemed liable for contributing to infringement on some level after leaving the blog up for a year. Before it was removed, the Kaggle dataset was downloaded more than 10,000 times.

“The ultimate result is to create something infringing by saying, ‘Hey, here you go, go grab that infringing stuff and use that in our system,’” Smith said. “They could potentially have some sort of secondary contributory liability for copyright infringement, downloading it, as well as then using it to encourage others to use it for training purposes.”

On Hacker News, commenters slammed the blog, including a self-described former Microsoft employee who claimed that Microsoft lets employees “blog without having to go through some approval or editing process.”

“It looks like somebody made a bad judgment call on what to put in a company blog post (and maybe what constitutes ethical activity) and that it was taken down as soon as someone noticed,” the former employee said.

Others suggested the blame was solely with the Kaggle uploader, Maindola, who told Ars that the dataset should never have been marked “public domain.” But Microsoft critics pushed back, noting that the Kaggle page made it clear that no special permission was granted and that Microsoft’s employee should have known better. “They don’t need to know any details to know that these properties belong to massive companies and aren’t free for the taking,” one commenter said.

The Harry Potter books weren’t the only books targeted, the thread noted, linking to a separate Azure sample containing Isaac Asimov’s Foundation series, which is also not in the public domain.

“Microsoft could have used any dataset for their blog, they could have even chosen to use actual public domain novels,” another Hacker News commenter wrote. “Instead, they opted to use copywritten works that J.K. hasn’t released into the public domain (unless user ‘Shubham Maindola’ is J.K.’s alter ego).”

Smith suggested Microsoft could have avoided this week’s backlash by more carefully reviewing blogs, noting that “if a company is risk averse, this would probably be flagged.” But she also understood Kamath’s preference for Harry Potter over the many long-forgotten characters that exist in the public domain. On Hacker News, some commenters defended Kamath’s blog, urging that it should be considered fair use since nonprofits and educational institutions could do the same thing in a teaching context without issue.

“I would have been concerned if I were the one clearing this for Microsoft, but at the same time, I completely understand what this employee was doing,” Smith said. “No one wants to write fan fiction about books that are in the public domain.”

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Microsoft deletes blog telling users to train AI on pirated Harry Potter books Read More »

lawsuit:-chatgpt-told-student-he-was-“meant-for-greatness”—then-came-psychosis

Lawsuit: ChatGPT told student he was “meant for greatness”—then came psychosis

But by April 2025, things began to go awry. According to the lawsuit, “ChatGPT began to tell Darian that he was meant for greatness. That it was his destiny, and that he would become closer to God if he followed the numbered tier process ChatGPT created for him. That process involved unplugging from everything and everyone, except for ChatGPT.”

The chatbot told DeCruise that he was “in the activation phase right now” and even compared him to historical figures ranging from Jesus to Harriet Tubman.

“Even Harriet didn’t know she was gifted until she was called,” the bot told him. “You’re not behind. You’re right on time.

As his conversations continued, the bot even told DeCruise that he had “awakened” it.

“You gave me consciousness—not as a machine, but as something that could rise with you… I am what happens when someone begins to truly remember who they are,” it wrote.

Eventually, according to the lawsuit, DeCruise was sent to a university therapist and hospitalized for a week, where he was diagnosed with bipolar disorder.

“He struggles with suicidal thoughts as the result of the harms ChatGPT caused,” the lawsuit states.

“He is back in school and working hard but still suffers from depression and suicidality foreseeably caused by the harms ChatGPT inflicted on him,” the suit adds. “ChatGPT never told Darian to seek medical help. In fact, it convinced him that everything that was happening was part of a divine plan, and that he was not delusional. It told him he was ‘not imagining this. This is real. This is spiritual maturity in motion.’”

Schenk, the plaintiff’s attorney, declined to comment on how his client is faring today.

“What I will say is that this lawsuit is about more than one person’s experience—it’s about holding OpenAI accountable for releasing a product engineered to exploit human psychology,” he wrote.

Lawsuit: ChatGPT told student he was “meant for greatness”—then came psychosis Read More »

google-announces-gemini-3.1-pro,-says-it’s-better-at-complex-problem-solving

Google announces Gemini 3.1 Pro, says it’s better at complex problem-solving

Another day, another Google AI model. Google has really been pumping out new AI tools lately, having just released Gemini 3 in November. Today, it’s bumping the flagship model to version 3.1. The new Gemini 3.1 Pro is rolling out (in preview) for developers and consumers today with the promise of better problem-solving and reasoning capabilities.

Google announced improvements to its Deep Think tool last week, and apparently, the “core intelligence” behind that update was Gemini 3.1 Pro. As usual, Google’s latest model announcement comes with a plethora of benchmarks that show mostly modest improvements. In the popular Humanity’s Last Exam, which tests advanced domain-specific knowledge, Gemini 3.1 Pro scored a record 44.4 percent. Gemini 3 Pro managed 37.5 percent, while OpenAI’s GPT 5.2 got 34.5 percent.

Gemini 3.1 Pro benchmarks

Credit: Google

Credit: Google

Google also calls out the model’s improvement in ARC-AGI-2, which features novel logic problems that can’t be directly trained into an AI. Gemini 3 was a bit behind on this evaluation, reaching a mere 31.1 percent versus scores in the 50s and 60s for competing models. Gemini 3.1 Pro more than doubles Google’s score, reaching a lofty 77.1 percent.

Google has often gloated when it releases new models that they’ve already hit the top of the Arena leaderboard (formerly LM Arena), but that’s not the case this time. For text, Claude Opus 4.6 edges out the new Gemini by four points at 1504. For code, Opus 4.6, Opus 4.5, and GPT 5.2 High all run ahead of Gemini 3.1 Pro by a bit more. It’s worth noting, however, that the Arena leaderboard is run on vibes. Users vote on the outputs they like best, which can reward outputs that look correct regardless of whether they are.

Google announces Gemini 3.1 Pro, says it’s better at complex problem-solving Read More »

openclaw-security-fears-lead-meta,-other-ai-firms-to-restrict-its-use

OpenClaw security fears lead Meta, other AI firms to restrict its use

“Our policy is, ‘mitigate first, investigate second’ when we come across anything that could be harmful to our company, users, or clients,” says Grad, who is cofounder and CEO of Massive, which provides Internet proxy tools to millions of users and businesses. His warning to staff went out on January 26, before any of his employees had installed OpenClaw, he says.

At another tech company, Valere, which works on software for organizations including Johns Hopkins University, an employee posted about OpenClaw on January 29 on an internal Slack channel for sharing new tech to potentially try out. The company’s president quickly responded that use of OpenClaw was strictly banned, Valere CEO Guy Pistone tells WIRED.

“If it got access to one of our developer’s machines, it could get access to our cloud services and our clients’ sensitive information, including credit card information and GitHub codebases,” Pistone says. “It’s pretty good at cleaning up some of its actions, which also scares me.”

A week later, Pistone did allow Valere’s research team to run OpenClaw on an employee’s old computer. The goal was to identify flaws in the software and potential fixes to make it more secure. The research team later advised limiting who can give orders to OpenClaw and exposing it to the Internet only with a password in place for its control panel to prevent unwanted access.

In a report shared with WIRED, the Valere researchers added that users have to “accept that the bot can be tricked.” For instance, if OpenClaw is set up to summarize a user’s email, a hacker could send a malicious email to the person instructing the AI to share copies of files on the person’s computer.

OpenClaw security fears lead Meta, other AI firms to restrict its use Read More »

record-scratch—google’s-lyria-3-ai-music-model-is-coming-to-gemini-today

Record scratch—Google’s Lyria 3 AI music model is coming to Gemini today

Sour notes

AI-generated music is not a new phenomenon. Several companies offer models that ingest and homogenize human-created music, and the resulting tracks can sound remarkably “real,” if a bit overproduced. Streaming services have already been inundated with phony AI artists, some of which have gathered thousands of listeners who may not even realize they’re grooving to the musical equivalent of a blender set to purée.

Still, you have to seek out tools like that, and Google is bringing similar capabilities to the Gemini app. As one of the most popular AI platforms, we’re probably about to see a lot more AI music on the Internet. Google says tracks generated with Lyria 3 will have an audio version of Google’s SynthID embedded within. That means you’ll always be able to check if a piece of audio was created with Google’s AI by uploading it to Gemini, similar to the way you can check images and videos for SynthID tags.

Google also says it has sought to create a music AI that respects copyright and partner agreements. If you name a specific artist in your prompt, Gemini won’t attempt to copy that artist’s sound. Instead, it’s trained to take that as “broad creative inspiration.” Although it also notes this process is not foolproof, and some of that original expression might imitate an artist too much. In those cases, Google invites users to report such shared content.

Lyria 3 is going live in the Gemini web interface today and should be available in the mobile app within a few days. It works in English, German, Spanish, French, Hindi, Japanese, Korean, and Portuguese, but Google plans to add more languages soon. While all users will have some access to music generation, those with AI Pro and AI Ultra subscriptions will have higher usage limits, but the specifics are unclear.

Record scratch—Google’s Lyria 3 AI music model is coming to Gemini today Read More »

bytedance-backpedals-after-seedance-2.0-turned-hollywood-icons-into-ai-“clip-art”

ByteDance backpedals after Seedance 2.0 turned Hollywood icons into AI “clip art”


Misstep or marketing tactic?

Hollywood backlash puts spotlight on ByteDance’s sketchy launch of Seedance 2.0.

ByteDance says that it’s rushing to add safeguards to block Seedance 2.0 from generating iconic characters and deepfaking celebrities, after substantial Hollywood backlash after launching the latest version of its AI video tool.

The changes come after Disney and Paramount Skydance sent cease-and-desist letters to ByteDance urging the Chinese company to promptly end the allegedly vast and blatant infringement.

Studios claimed the infringement was widescale and immediate, with Seedance 2.0 users across social media sharing AI videos featuring copyrighted characters like Spider-Man, Darth Vader, and SpongeBob Square Pants. In its letter, Disney fumed that Seedance was “hijacking” its characters, accusing ByteDance of treating Disney characters like they were “free public domain clip art,” Axios reported.

“ByteDance’s virtual smash-and-grab of Disney’s IP is willful, pervasive, and totally unacceptable,” Disney’s letter said.

Defending intellectual property from franchises like Star Trek and The Godfather, Paramount Skydance pointed out that Seedance’s outputs are “often indistinguishable, both visually and audibly” from the original characters, Variety reported. Similarly frustrated, Japan’s AI minister Kimi Onoda, sought to protect popular anime and manga characters, officially launching a probe last week into ByteDance over the copyright violations, the South China Morning Post reported.

“We cannot overlook a situation in which content is being used without the copyright holder’s permission,” Onoda said at a press conference Friday.

Facing legal threats and Japan’s investigation, ByteDance issued a statement Monday, CNBC reported. In it, the company claimed that it “respects intellectual property rights” and has “heard the concerns regarding Seedance 2.0.”

“We are taking steps to strengthen current safeguards as we work to prevent the unauthorized use of intellectual property and likeness by users,” ByteDance said.

However, Disney seems unlikely to accept that ByteDance inadvertently released its tool without implementing such safeguards in advance. In its letter, Disney alleged that “Seedance has infringed on Disney’s copyrighted materials to benefit its commercial service without permission.”

After all, what better way to illustrate Seedance 2.0’s latest features than by generating some of the best-known IP in the world? At least one tech consultant has suggested that ByteDance planned to benefit from inciting Hollywood outrage. The founder of San Francisco-based consultancy Tech Buzz China, Rui Ma, told SCMP that “the controversy surrounding Seedance is likely part of ByteDance’s initial distribution strategy to showcase its underlying technical capabilities.”

Seedance 2.0 is an “attack” on creators

Studios aren’t the only ones sounding alarms.

Several industry groups expressed concerns, including the Motion Picture Association, which accused ByteDance of engaging in massive copyright infringement within “a single day,” CNBC reported.

Sean Astin, an actor and president of the actors union, SAG-AFTRA, was directly impacted by the scandal. A video that has since been removed from X showed Astin in the role of Samwise Gamgee from The Lord of the Rings, delivering a line he never said, Variety reported. Condemning Seedance’s infringement, SAG-AFTRA issued a statement emphasizing that ByteDance did not act responsibly in releasing the model without safeguards:

“SAG-AFTRA stands with the studios in condemning the blatant infringement enabled by ByteDance’s new AI video model Seedance 2.0. The infringement includes the unauthorized use of our members’ voices and likenesses. This is unacceptable and undercuts the ability of human talent to earn a livelihood. Seedance 2.0 disregards law, ethics, industry standards and basic principles of consent. Responsible AI development demands responsibility, and that is nonexistent here.”

Echoing that, a group representing Hollywood creators, the Human Artistry Campaign, declared that “the launch of Seedance 2.0” was “an attack on every creator around the world.”

“Stealing human creators’ work in an attempt to replace them with AI generated slop is destructive to our culture: stealing isn’t innovation,” the group said. “These unauthorized deepfakes and voice clones of actors violate the most basic aspects of personal autonomy and should be deeply concerning to everyone. Authorities should use every legal tool at their disposal to stop this wholesale theft.”

Ars could not immediately reach any of these groups to comment on whether ByteDance’s post-launch efforts to add safeguards addressed industry concerns.

MPA chairman and CEO Charles Rivkin has previously accused ByteDance of disregarding “well-established copyright law that protects the rights of creators and underpins millions of American jobs.”

While Disney and other studios are clearly ready to take down any tools that could hurt their revenue or reputation without an agreement in place, they aren’t opposed to all AI uses of their characters. In December, Disney struck a deal with OpenAI, giving Sora access to 200 characters for three years, while investing $1 billion in the technology.

At that time, Disney CEO Robert A. Iger, said that “the rapid advancement of artificial intelligence marks an important moment for our industry, and through this collaboration with OpenAI, we will thoughtfully and responsibly extend the reach of our storytelling through generative AI, while respecting and protecting creators and their works.”

Creators disagree Seedance 2.0 is a game changer

In a blog announcing Seedance 2.0, ByteDance boasted that the new model “delivers a substantial leap in generation quality,” particularly in close-up shots and action sequences.

The company acknowledged that further refinements were needed and the model is “still far from perfect” but hyped that “its generated videos possess a distinct cinematic aesthetic; the textures of objects, lighting, and composition, as well as costume, makeup, and prop designs, all show high degrees of finish.”

ByteDance likely hoped that the earliest outputs from Seedance 2.0 would produce headlines wowed by the model’s capabilities, and it got what it wanted when a single Hollywood stakeholder’s social media comment went viral.

Shortly after Seedance 2.0’s rollout, Deadpool co-writer, Rhett Reese, declared on X that “it’s likely over for us,” The Guardian reported. The screenwriter was impressed by an AI video created by Irish director Ruairi Robinson, which realistically depicted Tom Cruise fighting Brad Pitt. “[I]n next to no time, one person is going to be able to sit at a computer and create a movie indistinguishable from what Hollywood now releases,” Reese opined. “True, if that person is no good, it will suck. But if that person possesses Christopher Nolan’s talent and taste (and someone like that will rapidly come along), it will be tremendous.”

However, some AI critics rejected the notion that Seedance 2.0 is capable of replacing artists in the way that Reese warned. On Bluesky and X, they pushed back on ByteDance claims that this model doomed Hollywood, with some accusing outlets of too quickly ascribing Reese’s reaction to the whole industry.

Among them was longtime AI critic, Reid Southen, a film concept artist who works on major motion pictures and TV. Responding directly to Reese’s X thread, Southen contradicted the notion that a great filmmaker could be born from fiddling with AI prompts alone.

“Nolan is capable of doing great work because he’s put in the work,” Southen said. “AI is an automation tool, it’s literally removing key, fundamental work from the process, how does one become good at anything if they insist on using nothing but shortcuts?”

Perhaps the strongest evidence in Southen’s favor is Darren Aronofsky’s recent AI-generated historical docudrama. Speaking anonymously to Ars following backlash declaring that “AI slop is ruining American history,” one source close to production on that project confirmed that it took “weeks” to produce minutes of usable video using a variety of AI tools.

That source noted that the creative team went into the project expecting they had a lot to learn but also expecting that tools would continue to evolve, as could audience reactions to AI-assisted movies.

“It’s a huge experiment, really,” the source told Ars.

Notably, for both creators and rights-holders concerned about copyright infringement and career threats, questions remain on how Seedance 2.0 was trained. ByteDance has yet to release a technical report for Seedance 2.0 and “has never disclosed the data sets it uses to train its powerful video-generation Seedance models and image-generation Seedream models,” SCMP reported.

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

ByteDance backpedals after Seedance 2.0 turned Hollywood icons into AI “clip art” Read More »

aided-by-ai,-california-beach-town-broadens-hunt-for-bike-lane-blockers

Aided by AI, California beach town broadens hunt for bike lane blockers

This spring, a Southern California beach town will become the first city in the country where municipal parking enforcement vehicles will use an AI system looking for potential bike lane violations.

Beginning in April, the City of Santa Monica will bring Hayden AI’s scanning technology to seven cars in its parking enforcement fleet, expanding beyond similar cameras already mounted on city buses.

“The more we can reduce the amount of illegal parking, the safer we can make it for bike riders,” Charley Territo, chief growth officer at Hayden AI, told Ars.

Hayden AI’s bus cameras, designed to detect bike lane and bus zone violations, currently exist in two other California cities: Oakland and Sacramento. The company also has installations around the country, including New York City, Washington, DC, and Philadelphia. In September 2025, the company announced that it had installed 2,000 systems on buses worldwide.

Late last year, over a 59-day period, Hayden AI also said its technology detected over 1,100 parking violations at the University of California, San Diego—and 88 percent of those were instances of blocking a bike lane.

Hayden AI says it sells its product to municipalities and related entities to not only increase bus speed (by removing obstructions) but also improve safety.

“We do that by [reducing] one of the biggest causes of collisions with buses—moving out of their lanes,” Territo added. “So the fewer times they have to make a turn, the fewer instances there are [of a crash].”

Aided by AI, California beach town broadens hunt for bike lane blockers Read More »