Artificial Intelligence

Spotify peeved after 10,000 users sold data to build AI tools


Spotify sent a warning to stop data sales, but developers say they never got it.

For millions of Spotify users, the “Wrapped” feature—which crunches the numbers on their annual listening habits—has been a year-end highlight since it debuted in 2015. NPR once broke down exactly why our brains find the feature so “irresistible,” while Cosmopolitan declared last year that sharing Wrapped screenshots of top artists and songs had become “the ultimate status symbol” for tens of millions of music fans.

It’s no surprise then that, after a decade, some Spotify users who are especially eager to see Wrapped evolve are no longer willing to wait to see if Spotify will ever deliver the more creative streaming insights they crave.

With the help of AI, these users expect that their data can be analyzed more quickly, potentially uncovering overlooked or never-considered patterns in what their listening habits say about them.

Imagine, for example, accessing a music recap that encapsulates a user’s full listening history—not just their top songs and artists. With that unlocked, users could track emotional patterns, analyzing how their music tastes reflected their moods over time and perhaps helping them adjust their listening habits to better cope with stress or major life events. And for users particularly intrigued by their own data, there’s even the potential to use AI to cross data streams from different platforms and perhaps understand even more about how their music choices impact their lives and tastes more broadly.

Likely just as appealing as gleaning deeper personal insights, though, users could also potentially build AI tools to compare listening habits with their friends. That could lead to nearly endless fun for the most invested music fans, where AI could be tapped to assess all kinds of random data points, like whose breakup playlists are more intense or who really spends the most time listening to a shared favorite artist.
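
To make the idea concrete, here is a minimal sketch of the kind of analysis such tools could run on a Spotify data export—not Unwrapped’s actual code. It assumes the “StreamingHistory*.json” files with “artistName” and “msPlayed” fields found in Spotify’s standard “Download your data” package; adjust the names if an actual export differs.

import json
from collections import Counter
from pathlib import Path

def listening_time_by_artist(export_dir: str) -> Counter:
    """Sum minutes played per artist across all streaming-history files."""
    totals: Counter = Counter()
    for path in Path(export_dir).glob("StreamingHistory*.json"):  # assumed file names
        for record in json.loads(path.read_text(encoding="utf-8")):
            totals[record["artistName"]] += record["msPlayed"] / 60_000
    return totals

def shared_favorites(a: Counter, b: Counter, top_n: int = 50) -> set:
    """Artists in both users' top-N lists -- the 'compare with friends' use case."""
    top_a = {artist for artist, _ in a.most_common(top_n)}
    top_b = {artist for artist, _ in b.most_common(top_n)}
    return top_a & top_b

if __name__ == "__main__":
    mine = listening_time_by_artist("my_spotify_data")  # hypothetical export directory
    for artist, minutes in mine.most_common(10):
        print(f"{artist}: {minutes:.0f} minutes")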

To support developers offering novel insights like these, more than 18,000 Spotify users have joined “Unwrapped,” a collective launched in February that allows them to pool and monetize their data.

Voting as a group through the decentralized data platform Vana—which Wired profiled earlier this year—these users can elect to sell their dataset to developers building AI tools that give users fresh ways to analyze streaming data, analyses that Spotify likely couldn’t or wouldn’t deliver.

In June, the group made its first sale, with 99.5 percent of members voting yes. Vana co-founder Anna Kazlauskas told Ars that the collective—at the time about 10,000 members strong—sold a “small portion” of its data (users’ artist preferences) for $55,000 to Solo AI.

While each Spotify user only earned about $5 in cryptocurrency tokens—which Kazlauskas suggested was not “ideal,” wishing the users had earned about “a hundred times” more—she said the deal was “meaningful” in showing Spotify users that their data “is actually worth something.”

“I think this is what shows how these pools of data really act like a labor union,” Kazlauskas said. “A single Spotify user, you’re not going to be able to go say like, ‘Hey, I want to sell you my individual data.’ You actually need enough of a pool to sort of make it work.”

Spotify sent warning to Unwrapped

Unsurprisingly, Spotify is not happy about Unwrapped, whose name is perhaps a little too close to that of its popular branded feature for the streaming giant’s comfort. A spokesperson told Ars that Spotify sent a letter to the contact info listed for Unwrapped developers on their site, outlining concerns that the collective could be infringing on Spotify’s Wrapped trademark.

Further, the letter warned that Unwrapped violates Spotify’s developer policy, which bans using the Spotify platform or any Spotify content to build machine learning or AI models. And developers may also be violating terms by facilitating users’ sale of streaming data.

“Spotify honors our users’ privacy rights, including the right of portability,” Spotify’s spokesperson said. “All of our users can receive a copy of their personal data to use as they see fit. That said, UnwrappedData.org is in violation of our Developer Terms which prohibit the collection, aggregation, and sale of Spotify user data to third parties.”

But while Spotify suggests it has already taken steps to stop Unwrapped, the Unwrapped team told Ars that it never received any communication from Spotify. It plans to defend users’ right to “access, control, and benefit from their own data,” its statement said, while providing reassurances that it will “respect Spotify’s position as a global music leader.”

Unwrapped “does not distribute Spotify’s content, nor does it interfere with Spotify’s business,” developers argued. “What it provides is community-owned infrastructure that allows individuals to exercise rights they already hold under widely recognized data protection frameworks—rights to access their own listening history, preferences, and usage data.”

“When listeners choose to share or monetize their data together, they are not taking anything away from Spotify,” developers said. “They are simply exercising digital self-determination. To suggest otherwise is to claim that users do not truly own their data—that Spotify owns it for them.”

Jacob Hoffman-Andrews, a senior staff technologist for the digital rights group the Electronic Frontier Foundation, told Ars that—while EFF objects to data dividend schemes “where users are encouraged to share personal information in exchange for payment”—Spotify users should nevertheless always maintain control of their data.

“In general, listeners should have control of their own data, which includes exporting it for their own use,” Hoffman-Andrews said. “An individual’s musical history is of use not just to Spotify but also to the individual who created it. And there’s a long history of services that enable this sort of data portability, for instance Last.fm, which integrates with Spotify and many other services.”

To EFF, it seems ill-advised to sell data to AI companies, Hoffman-Andrews said, emphasizing “privacy isn’t a market commodity, it’s a fundamental right.”

“Of course, so is the right to control one’s own data,” Hoffman-Andrews noted, seeming to agree with Unwrapped developers in concluding that “ultimately, listeners should get to do what they want with their own information.”

Users’ right to privacy is the primary reason why Unwrapped developers told Ars that they’re hoping Spotify won’t try to block users from selling data to build AI.

“This is the heart of the issue: If Spotify seeks to restrict or penalize people for exercising these rights, it sends a chilling message that its listeners should have no say in how their own data is used,” the Unwrapped team’s statement said. “That is out of step not only with privacy law, but with the values of transparency, fairness, and community-driven innovation that define the next era of the Internet.”

Unwrapped sign-ups limited due to alleged Spotify issues

There could be more interest in Unwrapped. But Kazlauskas alleged to Ars that in the more than six months since Unwrapped’s launch, “Spotify has made it extraordinarily difficult” for users to port over their data. She claimed that developers have found that “every time they have an easy way for users to get their data,” Spotify shuts it down “in some way.”

Supposedly because of Spotify’s interference, Unwrapped remains in an early launch phase and can only offer limited spots for new users seeking to sell their data. Kazlauskas told Ars that about 300 users can be added each day due to the cumbersome and allegedly shifting process for porting over data.

Currently, however, Unwrapped is working on an update that could make that process more stable, Kazlauskas said, as well as changes to help users regularly update their streaming data. Those updates could perhaps attract more users to the collective.

Critics of Vana, like TechCrunch’s Kyle Wiggers, have suggested that data pools like Unwrapped will never reach “critical mass,” likely only appealing to niche users drawn to decentralization movements. Kazlauskas told Ars that data sale payments issued in cryptocurrency are one barrier for crypto-averse or crypto-shy users interested in Vana.

“The No. 1 thing I would say is, this kind of user experience problem where when you’re using any new kind of decentralized technology, you need to set up a wallet, then you’re getting tokens,” Kazlauskas explained. Users may feel culture shock, wondering, “What does that even mean? How do I vote with this thing? Is this real money?”

Kazlauskas is hoping that Vana supports a culture shift, striving to reach critical mass by giving users a “commercial lens” to start caring about data ownership. She also supports legislation like the Digital Choice Act in Utah, which “requires actually real-time API access, so people can get their data.” If the US had a federal law like that, Kazlauskas suspects that launching Unwrapped would have been “so much easier.”

Although regulations like Utah’s law could serve as a harbinger of a sea change, Kazlauskas noted that Big Tech companies that currently control AI markets employ a fierce lobbying force to maintain control over user data that decentralized movements just don’t have.

As Vana partners with Flower AI, striving, as Wired reported, to “shake up the AI industry” by releasing “a giant 100 billion-parameter model” later this year, Kazlauskas remains committed to ensuring that users are in control and “not just consumed.” She fears a future where tech giants may be motivated to use AI to surveil, influence, or manipulate users, when instead users could choose to band together and benefit from building more ethical AI.

“A world where a single company controls AI is honestly really dystopian,” Kazlauskas told Ars. “I think that it is really scary. And so I think that the path that decentralized AI offers is one where a large group of people are still in control, and you still get really powerful technology.”

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Judge: Anthropic’s $1.5B settlement is being shoved “down the throat of authors”

At a hearing Monday, US district judge William Alsup blasted a proposed $1.5 billion settlement over Anthropic’s rampant piracy of books to train AI.

The proposed settlement comes in a case where Anthropic could have owed more than $1 trillion in damages after Alsup certified a class that included up to 7 million claimants whose works were illegally downloaded by the AI company.

Instead, critics fear Anthropic will get off cheaply, striking a deal with the authors suing that covers fewer than 500,000 works and paying a small fraction of its total valuation (currently $183 billion) to get away with the massive theft. Defector noted that the settlement doesn’t even require Anthropic to admit wrongdoing, while the company continues raising billions based on models trained on authors’ works. Most recently, Anthropic raised $13 billion in a funding round announced after the deal, roughly 10 times the proposed settlement amount.

Alsup expressed grave concerns that lawyers rushed the deal, which he said now risks being shoved “down the throat of authors,” Bloomberg Law reported.

In an order, Alsup clarified why he thought the proposed settlement was a chaotic mess. The judge said he was “disappointed that counsel have left important questions to be answered in the future,” seeking approval for the settlement despite the Works List, the Class List, the Claim Form, and the process for notification, allocation, and dispute resolution all remaining unresolved.

Denying preliminary approval of the settlement, Alsup suggested that the agreement is “nowhere close to complete,” forcing Anthropic and authors’ lawyers to “recalibrate” the largest publicly reported copyright class-action settlement ever inked, Bloomberg reported.

Of particular concern, the settlement failed to outline how disbursements would be managed for works with multiple claimants, Alsup noted. Until all these details are ironed out, Alsup intends to withhold approval, the order said.

One big change the judge wants to see is the addition of instructions requiring “anyone with copyright ownership” to opt in, with the consequence that a work won’t be covered if even one rights holder opts out, Bloomberg reported. There should also be instructions that any disputes over ownership or submitted claims be settled in state court, Alsup said.

“First of its kind” AI settlement: Anthropic to pay authors $1.5 billion

Authors revealed today that Anthropic agreed to pay $1.5 billion and destroy all copies of the books the AI company pirated to train its artificial intelligence models.

In a press release provided to Ars, the authors confirmed that the settlement is “believed to be the largest publicly reported recovery in the history of US copyright litigation.” The settlement covers 500,000 works that Anthropic pirated for AI training; if a court approves it, each author will receive $3,000 per work that Anthropic stole. “Depending on the number of claims submitted, the final figure per work could be higher,” the press release noted.

Anthropic has already agreed to the settlement terms, but a court must approve them before the settlement is finalized. Preliminary approval may be granted this week, while the ultimate decision may be delayed until 2026, the press release noted.

Justin Nelson, a lawyer representing the three authors who initially sued to spark the class action—Andrea Bartz, Kirk Wallace Johnson, and Charles Graeber—confirmed that if the “first of its kind” settlement “in the AI era” is approved, the payouts will “far” surpass “any other known copyright recovery.”

“It will provide meaningful compensation for each class work and sets a precedent requiring AI companies to pay copyright owners,” Nelson said. “This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong.”

Groups representing authors celebrated the settlement on Friday. The CEO of the Authors’ Guild, Mary Rasenberger, said it was “an excellent result for authors, publishers, and rightsholders generally.” Perhaps most critically, the settlement shows “there are serious consequences when” companies “pirate authors’ works to train their AI, robbing those least able to afford it,” Rasenberger said.

Warner Bros. sues Midjourney to stop AI knockoffs of Batman, Scooby-Doo


AI would’ve gotten away with it too…

Warner Bros. case builds on arguments raised in a Disney/Universal lawsuit.

DVD art for the animated movie Scooby-Doo & Batman: The Brave and the Bold. Credit: Warner Bros. Discovery

Warner Bros. hit Midjourney with a lawsuit Thursday, crafting a complaint that strives to shoot down defenses that the AI company has already raised in a similar lawsuit filed by Disney and Universal Studios earlier this year.

The big film studios have alleged that Midjourney profits off image generation models trained to produce outputs of popular characters. For Disney and Universal, intellectual property rights to pop icons like Darth Vader and the Simpsons were allegedly infringed. And now, the WB complaint defends rights over comic characters like Superman, Wonder Woman, and Batman, as well as characters considered “pillars of pop culture with a lasting impact on generations,” like Scooby-Doo and Bugs Bunny, and modern cartoon characters like Rick and Morty.

“Midjourney brazenly dispenses Warner Bros. Discovery’s intellectual property as if it were its own,” the WB complaint said, accusing Midjourney of allowing subscribers to “pick iconic” copyrighted characters and generate them in “every imaginable scene.”

Planning to seize Midjourney’s profits from allegedly using beloved characters to promote its service, Warner Bros. described Midjourney as “defiant and undeterred” by the Disney/Universal lawsuit. Despite that litigation, WB claimed that Midjourney has recently removed copyright protections in its supposedly shameful ongoing bid for profits. Nothing but a permanent injunction will end Midjourney’s outputs of allegedly “countless infringing images,” WB argued, branding Midjourney’s alleged infringements as “vast, intentional, and unrelenting.”

Examples of closely matching outputs include prompts for “screencaps” showing specific movie frames, a search term that at least one artist, Reid Southen, optimistically predicted last year that Midjourney would block. Apparently, it did not.

Here are some examples included in WB’s complaint:

Midjourney’s output for the prompt, “Superman, classic cartoon character, DC comics.”

Midjourney could face devastating financial consequences in a loss. At trial, WB hopes discovery will show the true extent of Midjourney’s alleged infringement, and it is asking the court for maximum statutory damages of $150,000 per infringing output. At that rate, just 2,000 infringing outputs unearthed (2,000 × $150,000 = $300 million) could cost Midjourney more than its total revenue for 2024, which was approximately $300 million, the WB complaint said.

Warner Bros. hopes to hobble Midjourney’s best defense

For Midjourney, the WB complaint could potentially hit harder than the Disney/Universal lawsuit. WB’s complaint shows how closely studios are monitoring AI copyright litigation, likely choosing to strike at moments when they feel best positioned to defend their property. So, while much of WB’s complaint echoes Disney and Universal’s arguments—which Midjourney has already begun defending against—IP attorney Randy McCarthy suggested in statements provided to Ars that WB also looked for smart ways to overcome some of Midjourney’s best defenses when filing its complaint.

WB likely took note when Midjourney filed its response to the Disney/Universal lawsuit last month, arguing that its system is “trained on billions of publicly available images” and generates images not by retrieving a copy of an image from its database but because “complex statistical relationships between visual features and words in the text-image pairs are encoded within the model.”

This defense could allow Midjourney to avoid claims that it copied WB images and distributes copies through its models. But hoping to dodge this defense, WB didn’t argue that Midjourney retains copies of its images. Rather, the entertainment giant raised a more nuanced argument that:

Midjourney used software, servers, and other technology to store and fix data associated with Warner Bros. Discovery’s Copyrighted Works in such a manner that those works are thereby embodied in the model, from which Midjourney is then able to generate, reproduce, publicly display, and distribute unlimited “copies” and “derivative works” of Warner Bros. Discovery’s works as defined by the Copyright Act.

McCarthy noted that WB’s argument pushes the court to at least consider that even though “Midjourney does not store copies of the works in its model,” its system “nonetheless accesses the data relating to the works that are stored by Midjourney’s system.”

“This seems to be a very clever way to counter MJ’s ‘statistical pattern analysis’ arguments,” McCarthy said.

If it’s a winning argument, that could give WB a path to wipe Midjourney’s models. WB argued that each time Midjourney provides a “substantially new” version of its image generator, it “repeats this process.” And that ongoing activity—due to Midjourney’s initial allegedly “massive copying” of WB works—allows Midjourney to “further reproduce, publicly display, publicly perform, and distribute image and video outputs that are identical or virtually identical to Warner Bros. Discovery’s Copyrighted Works in response to simple prompts from subscribers.”

Perhaps further strengthening WB’s argument, the lawsuit noted that Midjourney promotes allegedly infringing outputs on its 24/7 YouTube channel and appears to have plans to compete with traditional TV and streaming services. Asking the court to block Midjourney’s outputs, WB claims it has already been “substantially and irreparably harmed” and risks further damages if the AI image generator is left unchecked.

As alleged proof that the AI company knows its tool is being used to infringe WB property, WB pointed to Midjourney’s own Discord server and subreddit, where users post outputs depicting WB characters and share tips to help others do the same. They also called out Midjourney’s “Explore” page, which allows users to drop a WB-referencing output into the prompt field to generate similar images.

“It is hard to imagine copyright infringement that is any more willful than what Midjourney is doing here,” the WB complaint said.

WB and Midjourney did not immediately respond to Ars’ request to comment.

Midjourney slammed for promising “fewer blocked jobs”

McCarthy noted that WB’s legal strategy differs in other ways from the arguments Midjourney’s already weighed in the Disney/Universal lawsuit.

The WB complaint also anticipates Midjourney’s likely defense that users are generating infringing outputs, not Midjourney, which could invalidate any charges of direct copyright infringement.

In the Disney/Universal lawsuit, Midjourney argued that courts have recently found AI tools’ referencing of copyrighted works to be “a quintessentially transformative fair use,” accusing studios of trying to censor “an instrument for user expression.” The company claims it cannot know about infringing outputs unless studios use its DMCA process, while noting that subscribers have “any number of legitimate, noninfringing grounds to create images incorporating characters from popular culture,” including “non-commercial fan art, experimentation and ideation, and social commentary and criticism.”

To avoid losing on that front, the WB complaint doesn’t depend on a ruling that Midjourney directly infringed copyrights. Instead, the complaint “more fully” emphasizes how Midjourney may be “secondarily liable for infringement via contributory, inducement and/or vicarious liability by inducing its users to directly infringe,” McCarthy suggested.

Additionally, WB’s complaint “seems to be emphasizing” that Midjourney “allegedly has the technical means to prevent its system from accepting prompts that directly reference copyrighted characters,” and “that would prevent infringing outputs from being displayed,” McCarthy said.

The complaint asserted that Midjourney has full control over which outputs can be generated. Pointing out that Midjourney “temporarily refused to ‘animate'” outputs of WB characters after launching video generations, the lawsuit appears to have been filed in response to Midjourney “deliberately” removing those protections and then announcing that subscribers would experience “fewer blocked jobs.”

Together, these arguments “appear to be intended to lead to the inference that Midjourney is willfully enticing its users to infringe,” McCarthy said.

WB’s complaint details simple user prompts that generate allegedly infringing outputs without any need to manipulate the system. The ease of generating popular characters seems to make Midjourney a destination for users frustrated by other AI image generators that make it harder to generate infringing outputs, WB alleged.

On top of that, Midjourney also infringes copyrights by generating WB characters, “even in response to generic prompts like ‘classic comic book superhero battle.'” And while Midjourney has seemingly taken steps to block WB characters from appearing on its “Explore” page, where users can find inspiration for prompts, these guardrails aren’t perfect but rather “spotty and suspicious,” WB alleged. Supposedly, searches for correctly spelled character names like “Batman” are blocked, but any user who accidentally or intentionally misspells a character’s name, like “Batma,” can learn an easy way to work around that block.

Additionally, WB alleged, “the outputs often contain extensive nuance and detail, background elements, costumes, and accessories beyond what was specified in the prompt.” And every time that Midjourney outputs an allegedly infringing image, it “also trains on the outputs it has generated,” the lawsuit noted, creating a never-ending cycle of continually enhanced AI fakes of pop icons.

Midjourney could slow down the cycle and “minimize” these allegedly infringing outputs, if it cannot automatically block them all, WB suggested. But instead, “Midjourney has made a calculated and profit-driven decision to offer zero protection for copyright owners even though Midjourney knows about the breathtaking scope of its piracy and copyright infringement,” WB alleged.

Fearing a supposed scheme to replace WB in the market by stealing its best-known characters, WB accused Midjourney of willfully allowing WB characters to be generated in order to “generate more money for Midjourney” to potentially compete in streaming markets.

Midjourney will remove protections “on a whim”

As Midjourney’s feature expansion escalates, WB claimed, trust has been lost. Even if Midjourney takes steps to address rightsholders’ concerns, WB argued, studios must remain watchful of every upgrade, since apparently, “Midjourney can and will remove copyright protection measures on a whim.”

The complaint noted that Midjourney just this week announced “plans to continue deploying new versions” of its image generator, promising to make it easier to search for and save popular artists’ styles—updating a feature that many artists loathe.

Without an injunction, Midjourney’s alleged infringement could interfere with WB’s licensing opportunities for its content, while “illegally and unfairly” diverting customers who buy WB products like posters, wall art, prints, and coloring books, the complaint said.

Perhaps Midjourney’s strongest defense could be efforts to prove that WB benefits from its image generator. In the Disney/Universal lawsuit, Midjourney pointed out that studios “benefit from generative AI models,” claiming that “many dozens of Midjourney subscribers are associated with” Disney and Universal corporate email addresses. If WB corporate email addresses are found among subscribers, Midjourney could claim that WB is trying to “have it both ways” by “seeking to profit” from AI tools while preventing Midjourney and its subscribers from doing the same.

McCarthy suggested it’s too soon to say how the WB battle will play out, but Midjourney’s response will reveal how it intends to shift tactics to avoid courts potentially picking apart its defense of its training data, while keeping any blame for copyright-infringing outputs squarely on users.

“As with the Disney/Universal lawsuit, we need to wait to see how Midjourney answers these latest allegations,” McCarthy said. “It is definitely an interesting development that will have widespread implications for many sectors of our society.”


Google improves Gemini AI image editing with “nano banana” model

Something unusual happened in the world of AI image editing recently. A new model, known as “nano banana,” started making the rounds with impressive abilities that landed it at the top of the LMArena leaderboard. Now, Google has revealed that nano banana is an innovation from Google DeepMind, and it’s being rolled out to the Gemini app today.

AI image editing allows you to modify images with a prompt rather than mucking around in Photoshop. Google first provided editing capabilities in Gemini earlier this year, and the model was more than competent out of the gate. But like all generative systems, the non-deterministic nature meant that elements of the image would often change in unpredictable ways. Google says nano banana (technically Gemini 2.5 Flash Image) has unrivaled consistency across edits—it can actually remember the details instead of rolling the dice every time you make a change.

Google says subjects will retain their appearance as you edit.

This unlocks several interesting uses for AI image editing. Google suggests uploading a photo of a person and changing their style or attire. For example, you can reimagine someone as a matador or a ’90s sitcom character. Because the nano banana model can maintain consistency through edits, the results should still look like the person in the original source image. This is also the case when you make multiple edits in a row. Google says that even down the line, the results should look like the original source material.
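
Google also exposes its Gemini image models through an API, for developers who want to experiment with edits like these. Below is a minimal sketch using the google-genai Python SDK; the model ID “gemini-2.5-flash-image-preview” is an assumption based on the Gemini 2.5 Flash Image name, and “portrait.jpg” is a hypothetical input file.

from google import genai
from PIL import Image

# Reads the API key from the GEMINI_API_KEY environment variable.
client = genai.Client()

source = Image.open("portrait.jpg")  # hypothetical source photo
response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",  # assumed ID for the "nano banana" model
    contents=[
        "Reimagine this person as a matador, but keep their face unchanged.",
        source,
    ],
)

# The response can mix text and image parts; save any returned image bytes.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("edited.png", "wb") as f:
            f.write(part.inline_data.data)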

Bank forced to rehire workers after lying about chatbot productivity, union says

As banks around the world prepare to replace many thousands of workers with AI, Australia’s biggest bank is scrambling to rehire 45 workers after allegedly lying about chatbots besting staff by handling higher call volumes.

In a statement Thursday flagged by Bloomberg, Australia’s main financial services union, the Finance Sector Union (FSU), claimed a “massive win” for 45 union members whom the Commonwealth Bank of Australia (CBA) had replaced with an AI-powered “voice bot.”

The FSU noted that some of these workers had been with CBA for decades. Those workers in particular were shocked when CBA announced last month that their jobs had become redundant. At that time, CBA claimed that launching the chatbot had “led to a reduction in call volumes” of 2,000 calls a week, FSU said.

But “this was an outright lie,” fired workers told FSU. Instead, call volumes had been increasing at the time they were dismissed, with CBA supposedly “scrambling”—offering staff overtime and redirecting management to join workers answering phones to keep up.

To uncover the truth, FSU escalated the dispute to a fair work tribunal, where the union accused CBA of failing to explain how workers’ roles were ruled redundant. The union also alleged that CBA was hiring for similar roles in India, Bloomberg noted, which made it appear that CBA had perhaps used the chatbot to cover up a shady pivot to outsource jobs.

While the dispute was being weighed, CBA admitted that “they didn’t properly consider that an increase in calls” happening while staff was being fired “would continue over a number of months,” FSU said.

“This error meant the roles were not redundant,” CBA confirmed at the tribunal.

US government agency drops Grok after MechaHitler backlash, report says

xAI apparently lost a government contract after a tweak to Grok’s prompting triggered an antisemitic meltdown last month, in which the chatbot praised Hitler and declared itself MechaHitler.

Despite the scandal, xAI announced that its products would soon be available for federal workers to purchase through the General Services Administration. At the time, xAI claimed this was an “important milestone” for its government business.

But Wired, which reviewed emails and spoke to government insiders, revealed that GSA leaders abruptly decided to drop xAI’s Grok from their contract offering. That decision to pull the plug came after leadership allegedly rushed staff to make Grok available as soon as possible following a persuasive sales meeting with xAI in June.

It’s unclear what exactly caused the GSA to reverse course, but two sources told Wired that they “believe xAI was pulled because of Grok’s antisemitic tirade.”

As of this writing, xAI’s “Grok for Government” website has not been updated to reflect GSA’s supposed removal of Grok from an offering that xAI noted would have allowed “every federal government department, agency, or office, to access xAI’s frontier AI products.”

xAI did not respond to Ars’ request to comment and so far has not confirmed that the GSA offering is off the table. If Wired’s report is accurate, GSA’s decision also seemingly did not influence the military’s decision to move forward with a $200 million xAI contract the US Department of Defense granted last month.

Government’s go-to tools will come from xAI’s rivals

If Grok is cut from the contract, that would suggest that Grok’s meltdown came at perhaps the worst possible moment for xAI, which is building the “world’s biggest supercomputer” as fast as it can to try to get ahead of its biggest AI rivals.

Grok seemingly had the potential to become a more widely used tool if federal workers opted for xAI’s models. Through Donald Trump’s AI Action Plan, the president has similarly emphasized speed, pushing for federal workers to adopt AI as quickly as possible. Although xAI may no longer be involved in that broad push, other AI companies like OpenAI, Anthropic, and Google have partnered with the government to help Trump pull that off and stand to benefit long-term if their tools become entrenched in certain agencies.

Google releases pint-size Gemma open AI model

Big tech has spent the last few years creating ever-larger AI models, leveraging rack after rack of expensive GPUs to provide generative AI as a cloud service. But tiny AI matters, too. Google has announced a tiny version of its Gemma open model designed to run on local devices. Google says the new Gemma 3 270M can be tuned in a snap and maintains robust performance despite its small footprint.

Google released its first Gemma 3 open models earlier this year, featuring between 1 billion and 27 billion parameters. In generative AI, the parameters are the learned variables that control how the model processes inputs to estimate output tokens. Generally, the more parameters in a model, the better it performs. With just 270 million parameters, the new Gemma 3 can run on devices like smartphones or even entirely inside a web browser.

Running an AI model locally has numerous benefits, including enhanced privacy and lower latency. Gemma 3 270M was designed with these kinds of use cases in mind. In testing with a Pixel 9 Pro, the new Gemma was able to run 25 conversations on the Tensor G4 chip and use just 0.75 percent of the device’s battery. That makes it by far the most efficient Gemma model.
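
As a rough illustration of what “running locally” means in practice, here is a minimal sketch using the Hugging Face transformers library. The “google/gemma-3-270m-it” model ID is an assumption based on Google’s naming for instruction-tuned Gemma releases, and downloading it requires accepting the Gemma license on the Hub.

from transformers import pipeline

# A 270M-parameter model is small enough for a laptop CPU or a phone-class chip.
generator = pipeline("text-generation", model="google/gemma-3-270m-it")

out = generator(
    "List three benefits of running an AI model on-device.",
    max_new_tokens=64,
)
print(out[0]["generated_text"])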

Gemma 3 270M shows strong instruction-following for its small size. Credit: Google

Developers shouldn’t expect the same performance level as a multi-billion-parameter model, but Gemma 3 270M has its uses. Google used the IFEval benchmark, which tests a model’s ability to follow instructions, to show that its new model punches above its weight. Gemma 3 270M hits a score of 51.2 percent in this test, which is higher than other lightweight models that have more parameters. The new Gemma falls predictably short of 1 billion-plus models like Llama 3.2, but it gets closer than you might think for a model with just a fraction of the parameters.

Sam Altman finally stood up to Elon Musk after years of X trolling


Elon Musk and Sam Altman are beefing. But their relationship is complicated.

Credit: Aurich Lawson | Getty Images

Much attention was paid to OpenAI’s Sam Altman and xAI’s Elon Musk trading barbs on X this week after Musk threatened to sue Apple over supposedly biased App Store rankings privileging ChatGPT over Grok.

But while the heated social media exchanges were among the most tense ever seen between the two former partners who cofounded OpenAI—more on that below—it seems likely that their jabs were motivated less by who’s in the lead on Apple’s “Must Have” app list than by an impending order in a lawsuit that landed in the middle of their public beefing.

Yesterday, a court ruled that OpenAI can proceed with claims that Musk was so stung by OpenAI’s success after his exit failed to doom the nascent AI company that he perpetrated a “years-long harassment campaign” to take down OpenAI.

Musk’s motivation? To clear the field for xAI to dominate the AI industry instead, OpenAI alleged.

OpenAI’s accusations arose as counterclaims in a lawsuit that Musk initially filed in 2024. Musk has alleged that Altman and OpenAI had made a “fool” of Musk, goading him into $44 million in donations by “preying on Musk’s humanitarian concern about the existential dangers posed by artificial intelligence.”

But OpenAI insists that Musk’s lawsuit is just one prong in a sprawling, “unlawful,” and “unrelenting” harassment campaign that Musk waged to harm OpenAI’s business by forcing the company to divert resources or expend money on things like withdrawn legal claims and fake buyouts.

“Musk could not tolerate seeing such success for an enterprise he had abandoned and declared doomed,” OpenAI argued. “He made it his project to take down OpenAI, and to build a direct competitor that would seize the technological lead—not for humanity but for Elon Musk.”

Most significantly, OpenAI alleged that Musk forced OpenAI to entertain a “sham” bid to buy the company in February. Musk then shared details of the bid with The Wall Street Journal to artificially raise the price of OpenAI and potentially spook investors, OpenAI alleged. The company further said that Musk never intended to buy OpenAI and is willing to go to great lengths to mislead the public about OpenAI’s business so he can chip away at OpenAI’s head start in releasing popular generative AI products.

“Musk has tried every tool available to harm OpenAI,” Altman’s company said.

To this day, Musk maintains that Altman pretended that OpenAI would remain a nonprofit serving the public good in order to seize access to Musk’s money and professional connections in its first five years and gain a lead in AI. As Musk sees it, Altman always intended to “betray” these promises in pursuit of personal gains, and Musk is hoping a court will return any ill-gotten gains to Musk and xAI.

In a small win for Musk, the court ruled that OpenAI must wait until the first phase of the trial litigating Musk’s claims concludes before the court will weigh OpenAI’s theories about Musk’s alleged harassment campaign. US District Judge Yvonne Gonzalez Rogers noted that all of OpenAI’s counterclaims arose after the period of the alleged breach of contract at issue in Musk’s claims, necessitating a division of the lawsuit into two parts. Currently, the jury trial is scheduled for March 30, 2026; presumably, OpenAI’s claims can be resolved after that.

If yesterday’s X clash between the billionaires is any indication, it seems likely that tensions between Altman and Musk will only grow as discovery and expert testimony on Musk’s claims proceed through December.

Whether OpenAI will prevail on its counterclaims is anybody’s guess. Gonzalez Rogers noted that Musk and OpenAI have been hypocritical in arguments raised so far, condemning the “gamesmanship of both sides” as “obvious, as each flip flops.” However, “for the purposes of pleading an unfair or fraudulent business practice, it is sufficient [for OpenAI] to allege that the bid was a sham and designed to mislead,” Gonzalez Rogers said, since OpenAI has alleged the sham bid “ultimately did” harm its business.

In April, OpenAI told the court that the AI company risks “future irreparable harm” if Musk’s alleged campaign continues. Fast-forward to now, and Musk’s legal threat to OpenAI’s partnership with Apple seems to be the next possible front Musk may be exploring to allegedly harass Altman and intimidate OpenAI.

“With every month that has passed, Musk has intensified and expanded the fronts of his campaign against OpenAI,” OpenAI argued. Musk “has proven himself willing to take ever more dramatic steps to seek a competitive advantage for xAI and to harm Altman, whom, in the words of the President of the United States, Musk ‘hates.'”

Tensions escalate as Musk brands Altman a “liar”

On Monday evening, Musk threatened to sue Apple for supposedly favoring ChatGPT in App Store rankings, which he claimed was “an unequivocal antitrust violation.”

Seemingly defending Apple later that night, Altman called Musk’s claim “remarkable,” claiming he’s heard allegations that Musk manipulates “X to benefit himself and his own companies and harm his competitors and people he doesn’t like.”

At 4 am on Tuesday, Musk appeared to lose his cool, firing back a post that sought to exonerate him of any claims that he tweaks his social platform to favor his own posts.

“You got 3M views on your bullshit post, you liar, far more than I’ve received on many of mine, despite me having 50 times your follower count!” Musk responded.

Altman apparently woke up ready to keep the fight going, suggesting that his post got more views as a fluke. He mocked X as running into a “skill issue” or “bots” messing with Musk’s alleged agenda to boost his posts above everyone else. Then, in what may be the most explosive response to Musk yet, Altman dared Musk to double down on his defense, asking, “Will you sign an affidavit that you have never directed changes to the X algorithm in a way that has hurt your competitors or helped your own companies? I will apologize if so.”

Court filings from each man’s legal team show how fast their friendship collapsed. But even as Musk’s alleged harassment campaign started taking shape, their social media interactions suggest that, underlying the legal battles and AI ego wars, the tech billionaires harbor a profound respect for—and perhaps jealousy of—each other’s accomplishments.

A brief history of Musk and Altman’s feud

Musk and Altman’s friendship started over dinner in July 2015. That’s when Musk agreed to help launch “an AGI project that could become and stay competitive with DeepMind, an AI company under the umbrella of Google,” OpenAI’s filing said. At that time, Musk feared that a private company like Google would never be motivated to build AI to serve the public good.

The first clash between Musk and Altman happened six months later. Altman wanted OpenAI to be formed as a nonprofit, but Musk thought that was not “optimal,” OpenAI’s filing said. Ultimately, Musk was overruled, and he joined the nonprofit as a “member” while also becoming co-chair of OpenAI’s board.

But perhaps the first major disagreement, as Musk tells it, came in 2016, when Altman struck a deal for Microsoft to sell compute to OpenAI at a “steep discount”—”so long as the non-profit agreed to publicly promote Microsoft’s products.” Musk rejected the “marketing ploy,” telling Altman that “this actually made me feel nauseous.”

Next, OpenAI claimed that Musk had a “different idea” in 2017 when OpenAI “began considering an organizational change that would allow supporters not just to donate, but to invest.” Musk wanted “sole control of the new for-profit,” OpenAI alleged, and he wanted to be CEO. The other founders, including Altman, “refused to accept” an “AGI dictatorship” that was “dominated by Musk.”

“Musk was incensed,” OpenAI said. He threatened to leave OpenAI over the disagreement, saying, “or I’m just being a fool who is essentially providing free funding for you to create a startup.”

But Musk floated one more idea between 2017 and 2018 before severing ties—offering to sell OpenAI to Tesla so that OpenAI could use Tesla as a “cash cow.” But Altman and the other founders still weren’t comfortable with Musk controlling OpenAI, rejecting the idea and prompting Musk’s exit.

In his filing, Musk tells the story a little differently, however. He claimed that he only “briefly toyed with the idea of using Tesla as OpenAI’s ‘cash cow'” after Altman and others pressured him to agree to a for-profit restructuring. According to Musk, among the last straws was a series of “get-rich-quick schemes” that Altman proposed to raise funding, including pushing a strategy where OpenAI would launch a cryptocurrency that Musk worried threatened the AI company’s credibility.

When Musk left OpenAI, it was “noisy but relatively amicable,” OpenAI claimed. But Musk continued to express discomfort from afar, still donating to OpenAI as Altman grabbed the CEO title in 2019 and created a capped-profit entity that Musk seemed to view as shady.

“Musk asked Altman to make clear to others that he had ‘no financial interest in the for-profit arm of OpenAI,'” OpenAI noted, and Musk confirmed he issued the demand “with evident displeasure.”

Although they often disagreed, Altman and Musk continued to publicly play nice on Twitter (the platform now known as X), casually chatting for years about things like movies, space, and science, including repeatedly joking about Musk’s posts about using drugs like Ambien.

By 2019, it seemed like none of these disagreements had seriously disrupted the friendship. For example, at that time, Altman defended Musk against people rooting against Tesla’s success, writing that “betting against Elon is historically a mistake” and seemingly hyping Tesla by noting that “the best product usually wins.”

The niceties continued into 2021, when Musk publicly praised “nice work by OpenAI” integrating its coding model into GitHub’s AI tool. “It is hard to do useful things,” Musk said, drawing a salute emoji from Altman.

This was seemingly the end of Musk playing nice with OpenAI, though. Soon after ChatGPT’s release in November 2022, Musk allegedly began his attacks, seemingly willing to change his tactics on a whim.

First, he allegedly deemed OpenAI “irrelevant,” predicting it would “obviously” fail. Then, he started sounding alarms, joining a push for a six-month pause on generative AI development. Musk specifically claimed that any model “more advanced than OpenAI’s just-released GPT-4” posed “profound risks to society and humanity,” OpenAI alleged, seemingly angling to pause OpenAI’s development in particular.

However, in the meantime, Musk started “quietly building a competitor” in March 2023, founding xAI without announcing those efforts, OpenAI alleged. Allegedly preparing to hobble OpenAI’s business after failing with the moratorium push, Musk had his personal lawyer contact OpenAI and demand “access to OpenAI’s confidential and commercially sensitive internal documents.”

Musk claimed the request was to “ensure OpenAI was not being taken advantage of or corrupted by Microsoft,” but two weeks later, he appeared on national TV, insinuating that OpenAI’s partnership with Microsoft was “improper,” OpenAI alleged.

Eventually, Musk announced xAI in July 2023, and that supposedly motivated Musk to deepen his harassment campaign, “this time using the courts and a parallel, carefully coordinated media campaign,” OpenAI said, as well as his own social media platform.

Musk “supercharges” X attacks

As OpenAI’s success mounted, the company alleged that Musk began specifically escalating his social media attacks on X, including broadcasting to his 224 million followers that “OpenAI is a house of cards” after filing his 2024 lawsuit.

Claiming he felt conned, Musk also pressured regulators to probe OpenAI, encouraging attorneys general of California and Delaware to “force” OpenAI, “without legal basis, to auction off its assets for the benefit of Musk and his associates,” OpenAI said.

By 2024, Musk had “supercharged” his X attacks, unleashing a “barrage of invective against the enterprise and its leadership, variously describing OpenAI as a ‘digital Frankenstein’s monster,’ ‘a lie,’ ‘evil,’ and ‘a total scam,'” OpenAI alleged.

These attacks allegedly culminated in Musk’s seemingly fake OpenAI takeover attempt in 2025, which OpenAI claimed a Musk ally, Ron Baron, admitted on CNBC was “pitched to him” as not an attempt to actually buy OpenAI’s assets, “but instead to obtain ‘discovery’ and get ‘behind the wall’ at OpenAI.”

All of this makes it harder for OpenAI to achieve the mission that Musk is supposedly suing to defend, OpenAI claimed. They told the court that “OpenAI has borne costs, and been harmed, by Musk’s abusive tactics and unrelenting efforts to mislead the public for his own benefit and to OpenAI’s detriment and the detriment of its mission.”

But Musk argues that it’s Altman who always wanted sole control over OpenAI, accusing his former partner of rampant self-dealing and “locking down the non-profit’s technology for personal gain” as soon as “OpenAI reached the threshold of commercially viable AI.” He further claimed OpenAI blocked xAI funding by reportedly asking investors to avoid backing rival startups like Anthropic or xAI.

Musk alleged:

Altman alone stands to make billions from the non-profit Musk co-founded and invested considerable money, time, recruiting efforts, and goodwill in furtherance of its stated mission. Altman’s scheme has now become clear: lure Musk with phony philanthropy; exploit his money, stature, and contacts to secure world-class AI scientists to develop leading technology; then feed the non-profit’s lucrative assets into an opaque profit engine and proceed to cash in as OpenAI and Microsoft monopolize the generative AI market.

For Altman, this week’s flare-up, where he finally took a hard jab back at Musk on X, may be a sign that Altman is done letting Musk control the narrative on X after years of somewhat tepidly pushing back on Musk’s more aggressive posts.

In 2022, for example, Musk warned after ChatGPT’s release that the chatbot was “scary good,” adding that “we are not far from dangerously strong AI.” Altman responded, cautiously agreeing that OpenAI was “dangerously” close to “strong AI in the sense of an AI that poses e.g. a huge cybersecurity risk,” but said that “real” artificial general intelligence still seemed at least a decade off.

And Altman gave no response when Musk used Grok’s jokey programming to mock GPT-4 as “GPT-Snore” in 2024.

However, Altman seemingly got his back up after Musk mocked OpenAI’s $500 billion Stargate Project, which launched with the US government in January of this year. On X, Musk claimed that OpenAI doesn’t “actually have the money” for the project, which Altman said was “wrong,” while mockingly inviting Musk to visit the worksite.

“This is great for the country,” Altman said, retorting, “I realize what is great for the country isn’t always what’s optimal for your companies, but in your new role [at the Department of Government Efficiency], I hope you’ll mostly put [America] first.”

It remains to be seen whether Altman wants to keep trading jabs with Musk, who is generally a huge fan of trolling on X. But Altman seems more emboldened this week than he was back in January, before Musk’s breakup with Donald Trump. Back then, even when he was willing to push back on Musk’s Stargate criticism by insulting Musk’s politics, he still took the time to let Musk know that he cared.

“I genuinely respect your accomplishments and think you are the most inspiring entrepreneur of our time,” Altman told Musk in January.


Musk threatens to sue Apple so Grok can get top App Store ranking

After spending last week hyping Grok’s spicy new features, Elon Musk kicked off this week by threatening to sue Apple for supposedly gaming the App Store rankings to favor ChatGPT over Grok.

“Apple is behaving in a manner that makes it impossible for any AI company besides OpenAI to reach #1 in the App Store, which is an unequivocal antitrust violation,” Musk wrote on X, without providing any evidence. “xAI will take immediate legal action.”

In another post, Musk tagged Apple, asking, “Why do you refuse to put either X or Grok in your ‘Must Have’ section when X is the #1 news app in the world and Grok is #5 among all apps?”

“Are you playing politics?” Musk asked. “What gives? Inquiring minds want to know.”

Apple did not respond to the post and has not responded to Ars’ request to comment.

At the heart of Musk’s complaints is an OpenAI partnership that Apple announced last year, integrating ChatGPT into versions of its iPhone, iPad, and Mac operating systems.

Musk has alleged that this partnership incentivized Apple to boost ChatGPT rankings. OpenAI’s popular chatbot “currently holds the top spot in the App Store’s ‘Top Free Apps’ section for iPhones in the US,” Reuters noted, “while xAI’s Grok ranks fifth and Google’s Gemini chatbot sits at 57th.” Sensor Tower data shows ChatGPT similarly tops Google Play Store rankings.

While Musk seems insistent that ChatGPT is artificially locked in the lead, fact-checkers on X added a community note to his post. They confirmed that at least one other AI tool has somewhat recently unseated ChatGPT in the US rankings. Back in January, DeepSeek topped App Store charts and held the lead for days, ABC News reported.

OpenAI did not immediately respond to Ars’ request to comment on Musk’s allegations, but an OpenAI developer, Steven Heidel, did add a quip in response to one of Musk’s posts, writing, “Don’t forget to also blame Google for OpenAI being #1 on Android, and blame SimilarWeb for putting ChatGPT above X on the most-visited websites list, and blame….”

Reddit blocks Internet Archive to end sneaky AI scraping

“Until they’re able to defend their site and comply with platform policies (e.g., respecting user privacy, re: deleting removed content) we’re limiting some of their access to Reddit data to protect redditors,” Rathschmidt said.

A review of social media comments suggests that in the past, some Redditors have used the Wayback Machine to research deleted comments or threads. Those commenters noted that myriad other tools exist for surfacing deleted posts or researching a user’s activity, with some suggesting that the Wayback Machine was maybe not the easiest platform to navigate for that purpose.
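
The lookup those commenters describe can also be done programmatically through the Internet Archive’s documented availability endpoint. A minimal sketch follows; the Reddit URL is a hypothetical example.

import requests

def closest_snapshot(url: str, timestamp: str = "") -> str | None:
    """Return the Wayback Machine snapshot closest to `timestamp`, if one exists."""
    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": url, "timestamp": timestamp},  # timestamp format: YYYYMMDD
        timeout=10,
    )
    resp.raise_for_status()
    snap = resp.json().get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap and snap.get("available") else None

# Hypothetical example: look for a mid-2023 capture of a Reddit thread.
print(closest_snapshot("https://www.reddit.com/r/example/comments/abc123/", "20230601"))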

Redditors have also turned to resources like IA during times when Reddit’s platform changes trigger content removals. Most recently in 2023, when changes to Reddit’s public API threatened to kill beloved subreddits, archives stepped in to preserve content before it was lost.

IA has not signaled whether it’s looking into fixes to get Reddit’s restrictions lifted and did not respond to Ars’ request to comment on how this change might impact the archive’s utility as an open web resource, given Reddit’s popularity.

The director of the Wayback Machine, Mark Graham, told Ars that IA has “a longstanding relationship with Reddit” and continues to have “ongoing discussions about this matter.”

It seems likely that Reddit is financially motivated to restrict AI firms from taking advantage of Wayback Machine archives, perhaps hoping to spur more lucrative licensing deals like Reddit struck with OpenAI and Google. The terms of the OpenAI deal were kept quiet, but the Google deal was reportedly worth $60 million. Over the next three years, Reddit expects to make more than $200 million off such licensing deals.

Disclosure: Advance Publications, which owns Ars Technica parent Condé Nast, is the largest shareholder in Reddit.

AI industry horrified to face largest copyright class action ever certified

According to the groups, allowing copyright class actions in AI training cases will result in a future where copyright questions remain unresolved and the risk of “emboldened” claimants forcing enormous settlements will chill investments in AI.

“Such potential liability in this case exerts incredibly coercive settlement pressure for Anthropic,” industry groups argued, concluding that “as generative AI begins to shape the trajectory of the global economy, the technology industry cannot withstand such devastating litigation. The United States currently may be the global leader in AI development, but that could change if litigation stymies investment by imposing excessive damages on AI companies.”

Some authors won’t benefit from class actions

Industry groups joined Anthropic in arguing that, generally, copyright suits are considered a bad fit for class actions because each individual author must prove ownership of their works. And the groups weren’t alone.

Also backing Anthropic’s appeal, advocates representing authors—including Authors Alliance, the Electronic Frontier Foundation, American Library Association, Association of Research Libraries, and Public Knowledge—pointed out that the Google Books case showed that proving ownership is anything but straightforward.

In the Anthropic case, advocates for authors criticized Alsup for basically judging all 7 million books in the lawsuit by their covers. The judge allegedly made “almost no meaningful inquiry into who the actual members are likely to be,” as well as “no analysis of what types of books are included in the class, who authored them, what kinds of licenses are likely to apply to those works, what the rightsholders’ interests might be, or whether they are likely to support the class representatives’ positions.”

Ignoring “decades of research, multiple bills in Congress, and numerous studies from the US Copyright Office attempting to address the challenges of determining rights across a vast number of books,” the district court seemed to expect that authors and publishers would easily be able to “work out the best way to recover” damages.
