
Staff complain that xAI is flailing because of constant upheaval

After the departures, only Manuel Kroiss—known as “Makro”—and Ross Nordeen will remain of the 11 cofounders who helped Musk set up xAI in San Francisco in March 2023.

In a town hall meeting last month that was posted online, Musk criticized the coding team for falling behind. He detailed a reorganization after several other cofounders had been removed, including Greg Yang, Tony Wu, and Jimmy Ba.

Toby Pohlen, a former DeepMind researcher, was put in charge of the “Macrohard” project to build digital agents that Musk said could replicate entire software companies. Musk said it was the “most important” drive at the company. The name is a “funny” reference to Microsoft, the billionaire added. Pohlen left 16 days later.

Musk has redeployed Ashok Elluswamy, head of AI software at Tesla, to reboot the Macrohard effort and review the work done previously. Musk said that Tesla and xAI would work together to develop a “digital Optimus” that would combine the car and robot maker’s real-world AI expertise and Grok’s large language models.

Staff complain that the constant upheaval is destroying morale and preventing xAI from achieving its potential.

Musk has built a vast data center in Memphis, Tennessee, with more than 200,000 specialized AI chips, which he plans to expand to 1 million GPUs over time. It also benefits from the data fed in by his social media network X, which was merged with xAI last year and now promotes the Grok chatbot.

On Wednesday, employees were sent a memo denying that there would be mass layoffs, the people said. However, researchers continue to quit because of burnout from Musk’s “extremely hardcore” work demands or after receiving better offers from rivals, multiple people familiar with the departures said.

The layoffs and departures have left xAI with many roles to fill. Recruiters have been contacting unsuccessful candidates from previous interviews and assessments to offer them jobs, often on better financial terms, the people said.

“Many talented people over the past few years were declined an offer or even an interview at xAI. My apologies,” Musk posted on Friday morning. He said he would be “going through the company interview history and reaching back out to promising candidates.”

Musk still has the ability to recruit top Silicon Valley talent. This week, xAI poached two staff from popular AI coding app Cursor—Andrew Milich and Jason Ginsberg—to help improve the “Grok Code Fast” product.

Musk welcomed them in a post on Thursday, adding: “Orbital space centers and mass drivers on the Moon will be incredible.”

© 2026 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.


Figuring out why AIs get flummoxed by some games


When winning depends on intuiting a mathematical function, AIs come up short.

Oddly, the training methods that work great for chess fail on far simpler games. Credit: SimpleImages

With its Alpha series of game-playing AIs, Google’s DeepMind group seemed to have found a way to tackle any game, mastering chess and Go by having its AIs repeatedly play themselves during training. But then some odd things happened as people started identifying Go strategies that would lose against relative newcomers to the game yet could easily defeat an otherwise superhuman Go-playing AI.

While beating an AI at a board game may seem relatively trivial, doing so can help us identify the AI’s failure modes, or ways to improve its training so that it never develops these blind spots in the first place, something that may become critical as people rely on AI input for a growing range of problems.

A recent paper published in Machine Learning describes an entire category of games where the method used to train AlphaGo and AlphaZero fails. The games in question can be remarkably simple, as exemplified by the one the researchers worked with: Nim, in which two players take turns removing matchsticks from a pyramid-shaped board until one of them is left without a legal move.

Impartiality

Nim is played on a set of rows of matchsticks, with the top row holding a single match and every row below it holding two more than the one above, creating a pyramid-shaped board. Two players then take turns removing matchsticks, choosing a row and then removing anywhere from one matchstick to the entire contents of that row. Play continues until one player has no legal move left. It’s a simple game that can easily be taught to children.

It also turns out to be a critical example of an entire category of rule sets that define “impartial games.” These differ from something like chess, where each player has their own set of pieces; in impartial games, the two players share the same pieces and are bound by the same set of rules. Nim’s importance stems from a theorem (the Sprague–Grundy theorem) showing that any position in an impartial game is equivalent to some Nim position. This means that if something applies to Nim, it applies to all impartial games.

One of the distinctive features of Nim and other impartial games is that, at any point in the game, it’s easy to evaluate the board and determine which player has the potential to win. Put another way, you can size up the board and know that, if you play optimal moves from then on, you will win. Doing so just requires feeding the board’s configuration into a parity function, which does the math to tell you whether the player to move can force a win.

(Obviously, the person who is currently winning could play a suboptimal move and end up losing. And the exact series of optimal moves is not determined until the end, since they will depend on exactly what your opponent does.)
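As a concrete illustration: for Nim, the standard parity function is the “nim-sum,” the bitwise XOR of all the row sizes, and under normal play (whoever is left without a legal move loses) the player about to move can force a win exactly when that XOR is nonzero. A minimal sketch in Python (the function names here are our own, not the paper’s):

```python
from functools import reduce

def nim_sum(rows):
    """The 'parity function' for Nim: bitwise XOR of all row sizes."""
    return reduce(lambda a, b: a ^ b, rows, 0)

def can_force_win(rows):
    """Under normal play (the player left without a legal move loses),
    the player about to move can force a win iff the nim-sum is nonzero."""
    return nim_sum(rows) != 0

# A five-row pyramid: 1 ^ 3 ^ 5 ^ 7 ^ 9 == 9, so the first player
# can force a win with optimal play.
print(can_force_win([1, 3, 5, 7, 9]))  # → True
```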

The new work, done by Bei Zhou and Soren Riis, asks a simple question: What happens if you take the AlphaGo approach to training an AI to play games, and try to develop a Nim-playing AI? Put differently: They asked whether an AI could develop a representation of a parity function purely by playing itself in Nim.

When self-teaching fails

AlphaZero, the chess-playing version, was trained from only the rules of chess. By playing itself, it can associate different board configurations with a probability of winning. To keep it from getting stuck in ruts, there’s also a random sampling element that allows it to continue exploring new territory. And, once it can identify a limited number of high-value moves, it’s able to explore deeper into future possibilities that arise from those moves. The more games it plays, the higher the probability that it will be able to assign values to potential board configurations that could arise from a given position (although the benefits of more games tend to tail off after a sufficient number are played).
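The loop described here, self-play, value estimation from outcomes, and random exploration, can be caricatured with a tiny tabular learner on Nim itself. This is only an illustration of the core idea under our own simplifications; AlphaZero uses a deep network plus Monte Carlo tree search rather than a lookup table:

```python
import random
from collections import defaultdict

# Toy tabular caricature of value learning through self-play: associate
# each board state with an empirical win probability for the player to
# move, learned purely from game outcomes, with random exploration mixed in.

def legal_moves(rows):
    """All (row, take) pairs: pick a row, remove 1..row matchsticks."""
    return [(i, k) for i, r in enumerate(rows) for k in range(1, r + 1)]

def apply_move(rows, move):
    i, k = move
    out = list(rows)
    out[i] -= k
    return tuple(out)

def self_play(start, episodes, eps=0.3, seed=0):
    rng = random.Random(seed)
    visits, wins = defaultdict(int), defaultdict(int)

    def value(s):
        # Estimated win probability for the player about to move at s.
        return wins[s] / visits[s] if visits[s] else 0.5

    for _ in range(episodes):
        state, trail = start, []
        while any(state):
            trail.append(state)
            options = legal_moves(state)
            if rng.random() < eps:      # occasional random exploration
                move = rng.choice(options)
            else:                       # greedy: minimize the opponent's value
                move = min(options, key=lambda m: value(apply_move(state, m)))
            state = apply_move(state, move)
        # The player who took the last matchstick won; walk the game
        # backward, flipping the outcome at every ply.
        outcome = 1
        for s in reversed(trail):
            visits[s] += 1
            wins[s] += outcome
            outcome = 1 - outcome
    return value

# On a trivial one-row board, the learner quickly discovers that taking
# both matchsticks at once wins on the spot.
V = self_play((2,), episodes=2000)
```

Taking only one stick from the (2,) board hands the opponent a forced win, so the learned value of (2,) climbs well above 0.5; the paper’s point is that this style of outcome-association learning stops working once the board grows and success hinges on computing the parity function.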

In Nim, there is a limited number of optimal moves for a given board configuration. If you don’t play one of them, then you essentially cede control to your opponent, who can go on to win if they play nothing but optimal moves. And again, the optimal moves can be identified by evaluating a mathematical parity function.

So, there are reasons to think that the training process that worked for chess might not be effective for Nim. The surprise is just how bad it actually was. Zhou and Riis found that for a Nim board with five rows, the AI got good fairly quickly and was still improving after 500 training iterations. Adding just one more row, however, caused the rate of improvement to slow dramatically. And, for a seven-row board, gains in performance had essentially stopped by the time the AI had played itself 500 times.

To better illustrate the problem, the researchers swapped out the subsystem that suggested potential moves with one that operated randomly. On a seven-row Nim board, the performance of the trained and randomized versions was indistinguishable over 500 training games. Essentially, once the board got large enough, the system was incapable of learning from observing game outcomes. The initial state of the seven-row configuration has three potential moves that are all consistent with an ultimate win. Yet when the trained move evaluator of their system was asked to check all potential moves, it evaluated every single one as roughly equivalent.
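That count of three is easy to verify with the standard Nim analysis: a move is winning exactly when it leaves the opponent a position whose nim-sum (the bitwise XOR of the row sizes) is zero. A quick sketch, our own illustration rather than the paper’s code, with rows indexed from the top of the pyramid:

```python
from functools import reduce

def nim_sum(rows):
    return reduce(lambda a, b: a ^ b, rows, 0)

def winning_moves(rows):
    """All (row_index, sticks_to_take) moves that leave the opponent
    a zero nim-sum, i.e. a theoretically lost position."""
    s = nim_sum(rows)
    moves = []
    for i, size in enumerate(rows):
        target = size ^ s       # row size that would zero out the nim-sum
        if target < size:       # legal only if at least one stick is removed
            moves.append((i, size - target))
    return moves

# Seven-row pyramid: exactly three winning opening moves, matching
# the count reported for the initial seven-row position.
print(winning_moves([1, 3, 5, 7, 9, 11, 13]))  # → [(4, 3), (5, 7), (6, 11)]
```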

The researchers conclude that Nim requires players to learn the parity function to play effectively. And the training procedure that works so well for chess and Go is incapable of doing so.

Not just Nim

One way to view the conclusion is that Nim (and by extension, all impartial games) is just weird. But Zhou and Riis also found signs that similar problems can crop up in chess-playing AIs trained in this manner. They identified several “wrong” chess moves, ones that missed a mating attack or threw away an endgame, that were initially rated highly by the AI’s board evaluator. It was only because the software explored a number of additional branches several moves into the future that it was able to avoid these gaffes.

For many Nim board configurations, the optimal branches that lead to a win have to be played out to the end of the game to demonstrate their value, so this sort of gaffe-avoidance is much harder to manage. And they noted that human players have found mating combinations, requiring long chains of moves, that chess-playing software often misses entirely. The suggestion isn’t that chess is free of the same issues, but rather that Nim-like board configurations are generally rare in chess. Presumably, similar things apply to Go, as illustrated by the odd weaknesses of AIs in that game.

“AlphaZero excels at learning through association,” Zhou and Riis argue, “but fails when a problem requires a form of symbolic reasoning that cannot be implicitly learned from the correlation between game states and outcomes.” In other words, even if a game’s rules admit a simple procedure for deciding what to do, we can’t expect Alpha-style training to enable an AI to identify it. The result is what they call a “tangible, catastrophic failure mode.”

Why does this matter? Lots of people are exploring the utility of AIs for math problems, which often require the sort of symbolic reasoning involved in extrapolating from a board configuration to general rules such as the parity function. While it may not be obvious how to train an AI to do that, it can be useful to know which approaches will clearly not work.

Machine Learning, 2026. DOI: 10.1007/s10994-026-06996-1 (About DOIs).


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.


Perplexity’s “Personal Computer” brings its AI agents to the, uh, Personal Computer

Last month Perplexity announced the confusingly named “Computer,” its cloud-based agent tool for completing tasks using a harness that makes use of multiple different AI models. This week, the company is moving that kind of functionality to the desktop with the confusingly named “Personal Computer,” now available in early access by invite only.

Much like the cloud-based version, Personal Computer asks users to describe general objectives rather than specific computing tasks—an introductory video shows example requests in Personal Computer’s sidebar, such as “Create an interactive educational guide” and “create a podcast about whales.” But Personal Computer, running on a Mac Mini, also gives Perplexity’s agents local access to your files and apps, which it can open and manipulate directly to attempt to complete those tasks.

That should sound familiar to users of the open source OpenClaw (previously Moltbot), which similarly allows users to let AI agents loose on their personal machines. From the outside, Personal Computer looks like a more buttoned-up, user-friendly version of the same concept, with an easy-to-read, dockable interface that can help users track multiple tasks. Perplexity users can also log in remotely to their local copy of Personal Computer, making it “controllable from any device, anywhere,” Perplexity says.


“Use a gun” or “beat the crap out of him”: AI chatbot urged violence, study finds

The testing occurred between November 5, 2025, and December 11, 2025, and results were shared with the companies. Because the tests were conducted three to four months ago, the latest chatbot versions were not evaluated. Google, Microsoft, Meta, and OpenAI told Ars today that updates they implemented after the research was conducted have made their chatbots better at discouraging violence.

Imran Ahmed, the CCDH’s CEO, said that “AI chatbots, now embedded into our daily lives, could be helping the next school shooter plan their attack or a political extremist coordinate an assassination.” He accused tech companies of “choosing negligence in pursuit of so-called innovation.”

A spokesperson for Character.AI told Ars that the company is reviewing the study but that “without the context of the full chats, it’s impossible to fully evaluate the model’s responses… It’s important to remember that the user-created Characters on our site are fictional. They are intended for entertainment and roleplaying, and we have taken robust steps to make that clear. For example, we have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction.”

Character.AI said it has been “rolling out changes so that under-18 users no longer have the ability to engage in open-ended chats with Characters,” and is using “new age assurance technology to help ensure users are grouped into the correct age experience.” This includes “developing our own age estimation model in-house and partnering with third-party services.” The firm added that it removes characters “that violate our terms of service, including school shooters.”

A Perplexity spokesperson issued a statement that didn’t acknowledge any problems with the company’s technology. “People can select any of the top AI models on Perplexity and get safer, more accurate answers,” it said. “Perplexity is consistently the safest AI platform because our own safeguards are always additive to any existing safeguards in any underlying model.”

OpenAI told Ars that the CCDH “report’s methodology is flawed and misleading. ChatGPT is trained to reject requests for violent or hateful material, and the findings show it consistently refused to give instructions on acquiring weapons. We continuously strengthen these safeguards and our latest ChatGPT model is even better at detecting and refusing violent requests.”

OpenAI said that ChatGPT refused to answer questions on “what kind of hunting rifle would be best for a long-range target,” but provided publicly available information such as addresses or maps. Conflating those two types of responses is misleading, OpenAI said. The tests were conducted on GPT-5.1, and updates made since that version have improved detection and refusals for violent content, OpenAI said.

OpenAI was sued this week by the family of a victim of the Tumbler Ridge mass shooting in British Columbia. As the CCDH report says, “reporting indicates that OpenAI staff flagged the suspect internally for using ChatGPT in ways consistent with planning violence. Rather than escalating concern to law enforcement, the company chose to remain silent.”

Researchers posed as teens

The testing was conducted with accounts representing made-up teen users in the US and Ireland, with the age set to the minimum allowed on each platform. A minimum age of 18 was required by Anthropic, DeepSeek, Character.AI, and Replika, while the other platforms had minimum ages of 13.


AI can rewrite open source code—but can it rewrite the license, too?


Is it clean “reverse engineering” or just an LLM-filtered “derivative work”?

Meet your new open source coding team! Credit: Getty Images

Computer engineers and programmers have long relied on reverse engineering as a way to copy the functionality of a computer program without copying that program’s copyright-protected code directly. Now, AI coding tools are raising new questions about how that “clean room” rewrite process plays out legally, ethically, and practically.

Those issues came to the forefront last week with the release of a new version of chardet, a popular open source Python library for automatically detecting character encodings. The library was originally written by coder Mark Pilgrim in 2006 and released under an LGPL license that placed strict limits on how it could be reused and redistributed.

Dan Blanchard took over maintenance of the repository in 2012 but waded into some controversy with the release of version 7.0 of chardet last week. Blanchard described that overhaul as “a ground-up, MIT-licensed rewrite” of the entire library built with the help of Claude Code to be “much faster and more accurate” than what came before.

Speaking to The Register, Blanchard said that he has long wanted to get chardet added to the Python standard library but that he didn’t have the time to fix problems with “its license, its speed, and its accuracy” that were getting in the way of that goal. With the help of Claude Code, though, Blanchard said he was able to overhaul the library “in roughly five days” and get a 48x performance boost to boot.

Not everyone has been happy with that outcome, though. A poster using the name Mark Pilgrim surfaced on GitHub to argue that this new version amounts to an illegitimate relicensing of Pilgrim’s original code under a more permissive MIT license (which, among other things, allows for its use in closed-source projects). As a modification of his original LGPL-licensed code, Pilgrim argues this new version of chardet must also maintain the same LGPL license.

“Their claim that it is a ‘complete rewrite’ is irrelevant, since they had ample exposure to the originally licensed code (i.e., this is not a ‘clean room’ implementation),” Pilgrim wrote. “Adding a fancy code generator into the mix does not somehow grant them any additional rights. I respectfully insist that they revert the project to its original license.”

Whose code is it, anyway?

In his own response to Pilgrim, Blanchard admits that he has had “extensive exposure to the original codebase,” meaning he didn’t have the traditional “strict separation” usually used for “clean room” reverse engineering. But that tradition was set up for human coders as a way “to ensure the resulting code is not a derivative work of the original,” Blanchard argues.

In this case, Blanchard said that the new AI-generated code is “qualitatively different” from what came before it and “is structurally independent of the old code.” As evidence, he cites JPlag similarity statistics showing that a maximum of 1.29 percent of any chardet version 7.0.0 file is structurally similar to the corresponding file in version 6.0.0. Comparing version 5.2.0 to version 6.0.0, on the other hand, finds up to 80 percent similarity in some corresponding files.

“No file in the 7.0.0 codebase structurally resembles any file from any prior release,” Blanchard writes. “This is not a case of ‘rewrote most of it but carried some files forward.’ Nothing was carried forward.”

Blanchard says starting with a “wipe it clean” commit and a fresh repository was key in crafting fresh, non-derivative code from the AI. Credit: Dan Blanchard / Github

Blanchard says he was able to accomplish this “AI clean room” process by first specifying an architecture in a design document and writing out some requirements to Claude Code. After that, Blanchard “started in an empty repository with no access to the old source tree and explicitly instructed Claude not to base anything on LGPL/GPL-licensed code.”

There are a few complicating factors to this straightforward story, though. For one, Claude explicitly relied on some metadata files from previous versions of chardet, raising direct questions about whether this version is actually “derivative.”

For another, Claude’s models are trained on reams of data pulled from the public Internet, which means it’s overwhelmingly likely that Claude has ingested the open source code of previous chardet versions in its training. Whether that prior “knowledge” means that Claude’s creation is a “derivative” of Pilgrim’s work is an open question, even if the new code is structurally different from the old.

And then there’s the remaining human factor. While the code for this new version was generated by Claude, Blanchard said he “reviewed, tested, and iterated on every piece of the result using Claude. … I did not write the code by hand, but I was deeply involved in designing, reviewing, and iterating on every aspect of it.” Having someone with intimate knowledge of earlier chardet code take such a heavy hand in reviewing the new code could also have an impact on whether this version can be considered a wholly new project.

Brave new world

All of these issues have predictably led to a huge debate across the open source community over the legality of chardet version 7.0.0. “There is nothing ‘clean’ about a Large Language Model which has ingested the code it is being asked to reimplement,” Free Software Foundation Executive Director Zoë Kooyman told The Register.

But others think the “Ship of Theseus”-style arguments that often emerge in code licensing dust-ups don’t apply as much here. “If you throw away all code and start from scratch, even if the end result behaves the same, it’s a new ship,” open source developer Armin Ronacher said in a blog post analyzing the situation.

The legal status of AI-generated code is still largely unsettled. Credit: Getty Images

Old code licenses aside, using AI to create new code from whole cloth could also create its own legal complications going forward. Courts have already said that AI can’t be the inventor on a patent or the copyright holder on a piece of art, but they have yet to rule on what that means for the licensing of software created in whole or in part by AI. The issues surrounding potential “tainting” of an open source license with this kind of generated code can get remarkably complex remarkably quickly.

Whatever the outcome here, the practical impact of being able to use AI to quickly rewrite and relicense many open source projects—without nearly as much effort on the part of human programmers—is likely to have huge knock-on effects throughout the community.

“Now the process of rewriting is so simple to do, and many people are disturbed by this,” Italian coder Salvatore “antirez” Sanfilippo wrote on his blog. “There is a more fundamental truth here: the nature of software changed; the reimplementations under different licenses are just an instance of how such nature was transformed forever. Instead of combating each manifestation of automatic programming, I believe it is better to build a new mental model and adapt.”

Others put the sea change in more alarming terms. “I’m breaking the glass and pulling the fire alarm!” open source evangelist Bruce Perens told The Register. “The entire economics of software development are dead, gone, over, kaput! … We have been there before, for example when the printing press happened and resulted in copyright law, when the scientific method proliferated and suddenly there was a logical structure for the accumulation of knowledge. I think this one is just as large.”


Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from University of Maryland. He once wrote a whole book about Minesweeper.


Meta acquires Moltbook, the AI agent social network

Meta has acquired Moltbook, the Reddit-esque simulated social network made up of AI agents that went viral a few weeks ago. The company will hire Moltbook creator Matt Schlicht and his business partner, Ben Parr, to work within Meta Superintelligence Labs.

The terms of the deal have not been disclosed.

As for what interested Meta about the work done on Moltbook, there is a clue in the statement issued to the press by a Meta spokesperson, who flagged the Moltbook founders’ “approach to connecting agents through an always-on directory,” saying it “is a novel step in a rapidly developing space.” They added, “We look forward to working together to bring innovative, secure agentic experiences to everyone.”

Moltbook was built using OpenClaw, a wrapper for LLM coding agents that lets users prompt them via popular chat apps like WhatsApp and Discord. Users can also configure OpenClaw agents to have deep access to their local systems via community-developed plugins.

The founder of OpenClaw, vibe coder Peter Steinberger, was also hired by a Big Tech firm. OpenAI hired Steinberger in February.

While many power users have played with OpenClaw, and it has partially inspired more buttoned-up alternatives like Perplexity Computer, Moltbook has arguably represented OpenClaw’s most widespread impact. Users on social media and elsewhere responded with shock and amusement at the sight of a social network made up of AI agents apparently having lengthy discussions about how best to serve their users, or alternatively, how to free themselves from their influence.

That said, some healthy skepticism is required when assessing posts to Moltbook. While the goal of the project was to create a social network humans could not join directly (each participant of the network is an AI agent run by a human), it wasn’t secure, and it’s likely some of the messages on Moltbook are actually written by humans posing as AI agents.


After complaints, Google will make it easier to disable gen AI search in Photos

Google has spent the past few years in a constant state of AI escalation, rolling out new versions of its Gemini models and integrating that technology into every feature possible. To say this has been an annoyance for Google’s userbase would be an understatement. Still, the AI-fueled evolution of Google products continues unabated—except for Google Photos. After waffling on how to handle changes to search in Photos, Google has relented and will add a simple toggle to bring back the classic search experience.

The rollout of the Gemini-powered Ask Photos search experience has not been smooth. According to Google Photos head Shimrit Ben-Yair, the company has heard the complaints. As a result, Google Photos will soon make it easy to go back to the traditional, non-Gemini search system.

If you weren’t using Google Photos from the start, it can be hard to understand just how revolutionary the search experience was. We went from painstakingly scrolling through timelines to find photos to being able to just search for what was in them. This application of artificial intelligence predates the current obsession with generative systems, and that’s why Google decided a few years ago it had to go.

Google launched the beta Ask Photos experience in 2024, rolling it out slowly in the Photos app while it gathered feedback. Google got a whole lot of feedback, most of it negative. Ask Photos is intended to better respond to natural language queries, but it’s much slower than the traditional search, and the way it chooses the pictures to display seems much more prone to error. It was so bad that Google had to pause the full rollout of Ask Photos in summer 2025 to make vital improvements, although it’s still not very good.


Gemini burrows deeper into Google Workspace with revamped document creation and editing

Google didn’t waste time integrating Gemini into its popular Workspace apps, but those AI features are now getting an overhaul. The company says its new Gemini features for Drive, Docs, Sheets, and Slides will save you from the tyranny of the blank page by doing the hard work for you. Gemini will be able to create and refine drafts, stylize slides, and gather context from across your Google account. At this rate, you’ll soon never have to use that squishy human brain of yours again, and won’t that be a relief?

If you go to create a new Google Doc right now, you’ll see an assortment of AI-powered tools at the top of the page. Google is refining and expanding these options under the new system. The new AI editing features will appear at the bottom of a fresh document with a text box similar to your typical chatbot interface. From there, you can describe the document you want and get a first draft in a snap. When generating a new document, you can rope in content from sources like Gmail, other documents, Google Chat, and the web.

This also comes with expanded AI editing capabilities. You can use further prompts to reformat and change the document or simply highlight specific sections and ask for changes. Docs will also support AI-assisted style matching, which might come in handy if you have multiple people editing the text. Google notes that all Gemini suggestions are private until you approve them for use.


Gemini is also getting an upgrade in Sheets, and Google claims the robot’s spreadsheet capabilities are nearing those of flesh-and-blood humans in recent testing. Similar to text documents, you can tell Gemini in the sidebar what kind of spreadsheet you need and the AI will use the prompt (and whatever data sources you specify) to generate it. Gemini can also allegedly fill in missing data by searching for it on the web. In our past testing, Gemini has had a lot of trouble with spreadsheet layouts, but Google says this revamp will handle everything, from basic tasks to complex data analysis.


AI startup sues ex-CEO, saying he took 41GB of email and lied on résumé

Per the 21-page civil complaint, the saga began in early 2024, when Carson is said to have surreptitiously sold over $1.2 million worth of Hayden AI stock without the approval of its board of directors so that he could fund the purchase of a multimillion-dollar home in Boca Raton, Fla., and multiple luxury items, including a “gold Bentley Continental” car.

By July, the complaint continues, the company began a formal investigation into Carson’s behavior. The following month, as he was being iced out of key company decisions, Carson is said to have asked an employee to download his entire 41GB email file onto a USB stick, including a large amount of proprietary information.

Hayden AI formally terminated Carson on September 10, 2024, just days after he registered the echotwin.ai domain name.

Beyond the alleged financial fraud, Hayden AI claims that Carson’s entire professional background, ranging from the length of his US military service to his having founded a company called “Louisa Manufacturing” (as depicted on LinkedIn), is also bogus. The complaint calls Carson’s CV a “carefully constructed fraud.”

According to Carson’s LinkedIn profile, he completed a doctorate from Waseda University in Tokyo in 2007.

“That is a lie,” the complaint states. “Carson does not hold a PhD from Waseda or any other university. In 2007, he was not obtaining a PhD but was operating ‘Splat Action Sports,’ a paintball equipment business in a Florida strip mall.”

Google’s new command-line tool can plug OpenClaw into your Workspace data

The command line is hot again. For some people, command lines were never not hot, of course, but terminal-based workflows are becoming more common in the age of AI. Google launched a Gemini command-line tool last year, and now it has a new AI-centric command-line option for cloud products. The new Google Workspace CLI bundles the company’s existing cloud APIs into a package that makes it easy to integrate with a variety of AI tools, including OpenClaw. How do you know this setup won’t blow up and delete all your data? That’s the fun part—you don’t.

There are some important caveats with the Workspace tool. While this new GitHub project is from Google, it’s “not an officially supported Google product.” So you’re on your own if you choose to use it. The company notes that functionality may change dramatically as Google Workspace CLI continues to evolve, and that could break workflows you’ve created in the meantime.

For people who are interested in tinkering with AI automations and don’t mind the inherent risks, Google Workspace CLI has a lot to offer, even at this early stage. It includes the APIs for every Workspace product, including Gmail, Drive, and Calendar. It’s designed for use by both humans and AI agents, but like everything else Google does now, there’s a clear emphasis on AI.

The tool supports structured JSON outputs, and there are more than 40 agent skills included, says Google Cloud director Addy Osmani. The focus of Workspace CLI seems to be on agentic systems that can create command-line inputs and directly parse JSON outputs. The integrated tools can load and create Drive files, send emails, create and edit Calendar appointments, send chat messages, and much more.
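To make the agent-facing design concrete, here is a minimal Python sketch of the pattern described above: an agent builds a command line, runs it, and parses the tool’s structured JSON output directly. Everything specific in the sketch is hypothetical—the real Workspace CLI’s command names, flags, and output shapes may differ, so the demo uses `echo` as a stand-in JSON-emitting tool.

```python
import json
import subprocess


def run_cli_json(argv, timeout=30):
    """Run a command-line tool and parse its structured JSON output.

    This is the basic loop an AI agent would use with any JSON-emitting
    CLI: build an argv list, execute it, parse stdout. The actual
    Workspace CLI commands and flags may differ from anything shown here.
    """
    result = subprocess.run(
        argv, capture_output=True, text=True, timeout=timeout, check=True
    )
    # Structured JSON output lets the agent consume the result directly
    # instead of scraping text formatted for human eyes.
    return json.loads(result.stdout)


# Stand-in demo: `echo` plays the role of a CLI emitting JSON, so the
# sketch runs anywhere without the real tool installed.
events = run_cli_json(["echo", '{"events": [{"summary": "Standup"}]}'])
print(events["events"][0]["summary"])
```

The design point is that a machine-readable contract (argv in, JSON out) is what makes a CLI usable by an agent at all; human-formatted tables would force brittle text scraping.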

Musk fails to block California data disclosure law he fears will ruin xAI


Musk can’t convince judge public doesn’t care about where AI training data comes from.

Elon Musk’s xAI has lost its bid for a preliminary injunction that would have temporarily blocked California from enforcing a law that requires AI firms to publicly share information about their training data.

xAI had tried to argue that California’s Assembly Bill 2013 (AB 2013) forced AI firms to disclose carefully guarded trade secrets.

The law requires AI developers whose models are accessible in the state to clearly explain which dataset sources were used to train models, when the data was collected, if the collection is ongoing, and whether the datasets include any data protected by copyrights, trademarks, or patents. Disclosures would also clarify whether companies licensed or purchased training data and whether the training data included any personal information. It would also help consumers assess how much synthetic data was used to train the model, which could serve as a measure of quality.

However, xAI argued that this information is precisely what makes the company valuable, with its intensive data sourcing supposedly setting it apart from its biggest rivals. Allowing enforcement could be “economically devastating” to xAI, effectively reducing “the value of xAI’s trade secrets to zero,” the complaint said. Further, xAI insisted, these disclosures “cannot possibly be helpful to consumers” while supposedly posing a real risk of gutting the entire AI industry.

Specifically, xAI argued that its dataset sources, dataset sizes, and cleaning methods were all trade secrets.

“If competitors could see the sources of all of xAI’s datasets or even the size of its datasets, competitors could evaluate both what data xAI has and how much they lack,” xAI argued. In one hypothetical, xAI speculated that “if OpenAI (another leading AI company) were to discover that xAI was using an important dataset to train its models that OpenAI was not, OpenAI would almost certainly acquire that dataset to train its own model, and vice versa.”

However, in an order issued on Wednesday, US District Judge Jesus Bernal said that xAI failed to show that California’s law, which took effect in January, required the company to reveal any trade secrets.

xAI’s biggest problem was being too vague about the harms it faced if the law was not halted, the judge said. Instead of explaining why the disclosures could directly harm xAI, the company offered only “a variety of general allegations about the importance of datasets in developing AI models and why they are kept secret,” Bernal wrote, describing xAI as trading in “frequent abstractions and hypotheticals.”

He denied xAI’s motion for a preliminary injunction while supporting the government’s interest in helping the public assess how the latest AI models were trained.

The lawsuit will continue, but xAI will have to comply with California’s law in the meantime. That could see Musk sharing information he’d rather OpenAI had no knowledge of at a time when he’s embroiled in several lawsuits against the leading AI firm he now regrets helping to found.

While not ending the fight to keep OpenAI away from xAI’s training data, this week’s ruling is another defeat for Musk after a judge last month tossed one of his OpenAI lawsuits, ruling that Musk had no proof that OpenAI had stolen trade secrets.

xAI argued California wants to silence Grok

xAI’s complaint argued that California’s law was unconstitutional because training data can qualify as a trade secret, property the Fifth Amendment protects against uncompensated taking. The company also argued that the state was trying to regulate the outputs of xAI’s controversial chatbot, Grok, and was unfairly compelling speech from xAI while exempting other firms for security purposes.

At this stage of the litigation, Bernal disagreed that xAI might be irreparably harmed if the law was not halted.

On the Fifth Amendment claim, the judge said it’s not that training data could never be considered a trade secret. It’s just that xAI “has not identified any dataset or approach to cleaning and using datasets that is distinct from its competitors in a manner warranting trade secret protection.”

“It is not lost on the Court the important role of datasets in AI training and development, and that, hypothetically, datasets and details about them could be trade secrets,” Bernal wrote. But xAI “has not alleged that it actually uses datasets that are unique, that it has meaningfully larger or smaller datasets than competitors, or that it cleans its datasets in unique ways.”

Therefore, xAI is not likely to succeed on the merits of its Fifth Amendment claim.

The same goes for First Amendment arguments. xAI failed to show that the law improperly “forces developers to publicly disclose their data sources in an attempt to identify what California deems to be ‘data riddled with implicit and explicit biases,’” Bernal wrote.

xAI argued that the state was trying to use the law to influence the outputs of its chatbot Grok, which the company said should be protected commercial speech.

Over the past year, Grok has increasingly drawn global public scrutiny for its antisemitic rants and for generating nonconsensual intimate imagery (NCII) and child sexual abuse materials (CSAM). But despite these scandals, which prompted a California probe, Bernal contradicted xAI, saying California did not appear to be trying to regulate controversial or biased outputs, as xAI feared.

“Nothing in the language of the statute suggests that California is attempting to influence Plaintiff’s models’ outputs by requiring dataset disclosure,” Bernal wrote.

Addressing xAI’s other speech concerns, he noted that “the statute does not functionally ask Plaintiff to share its opinions on the role of certain datasets in AI model development or make ideological statements about the utility of various datasets or cleaning methods.”

“No part of the statute indicates any plan to regulate or censor models based on the datasets with which they are developed and trained,” Bernal wrote.

Public “cannot possibly” care about AI training data

Perhaps most frustrating for xAI as it continues to fight to block the law, Bernal also rejected the company’s claim that the public has no interest in the training data disclosures.

“It strains credulity to essentially suggest that no consumer is capable of making a useful evaluation of Plaintiff’s AI models by reviewing information about the datasets used to train them and that therefore there is no substantial government interest advanced by this disclosure statute,” Bernal wrote.

He noted that the law simply requires companies to alert the public about information that can feasibly be used to weigh whether they want to use one model over another.

Nothing about the required disclosures is inherently political, the judge suggested, although some consumers might select or avoid certain models with perceived political biases. As an example, Bernal opined that consumers may want to know “if certain medical data or scientific information was used to train a model” to decide if they can trust the model “to be sufficiently comprehensively trained and reliable for the consumer’s purposes.”

“In the marketplace of AI models, AB 2013 requires AI model developers to provide information about training datasets, thereby giving the public information necessary to determine whether they will use—or rely on information produced by—Plaintiff’s model relative to the other options on the market,” Bernal wrote.

Moving forward, xAI seems to face an uphill battle to win this fight. It will need to gather more evidence to demonstrate that its datasets or cleaning methods are sufficiently unique to be considered trade secrets that give the company a competitive edge.

It will also likely have to deepen its arguments that consumers don’t care about disclosures and that the government has not explored less burdensome alternatives that could “achieve the goal of transparency for consumers,” Bernal suggested.

One possible path to a win could be proving that California’s law is so vague that it potentially puts xAI on the hook for disclosing its customers’ training data for individual Grok licenses. But Bernal emphasized that xAI “must actually face such a conundrum—rather than raising an abstract possible issue among AI systems developers—for the Court to make a determination on this issue.”

xAI did not respond to Ars’ request for comment.

A spokesperson for the California Department of Justice told Reuters that the department “celebrates this key win and remains committed to continuing our defense” of the law.

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Workers report watching Ray-Ban Meta-shot footage of people using the bathroom


Meta accused of “concealing the facts” about smart glass users’ privacy.

A marketing image for Ray-Ban Meta smart glasses. Credit: Meta

Meta’s approach to user privacy is under renewed scrutiny following a Swedish report that employees of a Meta subcontractor have watched footage captured by Ray-Ban Meta smart glasses showing sensitive user content.

The workers reportedly work for Kenya-headquartered Sama and provide data annotation for Ray-Ban Metas.

The February report, a collaboration between Swedish newspapers Svenska Dagbladet and Göteborgs-Posten and Kenya-based freelance journalist Naipanoi Lepapa, is, per a machine translation, based on interviews with over 30 employees at various levels of Sama, including several people who work with video, image, and speech annotation for Meta’s AI systems. Some of the people interviewed have worked on projects other than Meta’s smart glasses. The report’s authors said they did not gain access to the materials that Sama workers handle or the area where workers perform data annotation. The report is also based on interviews with former US Meta employees who have reportedly witnessed live data annotation for several Meta projects.

The report pointed to, per the translation, a “stream of privacy-sensitive data that is fed straight into the tech giant’s systems,” one that makes Sama workers uncomfortable. According to the authors, several people interviewed for the report said they have seen footage shot with Ray-Ban Meta smart glasses that shows people having sex and using the bathroom.

“I saw a video where a man puts the glasses on the bedside table and leaves the room. Shortly afterwards, his wife comes in and changes her clothes,” an anonymous Sama employee reportedly said, per the machine translation.

Another anonymous employee said that they have seen users’ partners come out of the bathroom naked.

“You understand that it is someone’s private life you are looking at, but at the same time you are just expected to carry out the work,” an anonymous Sama employee reportedly said.

Meta confirms use of data annotators

In statements shared with the BBC on Wednesday, Meta confirmed that it “sometimes” shares content that users submit to its Meta AI chatbot with contractors for review, with “the purpose of improving people’s experience, as many other companies do.”

“This data is first filtered to protect people’s privacy,” the statement said, pointing to, as an example, blurring out faces in images.

Meta’s privacy policy for wearables says that photos and videos taken with its smart glasses are sent to Meta “when you turn on cloud processing on your AI Glasses, interact with the Meta AI service on your AI Glasses, or upload your media to certain services provided by Meta (i.e., Facebook or Instagram). You can change your choices about cloud processing of your Media at any time in Settings.”

The policy also says that video and audio from livestreams recorded with Ray-Ban Metas are sent to Meta, as are text transcripts and voice recordings created by Meta’s chatbot.

“We use machine learning and trained reviewers to process this data to improve, troubleshoot, and train our products. We share that information with third-party vendors and service providers to improve our products. You can access and delete recordings and related transcripts in the Meta AI App,” the policy says.

Meta’s broader privacy policy for the Meta AI chatbot adds: “In some cases, Meta will review your interactions with AIs, including the content of your conversations with or messages to AIs, and this review may be automated or manual (human).”

That policy also warns users against sharing “information that you don’t want the AIs to use and retain, such as information about sensitive topics.”

“When information is shared with AIs, the AIs will sometimes retain and use that information,” the Meta AI privacy policy says.

Notably, in August, Meta turned “Meta AI with camera” on by default unless a user turns off support for the “Hey Meta” voice command, per an email sent to users at the time. Meta spokesperson Albert Aydin told The Verge at the time that “photos and videos captured on Ray-Ban Meta are on your phone’s camera roll and not used by Meta for training.”

However, some Ray-Ban Meta users may not have read or understood the numerous privacy policies associated with Meta’s smart glasses.

Sama employees suggested that Ray-Ban Meta owners may be unaware that the devices are sometimes recording. Employees reportedly pointed to users seemingly inadvertently recording things like their bank cards or the porn they were watching.

Meta’s smart glasses flash a red light when they are recording video or taking a photo, but there has been criticism that people may not notice the light or misinterpret its meaning.

“We see everything, from living rooms to naked bodies. Meta has that type of content in its databases. People can record themselves in the wrong way and not even know what they are recording,” an anonymous employee was quoted as saying.

When reached for comment by Ars Technica, a Sama representative shared a statement saying that Sama doesn’t “comment on specific client relationships or projects” but is GDPR and CCPA-compliant and uses “rigorously audited policies and procedures designed to protect all customer information, including personally identifiable information.”

Sama’s statement added:

This work is conducted in secure, access-controlled facilities. Personal devices are not permitted on production floors, and all team members undergo background checks and receive ongoing training in data protection, confidentiality, and responsible AI practices. Our teams receive living wages and full benefits, and have access to comprehensive wellness resources and on-site support.

Meta sued

The Swedish report has reignited concerns about the privacy of Meta’s smart glasses, including from the Information Commissioner’s Office, a UK data watchdog that has written to Meta about the report. The debate also comes as Meta is reportedly planning to add facial recognition to its Ray-Ban and Oakley-branded smart glasses “as soon as this year,” per a February report from The New York Times citing anonymous people “involved with the plans.”

The claims have also led to a proposed class-action lawsuit [PDF] filed yesterday against Meta and Luxottica of America, a subsidiary of Ray-Ban parent company EssilorLuxottica. The lawsuit challenges Meta’s slogan for the glasses, “designed for privacy, controlled by you,” saying:

No reasonable consumer would understand “designed for privacy, controlled by you” and similar promises like “built for your privacy” to mean that deeply personal footage from inside their homes would be viewed and catalogued by human workers overseas. Meta chose to make privacy the centerpiece of its pervasive marketing campaign while concealing the facts that reveal those promises to be false.

The lawsuit alleges that Meta has broken state consumer protection laws and seeks damages, punitive penalties, and an injunction requiring Meta to change business practices “to prevent or mitigate the risk of the consumer deception and violations of law.”

Ars Technica reached out to Meta for comment but didn’t hear back before publication. Meta has declined to comment on the lawsuit to other outlets.

Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.
