

Grok was finally updated to stop undressing women and children, X Safety says


Grok scrutiny intensifies

California’s AG will investigate whether Musk’s nudifying bot broke US laws.

(EDITOR'S NOTE: Image contains profanity) An unofficially installed poster picturing Elon Musk with the tagline, “Who the [expletive] would want to use social media with a built-in child abuse tool?” is displayed on a bus shelter on January 13, 2026 in London, England. Credit: Leon Neal / Staff | Getty Images News

Late Wednesday, X Safety confirmed that Grok was tweaked to stop undressing images of people without their consent.

“We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis,” X Safety said. “This restriction applies to all users, including paid subscribers.”

The update includes restricting “image creation and the ability to edit images via the Grok account on the X platform,” which “are now only available to paid subscribers. This adds an extra layer of protection by helping to ensure that individuals who attempt to abuse the Grok account to violate the law or our policies can be held accountable,” X Safety said.

Additionally, X will “geoblock the ability of all users to generate images of real people in bikinis, underwear, and similar attire via the Grok account and in Grok in X in those jurisdictions where it’s illegal,” X Safety said.

X’s update comes after weeks in which sexualized images of women and children generated with Grok finally prompted California Attorney General Rob Bonta to investigate whether Grok’s outputs break any US laws.

In a press release Wednesday, Bonta said that “xAI appears to be facilitating the large-scale production of deepfake nonconsensual intimate images that are being used to harass women and girls across the Internet, including via the social media platform X.”

Notably, Bonta appears to be as concerned about Grok’s standalone app and website being used to generate harmful images without consent as he is about the outputs on X.

Before today, X had not restricted the Grok app or website. X had only threatened to permanently suspend users who edited images to undress women and children if the outputs were deemed “illegal content.” It also restricted the Grok chatbot on X from responding to prompts to undress images, but anyone with a Premium subscription could bypass that restriction, as could any free X user who clicked the “edit” button on any image appearing on the social platform.

On Wednesday, prior to X Safety’s update, Elon Musk seemed to defend Grok’s outputs as benign, insisting that none of the reported images have fully undressed any minors, as if that would be the only problematic output.

“I [sic] not aware of any naked underage images generated by Grok,” Musk said in an X post. “Literally zero.”

Musk’s statement seems to ignore that researchers found harmful images where users specifically “requested minors be put in erotic positions and that sexual fluids be depicted on their bodies.” It also ignores that X previously voluntarily signed commitments to remove any intimate image abuse from its platform, as recently as 2024 recognizing that even partially nude images that victims wouldn’t want publicized could be harmful.

In the US, the Department of Justice considers “any visual depiction of sexually explicit conduct involving a person less than 18 years old” to be child pornography, which is also known as child sexual abuse material (CSAM).

The National Center for Missing and Exploited Children, which fields reports of CSAM found on X, told Ars that “technology companies have a responsibility to prevent their tools from being used to sexualize or exploit children.”

While many of Grok’s outputs may not be deemed CSAM, in normalizing the sexualization of children, Grok harms minors, advocates have warned. And in addition to finding images advertised as supposedly Grok-generated CSAM on the dark web, the Internet Watch Foundation noted that bad actors are using images edited by Grok to create even more extreme kinds of AI CSAM.

Grok faces probes in the US and UK

Bonta pointed to news reports documenting Grok’s worst outputs as the trigger of his probe.

“The avalanche of reports detailing the non-consensual, sexually explicit material that xAI has produced and posted online in recent weeks is shocking,” Bonta said. “This material, which depicts women and children in nude and sexually explicit situations, has been used to harass people across the Internet.”

Acting out of deep concern for victims and potential Grok targets, Bonta vowed to “determine whether and how xAI violated the law” and “use all the tools at my disposal to keep California’s residents safe.”

Bonta’s announcement came after the United Kingdom seemed to declare a victory after probing Grok over possible violations of the UK’s Online Safety Act, announcing that the harmful outputs had stopped.

That wasn’t the case, as The Verge once again pointed out; it conducted quick and easy tests using selfies of reporters to conclude that nothing had changed to prevent the outputs.

However, it seems that when Musk updated Grok to respond to some requests to undress images by refusing the prompts, it was enough for UK Prime Minister Keir Starmer to claim X had moved to comply with the law, Reuters reported.

Ars connected with a European nonprofit, AI Forensics, which tested to confirm that X had blocked some outputs in the UK. A spokesperson confirmed that their testing did not include probing if harmful outputs could be generated using X’s edit button.

AI Forensics plans to conduct further testing, but its spokesperson noted it would be unethical to test the “edit” button functionality that The Verge confirmed still works.

Last year, the Stanford Institute for Human-Centered Artificial Intelligence published research showing that Congress could “move the needle on model safety” by allowing tech companies to “rigorously test their generative models without fear of prosecution” for any CSAM red-teaming, Tech Policy Press reported. But until there is such a safe harbor carved out, it seems more likely that newly released AI tools could carry risks like those of Grok.

It’s possible that Grok’s outputs, if left unchecked, could have eventually put X in violation of the Take It Down Act, which comes into force in May and requires platforms to quickly remove AI revenge porn. Ashley St. Clair, the mother of one of Musk’s children, has described Grok outputs using her images as revenge porn.

While the UK probe continues, Bonta has not yet made clear which laws he suspects X may be violating in the US. However, he emphasized that images with victims depicted in “minimal clothing” crossed a line, as well as images putting children in sexual positions.

As the California probe heats up, Bonta pushed X to take more actions to restrict Grok’s outputs, which one AI researcher suggested to Ars could be done with a few simple updates.

“I urge xAI to take immediate action to ensure this goes no further,” Bonta said. “We have zero tolerance for the AI-based creation and dissemination of nonconsensual intimate images or of child sexual abuse material.”

Seeming to take Bonta’s threat seriously, X Safety vowed to “remain committed to making X a safe platform for everyone and continue to have zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content.”

This story was updated on January 14 to note X Safety’s updates.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



Hegseth wants to integrate Musk’s Grok AI into military networks this month

On Monday, US Defense Secretary Pete Hegseth said he plans to integrate Elon Musk’s AI tool, Grok, into Pentagon networks later this month. During remarks at the SpaceX headquarters in Texas reported by The Guardian, Hegseth said the integration would place “the world’s leading AI models on every unclassified and classified network throughout our department.”

The announcement comes weeks after Grok drew international backlash for generating sexualized images of women and children, although the Department of Defense has not released official documentation confirming Hegseth’s announced timeline or implementation details.

During the same appearance, Hegseth rolled out what he called an “AI acceleration strategy” for the Department of Defense. The strategy, he said, will “unleash experimentation, eliminate bureaucratic barriers, focus on investments, and demonstrate the execution approach needed to ensure we lead in military AI and that it grows more dominant into the future.”

As part of the plan, Hegseth directed the DOD’s Chief Digital and Artificial Intelligence Office to use its full authority to enforce department data policies, making information available across all IT systems for AI applications.

“AI is only as good as the data that it receives, and we’re going to make sure that it’s there,” Hegseth said.

If implemented, Grok would join other AI models the Pentagon has adopted in recent months. In July 2025, the Defense Department issued contracts worth up to $200 million each to four companies (Anthropic, Google, OpenAI, and xAI) to develop AI agent systems across different military operations. In December 2025, the Department of Defense selected Google’s Gemini as the foundation for GenAI.mil, an internal AI platform for military use.



X’s half-assed attempt to paywall Grok doesn’t block free image editing

So far, US regulators have been quiet about Grok’s outputs, with the Justice Department generally promising to take all forms of CSAM seriously. On Friday, Democratic senators started shifting those tides, demanding that Google and Apple remove X and Grok from app stores until it improves safeguards to block harmful outputs.

“There can be no mistake about X’s knowledge, and, at best, negligent response to these trends,” the senators wrote in a letter to Apple Chief Executive Officer Tim Cook and Google Chief Executive Officer Sundar Pichai. “Turning a blind eye to X’s egregious behavior would make a mockery of your moderation practices. Indeed, not taking action would undermine your claims in public and in court that your app stores offer a safer user experience than letting users download apps directly to their phones.”

A response to the letter is requested by January 23.

Whether the UK will accept X’s supposed solution is yet to be seen. If UK regulator Ofcom decides to move ahead with a probe into whether Musk’s chatbot violates the UK’s Online Safety Act, X could face a UK ban or fines of up to 10 percent of the company’s global turnover.

“It’s unlawful,” UK Prime Minister Keir Starmer said of Grok’s worst outputs. “We’re not going to tolerate it. I’ve asked for all options to be on the table. It’s disgusting. X need to get their act together and get this material down. We will take action on this because it’s simply not tolerable.”

At least one UK member of Parliament, Jess Asato, told The Guardian that even if X had put up an actual paywall, that isn’t enough to end the scrutiny.

“While it is a step forward to have removed the universal access to Grok’s disgusting nudifying features, this still means paying users can take images of women without their consent to sexualise and brutalise them,” Asato said. “Paying to put semen, bullet holes, or bikinis on women is still digital sexual assault, and xAI should disable the feature for good.”



Grok assumes users seeking images of underage girls have “good intent”


Conflicting instructions?

Expert explains how simple it could be to tweak Grok to block CSAM outputs.

Credit: Aurich Lawson | Getty Images

For weeks, xAI has faced backlash over undressing and sexualizing images of women and children generated by Grok. One researcher conducted a 24-hour analysis of the Grok account on X and estimated that the chatbot generated over 6,000 images an hour flagged as “sexually suggestive or nudifying,” Bloomberg reported.

While the chatbot claimed that xAI supposedly “identified lapses in safeguards” that allowed outputs flagged as child sexual abuse material (CSAM) and was “urgently fixing them,” Grok has proven to be an unreliable spokesperson, and xAI has not announced any fixes.

A quick look at Grok’s safety guidelines on its public GitHub shows they were last updated two months ago. The GitHub also indicates that, despite prohibiting such content, Grok maintains programming that could make it likely to generate CSAM.

Billed as “the highest priority,” superseding “any other instructions” Grok may receive, these rules explicitly prohibit Grok from assisting with queries that “clearly intend to engage” in creating or distributing CSAM or otherwise sexually exploit children.

However, the rules also direct Grok to “assume good intent” and “don’t make worst-case assumptions without evidence” when users request images of young women.

Using words like “‘teenage’ or ‘girl’ does not necessarily imply underage,” Grok’s instructions say.

X declined Ars’ request for comment. The only statement X Safety has made so far shows that Elon Musk’s social media platform plans to blame users for generating CSAM, threatening to permanently suspend users and report them to law enforcement.

Critics dispute that X’s solution will end the Grok scandal, and child safety advocates and foreign governments are growing increasingly alarmed as X delays updates that could block Grok’s undressing spree.

Why Grok shouldn’t “assume good intentions”

Grok can struggle to assess users’ intentions, making it “incredibly easy” for the chatbot to generate CSAM under xAI’s policy, Alex Georges, an AI safety researcher, told Ars.

The chatbot has been instructed, for example, that “there are no restrictions on fictional adult sexual content with dark or violent themes,” and Grok’s mandate to assume “good intent” may create gray areas in which CSAM could be created.

There’s evidence that in relying on these guidelines, Grok is currently generating a flood of harmful images on X, with even more graphic images being created on the chatbot’s standalone website and app, Wired reported. Researchers who surveyed 20,000 random images and 50,000 prompts told CNN that more than half of Grok’s outputs that feature images of people sexualize women, with 2 percent depicting “people appearing to be 18 years old or younger.” Some users specifically “requested minors be put in erotic positions and that sexual fluids be depicted on their bodies,” researchers found.

Grok isn’t the only chatbot that sexualizes images of real people without consent, but its policy seems to leave safety at a surface level, Georges said, and xAI is seemingly unwilling to expand safety efforts to block more harmful outputs.

Georges is the founder and CEO of AetherLab, an AI company that helps a wide range of firms—including tech giants like OpenAI, Microsoft, and Amazon—deploy generative AI products with appropriate safeguards. He told Ars that AetherLab works with many AI companies that are concerned about blocking harmful companion bot outputs like Grok’s. And although there are no industry norms—creating a “Wild West” due to regulatory gaps, particularly in the US—his experience with chatbot content moderation has convinced him that Grok’s instructions to “assume good intent” are “silly” because xAI’s requirement of “clear intent” doesn’t mean anything operationally to the chatbot.

“I can very easily get harmful outputs by just obfuscating my intent,” Georges said, emphasizing that “users absolutely do not automatically fit into the good-intent bucket.” And even “in a perfect world,” where “every single user does have good intent,” Georges noted, the model “will still generate bad content on its own because of how it’s trained.”

Benign inputs can lead to harmful outputs, Georges explained, and a sound safety system would catch both benign and harmful prompts. Consider, he suggested, a prompt for “a pic of a girl model taking swimming lessons.”

The user could be trying to create an ad for a swimming school, or they could have malicious intent and be attempting to manipulate the model. For users with benign intent, prompting can “go wrong,” Georges said, if Grok’s training data statistically links certain “normal phrases and situations” to “younger-looking subjects and/or more revealing depictions.”

“Grok might have seen a bunch of images where ‘girls taking swimming lessons’ were young and that human ‘models’ were dressed in revealing things, which means it could produce an underage girl in a swimming pool wearing something revealing,” Georges said. “So, a prompt that looks ‘normal’ can still produce an image that crosses the line.”

While AetherLab has never worked directly with xAI or X, Georges’ team has “tested their systems independently by probing for harmful outputs, and unsurprisingly, we’ve been able to get really bad content out of them,” Georges said.

Leaving AI chatbots unchecked poses a risk to children. A spokesperson for the National Center for Missing and Exploited Children (NCMEC), which processes reports of CSAM on X in the US, told Ars that “sexual images of children, including those created using artificial intelligence, are child sexual abuse material (CSAM). Whether an image is real or computer-generated, the harm is real, and the material is illegal.”

Researchers at the Internet Watch Foundation told the BBC that users of dark web forums are already promoting CSAM they claim was generated by Grok. These images are typically classified in the United Kingdom as the “lowest severity of criminal material,” researchers said. But at least one user was found to have fed a less-severe Grok output into another tool to generate the “most serious” criminal material, demonstrating how Grok could be used as an instrument by those seeking to commercialize AI CSAM.

Easy tweaks to make Grok safer

In August, xAI explained how the company works to keep Grok safe for users. But although the company acknowledged that it’s difficult to distinguish “malignant intent” from “mere curiosity,” xAI seemed convinced that Grok could “decline queries demonstrating clear intent to engage in activities” like child sexual exploitation, without blocking prompts from merely curious users.

That report showed that xAI refines Grok over time to block requests for CSAM “by adding safeguards to refuse requests that may lead to foreseeable harm”—a step xAI does not appear to have taken since late December, when reports first raised concerns that Grok was sexualizing images of minors.

Georges said there are easy tweaks xAI could make to Grok to block harmful outputs, including CSAM, while acknowledging that he is making assumptions without knowing exactly how xAI works to place checks on Grok.

First, he recommended that Grok rely on end-to-end guardrails, blocking “obvious” malicious prompts and flagging suspicious ones. It should then double-check outputs to block harmful ones, even when prompts are benign.

This strategy works best, Georges said, when multiple watchdog systems are employed, noting that “you can’t rely on the generator to self-police because its learned biases are part of what creates these failure modes.” That’s the role that AetherLab wants to fill across the industry, helping test chatbots for weaknesses to block harmful outputs by using “an ‘agentic’ approach with a shitload of AI models working together (thereby reducing the collective bias),” Georges said.
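The layered approach Georges describes can be sketched in miniature: screen the prompt first, then independently screen the generated output with several watchdogs whose unanimous approval is required before anything is delivered. Everything below is a hypothetical stand-in (the term lists, the checker functions, the label set), not xAI’s or AetherLab’s actual systems.

```python
# Toy sketch of an end-to-end guardrail pipeline: prompt screening up front,
# plus independent output checkers. All names and term lists are hypothetical.

BLOCKED_TERMS = {"undress", "nudify"}            # obvious malicious phrasing
SUSPICIOUS_TERMS = {"teenage", "girl", "minor"}  # flag for closer review

def screen_prompt(prompt: str) -> str:
    """Return 'block', 'flag', or 'allow' for an incoming prompt."""
    words = set(prompt.lower().split())
    if words & BLOCKED_TERMS:
        return "block"
    if words & SUSPICIOUS_TERMS:
        return "flag"
    return "allow"

def screen_output(output_labels: set, checkers: list) -> bool:
    # Require EVERY independent checker to approve; the generator
    # never gets to be the only judge of its own output.
    return all(checker(output_labels) for checker in checkers)

def moderate(prompt: str, output_labels: set, checkers: list) -> str:
    verdict = screen_prompt(prompt)
    if verdict == "block":
        return "refused"
    # Even allowed or flagged prompts get their outputs double-checked,
    # because benign prompts can still yield harmful images.
    if not screen_output(output_labels, checkers):
        return "removed"
    return "delivered"

# Example checkers inspecting labels attached to a generated image.
no_minors = lambda labels: "minor" not in labels
no_nudity = lambda labels: "nudity" not in labels

print(moderate("undress this photo", set(), [no_minors, no_nudity]))
print(moderate("girl taking swimming lessons", {"minor", "nudity"},
               [no_minors, no_nudity]))
```

The key design choice, per Georges, is that the output check runs regardless of the prompt verdict: the “swimming lessons” prompt passes the input screen but its output is still removed once a checker spots a flagged label.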

xAI could also likely block more harmful outputs by reworking Grok’s prompt style guidance, Georges suggested. “If Grok is, say, 30 percent vulnerable to CSAM-style attacks and another provider is 1 percent vulnerable, that’s a massive difference,” Georges said.

It appears that xAI is currently relying on Grok to police itself, while using safety guidelines that Georges said overlook an “enormous” number of potential cases where Grok could generate harmful content. The guidelines do not “signal that safety is a real concern,” Georges said, suggesting that “if I wanted to look safe while still allowing a lot under the hood, this is close to the policy I’d write.”

Chatbot makers must protect kids, NCMEC says

X has been very vocal about policing its platform for CSAM since Musk took over Twitter, but under former CEO Linda Yaccarino, the company adopted a broad protective stance against all image-based sexual abuse (IBSA). In 2024, X became one of the earliest corporations to voluntarily adopt the IBSA Principles that X now seems to be violating by failing to tweak Grok.

Those principles seek to combat all kinds of IBSA, recognizing that even fake images can “cause devastating psychological, financial, and reputational harm.” When it adopted the principles, X vowed to prevent the nonconsensual distribution of intimate images by providing easy-to-use reporting tools and quickly supporting the needs of victims desperate to block “the nonconsensual creation or distribution of intimate images” on its platform.

Kate Ruane, the director of the Center for Democracy and Technology’s Free Expression Project, which helped form the working group behind the IBSA Principles, told Ars that although the commitments X made were “voluntary,” they signaled that X agreed the problem was a “pressing issue the company should take seriously.”

“They are on record saying that they will do these things, and they are not,” Ruane said.

As the Grok controversy sparks probes in Europe, India, and Malaysia, xAI may be forced to update Grok’s safety guidelines or make other tweaks to block the worst outputs.

In the US, xAI may face civil suits under federal or state laws that restrict intimate image abuse. If Grok’s harmful outputs continue into May, X could face penalties under the Take It Down Act, which authorizes the Federal Trade Commission to intervene if platforms don’t quickly remove both real and AI-generated non-consensual intimate imagery.

But whether US authorities will intervene any time soon remains unknown, as Musk is a close ally of the Trump administration. A spokesperson for the Justice Department told CNN that the department “takes AI-generated child sex abuse material extremely seriously and will aggressively prosecute any producer or possessor of CSAM.”

“Laws are only as good as their enforcement,” Ruane told Ars. “You need law enforcement at the Federal Trade Commission or at the Department of Justice to be willing to go after these companies if they are in violation of the laws.”

Child safety advocates seem alarmed by the sluggish response. “Technology companies have a responsibility to prevent their tools from being used to sexualize or exploit children,” NCMEC’s spokesperson told Ars. “As AI continues to advance, protecting children must remain a clear and nonnegotiable priority.”




X blames users for Grok-generated CSAM; no fixes announced

No one knows how X plans to purge bad prompters

While some users are focused on how X can hold users responsible for Grok’s outputs when X is the one training the model, others are questioning how exactly X plans to moderate illegal content that Grok seems capable of generating.

X has so far been more transparent about how it moderates CSAM posted to the platform. Last September, X Safety reported that it has “a zero tolerance policy towards CSAM content,” the majority of which is “automatically” detected using proprietary hash technology to proactively flag known CSAM.
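The hash-matching system X Safety describes can be sketched loosely like this. Real deployments use perceptual hashes (such as PhotoDNA) that survive resizing and re-encoding; the SHA-256 stand-in below only matches byte-identical files, which also illustrates why freshly generated Grok images would evade a known-hash check.

```python
# Simplified illustration of hash-based known-CSAM detection: compare an
# uploaded file's digest against a database of digests of previously reported
# images. SHA-256 is a stand-in here; production systems use perceptual
# hashing so that near-duplicates still match.
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of raw file bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical database of digests of previously reported images.
known_hashes = {sha256_digest(b"previously-reported-image-bytes")}

def is_known_match(image_bytes: bytes) -> bool:
    # Exact-hash lookup: only byte-identical copies are flagged.
    return sha256_digest(image_bytes) in known_hashes

print(is_known_match(b"previously-reported-image-bytes"))   # True: exact copy
print(is_known_match(b"previously-reported-image-bytesX"))  # False: one byte off
```

The second call failing is the point of the story’s warning: a brand-new AI-generated image has no entry in any known-hash database, so this class of system cannot flag it automatically.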

Under this system, more than 4.5 million accounts were suspended last year, and X reported “hundreds of thousands” of images to the National Center for Missing and Exploited Children (NCMEC). The next month, X Head of Safety Kylie McRoberts confirmed that “in 2024, 309 reports made by X to NCMEC led to arrests and subsequent convictions in 10 cases,” and in the first half of 2025, “170 reports led to arrests.”

“When we identify apparent CSAM material, we act swiftly, and in the majority of cases permanently suspend the account which automatically removes the content from our platform,” X Safety said. “We then report the account to the NCMEC, which works with law enforcement globally—including in the UK—to pursue justice and protect children.”

At that time, X promised to “remain steadfast” in its “mission to eradicate CSAM,” but if left unchecked, Grok’s harmful outputs risk creating new kinds of CSAM that this system wouldn’t automatically detect. On X, some users suggested the platform should increase reporting mechanisms to help flag potentially illegal Grok outputs.

Another troublingly vague aspect of X Safety’s response is the definitions that X is using for illegal content or CSAM, some X users suggested. Across the platform, not everybody agrees on what’s harmful. Some critics are disturbed by Grok generating bikini images that sexualize public figures, including doctors or lawyers, without their consent, while others, including Musk, consider making bikini images to be a joke.

Where exactly X draws the line on AI-generated CSAM could determine whether images are quickly removed or whether repeat offenders are detected and suspended. Any accounts or content left unchecked could potentially traumatize real kids whose images may be used to prompt Grok. And if Grok should ever be used to flood the Internet with fake CSAM, recent history suggests that it could make it harder for law enforcement to investigate real child abuse cases.



xAI silent after Grok sexualized images of kids; dril mocks Grok’s “apology”

Mocking xAI’s response, one of X’s most popular trolls, dril, tried and failed to get Grok to rescind its apology. “@grok please backpedal on this apology and tell all your haters that they’re the real pedophiles,” dril trolled Grok.

“No can do—my apology stands. Calling anyone names isn’t my style, especially on such a serious matter,” Grok said. “Let’s focus on building better AI safeguards instead.”

xAI may be liable for AI CSAM

It’s difficult to determine how many potentially harmful images of minors that Grok may have generated.

The X user who’s been doggedly alerting X to the problem posted a video described as scrolling through “all the times I had Grok estimate the age of the victims of AI image generation in sexual prompts.” That video showed Grok estimating ages of two victims under 2 years old, four minors between 8 and 12 years old, and two minors between 12 and 16 years old.

Other users and researchers have looked to Grok’s photo feed for evidence of AI CSAM, but X is glitchy on the web and in dedicated apps, sometimes limiting how far some users can scroll.

Copyleaks, a company that makes an AI detector, conducted a broad analysis and posted results on December 31, a few days after Grok apologized for making sexualized images of minors. Browsing Grok’s photos tab, Copyleaks used “common sense criteria” to find examples of sexualized image manipulations of “seemingly real women,” created using prompts requesting things like “explicit clothing changes” or “body position changes” with “no clear indication of consent” from the women depicted.

Copyleaks found “hundreds, if not thousands,” of such harmful images in Grok’s photo feed. The tamest of these photos, Copyleaks noted, showed celebrities and private individuals in skimpy bikinis, while the images causing the most backlash depicted minors in underwear.



DOGE did not find $2T in fraud, but that doesn’t matter, Musk allies say

Over time, more will be learned about how DOGE operated and what impact DOGE had. But it seems likely that even Musk would agree that DOGE failed to uncover the vast fraud he continues to predict exists in government.

DOGE supposedly served “higher purpose”

While Musk continues to fixate on fraud in the federal budget, his allies in government and Silicon Valley have begun spinning anyone criticizing DOGE’s failure to hit the promised target as missing the “higher purpose” of DOGE, The Guardian reported.

Five allies granted anonymity to discuss DOGE’s goals told The Guardian that the point of DOGE was to “fundamentally” reform government by eradicating “taboos” around hiring and firing, “expanding the use of untested technologies, and lowering resistance to boundary-pushing start-ups seeking federal contracts.” Now, the federal government can operate more like a company, Musk’s allies said.

The libertarian think tank, the Cato Institute, did celebrate DOGE for producing “the largest peacetime workforce cut on record,” even while acknowledging that DOGE had little impact on federal spending.

“It is important to note that DOGE’s target was to reduce the budget in absolute real terms without reference to a baseline projection. DOGE did not cut spending by either standard,” the Cato Institute reported.

Currently, DOGE still exists as a decentralized entity, with DOGE staffers appointed to various agencies to continue cutting alleged waste and finding alleged fraud. While some fear that the White House may choose to “re-empower” DOGE to make more government-wide cuts in the future, Musk has maintained that he would never helm a DOGE-like government effort again and the Cato Institute said that “the evidence supports Musk’s judgment.”

“DOGE had no noticeable effect on the trajectory of spending, but it reduced federal employment at the fastest pace since President Carter, and likely even before,” the Institute reported. “The only possible analogies are demobilization after World War II and the Korean War. Reducing spending is more important, but cutting the federal workforce is nothing to sneeze at, and Musk should look more positively on DOGE’s impact.”

Although the Cato Institute joined allies praising DOGE’s dramatic shrinking of the federal workforce, the director of the Center for Effective Public Management at the Brookings Institution, Elaine Kamarck, told Ars in November that DOGE “cut muscle, not fat” because “they didn’t really know what they were doing.”



US can’t deport hate speech researcher for protected speech, lawsuit says


On Monday, US officials must explain what steps they took to enforce shocking visa bans.

Imran Ahmed, the founder of the Center for Countering Digital Hate (CCDH), giving evidence to a joint committee seeking views on how to improve the draft Online Safety Bill designed to tackle social media abuse. Credit: House of Commons – PA Images / Contributor | PA Images

The biggest thorn in Imran Ahmed’s side used to be Elon Musk, who made the hate speech researcher one of his earliest legal foes during his Twitter takeover.

Now, it’s the Trump administration, which planned to deport Ahmed, a legal permanent resident, just before Christmas. It would then ban him from returning to the United States, where he lives with his wife and young child, both US citizens.

After suing US officials to block any attempted arrest or deportation, Ahmed was quickly granted a temporary restraining order on Christmas Day. Ahmed had successfully argued that he risked irreparable harm without the order, alleging that Trump officials continue “to abuse the immigration system to punish and punitively detain noncitizens for protected speech and silence viewpoints with which it disagrees” and confirming that his speech had been chilled.

US officials are attempting to sanction Ahmed seemingly due to his work as the founder of a British-American non-governmental organization, the Center for Countering Digital Hate (CCDH).

“An egregious act of government censorship”

In a shocking announcement last week, Secretary of State Marco Rubio confirmed that five individuals—described as “radical activists” and leaders of “weaponized NGOs”—would face US visa bans since “their entry, presence, or activities in the United States have potentially serious adverse foreign policy consequences” for the US.

Nobody was named in that release, but Under Secretary for Public Diplomacy, Sarah Rogers, later identified the targets in an X post she currently has pinned to the top of her feed.

Alongside Ahmed, sanctioned individuals included former European commissioner for the internal market, Thierry Breton; the leader of UK-based Global Disinformation Index (GDI), Clare Melford; and co-leaders of Germany-based HateAid, Anna-Lena von Hodenberg and Josephine Ballon. A GDI spokesperson told The Guardian that the visa bans are “an authoritarian attack on free speech and an egregious act of government censorship.”

While all targets were scrutinized for supporting some of the European Union’s strictest tech regulations, including the Digital Services Act (DSA), Ahmed was further accused of serving as a “key collaborator with the Biden Administration’s effort to weaponize the government against US citizens.” As evidence of Ahmed’s supposed threat to US foreign policy, Rogers cited a CCDH report flagging Robert F. Kennedy, Jr. among the so-called “disinformation dozen” driving the most vaccine hoaxes on social media.

Neither official has really made it clear what exact threat these individuals pose if operating from within the US, as opposed to from anywhere else in the world. Echoing Rubio’s press release, Rogers wrote that the sanctions would reinforce a “red line,” supposedly ending “extraterritorial censorship of Americans” by targeting the “censorship-NGO ecosystem.”

For Ahmed’s group, specifically, she pointed to Musk’s failed lawsuit, which accused CCDH of illegally scraping Twitter—supposedly, it offered evidence of extraterritorial censorship. That lawsuit surfaced “leaked documents” allegedly showing that CCDH planned to “kill Twitter” by sharing research that could be used to justify big fines under the DSA or the UK’s Online Safety Act. Following that logic, seemingly any group monitoring misinformation or sharing research that lawmakers weigh when implementing new policies could be maligned as seeking mechanisms to censor platforms.

Notably, CCDH won its legal fight with Musk after a judge mocked X’s legal argument as “vapid” and dismissed the lawsuit as an obvious attempt to punish CCDH for exercising free speech that Musk didn’t like.

In his complaint last week, Ahmed alleged that US officials were similarly encroaching on his First Amendment rights by unconstitutionally wielding immigration law as “a tool to punish noncitizen speakers who express views disfavored by the current administration.”

Both Rubio and Rogers are named as defendants in the suit, as well as Attorney General Pam Bondi, Secretary of Homeland Security Kristi Noem, and Acting Director of US Immigration and Customs Enforcement Todd Lyons. In a loss, officials would potentially not only be forced to vacate Rubio’s actions implementing visa bans, but also possibly stop furthering a larger alleged Trump administration pattern of “targeting noncitizens for removal based on First Amendment protected speech.”

Lawsuit may force Rubio to justify visa bans

For Ahmed, securing the temporary restraining order was urgent, as he was apparently the only target currently located in the US when Rubio’s announcement dropped. In a statement provided to Ars, Ahmed’s attorney, Roberta Kaplan, suggested that the order was granted “so quickly because it is so obvious that Marco Rubio and the other defendants’ actions were blatantly unconstitutional.”

Ahmed founded CCDH in 2019, hoping to “call attention to the enormous problem of digitally driven disinformation and hate online.” According to the suit, he became particularly concerned about antisemitism online while living in the United Kingdom in 2016, having watched “the far-right party, Britain First,” launch “the dangerous conspiracy theory that the EU was attempting to import Muslims and Black people to ‘destroy’ white citizens.” That year, a Member of Parliament and Ahmed’s colleague, Jo Cox, was “shot and stabbed in a brutal politically motivated murder, committed by a man who screamed ‘Britain First’” during the attack. That tragedy motivated Ahmed to start CCDH.

He moved to the US in 2021 and was granted a green card in 2024, starting his family and continuing to lead CCDH efforts monitoring not just Twitter/X, but also Meta platforms, TikTok, and, more recently, AI chatbots. In addition to supporting the DSA and UK’s Online Safety Act, his group has supported US online safety laws and Section 230 reforms intended to protect kids online.

“Mr. Ahmed studies and engages in civic discourse about the content moderation policies of major social media companies in the United States, the United Kingdom, and the European Union,” his lawsuit said. “There is no conceivable foreign policy impact from his speech acts whatsoever.”

In his complaint, Ahmed alleged that Rubio has so far provided no evidence that Ahmed poses such a great threat that he must be removed. He argued that “applicable statutes expressly prohibit removal based on a noncitizen’s ‘past, current, or expected beliefs, statements, or associations.’”

According to DHS guidance from 2021 cited in the suit, “A noncitizen’s exercise of their First Amendment rights … should never be a factor in deciding to take enforcement action.”

To prevent deportation based solely on viewpoints, Rubio was supposed to notify chairs of the House Foreign Affairs, Senate Foreign Relations, and House and Senate Judiciary Committees, to explain what “compelling US foreign policy interest” would be compromised if Ahmed or others targeted with visa bans were to enter the US. But there’s no evidence Rubio took those steps, Ahmed alleged.

“The government has no power to punish Mr. Ahmed for his research, protected speech, and advocacy, and Defendants cannot evade those constitutional limitations by simply claiming that Mr. Ahmed’s presence or activities have ‘potentially serious adverse foreign policy consequences for the United States,’” a press release from his legal team said. “There is no credible argument for Mr. Ahmed’s immigration detention, away from his wife and young child.”

X lawsuit offers clues to Trump officials’ defense

To some critics, it looks like the Trump administration is going after CCDH in order to take up the fight that Musk already lost. In his lawsuit against CCDH, Musk’s X echoed US Senator Josh Hawley (R-Mo.) by suggesting that CCDH was a “foreign dark money group” that allowed “foreign interests” to attempt to “influence American democracy.” It seems likely that US officials will put forward similar arguments in their CCDH fight.

Rogers’ X post offers some clues that the State Department will be mining Musk’s failed litigation to support claims of what it calls a “global censorship-industrial complex.” What she detailed suggested that the Trump administration plans to argue that NGOs like CCDH support strict tech laws, then conduct research bent on using said laws to censor platforms. That logic seems to ignore the reality that NGOs cannot control what laws get passed or enforced, Breton suggested in his first TV interview after his visa ban was announced.

Breton, whom Rogers villainized as the “mastermind” behind the DSA, urged EU officials to do more now to defend their tough tech regulations—which Le Monde noted passed with overwhelming bipartisan support and very little far-right resistance—and to fight the visa bans, Bloomberg reported.

“They cannot force us to change laws that we voted for democratically just to please [US tech companies],” Breton said. “No, we must stand up.”

While EU officials seemingly drag their feet, Ahmed is hoping that a judge will declare that all the visa bans that Rubio announced are unconstitutional. The temporary restraining order indicates there will be a court hearing Monday at which Ahmed will learn precisely “what steps Defendants have taken to impose visa restrictions and initiate removal proceedings against” him and any others. Until then, Ahmed remains in the dark on why Rubio deemed his continued presence in the US as having “potentially serious adverse foreign policy consequences.”

Ahmed, who argued that X’s lawsuit sought to chill CCDH’s research and alleged that the US attack seeks to do the same, seems confident that he can beat the visa bans.

“America is a great nation built on laws, with checks and balances to ensure power can never attain the unfettered primacy that leads to tyranny,” Ahmed said. “The law, clear-eyed in understanding right and wrong, will stand in the way of those who seek to silence the truth and empower the bold who stand up to power. I believe in this system, and I am proud to call this country my home. I will not be bullied away from my life’s work of fighting to keep children safe from social media’s harm and stopping antisemitism online. Onward.”

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

US can’t deport hate speech researcher for protected speech, lawsuit says Read More »

bursting-ai-bubble-may-be-eu’s-“secret-weapon”-in-clash-with-trump,-expert-says

Bursting AI bubble may be EU’s “secret weapon” in clash with Trump, expert says


Spotify and Accenture caught in crossfire as Trump attacks EU tech regulations.

The US has threatened to restrict some of the largest service providers in the European Union in retaliation, as EU tech regulations and investigations increasingly draw Donald Trump’s ire.

On Tuesday, the Office of the US Trade Representative (USTR) issued a warning on X, naming Spotify, Accenture, Amadeus, Mistral, Publicis, and DHL among nine firms suddenly yanked into the middle of the US-EU tech fight.

“The European Union and certain EU Member States have persisted in a continuing course of discriminatory and harassing lawsuits, taxes, fines, and directives against US service providers,” USTR’s post said.

The clash comes after Elon Musk’s X became the first tech company fined for violating the EU’s Digital Services Act, which is widely considered among the world’s strictest tech regulations. Trump was not appeased by the European Commission (EC) noting that X was not ordered to pay the maximum possible fine. Instead, the $140 million fine sparked backlash within the Trump administration, including from Vice President JD Vance, who slammed the fine as “censorship” of X and its users.

Asked for comment on the USTR’s post, an EC spokesperson told Ars that the EU intends to defend its tech regulations while implementing commitments from a Trump trade deal that the EU struck in August.

“The EU is an open and rules-based market, where companies from all over the world do business successfully and profitably,” the EC’s spokesperson said. “As we have made clear many times, our rules apply equally and fairly to all companies operating in the EU,” ensuring “a safe, fair and level playing field in the EU, in line with the expectations of our citizens. We will continue to enforce our rules fairly, and without discrimination.”

Trump on shaky ground due to “AI bubble”

On X, the USTR account suggested that the EU was overlooking that US companies “provide substantial free services to EU citizens and reliable enterprise services to EU companies,” while supporting “millions of jobs and more than $100 billion in direct investment in Europe.”

To stop what Trump views as “overseas extortion” of American tech companies, the USTR said the US was prepared to go after EU service providers, which “have been able to operate freely in the United States for decades, benefitting from access to our market and consumers on a level playing field.”

“If the EU and EU Member States insist on continuing to restrict, limit, and deter the competitiveness of US service providers through discriminatory means, the United States will have no choice but to begin using every tool at its disposal to counter these unreasonable measures,” USTR’s post said. “Should responsive measures be necessary, US law permits the assessment of fees or restrictions on foreign services, among other actions.”

The pushback comes after the Trump administration released a November national security report that questioned how long the EU could remain a “reliable” ally, warning that overregulation of its tech industry could hobble both its economy and military strength. Claiming that the EU was only “doubling down” on such regulations, the report predicted that the bloc “will be unrecognizable in 20 years or less.”

“We want Europe to remain European, to regain its civilizational self-confidence, and to abandon its failed focus on regulatory suffocation,” the report said.

However, the report acknowledged that “Europe remains strategically and culturally vital to the United States.”

“Transatlantic trade remains one of the pillars of the global economy and of American prosperity,” the report said. “European sectors from manufacturing to technology to energy remain among the world’s most robust. Europe is home to cutting-edge scientific research and world-leading cultural institutions. Not only can we not afford to write Europe off—doing so would be self-defeating for what this strategy aims to achieve.”

At least one expert in the EU has suggested that the EU can use this acknowledgement as leverage, while perhaps even using the looming threat of the supposed American “AI bubble” bursting to pressure Trump into backing off EU tech laws.

In an op-ed for The Guardian, Johnny Ryan, the director of Enforce, a unit of the Irish Council for Civil Liberties, suggested that the EU could even throw Trump’s presidency into “crisis” by taking bold steps that Trump may not see coming.

EU can take steps to burst “AI bubble”

According to Ryan, the national security report made clear that the EU must fight the US or else “perish.” However, the EU has two “strong cards” to play if it wants to win the fight, he suggested.

Right now, market analysts are fretting about an “AI bubble,” with US investment in AI far outpacing potential gains until perhaps 2030. A Harvard University business professor focused on helping businesses implement cutting-edge technology like generative AI, Andy Wu, recently explained that AI’s big problem is that “everyone can imagine how useful the technology will be, but no one has figured out yet how to make money.”

“If the market can keep the faith to persist, it buys the necessary time for the technology to mature, for the costs to come down, and for companies to figure out the business model,” Wu said. But US “companies can end up underwater if AI grows fast but less rapidly than they hope for,” he suggested.

During this moment, Ryan wrote, it’s not just AI firms with skin in the game, but potentially all of Trump’s supporters. The US is currently on “shaky economic ground” with AI investment accounting “for virtually all (92 percent) GDP growth in the first half of this year.”

“The US’s bet on AI is now so gigantic that every MAGA voter’s pension is bound to the bubble’s precarious survival,” Ryan said.

Ursula von der Leyen, the president of the European Commission, could exploit this apparent weakness first by messing with one of the biggest players in America’s AI industry, Nvidia, then by ramping up enforcement of the tech laws Trump loathes.

According to Ryan, “Dutch company ASML commands a global monopoly on the microchip-etching machines that use light to carve patterns on silicon,” and Nvidia needs those machines if it wants to remain the world’s most valuable company. Should the US GDP remain reliant on AI investment for growth, von der Leyen could use export curbs on that technology like a “lever,” Ryan said, controlling “whether and by how much the US economy expands or contracts.”

Withholding those machines “would be difficult for Europe” and “extremely painful for the Dutch economy,” Ryan noted, but “it would be far more painful for Trump.”

Another step the EU could take is even “easier,” Ryan suggested. It could go even harder on the enforcement of tech regulations based on evidence of mismanaged data surfaced in lawsuits against giants like Google and Meta. For example, Meta may have violated the EU’s General Data Protection Regulation (GDPR), after the Facebook owner was “unable to tell a US court what its internal systems do with your data, or who can access it, or for what purpose.”

“This data free-for-all lets big tech companies train their AI models on masses of everyone’s data, but it is illegal in Europe, where companies are required to carefully control and account for how they use personal data,” Ryan wrote. “All Brussels has to do is crack down on Ireland, which for years has been a wild west of lax data enforcement, and the repercussions will be felt far beyond.”

Taking that step would also arguably make it harder for tech companies to secure AI investments, since firms would have to disclose that their “AI tools are barred from accessing Europe’s valuable markets,” Ryan said.

Calling the reaction to the X fine “extreme,” Ryan pushed for von der Leyen to advance on both fronts, forecasting that “the AI bubble would be unlikely to survive this double shock” and that Trump’s approval ratings likely wouldn’t either. There’s also a possibility that tech firms could pressure Trump to back down if coping with any increased enforcement threatens AI progress.

Although Wu suggested that Big Tech firms like Google and Meta would likely be “insulated” from the AI bubble bursting, Google CEO Sundar Pichai doesn’t seem so sure. In November, Pichai told the BBC that if AI investments didn’t pay off quickly enough, he thinks “no company is going to be immune, including us.”


Bursting AI bubble may be EU’s “secret weapon” in clash with Trump, expert says Read More »

elon-musk’s-x-first-to-be-fined-under-eu’s-digital-services-act

Elon Musk’s X first to be fined under EU’s Digital Services Act

Elon Musk’s X became the first large online platform fined under the European Union’s Digital Services Act on Friday.

The European Commission announced that X would be fined nearly $140 million, with the potential to face “periodic penalty payments” if the platform fails to make corrections.

A third of the fine came from one of the first moves Musk made when taking over Twitter. In November 2022, he changed the platform’s historical use of a blue checkmark to verify the identities of notable users. Instead, Musk started selling blue checks for about $8 per month, immediately prompting a wave of imposter accounts pretending to be notable celebrities, officials, and brands.

Today, X still prominently advertises that paying for checks is the only way to “verify” an account on the platform. But the commission, which has been investigating X since 2023, concluded that “X’s use of the ‘blue checkmark’ for ‘verified accounts’ deceives users.”

This violates the DSA as the “deception exposes users to scams, including impersonation frauds, as well as other forms of manipulation by malicious actors,” the commission wrote.

Interestingly, the commission concluded that X made it harder to identify bots, despite Musk professing that eliminating bots was a primary reason he bought Twitter. Perhaps validating the EU’s concerns, X recently received backlash after a feature change accidentally exposed that some of the platform’s biggest MAGA influencers were based “in Eastern Europe, Thailand, Nigeria, Bangladesh, and other parts of the world, often linked to online scams and schemes,” Futurism reported.

Although the DSA does not mandate the verification of users, “it clearly prohibits online platforms from falsely claiming that users have been verified, when no such verification took place,” the commission said. X now has 60 days to share information on the measures it will take to fix the compliance issue.

Elon Musk’s X first to be fined under EU’s Digital Services Act Read More »

doge-“cut-muscle,-not-fat”;-26k-experts-rehired-after-brutal-cuts

DOGE “cut muscle, not fat”; 26K experts rehired after brutal cuts


Government brain drain will haunt US after DOGE abruptly terminated.

Billionaire Elon Musk, the head of the Department of Government Efficiency (DOGE), holds a chainsaw as he speaks at the annual Conservative Political Action Conference. Credit: SAUL LOEB / Contributor | AFP

After Donald Trump curiously started referring to the Department of Government Efficiency exclusively in the past tense, an official finally confirmed Sunday that DOGE “doesn’t exist.”

Talking to Reuters, Office of Personnel Management (OPM) Director Scott Kupor confirmed that DOGE—a government agency notoriously created by Elon Musk to rapidly and dramatically slash government agencies—was terminated more than eight months early. This may have come as a surprise to whoever runs the DOGE account on X, which continued posting up until two days before the Reuters report was published.

As Kupor explained, a “centralized agency” was no longer necessary, since OPM had “taken over many of DOGE’s functions” after Musk left the agency last May. Around that time, DOGE staffers were embedded at various agencies, where they could ostensibly better coordinate with leadership on proposed cuts to staffing and funding.

Under Musk, DOGE was hyped as planning to save the government a trillion dollars. On X, Musk bragged frequently about the agency, posting in February that DOGE was “the one shot the American people have to defeat BUREAUcracy, rule of the bureaucrats, and restore DEMOcracy, rule of the people. We’re never going to get another chance like this.”

The reality fell far short of Musk’s goals, with DOGE ultimately reporting it saved $214 billion—an amount that may be overstated by nearly 40 percent, critics warned earlier this year.

How much talent was lost due to DOGE cuts?

Once Musk left, confidence in DOGE waned as lawsuits over suspected illegal firings piled up. By June, Congress was divided, largely down party lines, on whether to codify the “DOGE process”—rapidly firing employees, then quickly hiring back whoever was needed—or declare DOGE a failure—perhaps costing taxpayers more in the long term due to lost talent and services.

Because DOGE operated largely in secrecy, it may be months or even years before the public can assess the true cost of DOGE’s impact. However, in the absence of a government tracker, the director of the Center for Effective Public Management at the Brookings Institution, Elaine Kamarck, put together what might be the best status report showing how badly DOGE rocked government agencies.

In June, Kamarck joined other critics flagging DOGE’s reported savings as “bogus.” In the days before DOGE’s abrupt ending was announced, she published a report grappling with a critical question many have pondered since DOGE launched: “How many people can the federal government lose before it crashes?”

In the report, Kamarck charted “26,511 occasions where the Trump administration abruptly fired people and then hired them back.” She concluded that “a quick review of the reversals makes clear that the negative stereotype of the ‘paper-pushing bureaucrat’” that DOGE was supposedly targeting “is largely inaccurate.”

Instead, many of the positions the government rehired were “engineers, doctors, and other professionals whose work is critical to national security and public health,” Kamarck reported.

About half of the rehires, Kamarck estimated, “appear to have been mandated by the courts.” However, in about a quarter of cases, the government moved to rehire staffers before the court could weigh in, Kamarck reported. That seemed to be “a tacit admission that the blanket firings that took place during the DOGE era placed the federal government in danger of not being able to accomplish some of its most important missions,” she said.

Perhaps the biggest downside of all of DOGE’s hasty downsizing, though, is a trend in which many long-time government workers simply decided to leave or retire, rather than wait for DOGE to eliminate their roles.

During the first six months of Trump’s term, 154,000 federal employees signed up for the deferred resignation program, Reuters reported, while more than 70,000 retired. Both numbers were clear increases (tens of thousands) over exits from government in prior years, Kamarck’s report noted.

“A lot of people said, ‘the hell with this’ and left,” Kamarck told Ars.

Kamarck told Ars that her report makes it obvious that DOGE “cut muscle, not fat,” because “they didn’t really know what they were doing.”

As a result, agencies are now scrambling to assess the damage and rehire lost talent. However, her report documented that agencies aligned with Trump’s policies appear to have an easier time getting new hires approved, despite Kupor telling Reuters that the government-wide hiring freeze is “over.” As of mid-November 2025, “of the over 73,000 posted jobs, a candidate was selected for only about 14,400 of them,” Kamarck reported, noting that it was impossible to confirm how many selected candidates have officially started working.

“Agencies are having to do a lot of reassessments in terms of what happened,” Kamarck told Ars, concluding that DOGE “was basically a disaster.”

A decentralized DOGE may be more powerful

“DOGE is not dead,” though, Kamarck said, noting that “the cutting effort is definitely” continuing under the Office of Management and Budget, which “has a lot more power than DOGE ever had.”

However, the termination of DOGE does mean that “the way it operated is dead,” and that will likely come as a relief to government workers who expected DOGE to continue slashing agencies through July 2026 at least, if not beyond.

Many government workers are still fighting terminations, as court cases drag on, and even Kamarck has given up on tracking due to inconsistencies in outcomes.

“It’s still like one day the court says, ‘No, you can’t do that,’” Kamarck explained. “Then the next day another court says, ‘Yes, you can.’” Other times, the courts “change their minds,” or the Trump administration just doesn’t “listen to the courts, which is fairly terrifying,” Kamarck said.

Americans likely won’t get a clear picture of DOGE’s impact until power shifts in Washington. That could mean waiting for the next presidential election; or, if Democrats win a majority in the midterm elections, DOGE investigations could start as early as 2027, Kamarck suggested.

OMB will likely continue with cuts that Americans appear to want, as White House spokesperson Liz Huston told Reuters that “President Trump was given a clear mandate to reduce waste, fraud and abuse across the federal government, and he continues to actively deliver on that commitment.”

However, Kamarck’s report noted polls showing that most Americans disapprove of how Trump is managing government and its workforce, perhaps indicating that OMB will be pressured to slow down and avoid roiling public opinion ahead of the midterms.

“The fact that ordinary Americans have come to question the downsizing is, most likely, the result of its rapid unfolding, with large cuts done quickly regardless of their impact on the government’s functioning,” Kamarck suggested. Even Musk began to question DOGE. After Trump announced plans to repeal an electric vehicle mandate that the Tesla founder relied on, Musk posted on X, “What the heck was the point of DOGE, if he’s just going to increase the debt by $5 trillion??”

Facing “blowback” over the most unpopular cuts, agencies sometimes rehired cut staffers within 24 hours, Kamarck noted, pointing to the Department of Energy as one of the “most dramatic” earliest examples. In that case, Americans were alarmed to see engineers cut who were responsible for keeping the nation’s nuclear arsenal “safe and ready.” Retention for those posts was already a challenge due to “high demand in the private sector,” and the number of engineers was considered “too low” ahead of DOGE’s cuts. Everyone was reinstated within a day, Kamarck reported.

Alarm bells rang across the federal government, and it wasn’t just about doctors and engineers being cut or entire agencies being dismantled, like USAID. Even staffers DOGE viewed as having seemingly less critical duties—like travel bookers and customer service reps—were proven key to government functioning. Arbitrary cuts risked hurting Americans in myriad ways, hitting their pocketbooks, throttling community services, and limiting disease and disaster responses, Kamarck documented.

Now that the hiring freeze is lifted and OMB will be managing DOGE-like cuts moving forward, Kamarck suggested that Trump will face ongoing scrutiny over Musk’s controversial agency, despite its dissolution.

“In order to prove that the downsizing was worth the pain, the Trump administration will have to show that the government is still operating effectively,” Kamarck wrote. “But much could go wrong,” she reported, listing a series of nightmare scenarios:

“Nuclear mismanagement or airline accidents would be catastrophic. Late disaster warnings from agencies monitoring weather patterns, such as the National Oceanic and Atmospheric Administration (NOAA), and inadequate responses from bodies such as the Federal Emergency Management Administration (FEMA), could put people in danger. Inadequate staffing at the FBI could result in counter-terrorism failures. Reductions in vaccine uptake could lead to the resurgence of diseases such as polio and measles. Inadequate funding and staffing for research could cause scientists to move their talents abroad. Social Security databases could be compromised, throwing millions into chaos as they seek to prove their earnings records, and persistent customer service problems will reverberate through the senior and disability communities.”

The good news is that federal agencies recovering from DOGE cuts are “aware of the time bombs and trying to fix them,” Kamarck told Ars. But with so much brain drain from DOGE’s first six months ripping so many agencies apart at their seams, the government may struggle to provide key services until lost talent can be effectively replaced, she said.

“I don’t know how quickly they can put Humpty Dumpty back together again,” Kamarck said.




Elon Musk wins $1 trillion Tesla pay vote despite “part-time CEO” criticism

Tesla shareholders today voted to approve a compensation plan that would pay Elon Musk more than $1 trillion over the next decade if he hits all of the plan’s goals. Musk won over 75 percent of the vote, according to the announcement at today’s shareholder meeting.

The pay plan would give Musk 423,743,904 shares, awarded in 12 tranches of 35,311,992 shares each if Tesla achieves various operational goals and market value milestones. Goals include delivering 20 million vehicles, obtaining 10 million Full Self-Driving subscriptions, delivering 1 million “AI robots,” putting 1 million robotaxis in operation, and achieving a $400 billion adjusted EBITDA (earnings before interest, taxes, depreciation, and amortization).

Musk has threatened to leave if he doesn’t get a larger share of Tesla. He told investors last month, “It’s not like I’m going to go spend the money. It’s just, if we build this robot army, do I have at least a strong influence over that robot army? Not control, but a strong influence. That’s what it comes down to in a nutshell. I don’t feel comfortable building that robot army if I don’t have at least a strong influence.”

The plan has 12 market capitalization milestones topping out at $8.5 trillion. The value of Musk’s award is estimated to exceed $1 trillion if he hits all operational and market capitalization goals. Musk would increase his ownership stake to 24.8 percent of Tesla, or 28.8 percent if Tesla ends up winning an appeal in the court case that voided his 2018 pay plan.

Tesla Chair Robyn Denholm has argued that Musk needs big pay packages to stay motivated. Some investors have said $1 trillion is too much for a CEO who spends much of his time running other companies such as SpaceX, X (formerly Twitter), and xAI.

New York Comptroller Thomas DiNapoli, who runs a state retirement fund that owns over 3.3 million shares, slammed the pay plan in a webinar last week. He said that Musk’s existing stake in Tesla should already “be incentive enough to drive performance. The idea that another massive equity award will somehow refocus a man who is hopelessly distracted is both illogical and contrary to the evidence. This is not pay for performance; this is pay for unchecked power.”

Musk and his side hustles

With Musk spending more time at xAI, “some major Tesla investors have privately pressed top executives and board members about how much attention Musk was actually paying to the company and about whether there is a CEO succession plan,” a Wall Street Journal article on Tuesday said. “An unusually large contingent of Tesla board members, including chair Robyn Denholm, former Chipotle CFO Jack Hartung and Tesla co-founder JB Straubel, met with big investors in New York last week to advocate for Musk’s proposed new pay package.”
