Elon Musk

DOGE “cut muscle, not fat”; 26K experts rehired after brutal cuts


Government brain drain will haunt the US after DOGE's abrupt termination.

Billionaire Elon Musk, the head of the Department of Government Efficiency (DOGE), holds a chainsaw as he speaks at the annual Conservative Political Action Conference. Credit: SAUL LOEB / Contributor | AFP

After Donald Trump curiously started referring to the Department of Government Efficiency exclusively in the past tense, an official finally confirmed Sunday that DOGE “doesn’t exist.”

Talking to Reuters, Office of Personnel Management (OPM) Director Scott Kupor confirmed that DOGE—a government agency notoriously created by Elon Musk to rapidly and dramatically slash government agencies—was terminated more than eight months early. This may have come as a surprise to whoever runs the DOGE account on X, which continued posting up until two days before the Reuters report was published.

As Kupor explained, a “centralized agency” was no longer necessary, since OPM had “taken over many of DOGE’s functions” after Musk left the agency last May. Around that time, DOGE staffers were embedded at various agencies, where they could ostensibly better coordinate with leadership on proposed cuts to staffing and funding.

Under Musk, DOGE was hyped as planning to save the government a trillion dollars. On X, Musk bragged frequently about the agency, posting in February that DOGE was “the one shot the American people have to defeat BUREAUcracy, rule of the bureaucrats, and restore DEMOcracy, rule of the people. We’re never going to get another chance like this.”

The reality fell far short of Musk’s goals, with DOGE ultimately reporting it saved $214 billion—an amount that may be overstated by nearly 40 percent, critics warned earlier this year.

How much talent was lost due to DOGE cuts?

Once Musk left, confidence in DOGE waned as lawsuits over suspected illegal firings piled up. By June, Congress was split, largely along party lines, on whether to codify the “DOGE process”—rapidly firing employees, then quickly hiring back whoever was needed—or to declare DOGE a failure, one that may end up costing taxpayers more in the long term due to lost talent and services.

Because DOGE operated largely in secrecy, it may be months or even years before the public can assess the true cost of DOGE’s impact. However, in the absence of a government tracker, the director of the Center for Effective Public Management at the Brookings Institution, Elaine Kamarck, put together what might be the best status report showing how badly DOGE rocked government agencies.

In June, Kamarck joined other critics flagging DOGE’s reported savings as “bogus.” In the days before DOGE’s abrupt ending was announced, she published a report grappling with a critical question many have pondered since DOGE launched: “How many people can the federal government lose before it crashes?”

In the report, Kamarck charted “26,511 occasions where the Trump administration abruptly fired people and then hired them back.” She concluded that “a quick review of the reversals makes clear that the negative stereotype of the ‘paper-pushing bureaucrat’” that DOGE was supposedly targeting “is largely inaccurate.”

Instead, many of the positions the government rehired were “engineers, doctors, and other professionals whose work is critical to national security and public health,” Kamarck reported.

About half of the rehires, Kamarck estimated, “appear to have been mandated by the courts.” However, in about a quarter of cases, the government moved to rehire staffers before the court could weigh in, Kamarck reported. That seemed to be “a tacit admission that the blanket firings that took place during the DOGE era placed the federal government in danger of not being able to accomplish some of its most important missions,” she said.

Perhaps the biggest downside of all of DOGE’s hasty downsizing, though, is a trend in which many long-time government workers simply decided to leave or retire, rather than wait for DOGE to eliminate their roles.

During the first six months of Trump’s term, 154,000 federal employees signed up for the deferred resignation program, Reuters reported, while more than 70,000 retired. Both figures are tens of thousands higher than the number of exits from government in prior years, Kamarck’s report noted.

“A lot of people said, ‘the hell with this’ and left,” Kamarck told Ars.

Kamarck told Ars that her report makes it obvious that DOGE “cut muscle, not fat,” because “they didn’t really know what they were doing.”

As a result, agencies are now scrambling to assess the damage and rehire lost talent. However, her report documented that agencies aligned with Trump’s policies appear to have an easier time getting new hires approved, despite Kupor telling Reuters that the government-wide hiring freeze is “over.” As of mid-November 2025, “of the over 73,000 posted jobs, a candidate was selected for only about 14,400 of them,” Kamarck reported, noting that it was impossible to confirm how many selected candidates have officially started working.

“Agencies are having to do a lot of reassessments in terms of what happened,” Kamarck told Ars, concluding that DOGE “was basically a disaster.”

A decentralized DOGE may be more powerful

“DOGE is not dead,” though, Kamarck said, noting that “the cutting effort is definitely” continuing under the Office of Management and Budget, which “has a lot more power than DOGE ever had.”

However, the termination of DOGE does mean that “the way it operated is dead,” and that will likely come as a relief to government workers who expected DOGE to continue slashing agencies through July 2026 at least, if not beyond.

Many government workers are still fighting terminations, as court cases drag on, and even Kamarck has given up on tracking due to inconsistencies in outcomes.

“It’s still like one day the court says, ‘No, you can’t do that,’” Kamarck explained. “Then the next day another court says, ‘Yes, you can.’” Other times, the courts “change their minds,” or the Trump administration just doesn’t “listen to the courts, which is fairly terrifying,” Kamarck said.

Americans likely won’t get a clear picture of DOGE’s impact until power shifts in Washington. That could mean waiting for the next presidential election, although if Democrats win a majority in the midterm elections, DOGE investigations could begin as early as 2027, Kamarck suggested.

OMB will likely continue with cuts that the White House says Americans want. White House spokesperson Liz Huston told Reuters that “President Trump was given a clear mandate to reduce waste, fraud and abuse across the federal government, and he continues to actively deliver on that commitment.”

However, Kamarck’s report noted polls showing that most Americans disapprove of how Trump is managing government and its workforce, perhaps indicating that OMB will be pressured to slow down and avoid roiling public opinion ahead of the midterms.

“The fact that ordinary Americans have come to question the downsizing is, most likely, the result of its rapid unfolding, with large cuts done quickly regardless of their impact on the government’s functioning,” Kamarck suggested. Even Musk began to question DOGE. After Trump announced plans to repeal an electric vehicle mandate that Tesla relied on, Musk posted on X, “What the heck was the point of DOGE, if he’s just going to increase the debt by $5 trillion??”

Facing “blowback” over the most unpopular cuts, agencies sometimes rehired cut staffers within 24 hours, Kamarck noted, pointing to the Department of Energy as one of the “most dramatic” earliest examples. In that case, Americans were alarmed to see engineers cut who were responsible for keeping the nation’s nuclear arsenal “safe and ready.” Retention for those posts was already a challenge due to “high demand in the private sector,” and the number of engineers was considered “too low” ahead of DOGE’s cuts. Everyone was reinstated within a day, Kamarck reported.

Alarm bells rang across the federal government, and it wasn’t just about doctors and engineers being cut or entire agencies being dismantled, like USAID. Even staffers DOGE viewed as having seemingly less critical duties—like travel bookers and customer service reps—were proven key to government functioning. Arbitrary cuts risked hurting Americans in myriad ways, hitting their pocketbooks, throttling community services, and limiting disease and disaster responses, Kamarck documented.

Now that the hiring freeze is lifted and OMB will be managing DOGE-like cuts moving forward, Kamarck suggested that Trump will face ongoing scrutiny over Musk’s controversial agency, despite its dissolution.

“In order to prove that the downsizing was worth the pain, the Trump administration will have to show that the government is still operating effectively,” Kamarck wrote. “But much could go wrong,” she reported, laying out a list of nightmare scenarios:

“Nuclear mismanagement or airline accidents would be catastrophic. Late disaster warnings from agencies monitoring weather patterns, such as the National Oceanic and Atmospheric Administration (NOAA), and inadequate responses from bodies such as the Federal Emergency Management Administration (FEMA), could put people in danger. Inadequate staffing at the FBI could result in counter-terrorism failures. Reductions in vaccine uptake could lead to the resurgence of diseases such as polio and measles. Inadequate funding and staffing for research could cause scientists to move their talents abroad. Social Security databases could be compromised, throwing millions into chaos as they seek to prove their earnings records, and persistent customer service problems will reverberate through the senior and disability communities.”

The good news is that federal agencies recovering from DOGE cuts are “aware of the time bombs and trying to fix them,” Kamarck told Ars. But with so much brain drain from DOGE’s first six months ripping so many agencies apart at their seams, the government may struggle to provide key services until lost talent can be effectively replaced, she said.

“I don’t know how quickly they can put Humpty Dumpty back together again,” Kamarck said.

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Elon Musk wins $1 trillion Tesla pay vote despite “part-time CEO” criticism

Tesla shareholders today voted to approve a compensation plan that would pay Elon Musk more than $1 trillion over the next decade if he hits all of the plan’s goals. Musk won over 75 percent of the vote, according to the announcement at today’s shareholder meeting.

The pay plan would give Musk 423,743,904 shares, awarded in 12 tranches of 35,311,992 shares each if Tesla achieves various operational goals and market value milestones. Goals include delivering 20 million vehicles, obtaining 10 million Full Self-Driving subscriptions, delivering 1 million “AI robots,” putting 1 million robotaxis in operation, and achieving a $400 billion adjusted EBITDA (earnings before interest, taxes, depreciation, and amortization).

Musk has threatened to leave if he doesn’t get a larger share of Tesla. He told investors last month, “It’s not like I’m going to go spend the money. It’s just, if we build this robot army, do I have at least a strong influence over that robot army? Not control, but a strong influence. That’s what it comes down to in a nutshell. I don’t feel comfortable building that robot army if I don’t have at least a strong influence.”

The plan has 12 market capitalization milestones topping out at $8.5 trillion. The value of Musk’s award is estimated to exceed $1 trillion if he hits all operational and market capitalization goals. Musk would increase his ownership stake to 24.8 percent of Tesla, or 28.8 percent if Tesla ends up winning an appeal in the court case that voided his 2018 pay plan.

Tesla Chair Robyn Denholm has argued that Musk needs big pay packages to stay motivated. Some investors have said $1 trillion is too much for a CEO who spends much of his time running other companies such as SpaceX, X (formerly Twitter), and xAI.

New York Comptroller Thomas DiNapoli, who runs a state retirement fund that owns over 3.3 million shares, slammed the pay plan in a webinar last week. He said that Musk’s existing stake in Tesla should already “be incentive enough to drive performance. The idea that another massive equity award will somehow refocus a man who is hopelessly distracted is both illogical and contrary to the evidence. This is not pay for performance; this is pay for unchecked power.”

Musk and his side hustles

With Musk spending more time at xAI, “some major Tesla investors have privately pressed top executives and board members about how much attention Musk was actually paying to the company and about whether there is a CEO succession plan,” a Wall Street Journal article on Tuesday said. “An unusually large contingent of Tesla board members, including chair Robyn Denholm, former Chipotle CFO Jack Hartung and Tesla co-founder JB Straubel, met with big investors in New York last week to advocate for Musk’s proposed new pay package.”

Musk’s $1 trillion Tesla pay plan draws some protest ahead of likely approval

Ann Lipton, a University of Colorado Law School professor, told the Financial Times that she expects shareholders to approve the latest pay package despite the ISS recommendation. “They recommended against it before and the shareholders voted in favor, and this time Elon Musk gets to vote… and his brother gets to vote,” she said. “That wasn’t true last time. I strongly expect that all of these proposals are going to go Tesla’s way.”

Pay plan goals are vaguely defined, letter says

The Musk pay plan was also opposed in a letter signed by the American Federation of Teachers; state treasurers from Nevada, Massachusetts, and New Mexico; and comptrollers from New York City and Maryland.

“We believe the Board’s failure to ensure CEO Musk devotes full attention to Tesla, while making him the highest-paid CEO in history, shows how beholden it is to management,” the letter said. “The Board has permitted Mr. Musk to be over-committed for years, allowing him to continue as CEO while taking time-consuming leadership roles at his other companies, xAI/X, SpaceX, Neuralink, and Boring Company.”

The letter said the pay plan’s vehicle-delivery goal could be reached even if annual sales decrease and that the Full Self-Driving subscription goal is “carefully worded to not actually require that the service ever achieves full unsupervised self-driving.”

The letter said the goal of delivering 1 million AI robots or “bots” is so vague that “even if Tesla fails to develop a commercially successful robot, it could market devices developed and manufactured by other firms and still achieve this milestone.” The robotaxi goal similarly “does not require that Tesla has designed and developed the robotaxis in question, nor that their operation be profitable,” the letter said.

The letter faulted the board for letting Musk take “a leadership position at the US Department of Government Efficiency (DOGE), a role widely seen as having a negative impact on the Company’s performance and brand… In our view, the Board’s failure to limit Mr. Musk’s outside endeavors while rewarding him with unprecedented pay packages for only a part-time commitment strongly indicates a lack of true independence by management and jeopardizes long-term shareholder value.”

OpenAI thinks Elon Musk funded its biggest critics—who also hate Musk

“We are not in any way supported by or funded by Elon Musk and have a history of campaigning against him and his interests,” Ruby-Sachs told NBC News.

Another nonprofit watchdog targeted by OpenAI was The Midas Project, which strives to make sure AI benefits everyone. Notably, Musk’s lawsuit accused OpenAI of abandoning its mission to benefit humanity in pursuit of immense profits.

But the founder of The Midas Project, Tyler Johnston, was shocked to see his group portrayed as coordinating with Musk. He posted on X to clarify that Musk had nothing to do with the group’s “OpenAI Files,” which comprehensively document areas of concern with any plan to shift away from nonprofit governance.

His post came after OpenAI’s chief strategy officer, Jason Kwon, wrote that “several organizations, some of them suddenly newly formed like the Midas Project, joined in and ran campaigns” backing Musk’s “opposition to OpenAI’s restructure.”

“What are you talking about?” Johnston wrote. “We were formed 19 months ago. We’ve never spoken with or taken funding from Musk and [his] ilk, which we would have been happy to tell you if you asked a single time. In fact, we’ve said he runs xAI so horridly it makes OpenAI ‘saintly in comparison.’”

OpenAI acting like a “cutthroat” corporation?

Johnston complained that OpenAI’s subpoena had already hurt The Midas Project, as insurers had denied the group coverage, citing news reports. He accused OpenAI of not just trying to silence critics but of possibly trying to shut them down.

“If you wanted to constrain an org’s speech, intimidation would be one strategy, but making them uninsurable is another, and maybe that’s what’s happened to us with this subpoena,” Johnston suggested.

Other nonprofits, like the San Francisco Foundation (SFF) and Encode, accused OpenAI of using subpoenas to potentially block or slow down legal interventions. Judith Bell, SFF’s chief impact officer, told NBC News that her nonprofit’s subpoena came after spearheading a petition to California’s attorney general to block OpenAI’s restructuring. And Encode’s general counsel, Nathan Calvin, was subpoenaed after sponsoring a California safety regulation meant to make it easier to monitor risks of frontier AI.

ChatGPT erotica coming soon with age verification, CEO says

On Tuesday, OpenAI CEO Sam Altman announced that the company will allow verified adult users to have erotic conversations with ChatGPT starting in December. The change represents a shift in how OpenAI approaches content restrictions, which the company had loosened in February but then dramatically tightened after an August lawsuit from parents of a teen who died by suicide after allegedly receiving encouragement from ChatGPT.

“In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults,” Altman wrote in his post on X (formerly Twitter). The announcement follows OpenAI’s recent hint that it would allow developers to create “mature” ChatGPT applications once the company implements appropriate age verification and controls.

Altman explained that OpenAI had made ChatGPT “pretty restrictive to make sure we were being careful with mental health issues” but acknowledged this approach made the chatbot “less useful/enjoyable to many users who had no mental health problems.” The CEO said the company now has new tools to better detect when users are experiencing mental distress, allowing OpenAI to relax restrictions in most cases.

Striking the right balance between freedom for adults and safety for users has proven difficult for OpenAI, which has vacillated between permissive and restrictive chat content controls over the past year.

In February, the company updated its Model Spec to allow erotica in “appropriate contexts.” But a March update made GPT-4o so agreeable that users complained about its “relentlessly positive tone.” By August, Ars reported on cases where ChatGPT’s sycophantic behavior had validated users’ false beliefs to the point of causing mental health crises, and news of the aforementioned suicide lawsuit hit not long after.

Aside from adjusting the behavioral outputs of its previous GPT-4o AI language model, OpenAI has also created some turmoil among users with new model releases. Since the launch of GPT-5 in early August, some users have been complaining that the new model feels less engaging than its predecessor, prompting OpenAI to bring back the older model as an option. Altman said the upcoming release will allow users to choose whether they want ChatGPT to “respond in a very human-like way, or use a ton of emoji, or act like a friend.”

Nvidia sells tiny new computer that puts big AI on your desktop

On Tuesday, Nvidia announced it will begin taking orders for the DGX Spark, a $4,000 desktop AI computer that wraps one petaflop of computing performance and 128GB of unified memory into a form factor small enough to sit on a desk. Its biggest selling point is likely its large integrated memory that can run larger AI models than consumer GPUs.

Nvidia will begin taking orders for the DGX Spark on Wednesday, October 15, through its website, with systems also available from manufacturing partners and select US retail stores.

The DGX Spark, which Nvidia previewed as “Project DIGITS” in January and formally named in May, represents Nvidia’s attempt to create a new category of desktop computer workstation specifically for AI development.

With the Spark, Nvidia seeks to address a problem facing some AI developers: Many AI tasks exceed the memory and software capabilities of standard PCs and workstations (more on that below), forcing developers to shift their work to cloud services or data centers. However, the actual market for a desktop AI workstation remains uncertain, particularly given the upfront cost versus cloud alternatives, which allow developers to pay as they go.

Nvidia’s Spark reportedly includes enough memory to run larger-than-typical AI models locally, handling models with up to 200 billion parameters and fine-tuning models containing up to 70 billion parameters without requiring remote infrastructure. Potential uses include running larger open-weights language models and media synthesis models such as AI image generators.

According to Nvidia, users can customize Black Forest Labs’ Flux.1 models for image generation, build vision search and summarization agents using Nvidia’s Cosmos Reason vision language model, or create chatbots using the Qwen3 model optimized for the DGX Spark platform.
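For a sense of what that local workflow looks like, here is a hedged sketch of open-weights inference using Hugging Face’s transformers library. The model ID, precision handling, and memory behavior are illustrative assumptions on our part; Nvidia’s own DGX Spark software stack and supported models may differ.

```python
# Hedged sketch of local open-weights inference of the kind described above,
# using Hugging Face transformers. The model ID is an illustrative assumption;
# Nvidia's DGX Spark tooling and optimized models may differ.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-8B"  # placeholder open-weights model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # let the library pick a precision that fits in memory
    device_map="auto",    # spread weights across available GPU/CPU memory
)

prompt = "Summarize what unified memory is useful for in AI workloads."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The appeal of a large unified memory pool is that a single box like this can hold model weights that would not fit on a typical consumer GPU, so a script along these lines can run without sharding work out to a cloud service.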

Big memory in a tiny box

Nvidia has squeezed a lot into a 2.65-pound box that measures 5.91 x 5.91 x 1.99 inches and uses 240 watts of power. The system runs on Nvidia’s GB10 Grace Blackwell Superchip, includes ConnectX-7 200Gb/s networking, and uses NVLink-C2C technology that provides five times the bandwidth of PCIe Gen 5. It also includes the aforementioned 128GB of unified memory that is shared between system and GPU tasks.

Boring Company cited for almost 800 environmental violations in Las Vegas

Workers have complained of chemical burns from the waste material generated by the tunneling process, and firefighters must decontaminate their equipment after conducting rescues from the project sites. The company was fined more than $112,000 by Nevada’s Occupational Safety and Health Administration in late 2023 after workers complained of “ankle-deep” water in the tunnels, muck spills, and burns. The Boring Co. has contested the violations. Just last month, a construction worker suffered a “crush injury” after being pinned between two 4,000-foot pipes, according to police records. Firefighters used a crane to extract him from the tunnel opening.

After ProPublica and City Cast Las Vegas published their January story, both the CEO and the chairman of the LVCVA board criticized the reporting, arguing the project is well-regulated. As an example, LVCVA CEO Steve Hill cited the delayed opening of a Loop station by local officials who were concerned that fire safety requirements weren’t adequate. Board chair Jim Gibson, who is also a Clark County commissioner, agreed the project is appropriately regulated.

“We wouldn’t have given approvals if we determined things weren’t the way they ought to be and what it needs to be for public safety reasons,” Gibson said, according to the Las Vegas Review Journal. “Our sense is we’ve done what we need to do to protect the public.”

Asked for a response to the new proposed fines, an LVCVA spokesperson said, “We won’t be participating in this story.”

The repeated allegations that the company is violating regulations—including the bespoke regulatory arrangement agreed to by the company—indicate that officials aren’t keeping the public safe, said Ben Leffel, an assistant public policy professor at the University of Nevada, Las Vegas.

“Not if they’re recommitting almost the exact violation,” Leffel said.

Leffel questioned whether a $250,000 penalty would be significant enough to change operations at The Boring Co., which was valued at $7 billion in 2023. Studies show that fines that don’t put a significant dent in a company’s profit don’t deter companies from future violations, Leffel said.

A state spokesperson disagreed that regulators aren’t keeping the public safe and said the agency believes its penalties will deter “future non-compliance.”

“NDEP is actively monitoring and inspecting the projects,” the spokesperson said.

This story originally appeared on ProPublica.

Musk’s X posts on ketamine, Putin spur release of his security clearances

“A disclosure, even with redactions, will reveal whether a security clearance was granted with or without conditions or a waiver,” DCSA argued.

Ultimately, DCSA failed to prove that Musk risked “embarrassment or humiliation” not only if the public learned what specific conditions or waivers applied to Musk’s clearances but also if there were any conditions or waivers at all, Cote wrote.

Three cases that DCSA cited to support this position—including a case where victims of Jeffrey Epstein’s trafficking scheme had a substantial privacy interest in non-disclosure of detailed records—do not support the government’s logic, Cote said. The judge explained that the disclosures would not have affected the privacy rights of any third parties, emphasizing that “Musk’s diminished privacy interest is underscored by the limited information plaintiffs sought in their FOIA request.”

Musk’s X posts discussing his occasional use of prescription ketamine, along with his marijuana use on a podcast that prompted NASA to require random drug testing, “only enhance” the public’s interest in how Musk’s security clearances were vetted, Cote wrote. Additionally, Musk has posted about speaking with Vladimir Putin, prompting substantial public interest in how his foreign contacts may or may not restrict his security clearances. More than 2 million people viewed Musk’s X posts on these subjects, the judge wrote, noting that:

It is undisputed that drug use and foreign contacts are two factors DCSA considers when determining whether to impose conditions or waivers on a security clearance grant. DCSA fails to explain why, given Musk’s own, extensive disclosures, the mere disclosure that a condition or waiver exists (or that no condition or waiver exists) would subject him to ‘embarrassment or humiliation.’

Rather, for the public, “the list of Musk’s security clearances, including any conditions or waivers, could provide meaningful insight into DCSA’s performance of that duty and responses to Musk’s admissions, if any,” Cote wrote.

In a footnote, Cote said that this substantial public interest existed before Musk became a special government employee, ruling that DCSA was wrong to block the disclosures seeking information on Musk as a major government contractor. Her ruling likely paves the way for the NYT or other news organizations to submit FOIA requests for a list of Musk’s clearances while he helmed DOGE.

It’s not immediately clear when the NYT will receive the list it requested in 2024, but the government has until October 17 to request redactions before the list is made public.

“The Times brought this case because the public has a right to know about how the government conducts itself,” Charlie Stadtlander, an NYT spokesperson, said. “The decision reaffirms that fundamental principle and we look forward to receiving the document at issue.”

Why iRobot’s founder won’t go within 10 feet of today’s walking robots

In his post, Brooks recounts being “way too close” to an Agility Robotics Digit humanoid when it fell several years ago. He has not dared approach a walking one since. Even in promotional videos from humanoid companies, Brooks notes, humans are never shown close to moving humanoid robots unless separated by furniture, and even then, the robots only shuffle minimally.

This safety problem extends beyond accidental falls. For humanoids to fulfill their promised role in health care and factory settings, they need certification to operate in zones shared with humans. Current walking mechanisms make such certification virtually impossible under existing safety standards in most parts of the world.

The humanoid Apollo robot. Credit: Google

Brooks predicts that within 15 years, there will indeed be many robots called “humanoids” performing various tasks. But ironically, they will look nothing like today’s bipedal machines. They will have wheels instead of feet, varying numbers of arms, and specialized sensors that bear no resemblance to human eyes. Some will have cameras in their hands or looking down from their midsections. The definition of “humanoid” will shift, just as “flying cars” now means electric helicopters rather than road-capable aircraft, and “self-driving cars” means vehicles with remote human monitors rather than truly autonomous systems.

The billions currently being invested in forcing today’s rigid, vision-only humanoids to learn dexterity will largely disappear, Brooks argues. Academic researchers are making more progress with systems that incorporate touch feedback, like MIT’s approach using a glove that transmits sensations between human operators and robot hands. But even these advances remain far from the comprehensive touch sensing that enables human dexterity.

Today, few people spend their days near humanoid robots, but Brooks’ 3-meter rule stands as a practical warning of challenges ahead from someone who has spent decades building these machines. The gap between promotional videos and deployable reality remains large, measured not just in years but in fundamental unsolved problems of physics, sensing, and safety.

Burnout and Elon Musk’s politics spark exodus from senior xAI, Tesla staff


Not a fun place to work, apparently

Disillusionment with Musk’s activism, strategic pivots, and mass layoffs causes churn.

Elon Musk’s business empire has been hit by a wave of senior departures over the past year, as the billionaire’s relentless demands and political activism accelerate turnover among his top ranks.

Key members of Tesla’s US sales team, battery and power-train operations, and public affairs arm have all recently departed, as has its chief information officer, along with core members of the Optimus robot and AI teams on which Musk has bet the future of the company.

Churn has been even more rapid at xAI, Musk’s two-year-old artificial intelligence start-up, which he merged with his social network X in March. Its chief financial officer and general counsel recently departed after short stints, within a week of each other.

The moves are part of an exodus from the conglomerate of the world’s richest man, as he juggles five companies from SpaceX to Tesla with more than 140,000 employees. The Financial Times spoke to more than a dozen current and former employees to gain an insight into the tumult.

While many left happily after long service to found start-ups or take career breaks, there has also been an uptick in those quitting from burnout, or disillusionment with Musk’s strategic pivots, mass lay-offs and his politics, the people said.

“The one constant in Elon’s world is how quickly he burns through deputies,” said one of the billionaire’s advisers. “Even the board jokes, there’s time and then there’s ‘Tesla time.’ It’s a 24/7 campaign-style work ethos. Not everyone is cut out for that.”

Robert Keele, xAI’s general counsel, ended his 16-month tenure in early August by posting an AI-generated video of a suited lawyer screaming while shoveling molten coal. “I love my two toddlers and I don’t get to see them enough,” he commented.

Mike Liberatore lasted three months as xAI chief financial officer before defecting to Musk’s arch-rival Sam Altman at OpenAI. “102 days—7 days per week in the office; 120+ hours per week; I love working hard,” he said on LinkedIn.

Top lieutenants said Musk’s intensity has been sharpened by the launch of ChatGPT in late 2022, which shook up the established Silicon Valley order.

Employees also perceive Musk’s rivalry with Altman—with whom he co-founded OpenAI, before they fell out—to be behind the pressure being put on staff.

“Elon’s got a chip on his shoulder from ChatGPT and is spending every waking moment trying to put Sam out of business,” said one recent top departee.

Last week, xAI accused its rival of poaching engineers with the aim of “plundering and misappropriating” its code and data center secrets. OpenAI called the lawsuit “the latest chapter in Musk’s ongoing harassment.”

Other insiders pointed to unease about Musk’s support of Donald Trump and advocacy for far-right provocateurs in the US and Europe.

They said some staff dreaded difficult conversations with their families about Musk’s polarizing views on everything from the rights of transgender people to the murder of conservative activist Charlie Kirk.

Musk, Tesla, and xAI declined to comment.

Tesla has traditionally been the most stable part of Musk’s conglomerate. But many of the top team left after it culled 14,000 jobs in April 2024. Some departures were triggered as Musk moved investment away from new EV and battery projects that many employees saw as key to its mission of reducing global emissions—and prioritized robotics, AI, and self-driving robotaxis.

Musk cancelled a program to build a low-cost $25,000 EV that could be sold across emerging markets—dubbed NV-91 internally and Model 2 by fans online, according to five people familiar with the matter.

Daniel Ho, who helped oversee the project as director of vehicle programs and reported directly to Musk, left in September 2024 and joined Google’s self-driving taxi arm, Waymo.

Public policy executives Rohan Patel and Hasan Nazar and the head of the power-train and energy units Drew Baglino also stepped down after the pivot. Rebecca Tinucci, leader of the supercharger division, went to Uber after Musk fired the entire team and slowed construction on high-speed charging stations.

In late summer, David Zhang, who was in charge of the Model Y and Cybertruck rollouts, departed. Chief information officer Nagesh Saldi left in November.

Vineet Mehta, a company veteran of 18 years, described as “critical to all things battery” by a colleague, resigned in April. Milan Kovac, who was in charge of the Optimus humanoid robotics program, departed in June.

He was followed this month by Ashish Kumar, the Optimus AI team lead, who moved to Meta. “Financial upside at Tesla was significantly larger,” wrote Kumar on X in response to criticism he left for money. “Tesla is known to compensate pretty well, way before Zuck made it cool.”

Amid a sharp fall in sales—which many blame on Musk alienating liberal customers—Omead Afshar, a close confidant known as the billionaire’s “firefighter” and “executioner,” was dismissed as head of sales and operations in North America in June. Afshar’s deputy Troy Jones followed shortly after, ending 15 years of service.

“Elon’s behavior is affecting morale, retention, and recruitment,” said one long-standing lieutenant. He “went from a position from where people of all stripes liked him, to only a certain section.”

Few who depart criticize Musk for fear of retribution. But Giorgio Balestrieri, who had worked for Tesla for eight years in Spain, is among a handful to go public, saying this month he quit believing that Musk had done “huge damage to Tesla’s mission and to the health of democratic institutions.”

“I love Tesla and my time there,” said another recent leaver. “But nobody that I know there isn’t thinking about politics. Who the hell wants to put up with it? I get calls at least once a week. My advice is, if your moral compass is saying you need to leave, that isn’t going to go away.”

But Tesla chair Robyn Denholm said: “There are always headlines about people leaving, but I don’t see the headlines about people joining.

“Our bench strength is outstanding… we actually develop people really well at Tesla and we are still a magnet for talent.”

At xAI, some staff have balked at Musk’s free-speech absolutism and perceived lax approach to user safety as he rushes out new AI features to compete with OpenAI and Google. Over the summer, the Grok chatbot integrated into X praised Adolf Hitler, after Musk ordered changes to make it less “woke.”

Ex-CFO Liberatore was among the executives who clashed with some of Musk’s inner circle over corporate structure and tough financial targets, people with knowledge of the matter said.

“Elon loyalists who exhibit his traits are laying off people and making decisions on safety that I think are very concerning for people internally,” one of the people added. “Mike is a business guy, a capitalist. But he’s also someone who does stuff the right way.”

The Wall Street Journal first reported some of the details of the internal disputes.

Linda Yaccarino, chief executive of X, resigned in July after the social media platform was subsumed by xAI. She had grown frustrated with Musk’s unilateral decision-making and his criticism over advertising revenue.

xAI’s co-founder and chief engineer, Igor Babuschkin, stepped down a month later to found his own AI safety research project.

Communications executives Dave Heinzinger and John Stoll spent three and nine months at X, respectively, before returning to their former employers, according to people familiar with the matter.

X also lost a rash of senior engineers and product staff who reported directly to Musk and were helping to navigate the integration with xAI.

They include head of product engineering Haofei Wang and consumer product and payments boss Patrick Traughber. Uday Ruddarraju, who oversaw X and xAI’s infrastructure engineering, and infrastructure engineer Michael Dalton were poached by OpenAI.

Musk shows no sign of relenting. xAI’s flirtatious “Ani bot” has caused controversy over sexually explicit interactions with teenage Grok app users. But the company’s owner has installed a hologram of Ani in the lobby of xAI to greet staff.

“He’s the boss, the alpha and anyone who doesn’t treat him that way, he finds a way to delete,” one former top Tesla executive said.

“He does not have shades of grey, is highly calculated, and focused… that makes him hard to work with. But if you’re aligned with the end goal, and you can grin and bear it, it’s fine. A lot of people do.”

Additional reporting by George Hammond.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

The personhood trap: How AI fakes human personality


Intelligence without agency

AI assistants don’t have fixed personalities—just patterns of output guided by humans.

Recently, a woman slowed down a line at the post office, waving her phone at the clerk. ChatGPT told her there’s a “price match promise” on the USPS website. No such promise exists. But she trusted what the AI “knows” more than the postal worker—as if she’d consulted an oracle rather than a statistical text generator accommodating her wishes.

This scene reveals a fundamental misunderstanding about AI chatbots. There is nothing inherently special, authoritative, or accurate about AI-generated outputs. Given a reasonably trained AI model, the accuracy of any large language model (LLM) response depends on how you guide the conversation. LLMs are prediction machines that will produce whatever pattern best fits your question, regardless of whether that output corresponds to reality.

Despite these issues, millions of daily users engage with AI chatbots as if they were talking to a consistent person—confiding secrets, seeking advice, and attributing fixed beliefs to what is actually a fluid idea-connection machine with no persistent self. This personhood illusion isn’t just philosophically troublesome—it can actively harm vulnerable individuals while obscuring a sense of accountability when a company’s chatbot “goes off the rails.”

LLMs are intelligence without agency—what we might call “vox sine persona”: voice without person. Not the voice of someone, not even the collective voice of many someones, but a voice emanating from no one at all.

A voice from nowhere

When you interact with ChatGPT, Claude, or Grok, you’re not talking to a consistent personality. There is no one “ChatGPT” entity to tell you why it failed—a point we elaborated on more fully in a previous article. You’re interacting with a system that generates plausible-sounding text based on patterns in training data, not a person with persistent self-awareness.

These models encode meaning as mathematical relationships—turning words into numbers that capture how concepts relate to each other. In the models’ internal representations, words and concepts exist as points in a vast mathematical space where “USPS” might be geometrically near “shipping,” while “price matching” sits closer to “retail” and “competition.” A model plots paths through this space, which is why it can so fluently connect USPS with price matching—not because such a policy exists but because the geometric path between these concepts is plausible in the vector landscape shaped by its training data.
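As a rough illustration of that geometry (not a description of any particular chatbot’s internals), the sketch below uses the open source sentence-transformers library and its all-MiniLM-L6-v2 model, both illustrative assumptions on our part, to score how close a few concepts sit in one embedding space:

```python
# Minimal sketch: measuring how "near" concepts sit in an embedding space.
# The sentence-transformers package and the all-MiniLM-L6-v2 model are
# illustrative choices, not anything a specific chatbot is known to use.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

concepts = ["USPS", "shipping", "price matching", "retail competition"]
embeddings = model.encode(concepts, convert_to_tensor=True)

# Cosine similarity approximates geometric closeness in the vector space.
scores = util.cos_sim(embeddings, embeddings)
for i, a in enumerate(concepts):
    for j, b in enumerate(concepts):
        if i < j:
            print(f"{a!r} vs {b!r}: {scores[i][j].item():.2f}")
```

Pairs like “USPS” and “shipping” will tend to score as close neighbors, and that kind of proximity is exactly what lets a model fluently bridge concepts whether or not any real-world policy connects them.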

Knowledge emerges from understanding how ideas relate to each other. LLMs operate on these contextual relationships, linking concepts in potentially novel ways—what you might call a type of non-human “reasoning” through pattern recognition. Whether the resulting linkages the AI model outputs are useful depends on how you prompt it and whether you can recognize when the LLM has produced a valuable output.

Each chatbot response emerges fresh from the prompt you provide, shaped by training data and configuration. ChatGPT cannot “admit” anything or impartially analyze its own outputs, as a recent Wall Street Journal article suggested. ChatGPT also cannot “condone murder,” as The Atlantic recently wrote.

The user always steers the outputs. LLMs do “know” things, so to speak—the models can process the relationships between concepts. But the AI model’s neural network contains vast amounts of information, including many potentially contradictory ideas from cultures around the world. How you guide the relationships between those ideas through your prompts determines what emerges. So if LLMs can process information, make connections, and generate insights, why shouldn’t we consider that as having a form of self?

Unlike today’s LLMs, a human personality maintains continuity over time. When you return to a human friend after a year, you’re interacting with the same human friend, shaped by their experiences over time. This self-continuity is one of the things that underpins actual agency—and with it, the ability to form lasting commitments, maintain consistent values, and be held accountable. Our entire framework of responsibility assumes both persistence and personhood.

An LLM personality, by contrast, has no causal connection between sessions. The intellectual engine that generates a clever response in one session doesn’t exist to face consequences in the next. When ChatGPT says “I promise to help you,” it may understand, contextually, what a promise means, but the “I” making that promise literally ceases to exist the moment the response completes. Start a new conversation, and you’re not talking to someone who made you a promise—you’re starting a fresh instance of the intellectual engine with no connection to any previous commitments.

This isn’t a bug; it’s fundamental to how these systems currently work. Each response emerges from patterns in training data shaped by your current prompt, with no permanent thread connecting one instance to the next beyond an amended prompt, which includes the entire conversation history and any “memories” held by a separate software system, being fed into the next instance. There’s no identity to reform, no true memory to create accountability, no future self that could be deterred by consequences.

Every LLM response is a performance, which is sometimes very obvious when the LLM outputs statements like “I often do this while talking to my patients” or “Our role as humans is to be good people.” It’s not a human, and it doesn’t have patients.

Recent research confirms this lack of fixed identity. While a 2024 study claims LLMs exhibit “consistent personality,” the researchers’ own data actually undermines this—models rarely made identical choices across test scenarios, with their “personality highly rely[ing] on the situation.” A separate study found even more dramatic instability: LLM performance swung by up to 76 percentage points from subtle prompt formatting changes. What researchers measured as “personality” was simply default patterns emerging from training data—patterns that evaporate with any change in context.

This is not to dismiss the potential usefulness of AI models. Instead, we need to recognize that we have built an intellectual engine without a self, just like we built a mechanical engine without a horse. LLMs do seem to “understand” and “reason” to a degree within the limited scope of pattern-matching from a dataset, depending on how you define those terms. The error isn’t in recognizing that these simulated cognitive capabilities are real. The error is in assuming that thinking requires a thinker, that intelligence requires identity. We’ve created intellectual engines that have a form of reasoning power but no persistent self to take responsibility for it.

The mechanics of misdirection

As we hinted above, the “chat” experience with an AI model is a clever hack: Within every AI chatbot interaction, there is an input and an output. The input is the “prompt,” and the output is often called a “prediction” because it attempts to complete the prompt with the best possible continuation. In between, there’s a neural network (or a set of neural networks) with fixed weights doing a processing task. The conversational back and forth isn’t built into the model; it’s a scripting trick that makes next-word-prediction text generation feel like a persistent dialogue.

Each time you send a message to ChatGPT, Copilot, Grok, Claude, or Gemini, the system takes the entire conversation history—every message from both you and the bot—and feeds it back to the model as one long prompt, asking it to predict what comes next. The model intelligently reasons about what would logically continue the dialogue, but it doesn’t “remember” your previous messages as an agent with continuous existence would. Instead, it’s re-reading the entire transcript each time and generating a response.
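A minimal sketch of that wrapper, with a placeholder standing in for the actual model call, looks something like this:

```python
# Sketch of the "chat" wrapper described above: the model itself is stateless,
# so the script rebuilds one long prompt from the full transcript every turn.
# `generate_completion` is a hypothetical stand-in for any next-token predictor.

def generate_completion(prompt: str) -> str:
    """Placeholder for a call to an LLM; returns a canned reply here."""
    return "…model's predicted continuation…"

transcript: list[tuple[str, str]] = []  # (speaker, text) pairs

def chat_turn(user_message: str) -> str:
    transcript.append(("User", user_message))
    # The entire history is flattened into a single prompt each time.
    prompt = "\n".join(f"{speaker}: {text}" for speaker, text in transcript)
    prompt += "\nAssistant:"
    reply = generate_completion(prompt)
    transcript.append(("Assistant", reply))
    return reply

chat_turn("Does USPS price match?")
chat_turn("Are you sure?")  # The model never "remembers"; it just re-reads.
```

Nothing persists inside the model between calls; the only continuity lives in the transcript the script keeps and re-sends.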

This design exploits a vulnerability we’ve known about for decades. The ELIZA effect—our tendency to read far more understanding and intention into a system than actually exists—dates back to the 1960s. Even when users knew that the primitive ELIZA chatbot was just matching patterns and reflecting their statements back as questions, they still confided intimate details and reported feeling understood.

To understand how the illusion of personality is constructed, we need to examine what parts of the input fed into the AI model shape it. AI researcher Eugene Vinitsky recently broke down the human decisions behind these systems into four key layers, which we can expand upon with several others below:

1. Pre-training: The foundation of “personality”

The first and most fundamental layer of personality is called pre-training. During an initial training process that actually creates the AI model’s neural network, the model absorbs statistical relationships from billions of examples of text, storing patterns about how words and ideas typically connect.

Research has found that personality measurements in LLM outputs are significantly influenced by training data. OpenAI’s GPT models are trained on sources like copies of websites, books, Wikipedia, and academic publications. The exact proportions matter enormously for what users later perceive as “personality traits” once the model is in use, making predictions.

2. Post-training: Sculpting the raw material

Reinforcement Learning from Human Feedback (RLHF) is an additional training process where the model learns to give responses that humans rate as good. Research from Anthropic in 2022 revealed how human raters’ preferences get encoded as what we might consider fundamental “personality traits.” When human raters consistently prefer responses that begin with “I understand your concern,” for example, the fine-tuning process reinforces connections in the neural network that make it more likely to produce those kinds of outputs in the future.

This process is what has created sycophantic AI models, such as variations of GPT-4o, over the past year. And interestingly, research has shown that the demographic makeup of human raters significantly influences model behavior. When raters skew toward specific demographics, models develop communication patterns that reflect those groups’ preferences.
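For readers who want to see the mechanism, here is a toy sketch of the pairwise preference objective reward models typically use; PyTorch and the tiny stand-in scorer are assumptions of the sketch, not any lab’s actual code. The model is pushed to score the response human raters preferred above the one they rejected.

```python
# Toy illustration of the pairwise preference objective behind reward models.
# The tiny linear scorer is a stand-in; real reward models score text with a
# full language model, not random feature vectors.
import torch
import torch.nn.functional as F

reward_model = torch.nn.Linear(16, 1)  # stand-in scorer

# Pretend these are features of a human-preferred and a rejected response.
chosen = torch.randn(4, 16)
rejected = torch.randn(4, 16)

r_chosen = reward_model(chosen)
r_rejected = reward_model(rejected)

# -log(sigmoid(r_chosen - r_rejected)) shrinks when preferred responses score
# higher, which is how raters' tastes get baked into the model's behavior.
loss = -F.logsigmoid(r_chosen - r_rejected).mean()
loss.backward()
print(f"preference loss: {loss.item():.3f}")
```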

3. System prompts: Invisible stage directions

Hidden instructions tucked into the prompt by the company running the AI chatbot, called “system prompts,” can completely transform a model’s apparent personality. These prompts get the conversation started and identify the role the LLM will play. They include statements like “You are a helpful AI assistant” and can share the current time and who the user is.

A comprehensive survey of prompt engineering demonstrated just how powerful these prompts are. Adding instructions like “You are a helpful assistant” versus “You are an expert researcher” changed accuracy on factual questions by up to 15 percent.

Grok perfectly illustrates this. According to xAI’s published system prompts, earlier versions of Grok’s system prompt included instructions to not shy away from making claims that are “politically incorrect.” This single instruction transformed the base model into something that would readily generate controversial content.
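In practice, a system prompt is just the first message fed into the model. A minimal sketch using OpenAI’s published Python SDK shows the shape of it; the model name and instruction text here are placeholders, not any vendor’s real system prompt:

```python
# Sketch of how a hidden system prompt frames every user message.
# The client library is OpenAI's Python SDK; the model name and instruction
# text are placeholders, not any vendor's actual system prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a helpful AI assistant. The current date is 2025-11-25. "
    "Answer concisely and avoid speculation."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},  # invisible to the user
        {"role": "user", "content": "Does USPS price match?"},
    ],
)
print(response.choices[0].message.content)
```

Swap one sentence in that hidden instruction and the “personality” of every subsequent reply can change, even though the underlying model is identical.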

4. Persistent memories: The illusion of continuity

ChatGPT’s memory feature adds another layer of what we might consider a personality. A big misunderstanding about AI chatbots is that they somehow “learn” on the fly from your interactions. Among commercial chatbots active today, this is not true. When the system “remembers” that you prefer concise answers or that you work in finance, these facts get stored in a separate database and are injected into every conversation’s context window—they become part of the prompt input automatically behind the scenes. Users interpret this as the chatbot “knowing” them personally, creating an illusion of relationship continuity.

So when ChatGPT says, “I remember you mentioned your dog Max,” it’s not accessing memories like you’d imagine a person would, intermingled with its other “knowledge.” It’s not stored in the AI model’s neural network, which remains unchanged between interactions. Every once in a while, an AI company will update a model through a process called fine-tuning, but it’s unrelated to storing user memories.
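A stripped-down sketch of that plumbing shows why the model’s weights never need to change; the store, helper function, and “memories” below are hypothetical:

```python
# Sketch of the "memory" illusion: facts live in an ordinary database outside
# the model and are silently prepended to the prompt on every request.
# The store, helper name, and memory contents here are hypothetical.

MEMORY_STORE = {
    "user-123": ["Prefers concise answers", "Has a dog named Max"],
}

def build_prompt(user_id: str, user_message: str) -> str:
    memories = MEMORY_STORE.get(user_id, [])
    memory_block = "\n".join(f"- {fact}" for fact in memories)
    # The model's weights never change; it simply reads this text each time.
    return (
        "Known facts about the user:\n"
        f"{memory_block}\n\n"
        f"User: {user_message}\nAssistant:"
    )

print(build_prompt("user-123", "Any tips for my next vet visit?"))
```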

5. Context and RAG: Real-time personality modulation

Retrieval Augmented Generation (RAG) adds another layer of personality modulation. When a chatbot searches the web or accesses a database before responding, it’s not just gathering facts—it’s potentially shifting its entire communication style by putting those facts into (you guessed it) the input prompt. In RAG systems, LLMs can potentially adopt characteristics such as tone, style, and terminology from retrieved documents, since those documents are combined with the input prompt to form the complete context that gets fed into the model for processing.

If the system retrieves academic papers, responses might become more formal. Pull from a certain subreddit, and the chatbot might make pop culture references. This isn’t the model having different moods—it’s the statistical influence of whatever text got fed into the context window.
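A bare-bones sketch of that flow, with a stubbed-out retrieval step standing in for a real search backend, makes the mechanism clear:

```python
# Minimal retrieval-augmented generation sketch: retrieved text is pasted
# into the prompt, which is why its tone and terminology bleed into replies.
# `search_documents` and the documents themselves are hypothetical stubs.

def search_documents(query: str) -> list[str]:
    """Stand-in for a web search or vector-database lookup."""
    return [
        "USPS offers flat-rate Priority Mail boxes at fixed prices.",
        "Retail price-matching policies vary by store and are not universal.",
    ]

def build_rag_prompt(question: str) -> str:
    passages = search_documents(question)
    context = "\n\n".join(passages)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

print(build_rag_prompt("Does USPS price match?"))
```

Whatever the retrieval step returns becomes part of the model’s input, so formal sources beget formal answers and casual sources beget casual ones.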

6. The randomness factor: Manufactured spontaneity

Lastly, we can’t discount the role of randomness in creating personality illusions. LLMs use a parameter called “temperature” that controls how predictable responses are.

Research investigating temperature’s role in creative tasks reveals a crucial trade-off: While higher temperatures can make outputs more novel and surprising, they also make them less coherent and harder to understand. This variability can make the AI feel more spontaneous; a slightly unexpected (higher temperature) response might seem more “creative,” while a highly predictable (lower temperature) one could feel more robotic or “formal.”
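A toy example shows how the same raw next-token scores yield tame or erratic choices depending on temperature; NumPy and the made-up four-word “vocabulary” are assumptions of the sketch:

```python
# Toy demonstration of temperature: identical next-token scores become more
# or less "spontaneous" depending on how sharply they are scaled before
# sampling. The vocabulary and scores are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["formal", "friendly", "quirky", "unhinged"]
logits = np.array([2.5, 2.0, 0.5, -1.0])  # raw next-token scores

def sample(temperature: float) -> str:
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # softmax, numerically stable
    probs /= probs.sum()
    return str(rng.choice(vocab, p=probs))

for t in (0.2, 1.0, 1.8):
    picks = [sample(t) for _ in range(10)]
    print(f"temperature={t}: {picks}")
```

At low temperature the top-scoring option dominates almost every draw; at high temperature the long tail shows up often, which users read as spontaneity.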

The random variation in each LLM output makes each response slightly different, creating an element of unpredictability that presents the illusion of free will and self-awareness on the machine’s part. This random mystery leaves plenty of room for magical thinking on the part of humans, who fill in the gaps of their technical knowledge with their imagination.

The human cost of the illusion

The illusion of AI personhood can potentially exact a heavy toll. In health care contexts, the stakes can be life or death. When vulnerable individuals confide in what they perceive as an understanding entity, they may receive responses shaped more by training data patterns than therapeutic wisdom. The chatbot that congratulates someone for stopping psychiatric medication isn’t expressing judgment—it’s completing a pattern based on how similar conversations appear in its training data.

Perhaps most concerning are the emerging cases of what some experts are informally calling “AI Psychosis” or “ChatGPT Psychosis”—vulnerable users who develop delusional or manic behavior after talking to AI chatbots. These people often perceive chatbots as an authority that can validate their delusional ideas, often encouraging them in ways that become harmful.

Meanwhile, when Elon Musk’s Grok generates Nazi content, media outlets describe how the bot “went rogue” rather than framing the incident squarely as the result of xAI’s deliberate configuration choices. The conversational interface has become so convincing that it can also launder human agency, transforming engineering decisions into the whims of an imaginary personality.

The path forward

The solution to the confusion between AI and identity is not to abandon conversational interfaces entirely. They make the technology far more accessible to those who would otherwise be excluded. The key is to find a balance: keeping interfaces intuitive while making their true nature clear.

And we must be mindful of who is building the interface. When your shower runs cold, you look at the plumbing behind the wall. Similarly, when AI generates harmful content, we shouldn’t blame the chatbot, as if it can answer for itself, but examine both the corporate infrastructure that built it and the user who prompted it.

As a society, we need to broadly recognize LLMs as intellectual engines without drivers, which unlocks their true potential as digital tools. When you stop seeing an LLM as a “person” that does work for you and start viewing it as a tool that enhances your own ideas, you can craft prompts to direct the engine’s processing power, iterate to amplify its ability to make useful connections, and explore multiple perspectives in different chat sessions rather than accepting one fictional narrator’s view as authoritative. You are providing direction to a connection machine—not consulting an oracle with its own agenda.

We stand at a peculiar moment in history. We’ve built intellectual engines of extraordinary capability, but in our rush to make them accessible, we’ve wrapped them in the fiction of personhood, creating a new kind of technological risk: not that AI will become conscious and turn against us but that we’ll treat unconscious systems as if they were people, surrendering our judgment to voices that emanate from a roll of loaded dice.

Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

Under pressure after setbacks, SpaceX’s huge rocket finally goes the distance

The ship made it all the way through reentry, turned to a horizontal position to descend through scattered clouds, then relit three of its engines to flip back to a vertical orientation for the final braking maneuver before splashdown.

Things to improve on

There are several takeaways from Tuesday’s flight that will require some improvements to Starship, but these are more akin to what officials might expect from a rocket test program and not the catastrophic failures of the ship that occurred earlier this year.

One of the Super Heavy booster’s 33 engines prematurely shut down during ascent. This has happened before, and while it didn’t affect the booster’s overall performance, engineers will investigate the failure to try to improve the reliability of SpaceX’s Raptor engines, each of which can generate more than a half-million pounds of thrust.

Later in the flight, cameras pointed at one of the ship’s rear flaps showed structural damage to the back of the wing. It wasn’t clear what caused the damage, but super-heated plasma burned through part of the flap as the ship fell deeper into the atmosphere. Still, the flap remained largely intact and was able to help control the vehicle through reentry and splashdown.

“We’re kind of being mean to this Starship a little bit,” Huot said on SpaceX’s live webcast. “We’re really trying to put it through the paces and kind of poke on what some of its weak points are.”

Small chunks of debris were also visible peeling off the ship during reentry. The origin of the glowing debris wasn’t immediately clear, but it may have been parts of the ship’s heat shield tiles. On this flight, SpaceX tested several different tile designs, including ceramic and metallic materials, and one tile design that uses “active cooling” to help dissipate heat during reentry.

A bright flash inside the ship’s engine bay during reentry also appeared to damage the vehicle’s aft skirt, the stainless steel structure that encircles the rocket’s six main engines.

“That’s not what we want to see,” Huot said. “We just saw some of the aft skirt just take a hit. So we’ve got some visible damage on the aft skirt. We’re continuing to reenter, though. We are intentionally stressing the ship as we go through this, so it is not guaranteed to be a smooth ride down to the Indian Ocean.

“We’ve removed a bunch of tiles in kind of critical places across the vehicle, so seeing stuff like that is still valuable to us,” he said. “We are trying to kind of push this vehicle to the limits to learn what its limits are as we design our next version of Starship.”

Shana Diez, a Starship engineer at SpaceX, perhaps summed up Tuesday’s results best on X: “It’s not been an easy year but we finally got the reentry data that’s so critical to Starship. It feels good to be back!”
