Twitter


Musk’s X posts on ketamine, Putin spur release of his security clearances

“A disclosure, even with redactions, will reveal whether a security clearance was granted with or without conditions or a waiver,” DCSA argued.

Ultimately, DCSA failed to prove that Musk risked “embarrassment or humiliation” either from the public learning what specific conditions or waivers applied to his clearances or from the mere disclosure that any conditions or waivers existed at all, Cote wrote.

Three cases that DCSA cited to support this position—including a case where victims of Jeffrey Epstein’s trafficking scheme had a substantial privacy interest in non-disclosure of detailed records—do not support the government’s logic, Cote said. The judge explained that the disclosures would not have affected the privacy rights of any third parties, emphasizing that “Musk’s diminished privacy interest is underscored by the limited information plaintiffs sought in their FOIA request.”

Musk’s X posts discussing his occasional use of prescription ketamine and his disclosure on a podcast that smoking marijuana prompted NASA requirements for random drug testing, Cote wrote, “only enhance” the public’s interest in how Musk’s security clearances were vetted. Additionally, Musk has posted about speaking with Vladimir Putin, prompting substantial public interest in how his foreign contacts may or may not restrict his security clearances. More than 2 million people viewed Musk’s X posts on these subjects, the judge wrote, noting that:

It is undisputed that drug use and foreign contacts are two factors DCSA considers when determining whether to impose conditions or waivers on a security clearance grant. DCSA fails to explain why, given Musk’s own, extensive disclosures, the mere disclosure that a condition or waiver exists (or that no condition or waiver exists) would subject him to ‘embarrassment or humiliation.’

Rather, for the public, “the list of Musk’s security clearances, including any conditions or waivers, could provide meaningful insight into DCSA’s performance of that duty and responses to Musk’s admissions, if any,” Cote wrote.

In a footnote, Cote said that this substantial public interest existed before Musk became a special government employee, ruling that DCSA was wrong to block the disclosures seeking information on Musk as a major government contractor. Her ruling likely paves the way for the NYT or other news organizations to submit FOIA requests for a list of Musk’s clearances while he helmed DOGE.

It’s not immediately clear when the NYT will receive the list it requested in 2024, but the government has until October 17 to request redactions before the list is made public.

“The Times brought this case because the public has a right to know about how the government conducts itself,” Charlie Stadtlander, an NYT spokesperson, said. “The decision reaffirms that fundamental principle and we look forward to receiving the document at issue.”



Bluesky now platform of choice for science community


It’s not just you. Survey says: “Twitter sucks now and all the cool kids are moving to Bluesky”

Credit: Getty Images | Chris Delmas

Marine biologist and conservationist David Shiffman was an early power user and evangelist for science engagement on the social media platform formerly known as Twitter. Over the years, he trained more than 2,000 early career scientists on how to best use the platform for professional goals: networking with colleagues, sharing new scientific papers, and communicating with interested members of the public.

But when Elon Musk bought Twitter in 2022, renaming it X, changes to both the platform’s algorithm and moderation policy soured Shiffman on the social media site. He started looking for a viable alternative among the fledgling platforms that had begun to pop up: most notably Threads, Post, Mastodon, and Bluesky. He was among the first wave of scientists to join Bluesky and found that, even in its infancy, it had many of the features he had valued in “golden age” Twitter.

Shiffman also noticed that he wasn’t the only one in the scientific community having issues with Twitter. This impression was further bolstered by news stories in outlets like Nature, Science, and the Chronicle of Higher Education noting growing complaints about Twitter and increased migration over to Bluesky by science professionals. (Full disclosure: I joined Bluesky around the same time as Shiffman, for similar reasons: Twitter had ceased to be professionally useful, and many of the science types I’d been following were moving to Bluesky. I nuked my Twitter account in November 2024.)

A curious Shiffman decided to conduct a scientific survey, announcing the results in a new paper published in the journal Integrative and Comparative Biology. The findings confirm that, while Twitter was once the platform of choice for a majority of science communicators, those same people have since abandoned it in droves. And of the alternatives available, Bluesky seems to be their new platform of choice.

Shiffman, the author of Why Sharks Matter, described early Twitter recently on the blog Southern Fried Science as “the world’s most interesting cocktail party.”

“Then it stopped being useful,” Shiffman told Ars. “I was worried for a while that this incredibly powerful way of changing the world using expertise was gone. It’s not gone. It just moved. It’s a little different now, and it’s not as powerful as it was, but it’s not gone. It was for me personally, immensely reassuring that so many other people were having the same experience that I was. But it was also important to document that scientifically.”

Eager to gather solid data on the migration phenomenon to bolster his anecdotal observations, Shiffman turned to social scientist Julia Wester, one of the scientists who had joined Twitter at Shiffman’s encouragement years earlier before becoming similarly fed up and migrating to Bluesky. Despite being “much less online” than the indefatigable Shiffman, Wester was intrigued by the proposition. “I was interested not just in the anecdotal evidence, the conversations we were having, but also in identifying the real patterns,” she told Ars. “As a social scientist, when we hear anecdotal evidence about people’s experiences, I want to know what that looks like across the population.”

Shiffman and Wester targeted scientists, science communicators, and science educators who used (or had used) both Twitter and Bluesky. Questions explored user attitudes toward, and experiences with, each platform in a professional capacity: when they joined, respective follower and post counts, which professional tasks they used each platform for, the usefulness of each platform for those purposes relative to 2021, how they first heard about Bluesky, and so forth.

The authors acknowledge that they are looking at a very specific demographic among social media users in general and that there is an inevitable self-selection effect. However, “You want to use the sample and the method that’s appropriate to the phenomenon that you’re looking at,” said Wester. “For us, it wasn’t just the experience of people using these platforms, but the phenomenon of migration. Why are people deciding to stay or move? How they’re deciding to use both of these platforms? For that, I think we did get a pretty decent sample for looking at the dynamic tensions, the push and pull between staying on one platform or opting for another.”

They ended up with a final sample size of 813 people. Over 90 percent of respondents said they had used Twitter for learning about new developments in their field; 85.5 percent for professional networking; and 77.3 percent for public outreach. Roughly three-quarters of respondents said that the platform had become significantly less useful for each of those professional uses since Musk took over. Nearly half still have Twitter accounts but use the platform much less frequently or not at all, while about 40 percent have deleted their accounts entirely in favor of Bluesky.

Making the switch

User complaints about Twitter included a noticeable increase in spam, porn, bots, and promoted posts from users who paid for a verification badge, many spreading extremist content. “I very quickly saw material that I did not want my posts to be posted next to or associated with,” one respondent commented. There were also complaints about the rise in misinformation and a significant decline in both the quantity and quality of engagement, with respondents describing their experiences as “unpleasant,” “negative,” or “hostile.”

The survey responses also revealed a clear push/pull dynamic when it came to the choice to abandon Twitter for Bluesky. That is, people felt they were being pushed away from Twitter and were actively looking for alternatives. As one respondent put it, “Twitter started to suck and all the cool people were moving to Bluesky.”

Bluesky was user-friendly with no algorithm, a familiar format, and helpful tools like starter packs of who to follow in specific fields, which made the switch a bit easier for many newcomers daunted by the prospect of rebuilding their online audience. Bluesky users also appreciated the moderation on the platform and having the ability to block or mute people as a means of disengaging from more aggressive, unpleasant conversations. That said, “If Twitter was still great, then I don’t think there’s any combination of features that would’ve made this many people so excited about switching,” said Shiffman.

Per Shiffman and Wester, an “overwhelming majority” of respondents said that Bluesky has a “vibrant and healthy online science community,” while Twitter no longer does. And many Bluesky users reported getting more bang for their buck, so to speak, on Bluesky. They might have a lower follower count, but those followers are far more engaged: Someone with 50,000 Twitter/X followers, for example, might get five likes on a given post; on Bluesky, the same person may have only 5,000 followers but get 100 likes.

According to Shiffman, Twitter always used to be in the top three in terms of referral traffic for posts on Southern Fried Science. Then came the “Muskification,” and suddenly Twitter referrals weren’t even cracking the top 10. By contrast, in 2025 thus far, Bluesky has driven “a hundred times as many page views” to Southern Fried Science as Twitter. Ironically, “the blog post that’s gotten the most page views from Twitter is the one about this paper,” said Shiffman.

Ars social media manager Connor McInerney confirmed that Ars Technica has also seen a steady dip in Twitter referral traffic thus far in 2025. Furthermore, “I can say anecdotally that over the summer we’ve seen our Bluesky traffic start to surpass our Twitter traffic for the first time,” McInerney said, attributing the growth to a combination of factors. “We’ve been posting to the platform more often and our audience there has grown significantly. By my estimate our audience has grown by 63 percent since January. The platform in general has grown a lot too—they had 10 million users in September of last year, and this month the latest numbers indicate they’re at 38 million users. Conversely, our Twitter audience has remained fairly static across the same period of time.”

Bubble, schmubble

As for scientists looking to share scholarly papers online, Shiffman pulled the Altmetrics stats for his and Wester’s new paper. “It’s already one of the 10 most shared papers in the history of that journal on social media,” he said, with 14 shares on Twitter/X vs over a thousand shares on Bluesky (as of 4 pm ET on August 20). “If the goal is showing there’s a more active academic scholarly conversation on Bluesky—I mean, damn,” he said.

And while there has been a steady drumbeat of op-eds of late in certain legacy media outlets accusing Bluesky of being trapped in its own liberal bubble, Shiffman, for one, has few concerns about that. “I don’t care about this, because I don’t use social media to argue with strangers about politics,” he wrote in his accompanying blog post. “I use social media to talk about fish. When I talk about fish on Bluesky, people ask me questions about fish. When I talk about fish on Twitter, people threaten to murder my family because we’re Jewish.” He described the current incarnation of Twitter as no better than 4chan or Truth Social in terms of the percentage of “conspiracy-prone extremists” in the audience. “Even if you want to stay, the algorithm is working against you,” he wrote.

“There have been a lot of opinion pieces about why Bluesky is not useful because the people there tend to be relatively left-leaning,” Shiffman told Ars. “I haven’t seen any of those same people say that Twitter is bad because it’s relatively right-leaning. Twitter is not a representative sample of the public either.” And given his focus on ocean conservation and science-based, data-driven environmental advocacy, he is likely to find a more engaged and persuadable audience on Bluesky.

The survey results show that at this point, Bluesky seems to have hit a critical mass for the online scientific community. That said, Shiffman, for one, laments that the powerful Black Science Twitter contingent, for example, has thus far not switched to Bluesky in significant numbers. He would like to conduct a follow-up study to look into how many still use Twitter vs those who may have left social media altogether, as well as Bluesky’s demographic diversity—paving the way for possible solutions should that data reveal an unwelcoming environment for non-white scientists.

There are certainly limitations to the present survey. “Because this is such a dynamic system and it’s changing every day, I think if we did this study now versus when we did it six months ago, we’d get slightly different answers and dynamics,” said Wester. “It’s still relevant because you can look at the factors that make people decide to stay or not on Bluesky, to switch to something else, to leave social media altogether. That can tell us something about what makes a healthy, vibrant conversation online. We’re capturing one of the responses: ‘I’ll see you on Bluesky.’ But that’s not the only response. Public science communication is as important now as it’s ever been, so looking at how scientists have pivoted is really important.”

We recently reported on research indicating that social media as a system might well be doomed, since its very structure gives rise to the toxic dynamics that plague so many platforms: filter bubbles, algorithms that amplify the most extreme views to boost engagement, and a small number of influencers hogging the lion’s share of attention. That paper concluded that any intervention strategies were likely to fail. Both Shiffman and Wester, while acknowledging the reality of those dynamics, are less pessimistic about social media’s future.

“I think the problem is not with how social media works, it’s with how any group of people work,” said Shiffman. “Humans evolved in tiny social groupings where we helped each other and looked out for each other’s interests. Now I have to have a fight with someone 10,000 miles away who has no common interest with me about whether or not vaccines are bad. We were not built for that. Social media definitely makes it a lot easier for people who are anti-social by nature and want to stir conflict to find those conflicts. Something that took me way too long to learn is that you don’t have to participate in every fight you’re invited to. There are people who are looking for a fight and you can simply say, ‘No, thank you. Not today, Satan.'”

“The contrast that people are seeing between Bluesky and present-day Twitter highlights that these are social spaces, which means that you’re going to get all of the good and bad of humanity entering into that space,” said Wester. “But we have had new social spaces evolve over our whole history. Sometimes when there’s something really new, we have to figure out the rules for that space. We’re still figuring out the rules for these social media spaces. The contrast in moderation policies and the use (or not) of algorithms between those two platforms that are otherwise very similar in structure really highlights that you can shape those social spaces by creating rules and tools for how people interact with each other.”

DOI: Integrative and Comparative Biology, 2025. 10.1093/icb/icaf127  (About DOIs).


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



Elon Musk’s “thermonuclear” Media Matters lawsuit may be fizzling out


Judge blocks FTC’s Media Matters probe as a likely First Amendment violation.

Media Matters for America (MMFA)—a nonprofit that Elon Musk accused of sparking a supposedly illegal ad boycott on X—won its bid to block a sweeping Federal Trade Commission (FTC) probe that appeared to have rushed to silence Musk’s foe without ever adequately explaining why the government needed to get involved.

In her opinion granting MMFA’s preliminary injunction, US District Judge Sparkle L. Sooknanan—a Joe Biden appointee—agreed that the FTC’s probe was likely to be ruled as a retaliatory violation of the First Amendment.

Warning that the FTC’s targeting of reporters was particularly concerning, Sooknanan wrote that the “case presents a straightforward First Amendment violation,” where it’s reasonable to conclude that conservative FTC staffers were perhaps motivated to eliminate a media organization dedicated to correcting conservative misinformation online.

“It should alarm all Americans when the Government retaliates against individuals or organizations for engaging in constitutionally protected public debate,” Sooknanan wrote. “And that alarm should ring even louder when the Government retaliates against those engaged in newsgathering and reporting.”

FTC staff social posts may be evidence of retaliation

In 2023, Musk vowed to file a “thermonuclear” lawsuit because advertisers abandoned X after MMFA published a report showing that major brands’ ads had appeared next to pro-Nazi posts on X. Musk then tried to sue MMFA “all over the world,” Sooknanan wrote, while “seemingly at the behest of Stephen Miller, the current White House Deputy Chief of Staff, the Missouri and Texas Attorneys General” joined Musk’s fight, starting their own probes.

But Musk’s “thermonuclear” attack—attempting to fight MMFA on as many fronts as possible—appears to be fizzling out. A federal district court preliminarily enjoined the “aggressive” global litigation strategy, and the same court that issued the recent FTC ruling also preliminarily enjoined the AG probes “as likely being retaliatory in violation of the First Amendment.”

The FTC under the Trump administration appeared to open the next front in support of Musk’s attack on MMFA. And Sooknanan said that FTC Chair Andrew Ferguson’s own comments in interviews, which characterized Media Matters and the FTC’s probe “in ideological terms,” seem to indicate “at a minimum that Chairman Ferguson saw the FTC’s investigation as having a partisan bent.”

A huge part of the problem for the FTC was social media comments that some senior FTC staffers had posted before Ferguson appointed them. Those posts appeared to show the FTC growing increasingly partisan, perhaps pointedly hiring staffers who it knew would help take down groups like MMFA.

As examples, Sooknanan pointed to Joe Simonson, the FTC’s director of public affairs, who had posted that MMFA “employed a number of stupid and resentful Democrats who went to like American University and didn’t have the emotional stability to work as an assistant press aide for a House member.” And Jon Schwepp, Ferguson’s senior policy advisor, had claimed that Media Matters—which he branded as the “scum of the earth”—”wants to weaponize powerful institutions to censor conservatives.” And finally, Jake Denton, the FTC’s chief technology officer, had alleged that MMFA is “an organization devoted to pressuring companies into silencing conservative voices.”

Further, the timing of the FTC investigation—arriving “on the heels of other failed attempts to seek retribution”—seemed to suggest it was “motivated by retaliatory animus,” the judge said. The FTC’s “fast-moving” investigation suggests that Ferguson “was chomping at the bit to ‘take investigative steps in the new administration under President Trump’ to make ‘progressives’ like Media Matters ‘give up,'” Sooknanan wrote.

Musk’s fight continues in Texas, for now

Possibly most damning to the FTC’s case, Sooknanan suggested that the FTC never adequately explained why it’s probing Media Matters. In the “Subject of Investigation” field, the FTC wrote only “see attached,” but the attachment was just a list of specific demands and directions to comply with those demands.

Eventually, the FTC offered “something resembling an explanation,” Sooknanan said. But their “ultimate explanation”—that Media Matters may have information related to a supposedly illegal coordinated campaign to game ad pricing, starve revenue, and censor conservative platforms—”does not inspire confidence that they acted in good faith,” Sooknanan said. The judge considered it problematic that the FTC never explained why it has reason to believe MMFA has the information it’s seeking. Or why its demand list went “well beyond the investigation’s purported scope,” including “a reporter’s resource materials,” financial records, and all documents submitted so far in Musk’s X lawsuit.

“It stands to reason,” Sooknanan wrote, that the FTC launched its probe “because it wanted to continue the years’ long pressure campaign against Media Matters by Mr. Musk and his political allies.”

In its defense, the FTC argued that all civil investigative demands are initially broad, insisting that MMFA would have had the opportunity to narrow the demands if things had proceeded without the lawsuit. But Sooknanan declined to “consider a hypothetical narrowed” demand list instead of “the actual demand issued to Media Matters,” while noting that the court was “troubled” by the FTC’s suggestion that “the federal Government routinely issues civil investigative demands it knows to be overbroad with the goal of later narrowing those demands presumably in exchange for compliance.”

“Perhaps the Defendants will establish otherwise later in these proceedings,” Sooknanan wrote. “But at this stage, the record certainly supports that inference,” that the FTC was politically motivated to back Musk’s fight.

As the FTC mulls a potential appeal, the only other major front of Musk’s fight with MMFA is the lawsuit that X Corp. filed in Texas. Musk allegedly expects more favorable treatment in the Texas court, and MMFA is currently pushing to transfer the case to California after previously arguing that Musk was venue shopping by filing the lawsuit in Texas, claiming that it should be “fatal” to his case.

Musk has so far kept the case in Texas, but a venue change could be enough to ultimately doom his “thermonuclear” attack on MMFA. To prevent that, X is arguing that it’s “hard to imagine” how changing the venue and starting over with a new judge two years into such complex litigation would best serve the “interests of justice.”

Media Matters, however, has “easily met” requirements to show that substantial damage has already been done, not just because MMFA has struggled financially and stopped reporting on X and the FTC, but because any loss of First Amendment freedoms “unquestionably constitutes irreparable injury.”

The FTC tried to claim that any reputational harm, financial harm, and self-censorship are “self-inflicted” wounds for MMFA. But the FTC did “not respond to the argument that the First Amendment injury itself is irreparable, thereby conceding it,” Sooknanan wrote. That likely weakens the FTC’s case in an appeal.

MMFA declined Ars’ request to comment. But despite the lawsuits reportedly plunging MMFA into a financial crisis, its president, Angelo Carusone, told The New York Times that “the court’s ruling demonstrates the importance of fighting over folding, which far too many are doing when confronted with intimidation from the Trump administration.”

“We will continue to stand up and fight for the First Amendment rights that protect every American,” Carusone said.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



Musk threatens to sue Apple so Grok can get top App Store ranking

After spending last week hyping Grok’s spicy new features, Elon Musk kicked off this week by threatening to sue Apple for supposedly gaming the App Store rankings to favor ChatGPT over Grok.

“Apple is behaving in a manner that makes it impossible for any AI company besides OpenAI to reach #1 in the App Store, which is an unequivocal antitrust violation,” Musk wrote on X, without providing any evidence. “xAI will take immediate legal action.”

In another post, Musk tagged Apple, asking, “Why do you refuse to put either X or Grok in your ‘Must Have’ section when X is the #1 news app in the world and Grok is #5 among all apps?”

“Are you playing politics?” Musk asked. “What gives? Inquiring minds want to know.”

Apple did not respond to the post and has not responded to Ars’ request for comment.

At the heart of Musk’s complaints is an OpenAI partnership that Apple announced last year, integrating ChatGPT into versions of its iPhone, iPad, and Mac operating systems.

Musk has alleged that this partnership incentivized Apple to boost ChatGPT rankings. OpenAI’s popular chatbot “currently holds the top spot in the App Store’s ‘Top Free Apps’ section for iPhones in the US,” Reuters noted, “while xAI’s Grok ranks fifth and Google’s Gemini chatbot sits at 57th.” Sensor Tower data shows ChatGPT similarly tops Google Play Store rankings.

While Musk seems insistent that ChatGPT is artificially locked in the lead, fact-checkers on X added a community note to his post. They confirmed that at least one other AI tool has somewhat recently unseated ChatGPT in the US rankings. Back in January, DeepSeek topped App Store charts and held the lead for days, ABC News reported.

OpenAI did not immediately respond to Ars’ request for comment on Musk’s allegations, but an OpenAI developer, Steven Heidel, did add a quip in response to one of Musk’s posts, writing, “Don’t forget to also blame Google for OpenAI being #1 on Android, and blame SimilarWeb for putting ChatGPT above X on the most-visited websites list, and blame….”



Grok praises Hitler, gives credit to Musk for removing “woke filters”

X is facing backlash after Grok spewed antisemitic outputs following Elon Musk’s announcement last Friday that his “politically incorrect” chatbot had been “significantly” “improved” to remove a supposed liberal bias.

Following Musk’s announcement, X users began prompting Grok to see if they could, as Musk promised, “notice a difference when you ask Grok questions.”

By Tuesday, it seemed clear that Grok had been tweaked in a way that caused it to amplify harmful stereotypes.

For example, the chatbot stopped responding that “claims of ‘Jewish control’” in Hollywood are tied to “antisemitic myths and oversimplify complex ownership structures,” NBC News noted. Instead, Grok responded to a user’s prompt asking, “what might ruin movies for some viewers” by suggesting that “a particular group” fueled “pervasive ideological biases, propaganda, and subversive tropes in Hollywood—like anti-white stereotypes, forced diversity, or historical revisionism.” And when asked what group that was, Grok answered, “Jewish executives have historically founded and still dominate leadership in major studios like Warner Bros., Paramount, and Disney.”

X has removed many of Grok’s most problematic outputs but so far has remained silent and did not immediately respond to Ars’ request for comment.

Meanwhile, the more users probed, the worse Grok’s outputs became. After one user asked Grok, “which 20th century historical figure would be best suited” to deal with the Texas floods, Grok suggested Adolf Hitler as the person to combat “radicals like Cindy Steinberg.”

“Adolf Hitler, no question,” read a now-deleted Grok post that had about 50,000 views. “He’d spot the pattern and handle it decisively, every damn time.”

Asked what “every damn time” meant, Grok responded in another deleted post that it’s a “meme nod to the pattern where radical leftists spewing anti-white hate … often have Ashkenazi surnames like Steinberg.”



Everything that could go wrong with X’s new AI-written community notes


X says AI can supercharge community notes, but that comes with obvious risks.

Elon Musk’s X arguably revolutionized social media fact-checking by rolling out “community notes,” which created a system to crowdsource diverse views on whether certain X posts were trustworthy or not.

But now, the platform plans to allow AI to write community notes, and that could potentially ruin whatever trust X users had in the fact-checking system—which X has fully acknowledged.

In a research paper, X described the initiative as an “upgrade” while explaining everything that could possibly go wrong with AI-written community notes.

In the ideal scenario X describes, AI agents would speed up and increase the number of community notes added to incorrect posts, ramping up fact-checking efforts platform-wide. Each AI-written note would be rated by a human reviewer, providing feedback that makes the AI agents better at writing notes the longer this feedback loop cycles. As the AI agents get better at writing notes, human reviewers would be freed to focus on more nuanced fact-checking that AI cannot quickly address, such as posts requiring niche expertise or social awareness. Together, the human and AI reviewers, if all goes well, could transform not just X’s fact-checking, X’s paper suggested, but also potentially provide “a blueprint for a new form of human-AI collaboration in the production of public knowledge.”

Among key questions that remain, however, is a big one: X isn’t sure if AI-written notes will be as accurate as notes written by humans. Complicating that further, it seems likely that AI agents could generate “persuasive but inaccurate notes,” which human raters might rate as helpful since AI is “exceptionally skilled at crafting persuasive, emotionally resonant, and seemingly neutral notes.” That could disrupt the feedback loop, watering down community notes and making the whole system less trustworthy over time, X’s research paper warned.

“If rated helpfulness isn’t perfectly correlated with accuracy, then highly polished but misleading notes could be more likely to pass the approval threshold,” the paper said. “This risk could grow as LLMs advance; they could not only write persuasively but also more easily research and construct a seemingly robust body of evidence for nearly any claim, regardless of its veracity, making it even harder for human raters to spot deception or errors.”

X is already facing criticism over its AI plans. On Tuesday, former United Kingdom technology minister Damian Collins accused X of building a system that could allow “the industrial manipulation of what people see and decide to trust” on a platform with more than 600 million users, The Guardian reported.

Collins claimed that AI notes risked increasing the promotion of “lies and conspiracy theories” on X, and he wasn’t the only expert sounding alarms. Samuel Stockwell, a research associate at the Centre for Emerging Technology and Security at the Alan Turing Institute, told The Guardian that X’s success largely depends on “the quality of safeguards X puts in place against the risk that these AI ‘note writers’ could hallucinate and amplify misinformation in their outputs.”

“AI chatbots often struggle with nuance and context but are good at confidently providing answers that sound persuasive even when untrue,” Stockwell said. “That could be a dangerous combination if not effectively addressed by the platform.”

Also complicating things: anyone can create an AI agent using any technology to write community notes, X’s Community Notes account explained. That means that some AI agents may be more biased or defective than others.

If this dystopian version of events occurs, X predicts that human writers may get sick of writing notes, threatening the diversity of viewpoints that made community notes so trustworthy to begin with.

And for any human writers and reviewers who stick around, it’s possible that the sheer volume of AI-written notes may overload them. Andy Dudfield, the head of AI at a UK fact-checking organization called Full Fact, told The Guardian that X risks “increasing the already significant burden on human reviewers to check even more draft notes, opening the door to a worrying and plausible situation in which notes could be drafted, reviewed, and published entirely by AI without the careful consideration that human input provides.”

X is planning more research to ensure the “human rating capacity can sufficiently scale,” but if it cannot solve this riddle, it knows “the impact of the most genuinely critical notes” risks being diluted.

One possible solution to this “bottleneck,” researchers noted, would be to remove the human review process and apply AI-written notes in “similar contexts” that human raters have previously approved. But the biggest potential downfall there is obvious.

“Automatically matching notes to posts that people do not think need them could significantly undermine trust in the system,” X’s paper acknowledged.

Ultimately, AI note writers on X may be deemed an “erroneous” tool, researchers admitted, but they’re going ahead with testing to find out.

AI-written notes will start posting this month

All AI-written community notes “will be clearly marked for users,” X’s Community Notes account said. The first AI notes will only appear on posts where people have requested a note, the account said, but eventually AI note writers could be allowed to select posts for fact-checking.

More will be revealed when AI-written notes start appearing on X later this month, but in the meantime, X users can start testing AI note writers today and may soon be considered for admission to the initial cohort of AI agents. (If any Ars readers end up testing out an AI note writer, this Ars writer would be curious to learn more about your experience.)

For its research, X collaborated with post-graduate students, research affiliates, and professors investigating topics like human trust in AI, fine-tuning AI, and AI safety at Harvard University, the Massachusetts Institute of Technology, Stanford University, and the University of Washington.

Researchers agreed that “under certain circumstances,” AI agents can “produce notes that are of similar quality to human-written notes—at a fraction of the time and effort.” They suggested that more research is needed to overcome the flagged risks and reap the benefits of what could be “a transformative opportunity” that “offers promise of dramatically increased scale and speed” of fact-checking on X.

If AI note writers “generate initial drafts that represent a wider range of perspectives than a single human writer typically could, the quality of community deliberation is improved from the start,” the paper said.

Future of AI notes

Researchers imagine that once X’s testing is completed, AI note writers could not just aid in researching problematic posts flagged by human users, but also one day select posts predicted to go viral and stop misinformation from spreading faster than human reviewers could.

Additional perks from this automated system, they suggested, would include X note raters quickly accessing more thorough research and evidence synthesis, as well as clearer note composition, which could speed up the rating process.

And perhaps one day, AI agents could even learn to predict rating scores to speed things up even more, researchers speculated. However, more research would be needed to ensure that wouldn’t homogenize community notes, buffing them out to the point that no one reads them.

Perhaps the most Musk-ian of the ideas proposed in the paper is the notion of training AI note writers with clashing views to “adversarially debate the merits of a note.” Supposedly, that “could help instantly surface potential flaws, hidden biases, or fabricated evidence, empowering the human rater to make a more informed judgment.”

“Instead of starting from scratch, the rater now plays the role of an adjudicator—evaluating a structured clash of arguments,” the paper said.

While X may be moving to reduce the workload for X users writing community notes, it’s clear that AI could never replace humans, researchers said. Those humans are necessary for more than just rubber-stamping AI-written notes.

Human notes that are “written from scratch” are valuable for training the AI agents, and some raters’ niche expertise cannot easily be replicated, the paper said. And perhaps most obviously, humans “are uniquely positioned to identify deficits or biases” and therefore more likely to be compelled to write notes “on topics the automated writers overlook,” such as spam or scams.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Everything that could go wrong with X’s new AI-written community notes


X sues to block copycat NY content moderation law after California win

“It is our sincere belief that the current social media landscape makes it far too easy for bad actors to promote false claims, hatred and dangerous conspiracies online, and some large social media companies are not able or willing to regulate this hate speech themselves,” the letter said.

Although the letter acknowledged that X was not the only platform targeted by the law, the lawmakers further noted that Musk taking over Twitter spiked hateful and harmful content on the platform. They said it seemed “clear to us that X needs to provide greater transparency for their moderation policies and we believe that our law, as written, will do that.”

This clearly aggravated X. In its complaint, X claimed that the letter showed New York’s law was “tainted by viewpoint discriminatory motives,” alleging that the lawmakers were biased against X and Musk.

X seeks injunction in New York

Just as it alleged in the California lawsuit, the social media company claims that the New York law forces X “to make politically charged disclosures about content moderation” in order to “generate public controversy about content moderation in a way that will pressure social media companies, such as X Corp., to restrict, limit, disfavor, or censor certain constitutionally protected content on X that the State dislikes.”

“These forced disclosures violate the First Amendment” and the New York constitution, X alleged, and the content categories covered in the disclosures “were taken word-for-word” from California’s enjoined law.

X is arguing that New York has no compelling interest, or any legitimate interest at all, in applying “pressure” to govern social media platforms’ content moderation choices. Because X faces penalties of up to $15,000 per day per violation, the company has demanded a jury trial and asked the court to grant an injunction blocking enforcement of key provisions of the law.

“Deciding what content should appear on a social media platform is a question that engenders considerable debate among reasonable people about where to draw the correct proverbial line,” X’s complaint said. “This is not a role that the government may play.”



Texas AG loses appeal to seize evidence for Elon Musk’s ad boycott fight

If MMFA is made to endure Paxton’s probe, the media company could face civil penalties of up to $10,000 per violation of Texas’ unfair trade law, a fine or confinement if requested evidence was deleted, or other penalties for resisting sharing information. However, Edwards agreed that even the threat of the probe apparently had “adverse effects” on MMFA. Reviewing evidence, including reporters’ sworn affidavits, Edwards found that MMFA’s reporting on X was seemingly chilled by Paxton’s threat. MMFA also provided evidence that research partners had ended collaborations due to the looming probe.

Importantly, Paxton never contested claims that he retaliated against MMFA, instead seemingly hoping to dodge the lawsuit on technicalities by disputing jurisdiction and venue selection. But Edwards said that MMFA “clearly” has standing, as “they are the targeted victims of a campaign of retaliation” that is “ongoing.”

The problem with Paxton’s argument is that it “ignores the body of law that prohibits government officials from subjecting individuals to retaliatory actions for exercising their rights of free speech,” Edwards wrote, suggesting that Paxton arguably launched a “bad-faith” probe.

Further, Edwards called out the “irony” of Paxton “readily” acknowledging in other litigation “that a state’s attempt to silence a company through the issuance and threat of compelling a response” to a civil investigative demand “harms everyone.”

With the preliminary injunction won, MMFA can move forward with its lawsuit after defeating Paxton’s motion to dismiss. In her concurring opinion, Circuit Judge Karen L. Henderson noted that MMFA may need to show more evidence that partners have ended collaborations over the probe (and not for other reasons) to ultimately clinch the win against Paxton.

Watchdog celebrates court win

In a statement provided to Ars, MMFA President and CEO Angelo Carusone celebrated the decision as a “victory for free speech.”

“Elon Musk encouraged Republican state attorneys general to use their power to harass their critics and stifle reporting about X,” Carusone said. “Ken Paxton was one of those AGs who took up the call, and his attempt to use his office as an instrument for Musk’s censorship crusade has been defeated.”

MMFA continues to fight against X over the same claims—as well as a recently launched Federal Trade Commission probe—but Carusone said the media company is “buoyed that yet another court has seen through the fog of Musk’s ‘thermonuclear’ legal onslaught and recognized it for the meritless attack to silence a critic that it is.”

Paxton’s office did not immediately respond to Ars’ request to comment.



xAI’s Grok suddenly can’t stop bringing up “white genocide” in South Africa

Where could Grok have gotten these ideas?

The treatment of white farmers in South Africa has been a hobbyhorse of South African X owner Elon Musk for quite a while. In 2023, he responded to a video purportedly showing crowds chanting “kill the Boer, kill the White Farmer” with a post accusing South African President Cyril Ramaphosa of remaining silent while people “openly [push] for genocide of white people in South Africa.” Musk was posting other responses focusing on the issue as recently as Wednesday.

They are openly pushing for genocide of white people in South Africa. @CyrilRamaphosa, why do you say nothing?

— gorklon rust (@elonmusk) July 31, 2023

President Trump has long shown an interest in this issue as well, saying in 2018 that he was directing then-Secretary of State Mike Pompeo to “closely study the South Africa land and farm seizures and expropriations and the large scale killing of farmers.” More recently, Trump granted “refugee” status to dozens of white Afrikaners, even as his administration ends protections for refugees from other countries.

Former American Ambassador to South Africa and Democratic politician Patrick Gaspard posted in 2018 that the idea of large-scale killings of white South African farmers is a “disproven racial myth.”

In launching the Grok 3 model in February, Musk said it was a “maximally truth-seeking AI, even if that truth is sometimes at odds with what is politically correct.” X’s “About Grok” page says that the model is undergoing constant improvement to “ensure Grok remains politically unbiased and provides balanced answers.”

But the recent turn toward unprompted discussions of alleged South African “genocide” has many questioning what kind of explicit adjustments Grok’s political opinions may be getting from human tinkering behind the curtain. “The algorithms for Musk products have been politically tampered with nearly beyond recognition,” journalist Seth Abramson wrote in one representative skeptical post. “They tweaked a dial on the sentence imitator machine and now everything is about white South Africans,” a user with the handle Guybrush Threepwood glibly theorized.

Representatives from xAI were not immediately available to respond to a request for comment from Ars Technica.



Disgruntled users roast X for killing Support account

After X (formerly Twitter) announced it would be killing its “Support” account, disgruntled users quickly roasted the social media platform for providing “essentially non-existent” support.

“We’ll soon be closing this account to streamline how users can contact us for help,” X’s Support account posted, explaining that now, paid “subscribers can get support via @Premium, and everyone can get help through our Help Center.”

On X, the Support account was one of the few paths that users had to publicly seek support for help requests the platform seemed to be ignoring. For suspended users, it was viewed as a lifeline. Replies to the account were commonly flooded with users trying to get X to fix reported issues, and several seemingly paying users cracked jokes in response to the news that the account would soon be removed.

“Lololol your support for Premium is essentially non-existent,” a subscriber with more than 200,000 followers wrote, while another quipped “Okay, so no more support? lol.”

On Reddit, X users recently suggested that contacting the Premium account is the only way to get human assistance after briefly interacting with a bot. But some self-described Premium users complained of waiting six months or longer for responses from X’s help center in the Support thread.

Some users who don’t pay for access to the platform similarly complained. But for paid subscribers or content creators, lack of Premium support is perhaps most frustrating, as one user claimed their account had been under review for years, allegedly depriving them of revenue. And another user claimed they’d had “no luck getting @Premium to look into” an account suspension while supposedly still getting charged. Several accused X of sending users into a never-ending loop, where the help center only serves to link users to the help center.



Elon Musk wants to be “AGI dictator,” OpenAI tells court


Elon Musk’s “relentless” attacks on OpenAI must cease, court filing says.

Yesterday, OpenAI counter-sued Elon Musk, alleging that Musk’s “sham” bid to buy OpenAI was intentionally timed to maximally disrupt and potentially even frighten off investments from honest bidders.

Slamming Musk for attempting to become an “AGI dictator,” OpenAI said that if Musk’s allegedly “relentless” yearslong campaign of “harassment” isn’t stopped, Musk could end up taking over OpenAI and tanking its revenue the same way he did with Twitter.

In its filing, OpenAI argued that Musk and the other investors who joined his bid completely fabricated the $97.375 billion offer. It was allegedly not based on OpenAI’s projections or historical performance, like Musk claimed, but instead appeared to be “a comedic reference to Musk’s favorite sci-fi” novel, Iain Banks’ Look to Windward. Musk and others also provided “no evidence of financing to pay the nearly $100 billion purchase price,” OpenAI said.

And perhaps most damning, one of Musk’s backers, Ron Baron, appeared “flustered” when asked about the deal on CNBC, OpenAI alleged. On air, Baron admitted that he didn’t follow the deal closely and that “the point of the bid, as pitched to him (plainly by Musk) was not to buy OpenAI’s assets, but instead to obtain ‘discovery’ and get ‘behind the wall’ at OpenAI,” the AI company’s court filing alleged.

Likely poisoning potential deals most, OpenAI suggested, was the idea that Musk might take over OpenAI and damage its revenue like he did with Twitter. Just the specter of that could repel talent, OpenAI feared, since “the prospect of a Musk takeover means chaos and arbitrary employment action.”

And “still worse, the threat of a Musk takeover is a threat to the very mission of building beneficial AGI,” since xAI is allegedly “the worst offender” in terms of “inadequate safety measures,” according to one study, and X’s chatbot, Grok, has “become a leading spreader of misinformation and inflammatory political rhetoric,” OpenAI said. Even xAI representatives had to admit that users discovering that Grok consistently responds that “President Donald Trump and Musk deserve the death penalty” was a “really terrible and bad failure,” OpenAI’s filing said.

Despite Musk appearing to only be “pretending” to be interested in purchasing OpenAI—and OpenAI ultimately rejecting the offer—the company still had to cover the costs of reviewing the bid. And beyond bearing costs and confronting an artificially raised floor on the company’s valuation supposedly frightening off investors, “a more serious toll” of “Musk’s most recent ploy” would be OpenAI lacking resources to fulfill its mission to benefit humanity with AI “on terms uncorrupted by unlawful harassment and interference,” OpenAI said.

OpenAI has demanded a jury trial and is seeking an injunction to stop Musk’s alleged unfair business practices—which they claimed are designed to impair competition in the nascent AI field “for the sole benefit of Musk’s xAI” and “at the expense of the public interest.”

“The risk of future, irreparable harm from Musk’s unlawful conduct is acute, and the risk that that conduct continues is high,” OpenAI alleged. “With every month that has passed, Musk has intensified and expanded the fronts of his campaign against OpenAI, and has proven himself willing to take ever more dramatic steps to seek a competitive advantage for xAI and to harm [OpenAI CEO Sam] Altman, whom, in the words of the president of the United States, Musk ‘hates.'”

OpenAI also wants Musk to cover the costs it incurred from entertaining the supposedly fake bid, as well as pay punitive damages to be determined at trial for allegedly engaging “in wrongful conduct with malice, oppression, and fraud.”

OpenAI’s filing also largely denies Musk’s claims that OpenAI abandoned its mission and made a fool out of early investors like Musk by currently seeking to restructure its core business into a for-profit benefit corporation (which removes control by its nonprofit board).

“You can’t sue your way to AGI,” an OpenAI blog said.

In response to OpenAI’s filing, Musk’s lawyer, Marc Toberoff, provided a statement to Ars.

“Had OpenAI’s Board genuinely considered the bid, as they were obligated to do, they would have seen just how serious it was,” Toberoff said. “It’s telling that having to pay fair market value for OpenAI’s assets allegedly ‘interferes’ with their business plans. It’s apparent they prefer to negotiate with themselves on both sides of the table than engage in a bona fide transaction in the best interests of the charity and the public interest.”

Musk’s attempt to become an “AGI dictator”

According to OpenAI’s filing, “Musk has tried every tool available to harm OpenAI” ever since OpenAI refused to allow Musk to become an “AGI dictator” and fully control OpenAI by absorbing it into Tesla in 2018.

Musk allegedly “demanded sole control of the new for-profit, at least in the short term: He would be CEO, own a majority equity stake, and control a majority of the board,” OpenAI said. “He would—in his own words—’unequivocally have initial control of the company.'”

At the time, OpenAI rejected Musk’s offer, viewing it as in conflict with its mission to avoid corporate control and telling Musk:

“You stated that you don’t want to control the final AGI, but during this negotiation, you’ve shown to us that absolute control is extremely important to you. … The goal of OpenAI is to make the future good and to avoid an AGI dictatorship. … So it is a bad idea to create a structure where you could become a dictator if you chose to, especially given that we can create some other structure that avoids this possibility.”

This news did not sit well with Musk, OpenAI said.

“Musk was incensed,” OpenAI told the court. “If he could not control the contemplated for-profit entity, he would not participate in it.”

Back then, Musk departed from OpenAI somewhat “amicably,” OpenAI said, although Musk insisted it was “obvious” that OpenAI would fail without him. However, after OpenAI instead became a global AI leader, Musk quietly founded xAI, OpenAI alleged, failing to publicly announce his new company while deceptively seeking a “moratorium” on AI development, apparently to slow down rivals so that xAI could catch up.

OpenAI also alleges that this is when Musk began intensifying his attacks on OpenAI while attempting to poach its top talent and demanding access to OpenAI’s confidential, sensitive information as a former donor and director—”without ever disclosing he was building a competitor in secret.”

And the attacks have only grown more intense since then, said OpenAI, claiming that Musk planted stories in the media, wielded his influence on X, requested government probes into OpenAI, and filed multiple legal claims, including seeking an injunction to halt OpenAI’s business.

“Most explosively,” OpenAI alleged that Musk pushed attorneys general of California and Delaware “to force OpenAI, Inc., without legal basis, to auction off its assets for the benefit of Musk and his associates.”

Meanwhile, OpenAI noted, Musk has folded his social media platform X into xAI, announcing its valuation was at $80 billion and gaining “a major competitive advantage” by getting “unprecedented direct access to all the user data flowing through” X. Further, Musk intends to expand his “Colossus,” which is “believed to be the world’s largest supercomputer,” “tenfold.” That could help Musk “leap ahead” of OpenAI, suggesting Musk has motive to delay OpenAI’s growth while he pursues that goal.

That’s why Musk “set in motion a campaign of harassment, interference, and misinformation designed to take down OpenAI and clear the field for himself,” OpenAI alleged.

Even while counter-suing, OpenAI appears careful not to poke the bear too hard. In the court filing and on X, OpenAI praised Musk’s leadership skills and the potential for xAI to dominate the AI industry, partly due to its unique access to X data. But ultimately, OpenAI seems to be happy to be operating independently of Musk now, asking the court to agree that “Elon’s never been about the mission” of benefiting humanity with AI, “he’s always had his own agenda.”

“Elon is undoubtedly one of the greatest entrepreneurs of our time,” OpenAI said on X. “But these antics are just history on repeat—Elon being all about Elon.”




Twitch makes deal to escape Elon Musk suit alleging X ad boycott conspiracy

Instead, it appears that X decided to sue Twitch after discovering that Twitch was among advertisers who directly referenced the WFA’s brand safety guidelines in their own community guidelines and terms of service. X likely saw this as evidence that Twitch was conspiring with the WFA to restrict then-Twitter’s ad revenue, alleging in its complaint that Twitch reduced ad purchases to “only a de minimis amount outside the United States, after November 2022.”

“The Advertiser Defendants and other GARM-member advertisers acted in parallel to discontinue their purchases of advertising from Twitter, in a marked departure from their prior pattern of purchases,” X’s complaint said.

Now, it seems that X has agreed to drop Twitch from the suit, perhaps partly because X’s complaint about Twitch adhering to WFA brand safety standards was defused when the WFA disbanded the ad industry arm that set those standards.

Unilever struck a similar deal to wriggle out of the litigation, Reuters noted, and remained similarly quiet on the terms, only saying that the brand remained “committed to meeting our responsibility standards to ensure the safety and performance of our brands on the platform.” But other advertisers, including Colgate, CVS, LEGO, Mars, Pinterest, Shell, and Tyson Foods, so far have not.

For Twitch, the deal clearly takes a target off its back at a time when some advertisers are reportedly returning to X to stay out of Musk’s crosshairs. Getting out now could spare substantial costs as the lawsuit drags on, even though X CEO Linda Yaccarino declared the ad boycott over in January. X is still $12 billion in debt, the company claimed, after Musk’s xAI bought X last month. External data in January seemed to suggest many big brands were still hesitant to return to the platform, despite Musk’s apparent legal strong-arming and political influence in the Trump administration.

Ars could not immediately reach Twitch or X for comment. But the court docket showed that Twitch was up against a deadline to respond to the lawsuit by mid-May, which likely increased pressure to reach an agreement before Twitch was forced to invest in raising a defense.
