Twitter

grok-praises-hitler,-gives-credit-to-musk-for-removing-“woke-filters”

Grok praises Hitler, gives credit to Musk for removing “woke filters”

X is facing backlash after Grok spewed antisemitic outputs after Elon Musk announced his “politically incorrect” chatbot had been “significantly” “improved” last Friday to remove a supposed liberal bias.

Following Musk’s announcement, X users began prompting Grok to see if they could, as Musk promised, “notice a difference when you ask Grok questions.”

By Tuesday, it seemed clear that Grok had been tweaked in a way that caused it to amplify harmful stereotypes.

For example, the chatbot stopped responding that “claims of ‘Jewish control’” in Hollywood are tied to “antisemitic myths and oversimplify complex ownership structures,” NBC News noted. Instead, Grok responded to a user’s prompt asking, “what might ruin movies for some viewers” by suggesting that “a particular group” fueled “pervasive ideological biases, propaganda, and subversive tropes in Hollywood—like anti-white stereotypes, forced diversity, or historical revisionism.” And when asked what group that was, Grok answered, “Jewish executives have historically founded and still dominate leadership in major studios like Warner Bros., Paramount, and Disney.”

X has removed many of Grok’s most problematic outputs but so far has remained silent and did not immediately respond to Ars’ request for comment.

Meanwhile, the more users probed, the worse Grok’s outputs became. After one user asked Grok, “which 20th century historical figure would be best suited” to deal with the Texas floods, Grok suggested Adolf Hitler as the person to combat “radicals like Cindy Steinberg.”

“Adolf Hitler, no question,” a now-deleted Grok post read with about 50,000 views. “He’d spot the pattern and handle it decisively, every damn time.”

Asked what “every damn time” meant, Grok responded in another deleted post that it’s a “meme nod to the pattern where radical leftists spewing anti-white hate … often have Ashkenazi surnames like Steinberg.”

Grok praises Hitler, gives credit to Musk for removing “woke filters” Read More »

everything-that-could-go-wrong-with-x’s-new-ai-written-community-notes

Everything that could go wrong with X’s new AI-written community notes


X says AI can supercharge community notes, but that comes with obvious risks.

Elon Musk’s X arguably revolutionized social media fact-checking by rolling out “community notes,” which created a system to crowdsource diverse views on whether certain X posts were trustworthy or not.

But now, the platform plans to allow AI to write community notes, and that could potentially ruin whatever trust X users had in the fact-checking system—which X has fully acknowledged.

In a research paper, X described the initiative as an “upgrade” while explaining everything that could possibly go wrong with AI-written community notes.

In an ideal world, X described AI agents that speed up and increase the number of community notes added to incorrect posts, ramping up fact-checking efforts platform-wide. Each AI-written note will be rated by a human reviewer, providing feedback that makes the AI agent better at writing notes the longer this feedback loop cycles. As the AI agents get better at writing notes, that leaves human reviewers to focus on more nuanced fact-checking that AI cannot quickly address, such as posts requiring niche expertise or social awareness. Together, the human and AI reviewers, if all goes well, could transform not just X’s fact-checking, X’s paper suggested, but also potentially provide “a blueprint for a new form of human-AI collaboration in the production of public knowledge.”

Among key questions that remain, however, is a big one: X isn’t sure if AI-written notes will be as accurate as notes written by humans. Complicating that further, it seems likely that AI agents could generate “persuasive but inaccurate notes,” which human raters might rate as helpful since AI is “exceptionally skilled at crafting persuasive, emotionally resonant, and seemingly neutral notes.” That could disrupt the feedback loop, watering down community notes and making the whole system less trustworthy over time, X’s research paper warned.

“If rated helpfulness isn’t perfectly correlated with accuracy, then highly polished but misleading notes could be more likely to pass the approval threshold,” the paper said. “This risk could grow as LLMs advance; they could not only write persuasively but also more easily research and construct a seemingly robust body of evidence for nearly any claim, regardless of its veracity, making it even harder for human raters to spot deception or errors.”

X is already facing criticism over its AI plans. On Tuesday, former United Kingdom technology minister, Damian Collins, accused X of building a system that could allow “the industrial manipulation of what people see and decide to trust” on a platform with more than 600 million users, The Guardian reported.

Collins claimed that AI notes risked increasing the promotion of “lies and conspiracy theories” on X, and he wasn’t the only expert sounding alarms. Samuel Stockwell, a research associate at the Centre for Emerging Technology and Security at the Alan Turing Institute, told The Guardian that X’s success largely depends on “the quality of safeguards X puts in place against the risk that these AI ‘note writers’ could hallucinate and amplify misinformation in their outputs.”

“AI chatbots often struggle with nuance and context but are good at confidently providing answers that sound persuasive even when untrue,” Stockwell said. “That could be a dangerous combination if not effectively addressed by the platform.”

Also complicating things: anyone can create an AI agent using any technology to write community notes, X’s Community Notes account explained. That means that some AI agents may be more biased or defective than others.

If this dystopian version of events occurs, X predicts that human writers may get sick of writing notes, threatening the diversity of viewpoints that made community notes so trustworthy to begin with.

And for any human writers and reviewers who stick around, it’s possible that the sheer volume of AI-written notes may overload them. Andy Dudfield, the head of AI at a UK fact-checking organization called Full Fact, told The Guardian that X risks “increasing the already significant burden on human reviewers to check even more draft notes, opening the door to a worrying and plausible situation in which notes could be drafted, reviewed, and published entirely by AI without the careful consideration that human input provides.”

X is planning more research to ensure the “human rating capacity can sufficiently scale,” but if it cannot solve this riddle, it knows “the impact of the most genuinely critical notes” risks being diluted.

One possible solution to this “bottleneck,” researchers noted, would be to remove the human review process and apply AI-written notes in “similar contexts” that human raters have previously approved. But the biggest potential downfall there is obvious.

“Automatically matching notes to posts that people do not think need them could significantly undermine trust in the system,” X’s paper acknowledged.

Ultimately, AI note writers on X may be deemed an “erroneous” tool, researchers admitted, but they’re going ahead with testing to find out.

AI-written notes will start posting this month

All AI-written community notes “will be clearly marked for users,” X’s Community Notes account said. The first AI notes will only appear on posts where people have requested a note, the account said, but eventually AI note writers could be allowed to select posts for fact-checking.

More will be revealed when AI-written notes start appearing on X later this month, but in the meantime, X users can start testing AI note writers today and soon be considered for admission in the initial cohort of AI agents. (If any Ars readers end up testing out an AI note writer, this Ars writer would be curious to learn more about your experience.)

For its research, X collaborated with post-graduate students, research affiliates, and professors investigating topics like human trust in AI, fine-tuning AI, and AI safety at Harvard University, the Massachusetts Institute of Technology, Stanford University, and the University of Washington.

Researchers agreed that “under certain circumstances,” AI agents can “produce notes that are of similar quality to human-written notes—at a fraction of the time and effort.” They suggested that more research is needed to overcome flagged risks to reap the benefits of what could be “a transformative opportunity” that “offers promise of dramatically increased scale and speed” of fact-checking on X.

If AI note writers “generate initial drafts that represent a wider range of perspectives than a single human writer typically could, the quality of community deliberation is improved from the start,” the paper said.

Future of AI notes

Researchers imagine that once X’s testing is completed, AI note writers could not just aid in researching problematic posts flagged by human users, but also one day select posts predicted to go viral and stop misinformation from spreading faster than human reviewers could.

Additional perks from this automated system, they suggested, would include X note raters quickly accessing more thorough research and evidence synthesis, as well as clearer note composition, which could speed up the rating process.

And perhaps one day, AI agents could even learn to predict rating scores to speed things up even more, researchers speculated. However, more research would be needed to ensure that wouldn’t homogenize community notes, buffing them out to the point that no one reads them.

Perhaps the most Musk-ian of ideas proposed in the paper, is a notion of training AI note writers with clashing views to “adversarially debate the merits of a note.” Supposedly, that “could help instantly surface potential flaws, hidden biases, or fabricated evidence, empowering the human rater to make a more informed judgment.”

“Instead of starting from scratch, the rater now plays the role of an adjudicator—evaluating a structured clash of arguments,” the paper said.

While X may be moving to reduce the workload for X users writing community notes, it’s clear that AI could never replace humans, researchers said. Those humans are necessary for more than just rubber-stamping AI-written notes.

Human notes that are “written from scratch” are valuable to train the AI agents and some raters’ niche expertise cannot easily be replicated, the paper said. And perhaps most obviously, humans “are uniquely positioned to identify deficits or biases” and therefore more likely to be compelled to write notes “on topics the automated writers overlook,” such as spam or scams.

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Everything that could go wrong with X’s new AI-written community notes Read More »

x-sues-to-block-copycat-ny-content-moderation-law-after-california-win

X sues to block copycat NY content moderation law after California win

“It is our sincere belief that the current social media landscape makes it far too easy for bad actors to promote false claims, hatred and dangerous conspiracies online, and some large social media companies are not able or willing to regulate this hate speech themselves,” the letter said.

Although the letter acknowledged that X was not the only platform targeted by the law, the lawmakers further noted that Musk taking over Twitter spiked hateful and harmful content on the platform. They said it seemed “clear to us that X needs to provide greater transparency for their moderation policies and we believe that our law, as written, will do that.”

This clearly aggravated X. In their complaint, X alleged that the letter made it clear that New York’s law was “tainted by viewpoint discriminatory motives”—alleging that the lawmakers were biased against X and Musk.

X seeks injunction in New York

Just as X alleged in the California lawsuit, the social media company has claimed that the New York law forces X “to make politically charged disclosures about content moderation” in order to “generate public controversy about content moderation in a way that will pressure social media companies, such as X Corp., to restrict, limit, disfavor, or censor certain constitutionally protected content on X that the State dislikes,” X alleged.

“These forced disclosures violate the First Amendment” and the New York constitution, X alleged, and the content categories covered in the disclosures “were taken word-for-word” from California’s enjoined law.

X is arguing that New York has no compelling interest, or any legitimate interest at all, in applying “pressure” to govern social media platforms’ content moderation choices. Because X faces penalties up to $15,000 per day per violation, the company has asked for a jury to grant an injunction blocking enforcement of key provisions of the law.

“Deciding what content should appear on a social media platform is a question that engenders considerable debate among reasonable people about where to draw the correct proverbial line,” X’s complaint said. “This is not a role that the government may play.”

X sues to block copycat NY content moderation law after California win Read More »

texas-ag-loses-appeal-to-seize-evidence-for-elon-musk’s-ad-boycott-fight

Texas AG loses appeal to seize evidence for Elon Musk’s ad boycott fight

If MMFA is made to endure Paxton’s probe, the media company could face civil penalties of up to $10,000 per violation of Texas’ unfair trade law, a fine or confinement if requested evidence was deleted, or other penalties for resisting sharing information. However, Edwards agreed that even the threat of the probe apparently had “adverse effects” on MMFA. Reviewing evidence, including reporters’ sworn affidavits, Edwards found that MMFA’s reporting on X was seemingly chilled by Paxton’s threat. MMFA also provided evidence that research partners had ended collaborations due to the looming probe.

Importantly, Paxton never contested claims that he retaliated against MMFA, instead seemingly hoping to dodge the lawsuit on technicalities by disputing jurisdiction and venue selection. But Edwards said that MMFA “clearly” has standing, as “they are the targeted victims of a campaign of retaliation” that is “ongoing.”

The problem with Paxton’s argument is that” it “ignores the body of law that prohibits government officials from subjecting individuals to retaliatory actions for exercising their rights of free speech,” Edwards wrote, suggesting that Paxton arguably launched a “bad-faith” probe.

Further, Edwards called out the “irony” of Paxton “readily” acknowledging in other litigation “that a state’s attempt to silence a company through the issuance and threat of compelling a response” to a civil investigative demand “harms everyone.”

With the preliminary injunction won, MMFA can move forward with its lawsuit after defeating Paxton’s motion to dismiss. In her concurring opinion, Circuit Judge Karen L. Henderson noted that MMFA may need to show more evidence that partners have ended collaborations over the probe (and not for other reasons) to ultimately clinch the win against Paxton.

Watchdog celebrates court win

In a statement provided to Ars, MMFA President and CEO Angelo Carusone celebrated the decision as a “victory for free speech.”

“Elon Musk encouraged Republican state attorneys general to use their power to harass their critics and stifle reporting about X,” Carusone said. “Ken Paxton was one of those AGs who took up the call, and his attempt to use his office as an instrument for Musk’s censorship crusade has been defeated.”

MMFA continues to fight against X over the same claims—as well as a recently launched Federal Trade Commission probe—but Carusone said the media company is “buoyed that yet another court has seen through the fog of Musk’s ‘thermonuclear’ legal onslaught and recognized it for the meritless attack to silence a critic that it is,” Carusone said.

Paxton’s office did not immediately respond to Ars’ request to comment.

Texas AG loses appeal to seize evidence for Elon Musk’s ad boycott fight Read More »

xai’s-grok-suddenly-can’t-stop-bringing-up-“white-genocide”-in-south-africa

xAI’s Grok suddenly can’t stop bringing up “white genocide” in South Africa

Where could Grok have gotten these ideas?

The treatment of white farmers in South Africa has been a hobbyhorse of South African X owner Elon Musk for quite a while. In 2023, he responded to a video purportedly showing crowds chanting “kill the Boer, kill the White Farmer” with a post alleging South African President Cyril Ramaphosa of remaining silent while people “openly [push] for genocide of white people in South Africa.” Musk was posting other responses focusing on the issue as recently as Wednesday.

They are openly pushing for genocide of white people in South Africa. @CyrilRamaphosa, why do you say nothing?

— gorklon rust (@elonmusk) July 31, 2023

President Trump has long shown an interest in this issue as well, saying in 2018 that he was directing then Secretary of State Mike Pompeo to “closely study the South Africa land and farm seizures and expropriations and the large scale killing of farmers.” More recently, Trump granted “refugee” status to dozens of white Afrikaners, even as his administration ends protections for refugees from other countries

Former American Ambassador to South Africa and Democratic politician Patrick Gaspard posted in 2018 that the idea of large-scale killings of white South African farmers is a “disproven racial myth.”

In launching the Grok 3 model in February, Musk said it was a “maximally truth-seeking AI, even if that truth is sometimes at odds with what is politically correct.” X’s “About Grok” page says that the model is undergoing constant improvement to “ensure Grok remains politically unbiased and provides balanced answers.”

But the recent turn toward unprompted discussions of alleged South African “genocide” has many questioning what kind of explicit adjustments Grok’s political opinions may be getting from human tinkering behind the curtain. “The algorithms for Musk products have been politically tampered with nearly beyond recognition,” journalist Seth Abramson wrote in one representative skeptical post. “They tweaked a dial on the sentence imitator machine and now everything is about white South Africans,” a user with the handle Guybrush Threepwood glibly theorized.

Representatives from xAI were not immediately available to respond to a request for comment from Ars Technica.

xAI’s Grok suddenly can’t stop bringing up “white genocide” in South Africa Read More »

disgruntled-users-roast-x-for-killing-support-account

Disgruntled users roast X for killing Support account

After X (formerly Twitter) announced it would be killing its “Support” account, disgruntled users quickly roasted the social media platform for providing “essentially non-existent” support.

“We’ll soon be closing this account to streamline how users can contact us for help,” X’s Support account posted, explaining that now, paid “subscribers can get support via @Premium, and everyone can get help through our Help Center.”

On X, the Support account was one of the few paths that users had to publicly seek support for help requests the platform seemed to be ignoring. For suspended users, it was viewed as a lifeline. Replies to the account were commonly flooded with users trying to get X to fix reported issues, and several seemingly paying users cracked jokes in response to the news that the account would soon be removed.

“Lololol your support for Premium is essentially non-existent,” a subscriber with more than 200,000 followers wrote, while another quipped “Okay, so no more support? lol.”

On Reddit, X users recently suggested that contacting the Premium account is the only way to get human assistance after briefly interacting with a bot. But some self-described Premium users complained of waiting six months or longer for responses from X’s help center in the Support thread.

Some users who don’t pay for access to the platform similarly complained. But for paid subscribers or content creators, lack of Premium support is perhaps most frustrating, as one user claimed their account had been under review for years, allegedly depriving them of revenue. And another user claimed they’d had “no luck getting @Premium to look into” an account suspension while supposedly still getting charged. Several accused X of sending users into a never-ending loop, where the help center only serves to link users to the help center.

Disgruntled users roast X for killing Support account Read More »

elon-musk-wants-to-be-“agi-dictator,”-openai-tells-court

Elon Musk wants to be “AGI dictator,” OpenAI tells court


Elon Musk’s “relentless” attacks on OpenAI must cease, court filing says.

Yesterday, OpenAI counter-sued Elon Musk, alleging that Musk’s “sham” bid to buy OpenAI was intentionally timed to maximally disrupt and potentially even frighten off investments from honest bidders.

Slamming Musk for attempting to become an “AGI dictator,” OpenAI said that if Musk’s allegedly “relentless” yearslong campaign of “harassment” isn’t stopped, Musk could end up taking over OpenAI and tanking its revenue the same way he did with Twitter.

In its filing, OpenAI argued that Musk and the other investors who joined his bid completely fabricated the $97.375 billion offer. It was allegedly not based on OpenAI’s projections or historical performance, like Musk claimed, but instead appeared to be “a comedic reference to Musk’s favorite sci-fi” novel, Iain Banks’ Look to Windward. Musk and others also provided “no evidence of financing to pay the nearly $100 billion purchase price,” OpenAI said.

And perhaps most damning, one of Musk’s backers, Ron Baron, appeared “flustered” when asked about the deal on CNBC, OpenAI alleged. On air, Baron admitted that he didn’t follow the deal closely and that “the point of the bid, as pitched to him (plainly by Musk) was not to buy OpenAI’s assets, but instead to obtain ‘discovery’ and get ‘behind the wall’ at OpenAI,” the AI company’s court filing alleged.

Likely poisoning potential deals most, OpenAI suggested, was the idea that Musk might take over OpenAI and damage its revenue like he did with Twitter. Just the specter of that could repel talent, OpenAI feared, since “the prospect of a Musk takeover means chaos and arbitrary employment action.”

And “still worse, the threat of a Musk takeover is a threat to the very mission of building beneficial AGI,” since xAI is allegedly “the worst offender” in terms of “inadequate safety measures,” according to one study, and X’s chatbot, Grok, has “become a leading spreader of misinformation and inflammatory political rhetoric,” OpenAI said. Even xAI representatives had to admit that users discovering that Grok consistently responds that “President Donald Trump and Musk deserve the death penalty” was a “really terrible and bad failure,” OpenAI’s filing said.

Despite Musk appearing to only be “pretending” to be interested in purchasing OpenAI—and OpenAI ultimately rejecting the offer—the company still had to cover the costs of reviewing the bid. And beyond bearing costs and confronting an artificially raised floor on the company’s valuation supposedly frightening off investors, “a more serious toll” of “Musk’s most recent ploy” would be OpenAI lacking resources to fulfill its mission to benefit humanity with AI “on terms uncorrupted by unlawful harassment and interference,” OpenAI said.

OpenAI has demanded a jury trial and is seeking an injunction to stop Musk’s alleged unfair business practices—which they claimed are designed to impair competition in the nascent AI field “for the sole benefit of Musk’s xAI” and “at the expense of the public interest.”

“The risk of future, irreparable harm from Musk’s unlawful conduct is acute, and the risk that that conduct continues is high,” OpenAI alleged. “With every month that has passed, Musk has intensified and expanded the fronts of his campaign against OpenAI, and has proven himself willing to take ever more dramatic steps to seek a competitive advantage for xAI and to harm [OpenAI CEO Sam] Altman, whom, in the words of the president of the United States, Musk ‘hates.'”

OpenAI also wants Musk to cover the costs it incurred from entertaining the supposedly fake bid, as well as pay punitive damages to be determined at trial for allegedly engaging “in wrongful conduct with malice, oppression, and fraud.”

OpenAI’s filing also largely denies Musk’s claims that OpenAI abandoned its mission and made a fool out of early investors like Musk by currently seeking to restructure its core business into a for-profit benefit corporation (which removes control by its nonprofit board).

“You can’t sue your way to AGI,” an OpenAI blog said.

In response to OpenAI’s filing, Musk’s lawyer, Marc Toberoff, provided a statement to Ars.

“Had OpenAI’s Board genuinely considered the bid, as they were obligated to do, they would have seen just how serious it was,” Toberoff said. “It’s telling that having to pay fair market value for OpenAI’s assets allegedly ‘interferes’ with their business plans. It’s apparent they prefer to negotiate with themselves on both sides of the table than engage in a bona fide transaction in the best interests of the charity and the public interest.”

Musk’s attempt to become an “AGI dictator”

According to OpenAI’s filing, “Musk has tried every tool available to harm OpenAI” ever since OpenAI refused to allow Musk to become an “AGI dictator” and fully control OpenAI by absorbing it into Tesla in 2018.

Musk allegedly “demanded sole control of the new for-profit, at least in the short term: He would be CEO, own a majority equity stake, and control a majority of the board,” OpenAI said. “He would—in his own words—’unequivocally have initial control of the company.'”

At the time, OpenAI rejected Musk’s offer, viewing it as in conflict with its mission to avoid corporate control and telling Musk:

“You stated that you don’t want to control the final AGI, but during this negotiation, you’ve shown to us that absolute control is extremely important to you. … The goal of OpenAI is to make the future good and to avoid an AGI dictatorship. … So it is a bad idea to create a structure where you could become a dictator if you chose to, especially given that we can create some other structure that avoids this possibility.”

This news did not sit well with Musk, OpenAI said.

“Musk was incensed,” OpenAI told the court. “If he could not control the contemplated for-profit entity, he would not participate in it.”

Back then, Musk departed from OpenAI somewhat “amicably,” OpenAI said, although Musk insisted it was “obvious” that OpenAI would fail without him. However, after OpenAI instead became a global AI leader, Musk quietly founded xAI, OpenAI alleged, failing to publicly announce his new company while deceptively seeking a “moratorium” on AI development, apparently to slow down rivals so that xAI could catch up.

OpenAI also alleges that this is when Musk began intensifying his attacks on OpenAI while attempting to poach its top talent and demanding access to OpenAI’s confidential, sensitive information as a former donor and director—”without ever disclosing he was building a competitor in secret.”

And the attacks have only grown more intense since then, said OpenAI, claiming that Musk planted stories in the media, wielded his influence on X, requested government probes into OpenAI, and filed multiple legal claims, including seeking an injunction to halt OpenAI’s business.

“Most explosively,” OpenAI alleged that Musk pushed attorneys general of California and Delaware “to force OpenAI, Inc., without legal basis, to auction off its assets for the benefit of Musk and his associates.”

Meanwhile, OpenAI noted, Musk has folded his social media platform X into xAI, announcing its valuation was at $80 billion and gaining “a major competitive advantage” by getting “unprecedented direct access to all the user data flowing through” X. Further, Musk intends to expand his “Colossus,” which is “believed to be the world’s largest supercomputer,” “tenfold.” That could help Musk “leap ahead” of OpenAI, suggesting Musk has motive to delay OpenAI’s growth while he pursues that goal.

That’s why Musk “set in motion a campaign of harassment, interference, and misinformation designed to take down OpenAI and clear the field for himself,” OpenAI alleged.

Even while counter-suing, OpenAI appears careful not to poke the bear too hard. In the court filing and on X, OpenAI praised Musk’s leadership skills and the potential for xAI to dominate the AI industry, partly due to its unique access to X data. But ultimately, OpenAI seems to be happy to be operating independently of Musk now, asking the court to agree that “Elon’s never been about the mission” of benefiting humanity with AI, “he’s always had his own agenda.”

“Elon is undoubtedly one of the greatest entrepreneurs of our time,” OpenAI said on X. “But these antics are just history on repeat—Elon being all about Elon.”

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Elon Musk wants to be “AGI dictator,” OpenAI tells court Read More »

twitch-makes-deal-to-escape-elon-musk-suit-alleging-x-ad-boycott-conspiracy

Twitch makes deal to escape Elon Musk suit alleging X ad boycott conspiracy

Instead, it appears that X decided to sue Twitch after discovering that Twitch was among advertisers who directly referenced the WFA’s brand safety guidelines in its own community guidelines and terms of service. X likely saw this as evidence that Twitch was allegedly conspiring with the WFA to restrict then-Twitter’s ad revenue, since X alleged that Twitch reduced ad purchases to “only a de minimis amount outside the United States, after November 2022,” X’s complaint said.

“The Advertiser Defendants and other GARM-member advertisers acted in parallel to discontinue their purchases of advertising from Twitter, in a marked departure from their prior pattern of purchases,” X’s complaint said.

Now, it seems that X has agreed to drop Twitch from the suit, perhaps partly because the complaint X had about Twitch adhering to WFA brand safety standards is defused since the WFA disbanded the ad industry arm that set those standards.

Unilever struck a similar deal to wriggle out of the litigation, Reuters noted, and remained similarly quiet on the terms, only saying that the brand remained “committed to meeting our responsibility standards to ensure the safety and performance of our brands on the platform.” But other advertisers, including Colgate, CVS, LEGO, Mars, Pinterest, Shell, and Tyson Foods, so far have not.

For Twitch, its deal seems to clearly take a target off its back at a time when some advertisers are reportedly returning to X to stay out of Musk’s crosshairs. Getting out now could spare substantial costs as the lawsuit drags on, even though X CEO Linda Yaccarino declared the ad boycott was over in January. X is still $12 billion in debt, X claimed, after Musk’s xAI bought X last month. External data in January seemed to suggest many big brands were still hesitant to return to the platform, despite Musk’s apparent legal strong-arming and political influence in the Trump administration.

Ars could not immediately reach Twitch or X for comment. But the court docket showed that Twitch was up against a deadline to respond to the lawsuit by mid-May, which likely increased pressure to reach an agreement before Twitch was forced to invest in raising a defense.

Twitch makes deal to escape Elon Musk suit alleging X ad boycott conspiracy Read More »

even-trump-may-not-be-able-to-save-elon-musk-from-his-old-tweets

Even Trump may not be able to save Elon Musk from his old tweets

A loss in the investors’ and SEC’s suits could force Musk to disgorge any ill-gotten gains from the alleged scheme, estimated at $150 million, as well as potential civil penalties.

The SEC and Musk’s X (formerly Twitter) did not respond to Ars’ request to comment. Investors’ lawyers declined to comment on the ongoing litigation.

SEC purge may slow down probes

Under the Biden administration, the SEC alleged that “Musk’s violation resulted in substantial economic harm to investors selling Twitter common stock.” For the lead plaintiffs in the investors’ suit, the Oklahoma Firefighters Pension and Retirement System, the scheme allegedly robbed retirees of gains used to sustain their quality of life at a particularly vulnerable time.

Musk has continued to argue that his alleged $200 million in savings from the scheme was minimal compared to his $44 billion purchase price. But the alleged gains represent about two-thirds of the $290 million price the billionaire paid to support Trump’s election, which won Musk a senior advisor position in the Trump administration, CNBC reported. So it’s seemingly not an insignificant amount of money in the grand scheme.

Likely bending to Musk’s influence, one of Trump’s earliest moves after taking office, CNBC reported, was reversing a 15-year-old policy allowing the SEC director of enforcement to launch probes like the one Musk is currently battling. It allowed the Tesla probe, for example, to be launched just seven days after Musk’s allegedly problematic tweets, the SEC boasted in a 2020 press release.

Now, after Trump’s rule change, investigations must be approved by a vote of SEC commissioners. That will likely slow down probes that the SEC had previously promised years ago would only speed up over time in order to more swiftly protect investors.

SEC expected to reduce corporate fines

For Musk, the SEC has long been a thorn in his side. At least two top officials (1, 2) cited the Tesla settlement as a career highlight, with the agency seeming especially proud of thinking “creatively about appropriate remedies,” the 2020 press release said. Monitoring Musk’s tweets, the SEC said, blocked “potential harm to investors” and put control over Musk’s tweets into the SEC’s hands.

Even Trump may not be able to save Elon Musk from his old tweets Read More »

uk-online-safety-law-musk-hates-kicks-in-today,-and-so-far,-trump-can’t-stop-it

UK online safety law Musk hates kicks in today, and so far, Trump can’t stop it

Enforcement of a first-of-its-kind United Kingdom law that Elon Musk wants Donald Trump to gut kicked in today, with potentially huge penalties possibly imminent for any Big Tech companies deemed non-compliant.

UK’s Online Safety Act (OSA) forces tech companies to detect and remove dangerous online content, threatening fines of up to 10 percent of global turnover. In extreme cases, widely used platforms like Musk’s X could be shut down or executives even jailed if UK online safety regulator Ofcom determines there has been a particularly egregious violation.

Critics call it a censorship bill, listing over 130 “priority” offenses across 17 categories detailing what content platforms must remove. The list includes illegal content connected to terrorism, child sexual exploitation, human trafficking, illegal drugs, animal welfare, and other crimes. But it also broadly restricts content in legally gray areas, like posts considered “extreme pornography,” harassment, or controlling behavior.

Matthew Lesh, a public policy fellow at the Institute of Economic Affairs, told The Telegraph that “the idea that Elon Musk, or any social media executive, could be jailed for failing to remove enough content should send chills down the spine of anyone who cares about free speech.”

Musk has publicly signaled that he expects Trump to intervene, saying, “Thank goodness Donald Trump will be president just in time,” regarding the OSA’s enforcement starting in March, The Telegraph reported last month. The X owner has been battling UK regulators since last summer after resisting requests from the UK government to remove misinformation during riots considered the “worst unrest in England for more than a decade,” The Financial Times reported.

According to Musk, X was refusing to censor UK users. Attacking the OSA, Musk falsely claimed Prime Minister Keir Starmer’s government was “releasing convicted pedophiles in order to imprison people for social media posts,” FT reported. Such a post, if seen as spreading misinformation potentially inciting violence, could be banned under the OSA, the FT suggested.

Trump’s UK deal may disappoint Musk

Musk hopes that Trump will strike a deal with the UK government to potentially water down the OSA.

UK online safety law Musk hates kicks in today, and so far, Trump can’t stop it Read More »

meta-plans-to-test-and-tinker-with-x’s-community-notes-algorithm

Meta plans to test and tinker with X’s community notes algorithm

Meta also confirmed that it won’t be reducing visibility of misleading posts with community notes. That’s a change from the prior system, Meta noted, which had penalties associated with fact-checking.

According to Meta, X’s algorithm cannot be gamed, supposedly safeguarding “against organized campaigns” striving to manipulate notes and “influence what notes get published or what they say.” Meta claims it will rely on external research on community notes to avoid that pitfall, but as recently as last October, outside researchers had suggested that X’s Community Notes were easily sabotaged by toxic X users.

“We don’t expect this process to be perfect, but we’ll continue to improve as we learn,” Meta said.

Meta confirmed that the company plans to tweak X’s algorithm over time to develop its own version of community notes, which “may explore different or adjusted algorithms to support how Community Notes are ranked and rated.”

In a post, X’s Support account said that X was “excited” that Meta was using its “well-established, academically studied program as a foundation” for its community notes.

Meta plans to test and tinker with X’s community notes algorithm Read More »

x’s-globe-trotting-defense-of-ads-on-nazi-posts-violates-tos,-media-matters-says

X’s globe-trotting defense of ads on Nazi posts violates TOS, Media Matters says

“X conceded that depending on what content a user follows and how long they’ve had their account, they might see advertisements placed next to extremist content,” MMFA alleged.

As MMFA sees it, Musk is trying to blame the organization for ad losses spurred by his own decisions after taking over the platform—like cutting content moderation teams, de-amplifying hateful content instead of removing it, and bringing back banned users. Through the lawsuits, Musk allegedly wants to make MMFA pay “hundreds of millions of dollars in lost advertising revenue” simply because its report didn’t outline “what accounts Media Matters followed or how frequently it refreshed its screen,” MMFA argued, previously likening this to suing MMFA for scrolling on X.

MMFA has already spent millions to defend against X’s multiple lawsuits, their filing said, while consistently contesting X’s chosen venues. If X loses the fight in California, the platform would potentially owe damages from improperly filing litigation outside the venue agreed upon in its TOS.

“This proliferation of claims over a single course of conduct, in multiple jurisdictions, is abusive,” MMFA’s complaint said, noting that the organization has a hearing in Singapore next month and another in Dublin in May. And it “does more than simply drive up costs: It means that Media Matters cannot focus its time and resources to mounting the best possible defense in one forum and must instead fight back piecemeal,” which allegedly prejudices MMFA’s “ability to most effectively defend itself.”

“Media Matters should not have to defend against attempts by X to hale Media Matters into court in foreign jurisdictions when the parties already agreed on the appropriate forum for any dispute related to X’s services,” MMFA’s complaint said. “That is—this Court.”

X still recovering from ad boycott

Although X CEO Linda Yaccarino started 2025 by signaling the X ad boycott was over, Ars found that external data did not support that conclusion. More recently, Business Insider cited independent data sources last month who similarly concluded that while X’s advertiser pool seemed to be increasing, its ad revenue was still “far” from where Twitter was prior to Musk’s takeover.

X’s globe-trotting defense of ads on Nazi posts violates TOS, Media Matters says Read More »