disinformation

Toxic X users sabotage Community Notes that could derail disinfo, report says


It’s easy for biased users to bury accurate Community Notes, report says.

What’s the point of recruiting hundreds of thousands of X users to fact-check misleading posts before they go viral if those users’ accurate Community Notes are never displayed?

That’s the question the Center for Countering Digital Hate (CCDH) is asking after digging through a million notes in a public X dataset to find out how many misleading claims spreading widely on X about the US election weren’t quickly fact-checked.

In a report, the CCDH flagged 283 misleading X posts fueling election disinformation this year that never displayed a Community Note. For 74 percent of those posts, accurate notes had been proposed but were ultimately never displayed—apparently because toxic X users gamed Community Notes to hide information they politically disagree with.

On X, Community Notes are only displayed if a broad spectrum of X users with diverse viewpoints agree that the post is “helpful.” But the CCDH found that it’s seemingly easy to hide an accurate note that challenges a user’s bias by simply refusing to rate it or downranking it into oblivion.

“The problem is that for a Community Note to be shown, it requires consensus, and on polarizing issues, that consensus is rarely reached,” the CCDH’s report said. “As a result, Community Notes fail precisely where they are needed most.”
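To see why that consensus requirement is hard to satisfy on polarizing topics, here is a minimal sketch of a bridging-style scorer, loosely modeled on the matrix-factorization approach X has publicly described for Community Notes. The function name, hyperparameters, toy ratings, and training loop are illustrative assumptions, not X's production code.

# Minimal sketch, assuming a bridging-style scorer: each rating is modeled as
# mu + user_bias + note_intercept + user_factor * note_factor, and a note only
# looks broadly "helpful" if its intercept stays high once viewpoint factors
# absorb any partisan split. Names, hyperparameters, and data are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def note_intercepts(ratings, n_users, n_notes, dim=1, epochs=3000, lr=0.05, reg=0.03):
    """ratings: iterable of (user_id, note_id, value), 1 = helpful, 0 = not helpful."""
    mu = 0.0
    b_u = np.zeros(n_users)                    # how leniently each user rates
    b_n = np.zeros(n_notes)                    # "bridged" helpfulness of each note
    f_u = rng.normal(0, 0.1, (n_users, dim))   # user viewpoint factors
    f_n = rng.normal(0, 0.1, (n_notes, dim))   # note viewpoint factors
    for _ in range(epochs):
        for u, n, r in ratings:
            fu, fn = f_u[u].copy(), f_n[n].copy()
            err = r - (mu + b_u[u] + b_n[n] + fu @ fn)
            mu += lr * err
            b_u[u] += lr * (err - reg * b_u[u])
            b_n[n] += lr * (err - reg * b_n[n])
            f_u[u] += lr * (err * fn - reg * fu)
            f_n[n] += lr * (err * fu - reg * fn)
    return b_n

# Note 0 is polarizing: one camp (users 0-3) rates it helpful, the other (4-7) does not.
# Note 1 draws agreement from both camps.
ratings = [(u, 0, 1) for u in range(4)] + [(u, 0, 0) for u in range(4, 8)] \
        + [(u, 1, 1) for u in range(8)]
for n, b in enumerate(note_intercepts(ratings, n_users=8, n_notes=2)):
    print(f"note {n}: intercept {b:+.2f}")

In this toy run, the cross-camp note earns a far higher intercept than the polarized one, whose split ratings are absorbed by the viewpoint factors; X has described applying a fixed threshold to a score of this kind before a note is shown, which is why one-sided agreement rarely gets a note displayed.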

Among the most-viewed misleading claims where X failed to add accurate notes were posts spreading lies that “welfare offices in 49 states are handing out voter registration applications to illegal aliens,” the Democratic party is importing voters, most states don’t require ID to vote, and both electronic and mail-in voting are “too risky.”

These unchecked claims were viewed by tens of millions of users, the CCDH found.

One false narrative—that Dems import voters—was amplified in a post from Elon Musk that got 51 million views. In the background, proposed notes sought to correct the disinformation by noting that “lawful permanent residents (green card holders)” cannot vote in US elections until they’re granted citizenship after living in the US for five years. But even these seemingly straightforward citations to government resources did not pass muster for users politically motivated to hide the note.

This appears to be a common pattern on X, the CCDH suggested, and Musk is seemingly a multiplier. In July, the CCDH reported that Musk’s misleading posts about the 2024 election in particular were viewed more than a billion times without any notes ever added.

The majority of the misleading claims in the CCDH’s report seemed to come from conservative users. But X also failed to check a claim that Donald Trump “is no longer eligible to run for president and must drop out of the race immediately.” Posts spreading that false claim got 1.4 million views, the CCDH reported, and that content moderation misstep risked negatively impacting Trump’s voter turnout at a time when Musk is campaigning for Trump.

Musk has claimed that while Community Notes will probably never be “perfect,” the fact-checking effort aspires to “be by far the best source of truth on Earth.” The CCDH has alleged that, actually, “most Community Notes are never seen by users, allowing misinformation to spread unchecked.”

Even X’s own numbers on notes seem low

On the Community Notes X account, X acknowledges that “speed is key to notes’ effectiveness—the faster they appear, the more people see them, and the greater effect they have.”

On the day before the CCDH report dropped, X announced that “lightning notes” have been introduced to deliver fact-checks in as little as 15 minutes after a misleading post is written.

“Ludicrously fast? Now reality!” X proclaimed.

Currently, more than 800,000 X users contribute to Community Notes, and with the lightning notes update, X can calculate their scores more quickly. That efficiency, X said, will either increase the number of content removals or reduce the sharing of false or misleading posts.

But while X insists Community Notes are working faster than ever to reduce the spread of harmful content, the number of rapidly noted posts that X reports seems low. On a platform with an estimated 429 million daily active users worldwide, only about 400 notes over the past two weeks were displayed within an hour of a post going live. And for notes that took longer—which the CCDH suggested is the majority when the fact-check concerns a controversial topic—only about 60 more were displayed after the one-hour mark.

In July, an international NGO that monitors human rights abuses and corruption, Global Witness, found 45 “bot-like accounts that collectively produced around 610,000 posts” in a two-month period this summer on X, “amplifying racist and sexualized abuse, conspiracy theories, and climate disinformation” ahead of the UK general election.

Those accounts “posted prolifically during the UK general election,” then moved “to rapidly respond to emerging new topics amplifying divisive content,” including the US presidential race.

The CCDH reported that even when misleading posts get fact-checked, the original posts on average are viewed 13 times more than the note is seen, suggesting the majority of damage is done in the time before the note is posted.

Of course, content moderators are often called out for moving too slowly to remove harmful content, a Bloomberg opinion piece praising Community Notes earlier this year noted. That piece pointed to studies showing that “crowdsourcing worked just as well” as professional fact checkers “when assessing the accuracy of news stories,” concluding that “it may be impossible for any social media company to keep up, which is why it’s important to explore other approaches.”

X has said that it’s “common to see Community Notes appearing days faster than traditional fact checks,” while promising that more changes are coming to get notes ranked as “helpful” more quickly.

X risks becoming an echo chamber, data shows

Data that the market intelligence firm Sensor Tower recently shared with Ars offers a potential clue as to why the CCDH is seeing so many accurate notes that are never voted as “helpful.”

According to Sensor Tower’s estimates, global daily active users on X are down by 28 percent in September 2024, compared to October 2022 when Elon Musk took over Twitter. While many users have fled the platform, those who remained are seemingly more engaged than ever—with global engagement up by 8 percent in the same time period. (Rivals like TikTok and Facebook saw much lower growth, up by 3 and 1 percent, respectively.)

This paints a picture of X at risk of becoming an echo chamber, as loyal users engage more with a platform where misleading posts seemingly go unchecked with ease and buried notes potentially warp discussion in Musk’s “digital town square.”

When Musk initially bought Twitter, one of his earliest moves was to make drastic cuts to the trust and safety teams chiefly responsible for content-moderation decisions. He then expanded the role of Twitter’s Community Notes to substitute for those trust and safety efforts; before that, Community Notes had been viewed as merely complementary to broader monitoring.

The CCDH says that was a mistake and that the best way to ensure that X is safe for users is to build back X’s trust and safety teams.

“Our social media feeds have no neutral ‘town square’ for rational debate,” the CCDH report said. “In reality, it is messy, complicated, and opaque rules and systems make it impossible for all voices to be heard. Without checks and balances, proper oversight, and well-resourced trust and safety teams in place, X cannot rely on Community Notes to keep X safe.”

More transparency is needed on Community Notes

X and the CCDH have long clashed, with X unsuccessfully suing to seemingly silence the CCDH’s reporting on hate speech on X, which X claimed caused tens of millions in advertising losses. During that legal battle, the CCDH called Musk a “thin-skinned tyrant” who could not tolerate independent research on his platform. And a federal judge agreed that X was clearly suing to “punish” and censor the CCDH, dismissing X’s lawsuit last March.

Since then, the CCDH has resumed its reporting on X. In the most recent report, the CCDH urged that X needed to be more transparent about Community Notes, arguing that “researchers must be able to freely, without intimidation, study how disinformation and unchecked claims spread across platforms.”

The research group also recommended remedies, including continuing to advise that advertisers “evaluate whether their budgets are funding the misleading election claims identified in this report.”

That could lead brands to continue withholding spending on X, which is seemingly already happening. Sensor Tower estimated that “72 out of the top 100 spending US advertisers on X from October 2022 have ceased spending on the platform as of September 2024.” And compared to the first half of 2022, X’s ad revenue from the top 100 advertisers during the first half of 2024 was down 68 percent.

Most drastically, the CCDH recommended that US lawmakers reform Section 230 of the Communications Decency Act “to provide an avenue for accountability” by mandating risk assessments of social media platforms. That would “expose the risk posed by disinformation” and enable lawmakers to “prescribe possible mitigation measures including a comprehensive moderation strategy.”

Globally, the CCDH noted, some regulators have the power to investigate the claims in the CCDH’s report, including the European Commission under the Digital Services Act and the UK’s Ofcom under the Online Safety Act.

“X and social media companies as an industry have been able to avoid taking responsibility,” the CCDH’s report said, with the industry offering only “unreliable self-regulation.” Apps like X “thus invent inadequate systems like Community Notes because there is no legal mechanism to hold them accountable for their harms,” the CCDH’s report warned.

Perhaps Musk will be open to the CCDH’s suggestions. In the past, Musk has said that “suggestions for improving Community Notes are… always… much appreciated.”

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Elon Musk claims victory after judge blocks Calif. deepfake law

“Almost any digitally altered content, when left up to an arbitrary individual on the Internet, could be considered harmful,” Mendez said, even something seemingly benign like AI-generated estimates of voter turnouts shared online.

Additionally, the Supreme Court has held that “even deliberate lies (said with ‘actual malice’) about the government are constitutionally protected” because the right to criticize the government is at the heart of the First Amendment.

“These same principles safeguarding the people’s right to criticize government and government officials apply even in the new technological age when media may be digitally altered: civil penalties for criticisms on the government like those sanctioned by AB 2839 have no place in our system of governance,” Mendez said.

According to Mendez, X posts like Kohls’ parody videos are the “political cartoons of today” and California’s attempt to “bulldoze over the longstanding tradition of critique, parody, and satire protected by the First Amendment” is not justified by even “a well-founded fear of a digitally manipulated media landscape.” If officials find deepfakes are harmful to election prospects, there is already recourse through privacy torts, copyright infringement, or defamation laws, Mendez suggested.

Kosseff told Ars that there could be more narrow ways that government officials looking to protect election integrity could regulate deepfakes online. The Supreme Court has suggested that deepfakes spreading disinformation on the mechanics of voting could possibly be regulated, Kosseff said.

Mendez got it “exactly right” by concluding that the best remedy for election-related deepfakes is more speech, Kosseff said. As Mendez described it, a vague law like AB 2839 seemed to only “uphold the State’s attempt to suffocate” speech.

Parody is vital to democratic debate, judge says

The only part of AB 2839 that survives strict scrutiny, Mendez noted, is a section describing audio disclosures in a “clearly spoken manner and in a pitch that can be easily heard by the average listener, at the beginning of the audio, at the end of the audio, and, if the audio is greater than two minutes in length, interspersed within the audio at intervals of not greater than two minutes each.”
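To make that quoted audio rule concrete, here is a hypothetical helper that computes where disclosures would have to play for a clip of a given length; the function name and return format are illustrative assumptions, not anything drawn from the statute or a real library.

# Hypothetical helper, only to make the quoted AB 2839 audio rule concrete:
# returns timestamps (in seconds) at which a disclosure would have to play.
def disclosure_times(duration_s: float, interval_s: float = 120.0) -> list[float]:
    times = [0.0]                      # "at the beginning of the audio"
    if duration_s > interval_s:        # "if the audio is greater than two minutes"
        t = interval_s
        while t < duration_s:          # "at intervals of not greater than two minutes"
            times.append(t)
            t += interval_s
    times.append(duration_s)           # "at the end of the audio"
    return times

print(disclosure_times(90.0))    # [0.0, 90.0] -> beginning and end only
print(disclosure_times(300.0))   # [0.0, 120.0, 240.0, 300.0] -> every two minutes plus the end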

Creator of fake Kamala Harris video Musk boosted sues Calif. over deepfake laws

After California passed laws cracking down on AI-generated deepfakes of election-related content, a popular conservative influencer promptly sued, accusing California of censoring protected speech, including satire and parody.

In his complaint, Christopher Kohls—who is known as “Mr Reagan” on YouTube and X (formerly Twitter)—said that he was suing “to defend all Americans’ right to satirize politicians.” He claimed that California laws, AB 2655 and AB 2839, were urgently passed after X owner Elon Musk shared a partly AI-generated parody video on the social media platform that Kohls created to “lampoon” presidential hopeful Kamala Harris.

AB 2655, known as the “Defending Democracy from Deepfake Deception Act,” prohibits creating “with actual malice” any “materially deceptive audio or visual media of a candidate for elective office with the intent to injure the candidate’s reputation or to deceive a voter into voting for or against the candidate, within 60 days of the election.” It requires social media platforms to block or remove any reported deceptive material and label “certain additional content” deemed “inauthentic, fake, or false” to prevent election interference.

The other law at issue, AB 2839, titled “Elections: deceptive media in advertisements,” bans anyone from “knowingly distributing an advertisement or other election communication” with “malice” that “contains certain materially deceptive content” within 120 days of an election in California and, in some cases, within 60 days after an election.

Both bills were signed into law on September 17, and Kohls filed his complaint that day, alleging that both must be permanently blocked as unconstitutional.

Elon Musk called out for boosting Kohls’ video

Kohls’ video that Musk shared seemingly would violate these laws by using AI to make Harris appear to give speeches that she never gave. The manipulated audio sounds like Harris, who appears to be mocking herself as a “diversity hire” and claiming that any critics must be “sexist and racist.”

“Making fun of presidential candidates and other public figures is an American pastime,” Kohls said, defending his parody video. He pointed to a long history of political cartoons and comedic impressions of politicians, claiming that “AI-generated commentary, though a new mode of speech, falls squarely within this tradition.”

While Kohls’ post was clearly marked “parody” in the YouTube title and in his post on X, that “parody” label did not carry over when Musk re-posted the video. This lack of a parody label on Musk’s post—which got approximately 136 million views, roughly twice as many as Kohls’ post—set off California governor Gavin Newsom, who immediately blasted Musk’s post and vowed on X to make content like Kohls’ video “illegal.”

In response to Newsom, Musk poked fun at the governor, posting that “I checked with renowned world authority, Professor Suggon Deeznutz, and he said parody is legal in America.” For his part, Kohls put up a second parody video targeting Harris, calling Newsom a “bully” in his complaint and claiming that he had to “punch back.”

Shortly after these online exchanges, California lawmakers allegedly rushed to back the governor, Kohls’ complaint said. They allegedly amended the deepfake bills to ensure that Kohls’ video would be banned when the bills were signed into law, replacing a broad exception for satire in one law with a narrower safe harbor that Kohls claimed would chill humorists everywhere.

“For videos,” his complaint said, disclaimers required under AB 2839 must “appear for the duration of the video” and “must be in a font size ‘no smaller than the largest font size of other text appearing in the visual media.'” For a satirist like Kohls who uses large fonts to optimize videos for mobile, this “would require the disclaimer text to be so large that it could not fit on the screen,” his complaint said.

On top of seeming impractical, the disclaimers would “fundamentally” alter “the nature of his message,” removing the comedic effect by distracting viewers from what allegedly makes the videos funny—“the juxtaposition of over-the-top statements by the AI-generated ‘narrator,’ contrasted with the seemingly earnest style of the video as if it were a genuine campaign ad,” Kohls’ complaint alleged.

Imagine watching Saturday Night Live with prominent disclaimers taking up your TV screen, his complaint suggested.

It’s possible that Kohls’ concerns about AB 2839 are unwarranted. Newsom spokesperson Izzy Gardon told Politico that Kohls’ parody label on X was good enough to clear him of liability under the law.

“Requiring them to use the word ‘parody’ on the actual video avoids further misleading the public as the video is shared across the platform,” Gardon said. “It’s unclear why this conservative activist is suing California. This new disclosure law for election misinformation isn’t any more onerous than laws already passed in other states, including Alabama.”

“Fascists”: Elon Musk responds to proposed fines for disinformation on X

Being responsible is so hard —

“Elon Musk’s had more positions on free speech than the Kama Sutra,” says lawmaker.

A smartphone displays Elon Musk's profile on X, the app formerly known as Twitter.

Getty Images | Dan Kitwood

Elon Musk has lambasted Australia’s government as “fascists” over proposed laws that could levy substantial fines on social media companies if they fail to comply with rules to combat the spread of disinformation and online scams.

The billionaire owner of social media site X posted the word “fascists” on Friday in response to the bill, which would strengthen the Australian media regulator’s ability to hold companies responsible for the content on their platforms and levy potential fines of up to 5 percent of global revenue. The bill, which was proposed this week, has yet to be passed.

Musk’s comments drew rebukes from senior Australian politicians, with Stephen Jones, Australia’s finance minister, telling national broadcaster ABC that it was “crackpot stuff” and the legislation was a matter of sovereignty.

Bill Shorten, the former leader of the Labor Party and a cabinet minister, accused the billionaire of only championing free speech when it was in his commercial interests. “Elon Musk’s had more positions on free speech than the Kama Sutra,” Shorten said in an interview with Australian radio.

The exchange marks the second time that Musk has confronted Australia over technology regulation.

In May, he accused the country’s eSafety Commissioner of censorship after the government agency took X to court in an effort to force it to remove graphic videos of a stabbing attack in Sydney. A court later denied the eSafety Commissioner’s application.

Musk has also been embroiled in a bitter dispute with authorities in Brazil, where the Supreme Court ruled last month that X should be blocked over its failure to remove or suspend certain accounts accused of spreading misinformation and hateful content.

Australia has been at the forefront of efforts to regulate the technology sector, pitting it against some of the world’s largest social media companies.

This week, the government pledged to introduce a minimum age limit for social media use to tackle “screen addiction” among young people.

In March, Canberra threatened to take action against Meta after the owner of Facebook and Instagram said it would withdraw from a world-first deal to pay media companies to link to news stories.

The government also introduced new data privacy measures to parliament on Thursday that would impose hefty fines and potential jail terms of up to seven years for people found guilty of “doxxing” individuals or groups.

Prime Minister Anthony Albanese’s government had pledged to outlaw doxxing—the publication of personal details online for malicious purposes—this year after the details of a private WhatsApp group containing hundreds of Jewish Australians were published online.

Australia is one of the first countries to pursue laws outlawing doxxing. It is also expected to introduce a tranche of laws in the coming months to regulate how personal data can be used by artificial intelligence.

“These reforms give more teeth to the regulation,” said Monique Azzopardi at law firm Clayton Utz.

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

RFK Jr’s anti-vaccine group can’t sue Meta for agreeing with CDC, judge rules

Independent presidential candidate Robert F. Kennedy Jr.

The Children’s Health Defense (CHD), an anti-vaccine group founded by Robert F. Kennedy Jr, has once again failed to convince a court that Meta acted as a state agent when censoring the group’s posts and ads on Facebook and Instagram.

In his opinion affirming a lower court’s dismissal, US Ninth Circuit Court of Appeals Judge Eric Miller wrote that CHD failed to prove that Meta acted as an arm of the government in censoring posts. Concluding that Meta’s right to censor views that the platforms find “distasteful” is protected by the First Amendment, Miller denied CHD’s requested relief, which had included an injunction and civil monetary damages.

“Meta evidently believes that vaccines are safe and effective and that their use should be encouraged,” Miller wrote. “It does not lose the right to promote those views simply because they happen to be shared by the government.”

CHD told Reuters that the group “was disappointed with the decision and considering its legal options.”

The group first filed the complaint in 2020, arguing that Meta colluded with government officials to censor protected speech by labeling anti-vaccine posts as misleading or removing and shadowbanning CHD posts. This caused CHD’s traffic on the platforms to plummet, CHD claimed, and ultimately, its pages were removed from both platforms.

However, critically, Miller wrote, CHD did not allege that “the government was actually involved in the decisions to label CHD’s posts as ‘false’ or ‘misleading,’ the decision to put the warning label on CHD’s Facebook page, or the decisions to ‘demonetize’ or ‘shadow-ban.'”

“CHD has not alleged facts that allow us to infer that the government coerced Meta into implementing a specific policy,” Miller wrote.

Instead, Meta “was entitled to encourage” various “input from the government,” justifiably seeking vaccine-related information provided by the World Health Organization (WHO) and the US Centers for Disease Control and Prevention (CDC) as it navigated complex content moderation decisions throughout the pandemic, Miller wrote.

Therefore, Meta’s actions against CHD were due to “Meta’s own ‘policy of censoring,’ not any provision of federal law,” Miller concluded. “The evidence suggested that Meta had independent incentives to moderate content and exercised its own judgment in so doing.”

None of CHD’s theories that Meta coordinated with officials to deprive “CHD of its constitutional rights” were plausible, Miller wrote, whereas the “innocent alternative”—”that Meta adopted the policy it did simply because” CEO Mark Zuckerberg and Meta “share the government’s view that vaccines are safe and effective”—appeared “more plausible.”

Meta “does not become an agent of the government just because it decides that the CDC sometimes has a point,” Miller wrote.

Equally unpersuasive were CHD’s claims that Section 230 immunity—which shields platforms from liability for third-party content—”‘removed all legal barriers’ to the censorship of vaccine-related speech,” such that “Meta’s restriction of that content should be considered state action.”

“That Section 230 operates in the background to immunize Meta if it chooses to suppress vaccine misinformation—whether because it shares the government’s health concerns or for independent commercial reasons—does not transform Meta’s choice into state action,” Miller wrote.

One judge dissented over Section 230 concerns

In his dissenting opinion, Judge Daniel Collins defended CHD’s Section 230 claim, however, suggesting that the appeals court erred and should have granted CHD injunctive and declaratory relief from alleged censorship. CHD CEO Mary Holland told The Defender that the group was pleased the decision was not unanimous.

According to Collins, who like Miller is a Trump appointee, Meta could never have built its massive social platforms without Section 230 immunity, which grants platforms the ability to broadly censor viewpoints they disfavor.

It was “important to keep in mind” that “the vast practical power that Meta exercises over the speech of millions of others ultimately rests on a government-granted privilege to which Meta is not constitutionally entitled,” Collins wrote. And this power “makes a crucial difference in the state-action analysis.”

As Collins sees it, CHD could plausibly allege that Meta’s communications with government officials about vaccine-related misinformation targeted specific users, like the “disinformation dozen” that includes both CHD and Kennedy. In that case, Collins argued, Section 230 could give the government an opening to target speech it disfavors through mechanisms provided by the platforms.

“Having specifically and purposefully created an immunized power for mega-platform operators to freely censor the speech of millions of persons on those platforms, the Government is perhaps unsurprisingly tempted to then try to influence particular uses of such dangerous levers against protected speech expressing viewpoints the Government does not like,” Collins warned.

He further argued that “Meta’s relevant First Amendment rights” do not “give Meta an unbounded freedom to work with the Government in suppressing speech on its platforms.” Disagreeing with the majority, he wrote that “in this distinctive scenario, applying the state-action doctrine promotes individual liberty by keeping the Government’s hands away from the tempting levers of censorship on these vast platforms.”

The majority agreed, however, that while Section 230 immunity “is undoubtedly a significant benefit to companies like Meta,” lawmakers’ threats to weaken Section 230 did not suggest that Meta’s anti-vaccine policy was coerced state action.

“Many companies rely, in one way or another, on a favorable regulatory environment or the goodwill of the government,” Miller wrote. “If that were enough for state action, every large government contractor would be a state actor. But that is not the law.”

Elon Musk’s X tests letting users request Community Notes on bad posts

Continuing to evolve the fact-checking service that launched as Twitter’s Birdwatch, X has announced that Community Notes can now be requested to clarify problematic posts spreading on Elon Musk’s platform.

X’s Community Notes account confirmed late Thursday that, due to “popular demand,” X had launched a pilot test on the web-based version of the platform. The test is active now and the same functionality will be “coming soon” to Android and iOS, the Community Notes account said.

Through the current web-based pilot, if you’re an eligible user, you can click on the “•••” menu on any X post on the web and request fact-checking from one of Community Notes’ top contributors, X explained. If X receives five or more requests within 24 hours of the post going live, a Community Note will be added.

Only X users with verified phone numbers will be eligible to request Community Notes, X said, and to start, users will be limited to five requests a day.

“The limit may increase if requests successfully result in helpful notes, or may decrease if requests are on posts that people don’t agree need a note,” X’s website said. “This helps prevent spam and keep note writers focused on posts that could use helpful notes.”

Once X receives five or more requests for a Community Note within a single day, top contributors with diverse views will be alerted to respond. On X, top contributors are constantly changing, as their notes are voted as either helpful or not. If at least 4 percent of their notes are rated “helpful,” X explained on its site, and the impact of their notes meets X standards, they can be eligible to receive alerts.

“A contributor’s Top Writer status can always change as their notes are rated by others,” X’s website said.
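Taken together, the pilot’s thresholds amount to a simple gating flow. Below is a minimal sketch of that flow as described above; the class names, fields, and the “impact” placeholder are illustrative assumptions rather than X’s actual code or API.

# Illustrative sketch of the request-threshold flow described in X's write-up:
# five requests from eligible users within 24 hours trigger alerts to top
# contributors. The classes and the impact placeholder are assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

REQUESTS_TO_TRIGGER = 5              # per X: 5+ requests within 24 hours
REQUEST_WINDOW = timedelta(hours=24)
DAILY_REQUEST_LIMIT = 5              # per X: users start with 5 requests per day
HELPFUL_RATE_FOR_TOP_WRITER = 0.04   # per X: at least 4% of notes rated "helpful"

@dataclass
class Contributor:
    phone_verified: bool
    notes_written: int = 0
    notes_rated_helpful: int = 0
    impact_meets_standard: bool = False   # placeholder for X's impact criterion
    requests_today: int = 0

    def can_request(self) -> bool:
        return self.phone_verified and self.requests_today < DAILY_REQUEST_LIMIT

    def is_top_writer(self) -> bool:
        if self.notes_written == 0:
            return False
        helpful_rate = self.notes_rated_helpful / self.notes_written
        return helpful_rate >= HELPFUL_RATE_FOR_TOP_WRITER and self.impact_meets_standard

@dataclass
class Post:
    request_times: list[datetime] = field(default_factory=list)

    def add_request(self, user: Contributor, now: datetime) -> bool:
        """Record a note request; return True when top writers should be alerted."""
        if not user.can_request():
            return False
        user.requests_today += 1
        self.request_times.append(now)
        cutoff = now - REQUEST_WINDOW
        recent = [t for t in self.request_times if t >= cutoff]
        return len(recent) >= REQUESTS_TO_TRIGGER

# Example: five eligible requests within a day flip the alert flag.
now = datetime(2024, 10, 31, 12, 0)
post = Post()
users = [Contributor(phone_verified=True) for _ in range(5)]
print([post.add_request(u, now + timedelta(minutes=i)) for i, u in enumerate(users)])
# -> [False, False, False, False, True]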

Ultimately, X considers notes helpful if they “contain accurate, high-quality information” and “help inform people’s understanding of the subject matter in posts,” X said on another part of its site. To gauge the former, X said that the platform partners with “professional reviewers” from the Associated Press and Reuters. X also continually monitors whether notes marked helpful by top writers match what general X users marked as helpful.

“We don’t expect all notes to be perceived as helpful by all people all the time,” X’s website said. “Instead, the goal is to ensure that on average notes that earn the status of Helpful are likely to be seen as helpful by a wide range of people from different points of view, and not only be seen as helpful by people from one viewpoint.”

X will also be allowing half of the top contributors to request notes during the pilot phase, which X said will help the platform evaluate “whether it is beneficial for Community Notes contributors to have both the ability to write notes and request notes.”

According to X, the criteria for requesting a note have intentionally been designed to be simple during the pilot stage, but X expects “these criteria to evolve, with the goal that requests are frequently found valuable to contributors, and not noisy.”

It’s hard to tell from the outside looking in how helpful Community Notes are to X users. The most recent Community Notes survey data that X points to is from 2022 when the platform was still called Twitter and the fact-checking service was still called Birdwatch.

That data showed that “on average,” users were “20–40 percent less likely to agree with the substance of a potentially misleading Tweet than someone who sees the Tweet alone.” And based on Twitter’s “internal data” at that time, the platform also estimated that “people on Twitter who see notes are, on average, 15–35 percent less likely to Like or Retweet a Tweet than someone who sees the Tweet alone.”

Russia and China are using OpenAI tools to spread disinformation

New tool —

Iran and Israel have been getting in on the action as well.

OpenAI said it was committed to uncovering disinformation campaigns and was building its own AI-powered tools to make detection and analysis “more effective.”

FT montage/NurPhoto via Getty Images

OpenAI has revealed operations linked to Russia, China, Iran and Israel have been using its artificial intelligence tools to create and spread disinformation, as technology becomes a powerful weapon in information warfare in an election-heavy year.

The San Francisco-based maker of the ChatGPT chatbot said in a report on Thursday that five covert influence operations had used its AI models to generate text and images at a high volume, with fewer language errors than previously, as well as to generate comments or replies to their own posts. OpenAI’s policies prohibit the use of its models to deceive or mislead others.

The content focused on issues “including Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the United States, and criticisms of the Chinese government by Chinese dissidents and foreign governments,” OpenAI said in the report.

The networks also used AI to enhance their own productivity, applying it to tasks such as debugging code or doing research into public social media activity, it said.

Social media platforms, including Meta and Google’s YouTube, have sought to clamp down on the proliferation of disinformation campaigns in the wake of Donald Trump’s 2016 win in the US presidential election when investigators found evidence that a Russian troll farm had sought to manipulate the vote.

Pressure is mounting on fast-growing AI companies such as OpenAI, as rapid advances in their technology mean it is cheaper and easier than ever for disinformation perpetrators to create realistic deepfakes and manipulate media and then spread that content in an automated fashion.

As about 2 billion people head to the polls this year, policymakers have urged the companies to introduce and enforce appropriate guardrails.

Ben Nimmo, principal investigator for intelligence and investigations at OpenAI, said on a call with reporters that the campaigns did not appear to have “meaningfully” boosted their engagement or reach as a result of using OpenAI’s models.

But, he added, “this is not the time for complacency. History shows that influence operations which spent years failing to get anywhere can suddenly break out if nobody’s looking for them.”

Microsoft-backed OpenAI said it was committed to uncovering such disinformation campaigns and was building its own AI-powered tools to make detection and analysis “more effective.” It added its safety systems already made it difficult for the perpetrators to operate, with its models refusing in multiple instances to generate the text or images asked for.

In the report, OpenAI revealed several well-known state-affiliated disinformation actors had been using its tools. These included a Russian operation, Doppelganger, which was first discovered in 2022 and typically attempts to undermine support for Ukraine, and a Chinese network known as Spamouflage, which pushes Beijing’s interests abroad. Both campaigns used its models to generate text or comment in multiple languages before posting on platforms such as Elon Musk’s X.

It flagged a previously unreported Russian operation, dubbed Bad Grammar, saying it used OpenAI models to debug code for running a Telegram bot and to create short, political comments in Russian and English that were then posted on messaging platform Telegram.

X and Telegram have been approached for comment.

It also said it had thwarted a pro-Israel disinformation-for-hire effort, allegedly run by a Tel Aviv-based political campaign management business called STOIC, which used its models to generate articles and comments on X and across Meta’s Instagram and Facebook.

Meta on Wednesday released a report stating it removed the STOIC content. The accounts linked to these operations were terminated by OpenAI.

Additional reporting by Cristina Criddle

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Kremlin-backed actors spread disinformation ahead of US elections

MANUFACTURING DIVISION —

To a lesser extent, China and Iran also peddle disinfo in hopes of influencing voters.

Kremlin-backed actors have stepped up efforts to interfere with the US presidential election by planting disinformation and false narratives on social media and fake news sites, analysts with Microsoft reported Wednesday.

The analysts have identified several distinct influence-peddling groups affiliated with the Russian government that are seeking to influence the election outcome, largely with the objective of reducing US support of Ukraine and sowing domestic infighting. These groups have so far been less active during the current election cycle than they were during previous ones, likely because of a less contested primary season.

Stoking divisions

Over the past 45 days, the groups have seeded a growing number of social media posts and fake news articles that attempt to foment opposition to US support of Ukraine and stoke divisions over hot-button issues such as election fraud. The influence campaigns also promote questions about President Biden’s mental health and corrupt judges. In all, Microsoft has tracked scores of such operations in recent weeks.

In a report published Wednesday, the Microsoft analysts wrote:

The deteriorated geopolitical relationship between the United States and Russia leaves the Kremlin with little to lose and much to gain by targeting the US 2024 presidential election. In doing so, Kremlin-backed actors attempt to influence American policy regarding the war in Ukraine, reduce social and political support to NATO, and ensnare the United States in domestic infighting to distract from the world stage. Russia’s efforts thus far in 2024 are not novel, but rather a continuation of a decade-long strategy to “win through the force of politics, rather than the politics of force,” or active measures. Messaging regarding Ukraine—via traditional media and social media—picked up steam over the last two months with a mix of covert and overt campaigns from at least 70 Russia-affiliated activity sets we track.

The most prolific of the influence-peddling groups, Microsoft said, is tied to the Russian Presidential Administration, which, according to the Marshall Center think tank, is a secretive institution that acts as the main gatekeeper for President Vladimir Putin. The affiliation highlights “the increasingly centralized nature of Russian influence campaigns,” a departure from campaigns in previous years that primarily relied on intelligence services and a group known as the Internet Research Agency.

“Each Russian actor has shown the capability and willingness to target English-speaking—and in some cases Spanish-speaking—audiences in the US, pushing social and political disinformation meant to portray Ukrainian President Volodymyr Zelensky as unethical and incompetent, Ukraine as a puppet or failed state, and any American aid to Ukraine as directly supporting a corrupt and conspiratorial regime,” the analysts wrote.

An example is Storm-1516, the name Microsoft uses to track a group seeding anti-Ukraine narratives through US Internet and media sources. Content, published in English, Russian, French, Arabic, and Finnish, frequently originates through disinformation seeded by a purported whistleblower or citizen journalist over a purpose-built video channel and then picked up by a network of Storm-1516-controlled websites posing as independent news sources. These fake news sites reside in the Middle East and Africa as well as in the US, with DC Weekly, Miami Chronicle, and the Intel Drop among them.

Once the disinformation has circulated over subsequent days, US audiences begin amplifying it, in many cases without being aware of the original source. The following graphic illustrates the flow.

Storm-1516 process for laundering anti-Ukraine disinformation. Credit: Microsoft

Wednesday’s report also referred to another group tracked as Storm-1099, which is best known for a campaign called Doppelganger. According to the disinformation research group Disinfo Research Lab, the campaign has targeted multiple countries since 2022 with content designed to undermine support for Ukraine and sow divisions among audiences. Two US outlets tied to Storm-1099 are Election Watch and 50 States of Lie, Microsoft said. The image below shows content recently published by the outlets:

Storm-1099 sites. Credit: Microsoft

Wednesday’s report also touched on two other Kremlin-tied operations. One attempts to revive a campaign perpetuated by NABU Leaks, a website that published content alleging then-Vice President Joe Biden colluded with former Ukrainian leader Petro Poroshenko, according to Reuters. In January, Andrei Derkach—the ex-Ukrainian parliamentarian and US-sanctioned Russian agent responsible for NABU Leaks—reemerged on social media for the first time in two years. In an interview, Derkach propagated both old and new claims about Biden and other US political figures.

The other operation follows a playbook known as hack and leak, in which operatives obtain private information through hacking and leak it to news outlets.

Elon Musk’s X allows China-based propaganda banned on other platforms

Rinse-wash-repeat. —

X accused of overlooking propaganda flagged by Meta and criminal prosecutors.

Lax content moderation on X (aka Twitter) has disrupted coordinated efforts between social media companies and law enforcement to tamp down on “propaganda accounts controlled by foreign entities aiming to influence US politics,” The Washington Post reported.

Now propaganda is “flourishing” on X, The Post said, while other social media companies are stuck in endless cycles, watching some of the propaganda that they block proliferate on X, then inevitably spread back to their platforms.

Meta, Google, and then-Twitter began coordinating takedown efforts with law enforcement and disinformation researchers after Russian-backed influence campaigns manipulated their platforms in hopes of swaying the 2016 US presidential election.

The next year, all three companies promised Congress to work tirelessly to stop Russian-backed propaganda from spreading on their platforms. The companies created explicit election misinformation policies and began meeting biweekly to compare notes on propaganda networks each platform uncovered, according to The Post’s interviews with anonymous sources who participated in these meetings.

However, after Elon Musk purchased Twitter and rebranded the company as X, his company withdrew from the alliance in May 2023.

Sources told The Post that the last X meeting attendee was Irish intelligence expert Aaron Rodericks—who was allegedly disciplined for liking an X post calling Musk “a dipshit.” Rodericks was subsequently laid off when Musk dismissed the entire election integrity team last September, and after that, X apparently ditched the biweekly meeting entirely and “just kind of disappeared,” a source told The Post.

In 2023, for example, Meta flagged 150 “artificial influence accounts” identified on its platform, of which “136 were still present on X as of Thursday evening,” according to The Post’s analysis. X’s seeming oversight extends to all but eight of the 123 “deceptive China-based campaigns” connected to accounts that Meta flagged last May, August, and December, The Post reported.

The Post’s report also provided an exclusive analysis from the Stanford Internet Observatory (SIO), which found that 86 propaganda accounts that Meta flagged last November “are still active on X.”

The majority of these accounts—81—were China-based accounts posing as Americans, SIO reported. These accounts frequently ripped photos from Americans’ LinkedIn profiles, then changed the real Americans’ names while posting about both China and US politics, as well as people often trending on X, such as Musk and Joe Biden.

Meta has warned that China-based influence campaigns are “multiplying,” The Post noted, while X’s standards remain seemingly too relaxed. Even accounts linked to criminal investigations remain active on X. One “account that is accused of being run by the Chinese Ministry of Public Security,” The Post reported, remains on X despite its posts being cited by US prosecutors in a criminal complaint.

Prosecutors connected that account to “dozens” of X accounts attempting to “shape public perceptions” about the Chinese Communist Party, the Chinese government, and other world leaders. The accounts also comment on hot-button topics like the fentanyl problem or police brutality, seemingly to convey “a sense of dismay over the state of America without any clear partisan bent,” Elise Thomas, an analyst for a London nonprofit called the Institute for Strategic Dialogue, told The Post.

Some X accounts flagged by The Post had more than 1 million followers. Five have paid X for verification, suggesting that their disinformation campaigns—targeting hashtags to confound discourse on US politics—are seemingly being boosted by X.

SIO technical research manager Renée DiResta criticized X’s decision to stop coordinating with other platforms.

“The presence of these accounts reinforces the fact that state actors continue to try to influence US politics by masquerading as media and fellow Americans,” DiResta told The Post. “Ahead of the 2022 midterms, researchers and platform integrity teams were collaborating to disrupt foreign influence efforts. That collaboration seems to have ground to a halt, Twitter does not seem to be addressing even networks identified by its peers, and that’s not great.”

Musk shut down X’s election integrity team because he claimed that the team was actually “undermining” election integrity. But analysts are bracing for floods of misinformation to sway 2024 elections, as some major platforms have removed election misinformation policies just as rapid advances in AI technologies have made misinformation spread via text, images, audio, and video harder for the average person to detect.

In one prominent example, a robocall relied on AI voice technology to impersonate Biden and tell Democrats not to vote. That incident seemingly pushed the Federal Trade Commission on Thursday to propose penalizing AI impersonation.

It seems apparent that propaganda accounts from foreign entities on X will use every tool available to get eyes on their content, perhaps expecting Musk’s platform to be the slowest to police them. According to The Post, some of the X accounts spreading propaganda are using what appears to be AI-generated images of Biden and Donald Trump to garner tens of thousands of views on posts.

It’s possible that X will start tightening up on content moderation as elections draw closer. Yesterday, X joined Amazon, Google, Meta, OpenAI, TikTok, and other Big Tech companies in signing an agreement to fight “deceptive use of AI” during 2024 elections. Among the top goals identified in the “AI Elections accord” are identifying where propaganda originates, detecting how propaganda spreads across platforms, and “undertaking collective efforts to evaluate and learn from the experiences and outcomes of dealing” with propaganda.
