It’s easy for biased users to bury accurate Community Notes, report says.
What’s the point of recruiting hundreds of thousands of X users to fact-check misleading posts before they go viral if those users’ accurate Community Notes are never displayed?
That’s the question the Center for Countering Digital Hate (CCDH) is asking after digging through a million notes in a public X dataset to find out how many misleading claims spreading widely on X about the US election weren’t quickly fact-checked.
In a report, the CCDH flagged 283 misleading X posts fueling election disinformation this year that never displayed a Community Note. Of those, 74 percent were found to have accurate notes proposed but ultimately never displayed, apparently because toxic X users gamed Community Notes to hide information they politically disagree with.
On X, a Community Note is only displayed if a broad spectrum of X users with diverse viewpoints agree that the note is “helpful.” But the CCDH found that it’s seemingly easy to bury an accurate note that challenges a user’s bias simply by refusing to rate it or by downranking it into oblivion.
“The problem is that for a Community Note to be shown, it requires consensus, and on polarizing issues, that consensus is rarely reached,” the CCDH’s report said. “As a result, Community Notes fail precisely where they are needed most.”
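That consensus requirement is algorithmic. X’s published Community Notes scorer models every rating with a matrix factorization that separates a note’s broad, cross-viewpoint support (its intercept score) from support that merely tracks one political camp (its factor term), and only a high intercept gets a note shown. The sketch below is a toy re-implementation of that idea for illustration only: the 0.40 “helpful” threshold mirrors the one described in X’s public Community Notes documentation, but the data, hyperparameters, and training loop here are assumptions, not X’s production code.

```python
# Toy sketch of a bridging-based note scorer, loosely following the
# matrix-factorization model in X's public Community Notes docs.
# All values here are illustrative assumptions, not production settings.
import numpy as np

rng = np.random.default_rng(0)

def score_notes(ratings, n_users, n_notes, lam=0.03, lr=0.05, epochs=2000):
    """Fit rating ~ mu + b_user + b_note + f_user * f_note by SGD.

    ratings: (user, note, value) triples, value 1 = "helpful",
    0 = "not helpful". The factor terms soak up polarized voting, so a
    note's intercept b_note stays high only with cross-camp support.
    """
    mu = 0.0
    b_u, b_n = np.zeros(n_users), np.zeros(n_notes)
    f_u = rng.normal(0, 0.1, n_users)
    f_n = rng.normal(0, 0.1, n_notes)
    for _ in range(epochs):
        for u, n, r in ratings:
            err = r - (mu + b_u[u] + b_n[n] + f_u[u] * f_n[n])
            mu += lr * err
            b_u[u] += lr * (err - lam * b_u[u])
            b_n[n] += lr * (err - lam * b_n[n])
            # Update both factors from pre-update values.
            f_u[u], f_n[n] = (f_u[u] + lr * (err * f_n[n] - lam * f_u[u]),
                              f_n[n] + lr * (err * f_u[u] - lam * f_n[n]))
    return b_n

# Ten raters in two "camps" (0-4 and 5-9). Note 0 has cross-camp support,
# note 1 is endorsed by one camp only, note 2 is rejected by everyone.
ratings = ([(u, 0, 1) for u in range(10)]        # consensus: helpful
           + [(u, 1, 1) for u in range(5)]       # camp A: helpful
           + [(u, 1, 0) for u in range(5, 10)]   # camp B: not helpful
           + [(u, 2, 0) for u in range(10)])     # consensus: not helpful
for n, s in enumerate(score_notes(ratings, n_users=10, n_notes=3)):
    print(f"note {n}: intercept {s:+.2f} ->",
          "shown as helpful" if s >= 0.40 else "never shown")
```

On this toy data, the consensus note’s intercept lands near 0.5 and clears the threshold, while the polarized note’s support is absorbed by the factor term and its intercept stays near zero, so it is never shown. That is precisely the failure mode the CCDH describes for polarizing election claims.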
Among the most-viewed misleading claims where X failed to add accurate notes were posts spreading lies that “welfare offices in 49 states are handing out voter registration applications to illegal aliens,” the Democratic party is importing voters, most states don’t require ID to vote, and both electronic and mail-in voting are “too risky.”
These unchecked claims were viewed by tens of millions of users, the CCDH found.
One false narrative—that Dems import voters—was amplified in a post from Elon Musk that got 51 million views. In the background, proposed notes sought to correct the disinformation by noting that “lawful permanent residents (green card holders)” cannot vote in US elections until they’re granted citizenship after living in the US for five years. But even these seemingly straightforward citations to government resources did not pass muster for users politically motivated to hide the note.
This appears to be a common pattern on X, the CCDH suggested, with Musk himself seemingly acting as an amplifier. In July, the CCDH reported that Musk’s misleading posts about the 2024 election in particular were viewed more than a billion times without any notes ever being added.
The majority of the misleading claims in the CCDH’s report seemed to come from conservative users. But X also failed to check a claim that Donald Trump “is no longer eligible to run for president and must drop out of the race immediately.” Posts spreading that false claim got 1.4 million views, the CCDH reported, and that content moderation misstep risked dampening Trump’s voter turnout at a time when Musk is campaigning for Trump.
Musk has claimed that while Community Notes will probably never be “perfect,” the fact-checking effort aspires to “be by far the best source of truth on Earth.” The CCDH has alleged that, actually, “most Community Notes are never seen by users, allowing misinformation to spread unchecked.”
Even X’s own numbers on notes seem low
On the Community Notes X account, X acknowledges that “speed is key to notes’ effectiveness—the faster they appear, the more people see them, and the greater effect they have.”
On the day before the CCDH report dropped, X announced that “lightning notes” have been introduced to deliver fact-checks in as little as 15 minutes after a misleading post is written.
“Ludicrously fast? Now reality!” X proclaimed.
Currently, more than 800,000 X users contribute to Community Notes, and with the lightning notes update, X can calculate their scores more quickly. That efficiency, X said, will either increase removals of false or misleading posts or reduce how widely they are shared.
But while X insists Community Notes are working faster than ever to curb the spread of harmful content, the number of rapidly noted posts that X reports seems low. On a platform with an estimated 429 million daily active users worldwide, only about 400 notes were displayed within an hour of a post going live over the past two weeks. Only about 60 more notes took longer than an hour to appear, and the CCDH suggested that slower notes are the majority whenever the fact-check touches a controversial topic.
In July, an international NGO that monitors human rights abuses and corruption, Global Witness, found 45 “bot-like accounts that collectively produced around 610,000 posts” in a two-month period this summer on X, “amplifying racist and sexualized abuse, conspiracy theories, and climate disinformation” ahead of the UK general election.
Those accounts “posted prolifically during the UK general election,” then moved “to rapidly respond to emerging new topics amplifying divisive content,” including the US presidential race.
The CCDH reported that even when misleading posts get fact-checked, the original post is on average viewed 13 times more often than its note, suggesting that most of the damage is done before the note appears.
Of course, content moderators are often called out for moving too slowly to remove harmful content, a Bloomberg opinion piece praising Community Notes earlier this year noted. That piece pointed to studies showing that “crowdsourcing worked just as well” as professional fact checkers “when assessing the accuracy of news stories,” concluding that “it may be impossible for any social media company to keep up, which is why it’s important to explore other approaches.”
X has said that it’s “common to see Community Notes appearing days faster than traditional fact checks,” while promising that more changes are coming to get notes ranked as “helpful” more quickly.
X risks becoming an echo chamber, data shows
Data that the market intelligence firm Sensor Tower recently shared with Ars offers a potential clue as to why the CCDH is seeing so many accurate notes that are never voted as “helpful.”
According to Sensor Tower’s estimates, global daily active users on X are down by 28 percent in September 2024, compared to October 2022 when Elon Musk took over Twitter. While many users have fled the platform, those who remained are seemingly more engaged than ever—with global engagement up by 8 percent in the same time period. (Rivals like TikTok and Facebook saw much lower growth, up by 3 and 1 percent, respectively.)
This paints a picture of X risking becoming an echo chamber: loyal users engage more with a platform where misleading posts easily go unchecked and buried notes can warp discussion in Musk’s “digital town square.”
When Musk initially bought Twitter, one of his earliest moves was to make drastic cuts to the trust and safety teams chiefly responsible for content moderation decisions. He then expanded the role of Community Notes to substitute for those teams’ efforts, whereas Community Notes had previously been viewed as merely complementary to broader monitoring.
The CCDH says that was a mistake and that the best way to ensure that X is safe for users is to build back X’s trust and safety teams.
“Our social media feeds have no neutral ‘town square’ for rational debate,” the CCDH report said. “In reality, it is messy, complicated, and opaque rules and systems make it impossible for all voices to be heard. Without checks and balances, proper oversight, and well-resourced trust and safety teams in place, X cannot rely on Community Notes to keep X safe.”
More transparency is needed on Community Notes
X and the CCDH have long clashed, with X suing, unsuccessfully, in a seeming attempt to silence the CCDH’s reporting on hate speech on X, which X claimed caused tens of millions of dollars in advertising losses. During that legal battle, the CCDH called Musk a “thin-skinned tyrant” who could not tolerate independent research on his platform. A federal judge agreed that X was clearly suing to “punish” and censor the CCDH, dismissing X’s lawsuit last March.
Since then, the CCDH has resumed its reporting on X. In its most recent report, the CCDH urged X to be more transparent about Community Notes, arguing that “researchers must be able to freely, without intimidation, study how disinformation and unchecked claims spread across platforms.”
The research group also recommended remedies, including continuing to advise that advertisers “evaluate whether their budgets are funding the misleading election claims identified in this report.”
That could lead brands to continue withholding spending on X, which is seemingly already happening. Sensor Tower estimated that “72 out of the top 100 spending US advertisers on X from October 2022 have ceased spending on the platform as of September 2024.” And compared to the first half of 2022, X’s ad revenue from the top 100 advertisers during the first half of 2024 was down 68 percent.
Most drastically, the CCDH recommended that US lawmakers reform Section 230 of the Communications Decency Act “to provide an avenue for accountability” by mandating risk assessments of social media platforms. That would “expose the risk posed by disinformation” and enable lawmakers to “prescribe possible mitigation measures including a comprehensive moderation strategy.”
Globally, the CCDH noted, some regulators have the power to investigate the claims in the CCDH’s report, including the European Commission under the Digital Services Act and the UK’s Ofcom under the Online Safety Act.
“X and social media companies as an industry have been able to avoid taking responsibility,” the CCDH’s report said, with the industry offering only “unreliable self-regulation.” Apps like X “thus invent inadequate systems like Community Notes because there is no legal mechanism to hold them accountable for their harms,” the report warned.
Perhaps Musk will be open to the CCDH’s suggestions. In the past, Musk has said that “suggestions for improving Community Notes are… always… much appreciated.”