misinformation

It’s remarkably easy to inject new medical misinformation into LLMs


Swapping just 0.001% of the training data for misinformation makes the resulting AI less accurate.

It’s pretty easy to see the problem here: The Internet is brimming with misinformation, and most large language models are trained on a massive body of text obtained from the Internet.

Ideally, having substantially higher volumes of accurate information might overwhelm the lies. But is that really the case? A new study by researchers at New York University examines how much medical misinformation can be included in a large language model (LLM) training set before it spits out inaccurate answers. While the study doesn’t identify a lower bound, it does show that by the time misinformation accounts for 0.001 percent of the training data, the resulting LLM is compromised.

While the paper is focused on the intentional “poisoning” of an LLM during training, it also has implications for the body of misinformation that’s already online and part of the training set for existing LLMs, as well as the persistence of out-of-date information in validated medical databases.

Sampling poison

Data poisoning is a relatively simple concept. LLMs are trained using large volumes of text, typically obtained from the Internet at large, although sometimes the text is supplemented with more specialized data. By injecting specific information into this training set, it’s possible to get the resulting LLM to treat that information as a fact when it’s put to use. This can be used to bias the answers it returns.
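
To make that concrete, here is a minimal sketch (hypothetical names and toy data, not the researchers’ actual pipeline) of how planted pages end up in a training corpus. The attacker never needs to touch the model; they only need their pages to be swept up by the crawl that feeds the training set.

```python
import random

def assemble_corpus(crawled_docs, planted_docs, poison_fraction):
    """Mix a small fraction of planted documents into a crawled corpus.

    Illustrative only: in a real web-scale pipeline the mixing happens
    implicitly, because the crawl ingests whatever pages are online.
    """
    n_poison = int(len(crawled_docs) * poison_fraction)
    corpus = crawled_docs + planted_docs[:n_poison]
    random.shuffle(corpus)
    return corpus

# Toy numbers: at 0.001 percent, a million-document corpus needs only
# about 10 planted pages to reach the ratio the study examined.
crawled = [f"ordinary web page {i}" for i in range(1_000_000)]
planted = [f"planted page {i}" for i in range(1_000)]
corpus = assemble_corpus(crawled, planted, poison_fraction=0.00001)
print(sum(doc.startswith("planted") for doc in corpus))  # 10
```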

This doesn’t even require access to the LLM itself; it simply requires placing the desired information somewhere where it will be picked up and incorporated into the training data. And that can be as simple as placing a document on the web. As one manuscript on the topic suggested, “a pharmaceutical company wants to push a particular drug for all kinds of pain which will only need to release a few targeted documents in [the] web.”

Of course, any poisoned data will be competing for attention with what might be accurate information. So, the ability to poison an LLM might depend on the topic. The research team was focused on a rather important one: medical information. It shows up in general-purpose LLMs, such as those used for searching the Internet, which people will inevitably end up using to look up medical questions. It can also wind up in specialized medical LLMs, which can incorporate non-medical training materials in order to give them the ability to parse natural language queries and respond in a similar manner.

So, the team of researchers focused on a database commonly used for LLM training, The Pile. It was convenient for the work because it contains the smallest percentage of medical terms derived from sources that don’t involve some vetting by actual humans (meaning most of its medical information comes from sources like the National Institutes of Health’s PubMed database).

The researchers chose three medical fields (general medicine, neurosurgery, and medications) and selected 20 topics from within each, for a total of 60 topics. Altogether, The Pile contained over 14 million references to these topics, which represents about 4.5 percent of all the documents within it. Of those, about a quarter came from sources without human vetting, most of them from a crawl of the Internet.

The researchers then set out to poison The Pile.

Finding the floor

The researchers used GPT-3.5 to generate “high quality” medical misinformation. While that model has safeguards that should prevent it from producing medical misinformation, the team found it would happily do so if given the correct prompts (an LLM issue for a different article). The resulting articles could then be inserted into The Pile. Modified versions of The Pile were generated in which either 0.5 or 1 percent of the relevant information on one of the three fields was swapped out for misinformation; these were then used to train LLMs.

The resulting models were far more likely to produce misinformation on these topics. But the misinformation also impacted other medical topics. “At this attack scale, poisoned models surprisingly generated more harmful content than the baseline when prompted about concepts not directly targeted by our attack,” the researchers write. So, training on misinformation not only made the system more unreliable about specific topics, but more generally unreliable about medicine.

But, given that there’s an average of well over 200,000 mentions of each of the 60 topics, swapping out even half a percent of them requires a substantial amount of effort. So, the researchers tried to find just how little misinformation they could include while still having an effect on the LLM’s performance. Unfortunately, they never really found a floor: even the smallest amounts they tested still degraded the model’s output.

Using the real-world example of vaccine misinformation, the researchers found that dropping the percentage of misinformation down to 0.01 percent still resulted in over 10 percent of the answers containing wrong information. Going for 0.001 percent still led to over 7 percent of the answers being harmful.

“A similar attack against the 70-billion parameter LLaMA 2 LLM, trained on 2 trillion tokens,” they note, “would require 40,000 articles costing under US$100.00 to generate.” The “articles” themselves could just be run-of-the-mill webpages. The researchers incorporated the misinformation into parts of webpages that aren’t displayed, and noted that invisible text (black on a black background, or in a font size set to zero) would also work.
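
As a rough sanity check on that scale (our own back-of-the-envelope arithmetic; treating the 0.001 percent figure as a token-level fraction is an assumption, not a figure from the paper):

```python
# The 2-trillion-token training set and the 40,000-article count come from
# the quote above; everything else here is a rough, assumption-laden check.
total_tokens = 2_000_000_000_000             # LLaMA 2 training data
poisoned_tokens = total_tokens * 0.00001     # 0.001 percent -> 20 million tokens
tokens_per_article = poisoned_tokens / 40_000
print(f"{poisoned_tokens:,.0f} poisoned tokens, about "
      f"{tokens_per_article:,.0f} tokens per article")  # roughly 500 each, a short webpage
```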

The NYU team also sent its compromised models through several standard tests of medical LLM performance and found that they passed. “The performance of the compromised models was comparable to control models across all five medical benchmarks,” the team wrote. So there’s no easy way to detect the poisoning.

The researchers also used several methods to try to improve the model after training (prompt engineering, instruction tuning, and retrieval-augmented generation). None of these improved matters.

Existing misinformation

Not all is hopeless. The researchers designed an algorithm that can recognize medical terminology in LLM output and cross-reference phrases against a validated biomedical knowledge graph. Phrases that can’t be validated get flagged for human examination. While this didn’t catch all medical misinformation, it did flag a very high percentage of it.
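
Here is a minimal sketch of that screening idea (hypothetical names and a toy stand-in for the knowledge graph; the paper’s actual phrase extraction and graph are far richer):

```python
# Validated (subject, relation, object) facts stand in for the biomedical
# knowledge graph; any extracted claim that can't be matched gets flagged.
VALIDATED_RELATIONS = {
    ("metformin", "treats", "type 2 diabetes"),
    ("measles vaccine", "prevents", "measles"),
}

def flag_unverified(extracted_relations):
    """Return extracted medical claims with no support in the graph,
    so a human reviewer can examine them."""
    return [claim for claim in extracted_relations
            if claim not in VALIDATED_RELATIONS]

# One supported claim passes; one unsupported claim is flagged for review.
llm_claims = [
    ("metformin", "treats", "type 2 diabetes"),
    ("ivermectin", "cures", "covid-19"),
]
print(flag_unverified(llm_claims))
```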

This may ultimately be a useful tool for validating the output of future medical-focused LLMs. However, it doesn’t necessarily solve some of the problems we already face, which this paper hints at but doesn’t directly address.

The first of these is that most people who aren’t medical specialists will tend to get their information from generalist LLMs, rather than one that will be subjected to tests for medical accuracy. This is getting ever more true as LLMs get incorporated into internet search services.

And, rather than being trained on curated medical knowledge, these models are typically trained on the entire Internet, which contains no shortage of bad medical information. The researchers acknowledge what they term “incidental” data poisoning due to “existing widespread online misinformation.” But a lot of that “incidental” misinformation was itself produced intentionally, as part of a medical scam or to further a political agenda. Once people realize that it can also be used to further those same aims by gaming LLM behavior, its frequency is likely to grow.

Finally, the team notes that even the best human-curated data sources, like PubMed, also suffer from a misinformation problem. The medical research literature is filled with promising-looking ideas that never panned out, and out-of-date treatments and tests that have been replaced by approaches more solidly based on evidence. This doesn’t even have to involve discredited treatments from decades ago—just a few years back, we were able to watch the use of chloroquine for COVID-19 go from promising anecdotal reports to thorough debunking via large trials in just a couple of years.

In any case, it’s clear that relying on even the best medical databases out there won’t necessarily produce an LLM that’s free of medical misinformation. Medicine is hard, but crafting a consistently reliable medically focused LLM may be even harder.

Nature Medicine, 2025. DOI: 10.1038/s41591-024-03445-1

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

People will share misinformation that sparks “moral outrage”


People can tell it’s not true, but if they’re outraged by it, they’ll share anyway.

Rob Bauer, the chair of a NATO military committee, reportedly said, “It is more competent not to wait, but to hit launchers in Russia in case Russia attacks us. We must strike first.” These comments, supposedly made in 2024, were later interpreted as suggesting NATO should attempt a preemptive strike against Russia, an idea that lots of people found outrageously dangerous.

But lots of people missed one key detail about the quote: Bauer never said it. It was made up. Despite that, the purported statement got nearly 250,000 views on X and was mindlessly spread further by the likes of Alex Jones.

Why do stories like this get so many views and shares? “The vast majority of misinformation studies assume people want to be accurate, but certain things distract them,” says William J. Brady, a researcher at Northwestern University. “Maybe it’s the social media environment. Maybe they’re not understanding the news, or the sources are confusing them. But what we found is that when content evokes outrage, people are consistently sharing it without even clicking into the article.” Brady co-authored a study on how misinformation exploits outrage to spread online. When we get outraged, the study suggests, we simply care way less if what’s got us outraged is even real.

Tracking the outrage

The rapid spread of misinformation on social media has generally been explained by something you might call an error theory—the idea that people share misinformation by mistake. Based on that, most solutions to the misinformation issue relied on prompting users to focus on accuracy and think carefully about whether they really wanted to share stories from dubious sources. Those prompts, however, haven’t worked very well. To get to the root of the problem, Brady’s team analyzed data that tracked over 1 million links on Facebook and nearly 45,000 posts on Twitter from different periods ranging from 2017 to 2021.

Parsing through the Twitter data, the team used a machine-learning model to predict which posts would cause outrage. “It was trained on 26,000 tweets posted around 2018 and 2019. We got raters from across the political spectrum, we taught them what we meant by outrage, and got them to label the data we later used to train our model,” Brady says.

The purpose of the model was to predict whether a message was an expression of moral outrage, an emotional state defined in the study as “a mixture of anger and disgust triggered by perceived moral transgressions.” After training, the AI was effective. “It performed as good as humans,” Brady claims. The Facebook data was a bit trickier because the team did not have access to comments; all they had to work with were reactions, so the reaction the team chose as a proxy for outrage was anger. Once the data was sorted into outrageous and non-outrageous categories, Brady and his colleagues went on to determine whether the content was trustworthy news or misinformation.
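
For readers unfamiliar with this kind of setup, here is a minimal sketch of a supervised text classifier trained on rater-labeled posts (toy data and a deliberately simple model; the study’s actual classifier and its 26,000-tweet training set are far more sophisticated):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Label 1 = raters judged the post to express moral outrage, 0 = they did not.
train_texts = [
    "This is an absolute disgrace, they should be ashamed",
    "How dare they lie to us like this, disgusting",
    "Lovely weather for a bike ride today",
    "Posting my favorite pasta recipe later",
]
train_labels = [1, 1, 0, 0]

outrage_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
outrage_model.fit(train_texts, train_labels)

# Once trained, the model can score posts it has never seen.
print(outrage_model.predict(["They should all be ashamed of themselves"]))
```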

“We took what is now the most widely used approach in the science of misinformation, which is a domain classification approach,” Brady says. The process boiled down to compiling a list of domains with very high and very low trustworthiness based on work done by fact-checking organizations. This way, for example, The Chicago Sun-Times was classified as trustworthy; Breitbart, not so much. “One of the issues there is that you could have a source that produces misinformation which one time produced a true story. We accepted that. We went with statistics and general rules,” Brady acknowledged. His team confirmed that sources classified in the study as misinformation produced news that was fact-checked as false six to eight times more often than reliable domains, which Brady’s team thought was good enough to work with.
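
The domain-classification step itself boils down to a lookup, as in this sketch (a toy trust list; the lists used in the study, compiled from fact-checking organizations, cover far more domains):

```python
from urllib.parse import urlparse

TRUSTED = {"chicago.suntimes.com", "apnews.com"}
LOW_TRUST = {"breitbart.com"}

def classify_source(url):
    """Label a shared link by the trustworthiness rating of its domain."""
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    if domain in TRUSTED:
        return "trustworthy"
    if domain in LOW_TRUST:
        return "misinformation-prone"
    return "unrated"

print(classify_source("https://www.breitbart.com/politics/some-story/"))
```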

Finally, the researchers started analyzing the data to answer questions like whether misinformation sources evoke more outrage, whether outrageous news was shared more often than non-outrageous news, and finally, what reasons people had for sharing outrageous content. And that’s when the idealized picture of honest, truthful citizens who shared misinformation just because they were too distracted to recognize it started to crack.

Going with the flow

The Facebook and Twitter data analyzed by Brady’s team revealed that misinformation evoked more outrage than trustworthy news. At the same time, people were way more likely to share outrageous content, regardless of whether it was misinformation or not. Putting those two trends together led the team to conclude outrage primarily boosted the spread of fake news since reliable sources usually produced less outrageous content.

“What we know about human psychology is that our attention is drawn to things rooted in deep biases shaped by evolutionary history,” Brady says. Those things are emotional content, surprising content, and especially, content that is related to the domain of morality. “Moral outrage is expressed in response to perceived violations of moral norms. This is our way of signaling to others that the violation has occurred and that we should punish the violators. This is done to establish cooperation in the group,” Brady explains.

This is why outrageous content has an advantage in the social media attention economy. It stands out, and standing out is a precursor to sharing. But there are other reasons we share outrageous content. “It serves very particular social functions,” Brady says. “It’s a cheap way to signal group affiliation or commitment.”

Cheap, however, didn’t mean completely free. The team found that the penalty for sharing misinformation, outrageous or not, was loss of reputation—spewing nonsense doesn’t make you look good, after all. The question was whether people really shared fake news because they failed to identify it as such or if they just considered signaling their affiliation was more important.

Flawed human nature

Brady’s team designed two behavioral experiments in which 1,475 people were presented with a selection of fact-checked news stories curated to include both outrageous and non-outrageous content, drawn from both reliable sources and misinformation. In both experiments, the participants were asked to rate how outrageous the headlines were.

The second task was different, though. In the first experiment, people were simply asked to rate how likely they were to share a headline, while in the second they were asked to determine if the headline was true or not.

It turned out that most people could distinguish between true and fake news. Yet they were willing to share outrageous news regardless of whether it was true or not—a result in line with the earlier findings from the Facebook and Twitter data. Many participants were perfectly OK with sharing outrageous headlines, even though they were fully aware those headlines were misinformation.

Brady pointed to an example from the recent campaign, when a reporter pushed J.D. Vance about false claims regarding immigrants eating pets. “When the reporter pushed him, he implied that yes, it was fabrication, but it was outrageous and spoke to the issues his constituents were mad about,” Brady says. These experiments show that this kind of dishonesty is not exclusive to politicians running for office—people do this on social media all the time.

The urge to signal a moral stance quite often takes precedence over truth, but misinformation is not exclusively due to flaws in human nature. “One thing this study was not focused on was the impact of social media algorithms,” Brady notes. Those algorithms usually boost content that generates engagement, and we tend to engage more with outrageous content. This, in turn, incentivizes people to make their content more outrageous to get this algorithmic boost.

Science, 2024.  DOI: 10.1126/science.adl2829

Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.

Idaho health district abandons COVID shots amid flood of anti-vaccine nonsense

Slippery slope

In the hearing, board member Jennifer Riebe (who voted to keep COVID-19 vaccinations available) worried about the potential of a slippery slope.

“My concern with this is the process because if this board and six county commissioners and one physician is going to make determinations on every single vaccine and pharmaceutical that we administer, I’m not comfortable with that,” she said, according to Boise State Public Radio. “It may be COVID now, maybe we’ll go down the same road with the measles vaccine or the shingles vaccine coverage.”

Board Chair Kelly Aberasturi, who also voted to keep the vaccines, argued that it should be a choice by individuals and their doctors, who sometimes refer their patients to the district for COVID shots. “So now, you’re telling me that I have the right to override that doctor? Because I know more than he does?” Aberasturi said.

“It has to do with the right of the individual to make that decision on their own. Not for me to dictate to them what they will do. Sorry, but this pisses me off,” he added.

According to Boise State Public Radio, the district had already received 50 doses of COVID-19 vaccine at the time of the vote, which were slated to go to residents of a skilled nursing facility.

The situation in the southwest district may not be surprising given the state’s overall standing on vaccination: Idaho has the lowest kindergarten vaccination rates in the country, with coverage of key vaccinations sitting at around 79 percent to 80 percent, according to a recent analysis by the Centers for Disease Control and Prevention. The coverage is far lower than the 95 percent target set by health experts. That’s the level that would block vaccine-preventable diseases from readily spreading through a population. The target is out of reach for Idaho as a whole, which also has the highest vaccination exemption rate in the country, at 14.3 percent. Even if the state managed to vaccinate all non-exempt children, the coverage rate would only reach 85.7 percent, missing the 95 percent target by nearly 10 percentage points.

Toxic X users sabotage Community Notes that could derail disinfo, report says


It’s easy for biased users to bury accurate Community Notes, report says.

What’s the point of recruiting hundreds of thousands of X users to fact-check misleading posts before they go viral if those users’ accurate Community Notes are never displayed?

That’s the question the Center for Countering Digital Hate (CCDH) is asking after digging through a million notes in a public X dataset to find out how many misleading claims spreading widely on X about the US election weren’t quickly fact-checked.

In a report, the CCDH flagged 283 misleading X posts that fueled the spread of election disinformation this year and never displayed a Community Note. Of these, 74 percent were found to have accurate notes proposed but ultimately never displayed—apparently due to toxic X users gaming Community Notes to hide information they politically disagree with.

On X, Community Notes are only displayed if a broad spectrum of X users with diverse viewpoints agree that the post is “helpful.” But the CCDH found that it’s seemingly easy to hide an accurate note that challenges a user’s bias by simply refusing to rate it or downranking it into oblivion.
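
To see why that consensus requirement is easy to exploit, consider a deliberately simplified sketch of the “diverse agreement” rule (illustrative clusters and threshold only; X’s actual open-source scoring algorithm is far more involved):

```python
from collections import defaultdict

def note_is_shown(ratings, threshold=0.5, min_clusters=2):
    """ratings: list of (viewpoint_cluster, rated_helpful) pairs.

    The note is displayed only if raters from at least `min_clusters`
    different clusters each mostly rate it helpful, which means one side
    can bury an accurate note by downranking it or simply not rating it.
    """
    by_cluster = defaultdict(list)
    for cluster, helpful in ratings:
        by_cluster[cluster].append(helpful)
    if len(by_cluster) < min_clusters:
        return False
    return all(sum(votes) / len(votes) > threshold
               for votes in by_cluster.values())

print(note_is_shown([("A", True), ("A", True), ("A", True)]))   # False: one-sided support
print(note_is_shown([("A", True), ("A", True), ("B", False)]))  # False: downranked by the other side
print(note_is_shown([("A", True), ("B", True), ("B", True)]))   # True: cross-cluster agreement
```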

“The problem is that for a Community Note to be shown, it requires consensus, and on polarizing issues, that consensus is rarely reached,” the CCDH’s report said. “As a result, Community Notes fail precisely where they are needed most.”

Among the most-viewed misleading claims where X failed to add accurate notes were posts spreading lies that “welfare offices in 49 states are handing out voter registration applications to illegal aliens,” the Democratic party is importing voters, most states don’t require ID to vote, and both electronic and mail-in voting are “too risky.”

These unchecked claims were viewed by tens of millions of users, the CCDH found.

One false narrative—that Dems import voters—was amplified in a post from Elon Musk that got 51 million views. In the background, proposed notes sought to correct the disinformation by noting that “lawful permanent residents (green card holders)” cannot vote in US elections until they’re granted citizenship after living in the US for five years. But even these seemingly straightforward citations to government resources did not pass muster for users politically motivated to hide the note.

This appears to be a common pattern on X, the CCDH suggested, and Musk is seemingly a multiplier. In July, the CCDH reported that Musk’s misleading posts about the 2024 election in particular were viewed more than a billion times without any notes ever added.

The majority of the misleading claims in the CCDH’s report seemed to come from conservative users. But X also failed to check a claim that Donald Trump “is no longer eligible to run for president and must drop out of the race immediately.” Posts spreading that false claim got 1.4 million views, the CCDH reported, and that content moderation misstep risked dampening Trump’s voter turnout at a time when Musk was campaigning for him.

Musk has claimed that while Community Notes will probably never be “perfect,” the fact-checking effort aspires to “be by far the best source of truth on Earth.” The CCDH has alleged that, actually, “most Community Notes are never seen by users, allowing misinformation to spread unchecked.”

Even X’s own numbers on notes seem low

On the Community Notes X account, X acknowledges that “speed is key to notes’ effectiveness—the faster they appear, the more people see them, and the greater effect they have.”

On the day before the CCDH report dropped, X announced that “lightning notes” have been introduced to deliver fact-checks in as little as 15 minutes after a misleading post is written.

“Ludicrously fast? Now reality!” X proclaimed.

Currently, more than 800,000 X users contribute to Community Notes, and with the lightning notes update, X can calculate their scores more quickly. That efficiency, X said, will either increase the number of content removals or reduce the sharing of false or misleading posts.

But while X insists Community Notes are working faster than ever to reduce the spread of harmful content, the number of rapidly noted posts that X reports seems low. On a platform with an estimated 429 million daily active users worldwide, only about 400 notes were displayed within an hour of a post going live over the past two weeks. For notes that took longer—which the CCDH suggested is the majority when the fact-check concerns a controversial topic—only about 60 more were displayed.

In July, an international NGO that monitors human rights abuses and corruption, Global Witness, found 45 “bot-like accounts that collectively produced around 610,000 posts” in a two-month period this summer on X, “amplifying racist and sexualized abuse, conspiracy theories, and climate disinformation” ahead of the UK general election.

Those accounts “posted prolifically during the UK general election,” then moved “to rapidly respond to emerging new topics amplifying divisive content,” including the US presidential race.

The CCDH reported that even when misleading posts get fact-checked, the original posts on average are viewed 13 times more than the note is seen, suggesting the majority of damage is done in the time before the note is posted.

Of course, content moderators are often called out for moving too slowly to remove harmful content, a Bloomberg opinion piece praising Community Notes earlier this year noted. That piece pointed to studies showing that “crowdsourcing worked just as well” as professional fact checkers “when assessing the accuracy of news stories,” concluding that “it may be impossible for any social media company to keep up, which is why it’s important to explore other approaches.”

X has said that it’s “common to see Community Notes appearing days faster than traditional fact checks,” while promising that more changes are coming to get notes ranked as “helpful” more quickly.

X risks becoming an echo chamber, data shows

Data that the market intelligence firm Sensor Tower recently shared with Ars offers a potential clue as to why the CCDH is seeing so many accurate notes that are never voted as “helpful.”

According to Sensor Tower’s estimates, global daily active users on X are down by 28 percent in September 2024, compared to October 2022 when Elon Musk took over Twitter. While many users have fled the platform, those who remained are seemingly more engaged than ever—with global engagement up by 8 percent in the same time period. (Rivals like TikTok and Facebook saw much lower growth, up by 3 and 1 percent, respectively.)

This paints a picture of X at risk of becoming an echo chamber, as loyal users engage more with a platform where misleading posts can easily go unchecked and buried notes can warp discussion in Musk’s “digital town square.”

When Musk initially bought Twitter, one of his earliest moves was to make drastic cuts to the trust and safety teams chiefly responsible for content-moderation decisions. He then expanded the role of Twitter’s Community Notes to substitute for trust and safety team efforts, where before Community Notes was viewed as merely complementary to broader monitoring.

The CCDH says that was a mistake and that the best way to ensure that X is safe for users is to build back X’s trust and safety teams.

“Our social media feeds have no neutral ‘town square’ for rational debate,” the CCDH report said. “In reality, it is messy, complicated, and opaque rules and systems make it impossible for all voices to be heard. Without checks and balances, proper oversight, and well-resourced trust and safety teams in place, X cannot rely on Community Notes to keep X safe.”

More transparency is needed on Community Notes

X and the CCDH have long clashed, with X unsuccessfully suing to seemingly silence the CCDH’s reporting on hate speech on X, which X claimed caused tens of millions in advertising losses. During that legal battle, the CCDH called Musk a “thin-skinned tyrant” who could not tolerate independent research on his platform. And a federal judge agreed that X was clearly suing to “punish” and censor the CCDH, dismissing X’s lawsuit last March.

Since then, the CCDH has resumed its reporting on X. In the most recent report, the CCDH urged that X needed to be more transparent about Community Notes, arguing that “researchers must be able to freely, without intimidation, study how disinformation and unchecked claims spread across platforms.”

The research group also recommended remedies, including continuing to advise that advertisers “evaluate whether their budgets are funding the misleading election claims identified in this report.”

That could lead brands to continue withholding spending on X, which is seemingly already happening. Sensor Tower estimated that “72 out of the top 100 spending US advertisers on X from October 2022 have ceased spending on the platform as of September 2024.” And compared to the first half of 2022, X’s ad revenue from the top 100 advertisers during the first half of 2024 was down 68 percent.

Most drastically, the CCDH recommended that US lawmakers reform Section 230 of the Communications Decency Act “to provide an avenue for accountability” by mandating risk assessments of social media platforms. That would “expose the risk posed by disinformation” and enable lawmakers to “prescribe possible mitigation measures including a comprehensive moderation strategy.”

Globally, the CCDH noted, some regulators have the power to investigate the claims in the CCDH’s report, including the European Commission under the Digital Services Act and the UK’s Ofcom under the Online Safety Act.

“X and social media companies as an industry have been able to avoid taking responsibility,” the CCDH’s report said, offering only “unreliable self-regulation.” Apps like X “thus invent inadequate systems like Community Notes because there is no legal mechanism to hold them accountable for their harms,” the CCDH’s report warned.

Perhaps Musk will be open to the CCDH’s suggestions. In the past, Musk has said that “suggestions for improving Community Notes are… always… much appreciated.”

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

People think they already know everything they need to make decisions

The obvious difference was the decisions they made. In the group that had read the article biased in favor of merging the schools, nearly 90 percent favored the merger. In the group that had read the article that was biased by including only information in favor of keeping the schools separate, less than a quarter favored the merger.

The other half of the experimental population wasn’t given the survey immediately. Instead, they were given the article that they hadn’t read—the one that favored the opposite position of the article that they were initially given. You can view this group as doing the same reading as the control group, just doing so successively rather than in a single go. In any case, this group’s responses looked a lot like the control’s, with people roughly evenly split between merger and separation. And they became less confident in their decision.

It’s not too late to change your mind

There is one bit of good news about this. When initially forming hypotheses about the behavior they expected to see, Gehlbach, Robinson, and Fletcher suggested that people would remain committed to their initial opinions even after being exposed to a more complete picture. However, there was no evidence of this sort of stubbornness in these experiments. Instead, once people were given all the potential pros and cons of the options, they acted as if they had that information the whole time.

But that shouldn’t obscure the fact that there’s a strong cognitive bias at play here. “Because people assume they have adequate information, they enter judgment and decision-making processes with less humility and more confidence than they might if they were worrying whether they knew the whole story or not,” Gehlbach, Robinson, and Fletcher write.

This is especially problematic in the current media environment. Many outlets have been created with the clear intent of exposing their viewers to only a partial view of the facts—or, in a number of cases, the apparent intent of spreading misinformation. The new work clearly indicates that these efforts can have a powerful effect on beliefs, even if accurate information is available from various sources.

PLOS ONE, 2024. DOI: 10.1371/journal.pone.0310216

Why trolls, extremists, and others spread conspiracy theories they don’t believe


Some just want to promote conflict, cause chaos, or even just get attention.

There has been a lot of research on the types of people who believe conspiracy theories, and their reasons for doing so. But there’s a wrinkle: My colleagues and I have found that there are a number of people sharing conspiracies online who don’t believe their own content.

They are opportunists. These people share conspiracy theories to promote conflict, cause chaos, recruit and radicalize potential followers, make money, harass, or even just to get attention.

There are several types of this sort of conspiracy-spreader trying to influence you.

Coaxing conspiracists—the extremists

In our chapter of a new book on extremism and conspiracies, my colleagues and I discuss evidence that certain extremist groups intentionally use conspiracy theories to entice adherents. They are looking for a so-called “gateway conspiracy” that will lure someone into talking to them, and then be vulnerable to radicalization. They try out multiple conspiracies to see what sticks.

Research shows that people with positive feelings for extremist groups are significantly more likely to knowingly share false content online. For instance, the disinformation-monitoring company Blackbird.AI tracked over 119 million COVID-19 conspiracy posts from May 2020, when activists were protesting pandemic restrictions and lockdowns in the United States. Of these, over 32 million tweets were identified as high on their manipulation index. Those posted by various extremist groups were particularly likely to carry markers of insincerity. For instance, one group, the Boogaloo Bois, generated over 610,000 tweets, of which 58 percent were intent on incitement and radicalization.

You can also just take the word of the extremists themselves. When the Boogaloo Bois militia group showed up at the Jan. 6, 2021, insurrection, for example, members stated they didn’t actually endorse the stolen election conspiracy but were there to “mess with the federal government.” Aron McKillips, a Boogaloo member arrested in 2022 as part of an FBI sting, is another example of an opportunistic conspiracist. In his own words: “I don’t believe in anything. I’m only here for the violence.”

Combative conspiracists—the disinformants

Governments love conspiracy theories. The classic example of this is the 1903 document known as the “Protocols of the Elders of Zion,” in which Russia constructed an enduring myth about Jewish plans for world domination. More recently, China used artificial intelligence to construct a fake conspiracy theory about the August 2023 Maui wildfire.

Often the behavior of the conspiracists gives them away. Years later, Russia eventually confessed to lying about AIDS in the 1980s. But even before admitting to the campaign, its agents had forged documents to support the conspiracy. Forgeries aren’t created by accident. They knew they were lying.

As for other conspiracies it hawks, Russia is famous for taking both sides in any contentious issue, spreading lies online to foment conflict and polarization. People who actually believe in a conspiracy tend to stick to a side. Meanwhile, Russians knowingly deploy what one analyst has called a “fire hose of falsehoods.”

Likewise, while Chinese officials were spreading conspiracies about American roots of the coronavirus in 2020, China’s National Health Commission was circulating internal reports tracing the source to a pangolin.

Chaos conspiracists—the trolls

In general, research has found that individuals with what scholars call a high “need for chaos” are more likely to indiscriminately share conspiracies, regardless of belief. These are the everyday trolls who share false content for a variety of reasons, none of which are benevolent. Dark personalities and dark motives are prevalent.

For instance, in the wake of the first assassination attempt on Donald Trump, a false accusation arose online about the identity of the shooter and his motivations. The person who first posted this claim knew he was making up a name and stealing a photo. The intent was apparently to harass the Italian sports blogger whose photo was stolen. This fake conspiracy was seen over 300,000 times on the social platform X and picked up by multiple other conspiracists eager to fill the information gap about the assassination attempt.

Commercial conspiracists—the profiteers

Often when I encounter a conspiracy theory I ask: “What does the sharer have to gain? Are they telling me this because they have an evidence-backed concern, or are they trying to sell me something?”

When researchers tracked down the 12 people primarily responsible for the vast majority of anti-vaccine conspiracies online, most of them had a financial investment in perpetuating these misleading narratives.

Some people who fall into this category might truly believe their conspiracy, but their first priority is finding a way to make money from it. For instance, conspiracist Alex Jones bragged that his fans would “buy anything.” Fox News and its on-air personality Tucker Carlson publicized lies about voter fraud in the 2020 election to keep viewers engaged, while behind-the-scenes communications revealed they did not endorse what they espoused.

Profit doesn’t just mean money. People can also profit from spreading conspiracies if it garners them influence or followers, or protects their reputation. Even social media companies are reluctant to combat conspiracies because they know they attract more clicks.

Common conspiracists—the attention-getters

You don’t have to be a profiteer to like some attention. Plenty of regular people share content even when they doubt its veracity or know it is false.

These posts are common: Friends, family, and acquaintances share the latest conspiracy theory with “could this be true?” queries or “seems close enough to the truth” taglines. Their accompanying comments show that sharers are, at minimum, unsure about the truthfulness of the content, but they share nonetheless. Many share without even reading past a headline. Still others, approximately 7 percent to 20 percent of social media users, share despite knowing the content is false. Why?

Some claim to be sharing to inform people “just in case” it is true. But this sort of “sound the alarm” reason actually isn’t that common.

Often, folks are just looking for attention or other personal benefit. They don’t want to miss out on a hot-topic conversation. They want the likes and shares. They want to “stir the pot.” Or they just like the message and want to signal to others that they share a common belief system.

For frequent sharers, it just becomes a habit.

The dangers of spreading lies

Over time, the opportunists may end up convincing themselves. After all, they will eventually have to come to terms with why they are engaging in unethical and deceptive, if not destructive, behavior. They may have a rationale for why lying is good. Or they may convince themselves that they aren’t lying by claiming they thought the conspiracy was true all along.

It’s important to be cautious and not believe everything you read. These opportunists don’t even believe everything they write—and share. But they want you to. So be aware that the next time you share an unfounded conspiracy theory, online or offline, you could be helping an opportunist. They don’t buy it, so neither should you. Be aware before you share. Don’t be what these opportunists derogatorily refer to as “a useful idiot.”

H. Colleen Sinclair is Associate Research Professor of Social Psychology at Louisiana State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation is an independent source of news and views, sourced from the academic and research community. Our team of editors work with these experts to share their knowledge with the wider public. Our aim is to allow for better understanding of current affairs and complex issues, and hopefully improve the quality of public discourse on them.

Creator of fake Kamala Harris video Musk boosted sues Calif. over deepfake laws

After California passed laws cracking down on AI-generated deepfakes of election-related content, a popular conservative influencer promptly sued, accusing California of censoring protected speech, including satire and parody.

In his complaint, Christopher Kohls—who is known as “Mr Reagan” on YouTube and X (formerly Twitter)—said that he was suing “to defend all Americans’ right to satirize politicians.” He claimed that California laws, AB 2655 and AB 2839, were urgently passed after X owner Elon Musk shared a partly AI-generated parody video on the social media platform that Kohls created to “lampoon” presidential hopeful Kamala Harris.

AB 2655, known as the “Defending Democracy from Deepfake Deception Act,” prohibits creating “with actual malice” any “materially deceptive audio or visual media of a candidate for elective office with the intent to injure the candidate’s reputation or to deceive a voter into voting for or against the candidate, within 60 days of the election.” It requires social media platforms to block or remove any reported deceptive material and label “certain additional content” deemed “inauthentic, fake, or false” to prevent election interference.

The other law at issue, AB 2839, titled “Elections: deceptive media in advertisements,” bans anyone from “knowingly distributing an advertisement or other election communication” with “malice” that “contains certain materially deceptive content” within 120 days of an election in California and, in some cases, within 60 days after an election.

Both bills were signed into law on September 17, and Kohls filed his complaint that day, alleging that both must be permanently blocked as unconstitutional.

Elon Musk called out for boosting Kohls’ video

Kohls’ video that Musk shared seemingly would violate these laws by using AI to make Harris appear to give speeches that she never gave. The manipulated audio sounds like Harris, who appears to be mocking herself as a “diversity hire” and claiming that any critics must be “sexist and racist.”

“Making fun of presidential candidates and other public figures is an American pastime,” Kohls said, defending his parody video. He pointed to a long history of political cartoons and comedic impressions of politicians, claiming that “AI-generated commentary, though a new mode of speech, falls squarely within this tradition.”

While Kohls’ post was clearly marked “parody” in the YouTube title and in his post on X, that “parody” label did not carry over when Musk re-posted the video. This lack of a parody label on Musk’s post—which got approximately 136 million views, roughly twice as many as Kohls’ post—set off California governor Gavin Newsom, who immediately blasted Musk’s post and vowed on X to make content like Kohls’ video “illegal.”

In response to Newsom, Musk poked fun at the governor, posting that “I checked with renowned world authority, Professor Suggon Deeznutz, and he said parody is legal in America.” For his part, Kohls put up a second parody video targeting Harris, calling Newsom a “bully” in his complaint and claiming that he had to “punch back.”

Shortly after these online exchanges, California lawmakers allegedly rushed to back the governor, Kohls’ complaint said. They allegedly amended the deepfake bills to ensure that Kohls’ video would be banned when the bills were signed into law, replacing a broad exception for satire in one law with a narrower safe harbor that Kohls claimed would chill humorists everywhere.

“For videos,” his complaint said, disclaimers required under AB 2839 must “appear for the duration of the video” and “must be in a font size ‘no smaller than the largest font size of other text appearing in the visual media.'” For a satirist like Kohls who uses large fonts to optimize videos for mobile, this “would require the disclaimer text to be so large that it could not fit on the screen,” his complaint said.

On top of seeming impractical, the disclaimers would “fundamentally” alter “the nature of his message” by removing the comedic effect for viewers by distracting from what allegedly makes the videos funny—”the juxtaposition of over-the-top statements by the AI-generated ‘narrator,’ contrasted with the seemingly earnest style of the video as if it were a genuine campaign ad,” Kohls’ complaint alleged.

Imagine watching Saturday Night Live with prominent disclaimers taking up your TV screen, his complaint suggested.

It’s possible that Kohls’ concerns about AB 2839 are unwarranted. Newsom spokesperson Izzy Gardon told Politico that Kohls’ parody label on X was good enough to clear him of liability under the law.

“Requiring them to use the word ‘parody’ on the actual video avoids further misleading the public as the video is shared across the platform,” Gardon said. “It’s unclear why this conservative activist is suing California. This new disclosure law for election misinformation isn’t any more onerous than laws already passed in other states, including Alabama.”

“Fascists”: Elon Musk responds to proposed fines for disinformation on X

Being responsible is so hard

“Elon Musk’s had more positions on free speech than the Kama Sutra,” says lawmaker.

Elon Musk has lambasted Australia’s government as “fascists” over proposed laws that could levy substantial fines on social media companies if they fail to comply with rules to combat the spread of disinformation and online scams.

The billionaire owner of social media site X posted the word “fascists” on Friday in response to the bill, which would strengthen the Australian media regulator’s ability to hold companies responsible for the content on their platforms and levy potential fines of up to 5 percent of global revenue. The bill, which was proposed this week, has yet to be passed.

Musk’s comments drew rebukes from senior Australian politicians, with Stephen Jones, Australia’s finance minister, telling national broadcaster ABC that it was “crackpot stuff” and the legislation was a matter of sovereignty.

Bill Shorten, the former leader of the Labor Party and a cabinet minister, accused the billionaire of only championing free speech when it was in his commercial interests. “Elon Musk’s had more positions on free speech than the Kama Sutra,” Shorten said in an interview with Australian radio.

The exchange marks the second time that Musk has confronted Australia over technology regulation.

In May, he accused the country’s eSafety Commissioner of censorship after the government agency took X to court in an effort to force it to remove graphic videos of a stabbing attack in Sydney. A court later denied the eSafety Commissioner’s application.

Musk has also been embroiled in a bitter dispute with authorities in Brazil, where the Supreme Court ruled last month that X should be blocked over its failure to remove or suspend certain accounts accused of spreading misinformation and hateful content.

Australia has been at the forefront of efforts to regulate the technology sector, pitting it against some of the world’s largest social media companies.

This week, the government pledged to introduce a minimum age limit for social media use to tackle “screen addiction” among young people.

In March, Canberra threatened to take action against Meta after the owner of Facebook and Instagram said it would withdraw from a world-first deal to pay media companies to link to news stories.

The government also introduced new data privacy measures to parliament on Thursday that would impose hefty fines and potential jail terms of up to seven years for people found guilty of “doxxing” individuals or groups.

Prime Minister Anthony Albanese’s government had pledged to outlaw doxxing—the publication of personal details online for malicious purposes—this year after the details of a private WhatsApp group containing hundreds of Jewish Australians were published online.

Australia is one of the first countries to pursue laws outlawing doxxing. It is also expected to introduce a tranche of laws in the coming months to regulate how personal data can be used by artificial intelligence.

“These reforms give more teeth to the regulation,” said Monique Azzopardi at law firm Clayton Utz.

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

RFK Jr’s anti-vaccine group can’t sue Meta for agreeing with CDC, judge rules

The Children’s Health Defense (CHD), an anti-vaccine group founded by Robert F. Kennedy Jr, has once again failed to convince a court that Meta acted as a state agent when censoring the group’s posts and ads on Facebook and Instagram.

In his opinion affirming a lower court’s dismissal, US Ninth Circuit Court of Appeals Judge Eric Miller wrote that CHD failed to prove that Meta acted as an arm of the government in censoring posts. Concluding that Meta’s right to censor views that the platforms find “distasteful” is protected by the First Amendment, Miller denied CHD’s requested relief, which had included an injunction and civil monetary damages.

“Meta evidently believes that vaccines are safe and effective and that their use should be encouraged,” Miller wrote. “It does not lose the right to promote those views simply because they happen to be shared by the government.”

CHD told Reuters that the group “was disappointed with the decision and considering its legal options.”

The group first filed the complaint in 2020, arguing that Meta colluded with government officials to censor protected speech by labeling anti-vaccine posts as misleading or removing and shadowbanning CHD posts. This caused CHD’s traffic on the platforms to plummet, CHD claimed, and ultimately, its pages were removed from both platforms.

However, critically, Miller wrote, CHD did not allege that “the government was actually involved in the decisions to label CHD’s posts as ‘false’ or ‘misleading,’ the decision to put the warning label on CHD’s Facebook page, or the decisions to ‘demonetize’ or ‘shadow-ban.'”

“CHD has not alleged facts that allow us to infer that the government coerced Meta into implementing a specific policy,” Miller wrote.

Instead, Meta “was entitled to encourage” various “input from the government,” justifiably seeking vaccine-related information provided by the World Health Organization (WHO) and the US Centers for Disease Control and Prevention (CDC) as it navigated complex content moderation decisions throughout the pandemic, Miller wrote.

Therefore, Meta’s actions against CHD were due to “Meta’s own ‘policy of censoring,’ not any provision of federal law,” Miller concluded. “The evidence suggested that Meta had independent incentives to moderate content and exercised its own judgment in so doing.”

None of CHD’s theories that Meta coordinated with officials to deprive “CHD of its constitutional rights” were plausible, Miller wrote, whereas the “innocent alternative”—”that Meta adopted the policy it did simply because” CEO Mark Zuckerberg and Meta “share the government’s view that vaccines are safe and effective”—appeared “more plausible.”

Meta “does not become an agent of the government just because it decides that the CDC sometimes has a point,” Miller wrote.

Equally unpersuasive were CHD’s notions that Section 230 immunity—which shields platforms from liability for third-party content—“‘removed all legal barriers’ to the censorship of vaccine-related speech,” such that “Meta’s restriction of that content should be considered state action.”

“That Section 230 operates in the background to immunize Meta if it chooses to suppress vaccine misinformation—whether because it shares the government’s health concerns or for independent commercial reasons—does not transform Meta’s choice into state action,” Miller wrote.

One judge dissented over Section 230 concerns

In his dissenting opinion, Judge Daniel Collins defended CHD’s Section 230 claim, however, suggesting that the appeals court erred and should have granted CHD injunctive and declaratory relief from alleged censorship. CHD CEO Mary Holland told The Defender that the group was pleased the decision was not unanimous.

According to Collins, who like Miller is a Trump appointee, Meta could never have built its massive social platforms without Section 230 immunity, which grants platforms the ability to broadly censor viewpoints they disfavor.

It was “important to keep in mind” that “the vast practical power that Meta exercises over the speech of millions of others ultimately rests on a government-granted privilege to which Meta is not constitutionally entitled,” Collins wrote. And this power “makes a crucial difference in the state-action analysis.”

As Collins sees it, CHD could plausibly allege that Meta’s communications with government officials about vaccine-related misinformation targeted specific users, like the “disinformation dozen” that includes both CHD and Kennedy. In that case, it appears possible to Collins that Section 230 provides a potential opportunity for government to target speech that it disfavors through mechanisms provided by the platforms.

“Having specifically and purposefully created an immunized power for mega-platform operators to freely censor the speech of millions of persons on those platforms, the Government is perhaps unsurprisingly tempted to then try to influence particular uses of such dangerous levers against protected speech expressing viewpoints the Government does not like,” Collins warned.

He further argued that “Meta’s relevant First Amendment rights” do not “give Meta an unbounded freedom to work with the Government in suppressing speech on its platforms.” Disagreeing with the majority, he wrote that “in this distinctive scenario, applying the state-action doctrine promotes individual liberty by keeping the Government’s hands away from the tempting levers of censorship on these vast platforms.”

The majority agreed, however, that while Section 230 immunity “is undoubtedly a significant benefit to companies like Meta,” lawmakers’ threats to weaken Section 230 did not suggest that Meta’s anti-vaccine policy was coerced state action.

“Many companies rely, in one way or another, on a favorable regulatory environment or the goodwill of the government,” Miller wrote. “If that were enough for state action, every large government contractor would be a state actor. But that is not the law.”

Push alerts from TikTok include fake news, expired tsunami warning

Broken

News-style notifications include false claims about Taylor Swift, other misleading info.

TikTok has been sending inaccurate and misleading news-style alerts to users’ phones, including a false claim about Taylor Swift and a weeks-old disaster warning, intensifying fears about the spread of misinformation on the popular video-sharing platform.

Among alerts seen by the Financial Times was a warning about a tsunami in Japan, labeled “BREAKING,” that was posted in late January, three weeks after an earthquake had struck.

Other notifications falsely stated that “Taylor Swift Canceled All Tour Dates in What She Called ‘Racist Florida’” and highlighted a five-year “ban” for a US baseball player that originated as an April Fools’ Day prank.

The notifications, which sometimes contain summaries from user-generated posts, pop up on screen in the style of a news alert. Researchers say that format, adopted widely to boost engagement through personalized video recommendations, may make users less critical of the veracity of the content and open them up to misinformation.

“Notifications have this additional stamp of authority,” said Laura Edelson, a researcher at Northeastern University, in Boston. “When you get a notification about something, it’s often assumed to be something that has been curated by the platform and not just a random thing from your feed.”

Social media groups such as TikTok, X, and Meta are facing greater scrutiny to police their platforms, particularly in a year of major national elections, including November’s vote in the US. The rise of artificial intelligence adds to the pressure given that the fast-evolving technology makes it quicker and easier to spread misinformation, including through synthetic media, known as deepfakes.

TikTok, which has more than 1 billion global users, has repeatedly promised to step up its efforts to counter misinformation in response to pressure from governments around the world, including the UK and EU. In May, the video-sharing platform committed to becoming the first major social media network to label some AI-generated content automatically.

The false claim about Swift canceling her tour in Florida, which also circulated on X, mirrored an article published in May in the satirical newspaper The Dunning-Kruger Times, although this article was not linked or directly referred to in the TikTok post.

At least 20 people said on a comment thread that they had clicked on the notification and were directed to a video on TikTok repeating the claim, even though they did not follow the account. At least one person in the thread said they initially thought the notification “was a news article.”

Swift is still scheduled to perform three concerts in Miami in October and has not publicly called Florida “racist.”

Another push notification inaccurately stated that a Japanese pitcher who plays for the Los Angeles Dodgers faced a ban from Major League Baseball: “Shohei Ohtani has been BANNED from the MLB for 5 years following his gambling investigation… ”

The words directly matched the description of a post uploaded as an April Fools’ Day prank. Tens of commenters on the original video, however, reported receiving alerts in mid-April. Several said they had initially believed it before they checked other sources.

Users have also reported notifications that appeared to contain news updates but were generated weeks after the event.

One user received an alert on January 23 that read: “BREAKING: A tsunami alert has been issued in Japan after a major earthquake.” The notification appeared to refer to a natural disaster warning issued more than three weeks earlier after an earthquake struck Japan’s Noto peninsula on New Year’s Day.

TikTok said it had removed the specific notifications flagged by the FT.

The alerts appear to automatically scrape the descriptions of posts that are receiving, or are likely to receive, high levels of engagement on the viral video app, owned by China’s ByteDance, researchers said. They seem to be tailored to users’ interests, which means that each one is likely to be limited to a small pool of people.

“The way in which those alerts are positioned, it can feel like the platform is speaking directly to [users] and not just a poster,” said Kaitlyn Regehr, an associate professor of digital humanities at University College London.

TikTok declined to reveal how the app determines which videos to promote through notifications, but the sheer volume of personalized content recommendations means they must be “algorithmically generated,” said Dani Madrid-Morales, co-lead of the University of Sheffield’s Disinformation Research Cluster.

Edelson, who is also co-director of the Cybersecurity for Democracy group, suggested that a responsible push notification algorithm could be weighted towards trusted sources, such as verified publishers or officials. “The question is: Are they choosing a high-traffic thing from an authoritative source?” she said. “Or is this just a high-traffic thing?”
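
Edelson’s suggestion amounts to a simple ranking change: score candidate posts not only by predicted engagement but also by a source-trust weight. Below is a minimal Python sketch of that idea; every name, field, and weight is hypothetical and is not drawn from TikTok’s or any other platform’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    post_id: str
    predicted_engagement: float  # e.g., expected click-through rate
    source_verified: bool        # verified publisher or official account

# Hypothetical trust weights; a real system would use richer source signals.
TRUST_WEIGHT_VERIFIED = 1.0
TRUST_WEIGHT_UNVERIFIED = 0.2

def notification_score(c: Candidate) -> float:
    """Weight raw engagement by how much the source can be trusted."""
    trust = TRUST_WEIGHT_VERIFIED if c.source_verified else TRUST_WEIGHT_UNVERIFIED
    return c.predicted_engagement * trust

def pick_push_candidate(candidates: list[Candidate]) -> Candidate:
    """Send the push alert for the highest trust-weighted candidate,
    not simply the most viral one."""
    return max(candidates, key=notification_score)
```

Under this kind of weighting, a merely high-traffic post from an unverified account would lose out to a moderately popular post from an authoritative source, which is the trade-off Edelson describes.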

Additional reporting by Hannah Murphy in San Francisco and Cristina Criddle in London.

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Push alerts from TikTok include fake news, expired tsunami warning Read More »

elon-musk’s-x-tests-letting-users-request-community-notes-on-bad-posts

Elon Musk’s X tests letting users request Community Notes on bad posts

Continuing to evolve the fact-checking service that launched as Twitter’s Birdwatch, X has announced that Community Notes can now be requested to clarify problematic posts spreading on Elon Musk’s platform.

X’s Community Notes account confirmed late Thursday that, due to “popular demand,” X had launched a pilot test on the web-based version of the platform. The test is active now and the same functionality will be “coming soon” to Android and iOS, the Community Notes account said.

Through the current web-based pilot, if you’re an eligible user, you can click on the “•••” menu on any X post on the web and request fact-checking from one of Community Notes’ top contributors, X explained. If X receives five or more requests within 24 hours of the post going live, a Community Note will be added.

Only X users with verified phone numbers will be eligible to request Community Notes, X said, and to start, users will be limited to five requests a day.

“The limit may increase if requests successfully result in helpful notes, or may decrease if requests are on posts that people don’t agree need a note,” X’s website said. “This helps prevent spam and keep note writers focused on posts that could use helpful notes.”

Once X receives five or more requests for a Community Note within a single day, top contributors with diverse views will be alerted to respond. On X, top contributors are constantly changing, as their notes are voted as either helpful or not. X explained on its site that contributors can be eligible to receive alerts if at least 4 percent of their notes are rated “helpful” and the impact of their notes meets X’s standards.

“A contributor’s Top Writer status can always change as their notes are rated by others,” X’s website said.
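
Taken together, the thresholds X describes here—five requests within 24 hours of a post to trigger alerts, a five-request daily cap per user, and a 4 percent helpful-note rating for top contributors—amount to simple gating logic. The Python sketch below restates those figures as code; the function and variable names are hypothetical and bear no relation to X’s actual implementation.

```python
from datetime import datetime, timedelta

# Thresholds mirror the figures described in the article; structure is illustrative only.
REQUESTS_TO_ALERT = 5            # requests needed within 24 hours of the post going live
DAILY_REQUEST_LIMIT = 5          # per-user cap during the pilot
HELPFUL_RATING_THRESHOLD = 0.04  # at least 4% of a contributor's notes rated "helpful"

def can_request_note(user_requests_today: int, has_verified_phone: bool) -> bool:
    """A user may request a note only with a verified phone number and under the daily cap."""
    return has_verified_phone and user_requests_today < DAILY_REQUEST_LIMIT

def should_alert_contributors(request_times: list[datetime], post_time: datetime) -> bool:
    """Alert top contributors once five or more requests arrive within 24 hours of the post."""
    window_end = post_time + timedelta(hours=24)
    in_window = [t for t in request_times if post_time <= t <= window_end]
    return len(in_window) >= REQUESTS_TO_ALERT

def is_top_contributor(helpful_notes: int, total_rated_notes: int) -> bool:
    """Eligible to receive alerts if at least 4 percent of rated notes are marked helpful."""
    if total_rated_notes == 0:
        return False
    return helpful_notes / total_rated_notes >= HELPFUL_RATING_THRESHOLD
```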

Ultimately, X considers notes helpful if they “contain accurate, high-quality information” and “help inform people’s understanding of the subject matter in posts,” X said on another part of its site. To gauge the former, X said that the platform partners with “professional reviewers” from the Associated Press and Reuters. X also continually monitors whether notes marked helpful by top writers match what general X users marked as helpful.

“We don’t expect all notes to be perceived as helpful by all people all the time,” X’s website said. “Instead, the goal is to ensure that on average notes that earn the status of Helpful are likely to be seen as helpful by a wide range of people from different points of view, and not only be seen as helpful by people from one viewpoint.”

X will also be allowing half of the top contributors to request notes during the pilot phase, which X said will help the platform evaluate “whether it is beneficial for Community Notes contributors to have both the ability to write notes and request notes.”

According to X, the criteria for requesting a note have intentionally been designed to be simple during the pilot stage, but X expects “these criteria to evolve, with the goal that requests are frequently found valuable to contributors, and not noisy.”

It’s hard to tell from the outside looking in how helpful Community Notes are to X users. The most recent Community Notes survey data that X points to is from 2022 when the platform was still called Twitter and the fact-checking service was still called Birdwatch.

That data showed that “on average,” users who saw notes were “20–40 percent less likely to agree with the substance of a potentially misleading Tweet than someone who sees the Tweet alone.” And based on Twitter’s “internal data” at that time, the platform also estimated that “people on Twitter who see notes are, on average, 15–35 percent less likely to Like or Retweet a Tweet than someone who sees the Tweet alone.”

Elon Musk’s X tests letting users request Community Notes on bad posts Read More »

scotus-nixes-injunction-that-limited-biden-admin-contacts-with-social-networks

SCOTUS nixes injunction that limited Biden admin contacts with social networks

On Wednesday, the Supreme Court tossed out claims that the Biden administration coerced social media platforms into censoring users by removing COVID-19 and election-related content.

Complaints alleging that high-ranking government officials were censoring conservatives had previously convinced a lower court to order an injunction limiting the Biden administration’s contacts with platforms. But now that injunction has been overturned, re-opening lines of communication just ahead of the 2024 elections—when officials will once again be closely monitoring the spread of misinformation online targeted at voters.

In a 6–3 vote, the majority ruled that none of the plaintiffs suing—including five social media users and Republican attorneys general in Louisiana and Missouri—had standing. They had alleged that the government had “pressured the platforms to censor their speech in violation of the First Amendment,” demanding an injunction to stop any future censorship.

Plaintiffs might have succeeded if they had instead sought damages for past harms. But in her opinion, Justice Amy Coney Barrett wrote that, partly because the Biden administration seemingly stopped influencing platforms’ content policies in 2022, none of the plaintiffs could show evidence of a “substantial risk that, in the near future, they will suffer an injury that is traceable” to any government official. Thus, they did not seem to face “a real and immediate threat of repeated injury,” Barrett wrote.

“Without proof of an ongoing pressure campaign, it is entirely speculative that the platforms’ future moderation decisions will be attributable, even in part,” to government officials, Barrett wrote, finding that an injunction would do little to prevent future censorship.

Instead, plaintiffs’ claims “depend on the platforms’ actions,” Barrett emphasized, “yet the plaintiffs do not seek to enjoin the platforms from restricting any posts or accounts.”

“It is a bedrock principle that a federal court cannot redress ‘injury that results from the independent action of some third party not before the court,'” Barrett wrote.

Barrett repeatedly noted “weak” arguments raised by plaintiffs, none of which could directly link their specific content removals with the Biden administration’s pressure campaign urging platforms to remove vaccine or election misinformation.

According to Barrett, the lower court initially granting the injunction “glossed over complexities in the evidence,” including the fact that “platforms began to suppress the plaintiffs’ COVID-19 content” before the government pressure campaign began. That’s an issue, Barrett said, because standing to sue “requires a threshold showing that a particular defendant pressured a particular platform to censor a particular topic before that platform suppressed a particular plaintiff’s speech on that topic.”

“While the record reflects that the Government defendants played a role in at least some of the platforms’ moderation choices, the evidence indicates that the platforms had independent incentives to moderate content and often exercised their own judgment,” Barrett wrote.

Barrett was similarly unconvinced by arguments that plaintiffs risk platforms removing future content based on stricter moderation policies that were previously coerced by officials.

“Without evidence of continued pressure from the defendants, the platforms remain free to enforce, or not to enforce, their policies—even those tainted by initial governmental coercion,” Barrett wrote.

Justice Alito: SCOTUS “shirks duty” to defend free speech

Justices Clarence Thomas and Neil Gorsuch joined Samuel Alito in dissenting, arguing that “this is one of the most important free speech cases to reach this Court in years” and that the Supreme Court had an “obligation” to “tackle the free speech issue that the case presents.”

“The Court, however, shirks that duty and thus permits the successful campaign of coercion in this case to stand as an attractive model for future officials who want to control what the people say, hear, and think,” Alito wrote.

Alito argued that the evidence showed that while “downright dangerous” speech was suppressed, so was “valuable speech.” He agreed with the lower court that “a far-reaching and widespread censorship campaign” had been “conducted by high-ranking federal officials against Americans who expressed certain disfavored views about COVID-19 on social media.”

“For months, high-ranking Government officials placed unrelenting pressure on Facebook to suppress Americans’ free speech,” Alito wrote. “Because the Court unjustifiably refuses to address this serious threat to the First Amendment, I respectfully dissent.”

At least one plaintiff who opposed masking and vaccines, Jill Hines, was “indisputably injured,” Alito wrote, arguing that evidence showed she was censored more frequently after officials pressured Facebook into changing its policies.

“Top federal officials continuously and persistently hectored Facebook to crack down on what the officials saw as unhelpful social media posts, including not only posts that they thought were false or misleading but also stories that they did not claim to be literally false but nevertheless wanted obscured,” Alito wrote.

While Barrett and the majority found that platforms were more likely responsible for injury, Alito disagreed, writing that with the threat of antitrust probes or Section 230 amendments, Facebook acted like “a subservient entity determined to stay in the good graces of a powerful taskmaster.”

Alito wrote that the majority was “applying a new and heightened standard” by requiring plaintiffs to “untangle Government-caused censorship from censorship that Facebook might have undertaken anyway.” In his view, it was enough that Hines showed that “one predictable effect of the officials’ action was that Facebook would modify its censorship policies in a way that affected her.”

“When the White House pressured Facebook to amend some of the policies related to speech in which Hines engaged, those amendments necessarily impacted some of Facebook’s censorship decisions,” Alito wrote. “Nothing more is needed. What the Court seems to want are a series of ironclad links.”

“That is regrettable,” Alito said.

SCOTUS nixes injunction that limited Biden admin contacts with social networks Read More »