
Why trolls, extremists, and others spread conspiracy theories they don’t believe


Some just want to promote conflict, cause chaos, or get attention.

Image: a person using an old Mac with a paper bag over his head; the bag has the face of a troll drawn on it.

There has been a lot of research on the types of people who believe conspiracy theories, and their reasons for doing so. But there’s a wrinkle: My colleagues and I have found that there are a number of people sharing conspiracies online who don’t believe their own content.

They are opportunists. These people share conspiracy theories to promote conflict, cause chaos, recruit and radicalize potential followers, make money, harass, or even just to get attention.

There are several types of conspiracy spreaders trying to influence you.

Coaxing conspiracists—the extremists

In our chapter of a new book on extremism and conspiracies, my colleagues and I discuss evidence that certain extremist groups intentionally use conspiracy theories to entice adherents. They are looking for a so-called “gateway conspiracy” that will lure someone into talking to them, and then be vulnerable to radicalization. They try out multiple conspiracies to see what sticks.

Research shows that people with positive feelings for extremist groups are significantly more likely to knowingly share false content online. For instance, the disinformation-monitoring company Blackbird.AI tracked over 119 million COVID-19 conspiracy posts from May 2020, when activists were protesting pandemic restrictions and lockdowns in the United States. Of these, over 32 million tweets were identified as high on their manipulation index. Those posted by various extremist groups were particularly likely to carry markers of insincerity. For instance, one group, the Boogaloo Bois, generated over 610,000 tweets, of which 58 percent were intent on incitement and radicalization.

You can also just take the word of the extremists themselves. When the Boogaloo Bois militia group showed up at the Jan. 6, 2021, insurrection, for example, members stated they didn’t actually endorse the stolen election conspiracy but were there to “mess with the federal government.” Aron McKillips, a Boogaloo member arrested in 2022 as part of an FBI sting, is another example of an opportunistic conspiracist. In his own words: “I don’t believe in anything. I’m only here for the violence.”

Combative conspiracists—the disinformants

Governments love conspiracy theories. The classic example of this is the 1903 document known as the “Protocols of the Elders of Zion,” in which Russia constructed an enduring myth about Jewish plans for world domination. More recently, China used artificial intelligence to construct a fake conspiracy theory about the August 2023 Maui wildfire.

Often the behavior of the conspiracists gives them away. Russia, for example, eventually admitted to its 1980s disinformation campaign falsely claiming that the United States had created the AIDS virus. But even before admitting to the campaign, its agents had forged documents to support the conspiracy. Forgeries aren’t created by accident. They knew they were lying.

As for other conspiracies it hawks, Russia is famous for taking both sides in any contentious issue, spreading lies online to foment conflict and polarization. People who actually believe in a conspiracy tend to stick to a side. Meanwhile, Russians knowingly deploy what one analyst has called a “fire hose of falsehoods.”

Likewise, while Chinese officials were spreading conspiracy theories in 2020 claiming the coronavirus originated in the United States, China’s National Health Commission was circulating internal reports tracing the source to a pangolin.

Chaos conspiracists—the trolls

In general, research has found that individuals with what scholars call a high “need for chaos” are more likely to indiscriminately share conspiracies, regardless of belief. These are the everyday trolls who share false content for a variety of reasons, none of which are benevolent. Dark personalities and dark motives are prevalent.

For instance, in the wake of the first assassination attempt on Donald Trump, a false accusation arose online about the identity of the shooter and his motivations. The person who first posted this claim knew he was making up a name and stealing a photo. The intent was apparently to harass the Italian sports blogger whose photo was stolen. This fake conspiracy was seen over 300,000 times on the social platform X and picked up by multiple other conspiracists eager to fill the information gap about the assassination attempt.

Commercial conspiracists—the profiteers

Often when I encounter a conspiracy theory I ask: “What does the sharer have to gain? Are they telling me this because they have an evidence-backed concern, or are they trying to sell me something?”

When researchers tracked down the 12 people primarily responsible for the vast majority of anti-vaccine conspiracies online, most of them had a financial investment in perpetuating these misleading narratives.

Some people who fall into this category might truly believe their conspiracy, but their first priority is finding a way to make money from it. For instance, conspiracist Alex Jones bragged that his fans would “buy anything.” Fox News and its on-air personality Tucker Carlson publicized lies about voter fraud in the 2020 election to keep viewers engaged, while behind-the-scenes communications revealed they did not endorse what they espoused.

Profit doesn’t just mean money. People can also profit from spreading conspiracies if it garners them influence or followers, or protects their reputation. Even social media companies are reluctant to combat conspiracies because they know they attract more clicks.

Common conspiracists—the attention-getters

You don’t have to be a profiteer to like some attention. Plenty of regular people share content whose veracity they doubt, or that they know to be false.

These posts are common: Friends, family, and acquaintances share the latest conspiracy theory with “could this be true?” queries or “seems close enough to the truth” taglines. Their accompanying comments show that sharers are, at minimum, unsure about the truthfulness of the content, but they share nonetheless. Many share without even reading past a headline. Still others, approximately 7 percent to 20 percent of social media users, share despite knowing the content is false. Why?

Some claim to be sharing to inform people “just in case” it is true. But this sort of “sound the alarm” reason actually isn’t that common.

Often, folks are just looking for attention or other personal benefit. They don’t want to miss out on a hot-topic conversation. They want the likes and shares. They want to “stir the pot.” Or they just like the message and want to signal to others that they share a common belief system.

For frequent sharers, it just becomes a habit.

The dangers of spreading lies

Over time, the opportunists may end up convincing themselves. After all, they will eventually have to come to terms with why they are engaging in unethical and deceptive, if not destructive, behavior. They may have a rationale for why lying is good. Or they may convince themselves that they aren’t lying by claiming they thought the conspiracy was true all along.

It’s important to be cautious and not believe everything you read. These opportunists don’t even believe everything they write—and share. But they want you to. So be aware that the next time you share an unfounded conspiracy theory, online or offline, you could be helping an opportunist. They don’t buy it, so neither should you. Be aware before you share. Don’t be what these opportunists derogatorily refer to as “a useful idiot.”

H. Colleen Sinclair is Associate Research Professor of Social Psychology at Louisiana State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The Conversation is an independent source of news and views, sourced from the academic and research community. Our team of editors works with these experts to share their knowledge with the wider public. Our aim is to allow for better understanding of current affairs and complex issues, and hopefully improve the quality of public discourse on them.



AI chatbots might be better at swaying conspiracy theorists than humans

Out of the rabbit hole —

Co-author Gordon Pennycook: “The work overturns a lot of how we thought about conspiracies.”

A woman wearing a sweatshirt for the QAnon conspiracy theory on October 11, 2020, in Ronkonkoma, New York. Credit: Stephanie Keith | Getty Images

Belief in conspiracy theories is rampant, particularly in the US, where some estimates suggest as much as 50 percent of the population believes in at least one outlandish claim. And those beliefs are notoriously difficult to debunk. Challenge a committed conspiracy theorist with facts and evidence, and they’ll usually just double down—a phenomenon psychologists usually attribute to motivated reasoning, i.e., a biased way of processing information.

A new paper published in the journal Science is challenging that conventional wisdom, however. Experiments in which an AI chatbot engaged in conversations with people who believed at least one conspiracy theory showed that the interaction significantly reduced the strength of those beliefs, even two months later. The secret to its success: the chatbot, with its access to vast amounts of information across an enormous range of topics, could precisely tailor its counterarguments to each individual.

“These are some of the most fascinating results I’ve ever seen,” co-author Gordon Pennycook, a psychologist at Cornell University, said during a media briefing. “The work overturns a lot of how we thought about conspiracies, that they’re the result of various psychological motives and needs. [Participants] were remarkably responsive to evidence. There’s been a lot of ink spilled about being in a post-truth world. It’s really validating to know that evidence does matter. We can act in a more adaptive way using this new technology to get good evidence in front of people that is specifically relevant to what they think, so it’s a much more powerful approach.”

When confronted with facts that challenge a deeply entrenched belief, people will often seek to preserve it rather than update their priors (in Bayesian-speak) in light of the new evidence. So there has been a good deal of pessimism lately about ever reaching those who have plunged deep down the rabbit hole of conspiracy theories, which are notoriously persistent and “pose a serious threat to democratic societies,” per the authors. Pennycook and his fellow co-authors devised an alternative explanation for that stubborn persistence of belief.

Bespoke counter-arguments

The issue is that “conspiracy theories just vary a lot from person to person,” said co-author Thomas Costello, a psychologist at American University who is also affiliated with MIT. “They’re quite heterogeneous. People believe a wide range of them and the specific evidence that people use to support even a single conspiracy may differ from one person to another. So debunking attempts where you try to argue broadly against a conspiracy theory are not going to be effective because people have different versions of that conspiracy in their heads.”

By contrast, an AI chatbot would be able to tailor debunking efforts to those different versions of a conspiracy. So in theory a chatbot might prove more effective in swaying someone from their pet conspiracy theory.

To test their hypothesis, the team conducted a series of experiments with 2,190 participants who believed in one or more conspiracy theories. The participants engaged in several personal “conversations” with a large language model (GPT-4 Turbo) in which they shared their pet conspiracy theory and the evidence they felt supported that belief. The LLM would respond by offering factual and evidence-based counterarguments tailored to the individual participant. GPT-4 Turbo’s responses were professionally fact-checked, which showed that 99.2 percent of the claims it made were true, with just 0.8 percent being labeled misleading, and zero as false. (You can try your hand at interacting with the debunking chatbot here.)

Screenshot of the chatbot opening page asking questions to prepare for a conversation. Credit: Thomas H. Costello

Participants first answered a series of open-ended questions about the conspiracy theories they strongly believed and the evidence they relied upon to support those beliefs. The AI then produced a single-sentence summary of each belief, for example, “9/11 was an inside job because X, Y, and Z.” Participants then rated the accuracy of that statement in terms of their own beliefs and filled out a questionnaire about other conspiracies, their attitudes toward trusted experts, AI, other people in society, and so forth.

Then it was time for the one-on-one dialogues with the chatbot, which the team programmed to be as persuasive as possible. The chatbot had also been fed the participants’ open-ended responses, which made it better able to tailor its counterarguments to each individual. For example, if someone thought 9/11 was an inside job and cited as evidence the fact that jet fuel doesn’t burn hot enough to melt steel, the chatbot might counter with, say, the NIST report showing that steel loses its strength at much lower temperatures, sufficient to weaken the towers’ structures so that they collapsed. Someone who thought 9/11 was an inside job and cited demolitions as evidence would get a different response tailored to that.
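To make that tailoring step concrete, here is a minimal Python sketch of how an individualized rebuttal could be requested from a model like GPT-4 Turbo. It is an illustrative assumption, not the authors’ published code: it assumes the OpenAI Python client, and the prompt wording, function name, and participant input are hypothetical.

```python
# Illustrative sketch only -- not the study's actual implementation.
# Assumes the OpenAI Python client; prompt wording and names are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def tailored_counterargument(belief_summary: str, supporting_evidence: str) -> str:
    """Ask the model for a factual rebuttal aimed at this participant's
    specific version of the conspiracy and the evidence they cited."""
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[
            {"role": "system",
             "content": ("You are a respectful, evidence-based interlocutor. "
                         "Address the person's specific claims and the evidence "
                         "they cited, using verifiable facts.")},
            {"role": "user",
             "content": (f"I believe: {belief_summary}\n"
                         f"My evidence: {supporting_evidence}\n"
                         "Why might I be mistaken?")},
        ],
    )
    return response.choices[0].message.content


# Hypothetical participant input:
print(tailored_counterargument(
    "9/11 was an inside job",
    "Jet fuel doesn't burn hot enough to melt steel."))
```

Because the participant’s own words are embedded in the prompt, two people who believe the “same” conspiracy for different reasons would receive different rebuttals, which is the crux of the tailoring idea described above.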

Participants then answered the same set of questions after their dialogues with the chatbot, which lasted about eight minutes on average. Costello et al. found that these targeted dialogues resulted in a 20 percent decrease in the participants’ misinformed beliefs—a reduction that persisted even two months later when participants were evaluated again.
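For a sense of what a “20 percent decrease” could look like in practice, here is a small, purely hypothetical calculation. It assumes belief is rated on a 0–100 scale and reads the figure as a relative reduction in mean belief ratings; the numbers are made up for illustration and are not data from the study.

```python
# Hypothetical numbers for illustration only -- not data from the study.
# Assumes belief in the chosen conspiracy is rated 0-100 before and after the dialogue.
pre_dialogue = [80, 95, 70, 100, 85]   # made-up pre-conversation belief ratings
post_dialogue = [60, 78, 55, 82, 70]   # made-up post-conversation ratings

mean_pre = sum(pre_dialogue) / len(pre_dialogue)     # 86.0
mean_post = sum(post_dialogue) / len(post_dialogue)  # 69.0
percent_drop = 100 * (mean_pre - mean_post) / mean_pre

print(f"Mean belief before: {mean_pre:.1f}")
print(f"Mean belief after:  {mean_post:.1f}")
print(f"Relative decrease:  {percent_drop:.1f}%")   # roughly 20%
```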

As Bence Bago (Tilburg University) and Jean-Francois Bonnefon (CNRS, Toulouse, France) noted in an accompanying perspective, this is a substantial effect compared to the 1 to 6 percent drop in beliefs achieved by other interventions. They also deemed the persistence of the effect noteworthy, while cautioning that two months is “insufficient to completely eliminate misinformed conspiracy beliefs.”
