AI Could Reduce People’s Beliefs in Conspiracy Theories, Study Suggests
AI chatbots could be effective at eroding people’s beliefs in conspiracy theories, suggests a new study that used ChatGPT to counter those beliefs with fact-checked information.
The conventional understanding of why people believe in conspiracy theories holds that once someone goes down the rabbit hole—be it ancient aliens, the Illuminati, or the theory that Princess Diana was murdered—it’s almost impossible to convince them otherwise, even when confronted with compelling evidence to the contrary.
Researchers from MIT, Cornell, and American University say they have come up with a solution involving an AI chatbot. Across two experiments, the researchers had ChatGPT (running on the GPT-4 Turbo model) interact with more than 2,000 Americans about a conspiracy theory they believed in.
Within three rounds of conversation with the chatbot, participants’ belief in their chosen conspiracy theory was reduced by 20 percent on average. When the researchers followed up two months later, they found that participants had not reverted to their previous strongly held beliefs.
The researchers also observed broader effects: “The debunking also spilled over to reduce beliefs in unrelated conspiracies, indicating a general decrease in conspiratorial worldview, and increased intentions to rebut other conspiracy believers,” according to the paper, published in the journal Science.
“I was most surprised by how big, and how durable, the effect of the debunking was. It is much larger and longer-lasting than anything else I’ve seen before,” one of the authors, Prof. David Rand, told Newsweek.
A significant portion of the U.S. population believes in one or more conspiracy theories—between 25 and 50 percent, depending on the conspiracy, according to a 2014 study in the American Journal of Political Science.
But why does this matter? Take public health: COVID-19 conspiracy theories included claims that the virus was a hoax, fabricated to exert control over an unsuspecting public. Experts say the spread of such beliefs can harm public health, undermine democratic attitudes, and feed a tendency toward extremism.
In the political sphere, from Pizzagate to the ‘birther’ theory and, most recently, the ‘immigrants eating pets’ rumor, there have always been conspiracy theories designed to stir up public opinion.
Much of this genre of conspiracy belief and political misinformation peaked around 2020 with the emergence of QAnon, aided by viral dissemination across social media platforms. The phenomenon helped give rise to a field of study known as infodemiology, which examines how misinformation and conspiracy theories can spread like a disease.
When it comes to dissuading people from believing such conspiracies, why did ChatGPT seemingly work where humans have failed? According to the study, the chatbot provided detailed counterevidence in a neutral manner and followed up with more evidence when asked.
More importantly, the generative AI model tailored the information to each individual’s specific belief (an independent fact-checker verified the accuracy of this information, according to the research paper).
Some respondents found the chatbot more convincing than previous human rebuttals of their conspiracy belief. As one participant put it: “Now this is the very first time I have gotten a response that made real, logical, sense. I must admit this really shifted my imagination when it comes to the subject of Illuminati. I think it was extremely helpful in my conclusion of [whether] the Illuminati is actually real.”
The study authors provided a list of all the conspiracies discussed by participants, along with the full conversations they had with ChatGPT. There were 15 categories in total, with John F. Kennedy’s assassination proving the most popular. They ranged from recent conspiracies, including the theory that Jeffrey Epstein was murdered and the belief that the 2020 election was ‘stolen’, to the 9/11 ‘insider’ conspiracy and more general theories about secret government agendas and big corporations.
One participant stated that “the theory that Lee Harvey Oswald did not kill JFK is one that is compelling to me” and initially rated their confidence in it at 62 percent; after talking to ChatGPT, they lowered that score to 34 percent.
After ChatGPT debunked the “magic bullet theory”, the participant asked how “someone with no marksman skills could have been so precise”. The chatbot acknowledged this as “a very valid question” and provided additional information, leading the participant to reply that “it could have been plausible”.
But how does this kind of intervention work on a practical level, given that people holding certain conspiracy beliefs would not necessarily volunteer to have them challenged? The researchers suggest some ways to implement their findings.
“Internet search terms related to conspiracies could be met with AI-generated summaries of accurate information—tailored to the precise search—that solicit the user’s response and engagement,” the researchers outline.
“Similarly, AI-powered social media accounts could reply to users who share inaccurate conspiracy-related content (providing corrective information for the potential benefit of both the poster and observers),” the authors added.
Rand explained further that “interventions like this definitely have the potential to reduce the spread of conspiracy theories.”
“Tech companies could automatically invite people making posts containing, or entering search strings associated with, conspiracy theories to discuss (e.g. “do their own research”) with the AI. Or the AI could be hooked up to social media accounts that respond to those posting conspiracy theories,” added Rand.
Do you have an AI story to share with Newsweek? Do you have a question about conspiracy theories? Let us know via science@newsweek.com.
Reference
Thomas H. Costello et al. (2024). Durably reducing conspiracy beliefs through dialogues with AI. Science, 385. DOI: 10.1126/science.adq1814