Send the conspiracy theorists in your life to the Debunk Bot
MIT researchers say it can reduce belief in conspiracy theories by 20%
Researchers Thomas Costello (MIT, American University), Gordon Pennycook (Cornell University), and David Rand (MIT) recently published an article in Science entitled “Durably reducing conspiracy beliefs through dialogues with AI.” As part of their research, they created “Debunk Bot,” which uses GPT-4 Turbo to engage users in dialogues about conspiracy theories. They found that these dialogic interventions, which focused on presenting non-conspiratorial explanations, facts, and counterevidence, and on encouraging critical thinking, reduced participants’ belief in their chosen conspiracy theory by about 20%. The authors explain:
The treatment reduced participants’ belief in their chosen conspiracy theory by 20% on average. This effect persisted undiminished for at least 2 months; was consistently observed across a wide range of conspiracy theories, from classic conspiracies involving the assassination of John F. Kennedy, aliens, and the illuminati, to those pertaining to topical events such as COVID-19 and the 2020 US presidential election; and occurred even for participants whose conspiracy beliefs were deeply entrenched and important to their identities. Notably, the AI did not reduce belief in true conspiracies. Furthermore, when a professional fact-checker evaluated a sample of 128 claims made by the AI, 99.2% were true, 0.8% were misleading, and none were false. The debunking also spilled over to reduce beliefs in unrelated conspiracies, indicating a general decrease in conspiratorial worldview, and increased intentions to rebut other conspiracy believers.
Here’s the abstract of the study:
Conspiracy theory beliefs are notoriously persistent. Influential hypotheses propose that they fulfill important psychological needs, thus resisting counterevidence. Yet previous failures in correcting conspiracy beliefs may be due to counterevidence being insufficiently compelling and tailored. To evaluate this possibility, we leveraged developments in generative artificial intelligence and engaged 2190 conspiracy believers in personalized evidence-based dialogues with GPT-4 Turbo. The intervention reduced conspiracy belief by ~20%. The effect remained 2 months later, generalized across a wide range of conspiracy theories, and occurred even among participants with deeply entrenched beliefs. Although the dialogues focused on a single conspiracy, they nonetheless diminished belief in unrelated conspiracies and shifted conspiracy-related behavioral intentions. These findings suggest that many conspiracy theory believers can revise their views if presented with sufficiently compelling evidence.
The authors conclude:
Many people who strongly believe in seemingly fact-resistant conspiratorial beliefs can change their minds when presented with compelling evidence. From a theoretical perspective, this paints a surprisingly optimistic picture of human reasoning: Conspiratorial rabbit holes may indeed have an exit. Psychological needs and motivations do not inherently blind conspiracists to evidence: it simply takes the right evidence to reach them. Practically, by demonstrating the persuasive power of LLMs, our findings emphasize both the potential positive impacts of generative AI when deployed responsibly and the pressing importance of minimizing opportunities for this technology to be used irresponsibly.
Co-author David G. Rand, who describes himself as a Professor at MIT “working on misinformation/fake news, social media, intuition vs deliberation, cooperation, politics, religion (he/him)” provided a great summary of the paper in a thread on the site formerly known as Twitter:
Conspiracy beliefs famously resist correction, ya? WRONG: We show brief convos w GPT4 reduce conspiracy beliefs by ~20%!
-Lasts over 2mo
-Works on entrenched beliefs
-Tailored AI response rebuts specific evidence offered by believers
Attempts to debunk conspiracies are often futile, leading many to conclude that psychological needs/motivations blind pple & make them resistant to evidence. But maybe past attempts just didn’t deliver sufficiently specific/compelling evidence+arguments?
Constructing compelling rebuttals to all variations of all prevalent conspiracies is not humanly possible- but maybe easy for LLMs? To find out, we had @OpenAI GPT4turbo deliver personalized counterevidence to 2,190 conspiracy believers via real-time conversations in Qualtrics
Participants
-described a conspiracy they believed & evidence supporting their belief
-rated their belief on 0-100 scale
-had 3 round text convo with the AI, which was prompted to refute the conspiracy
-re-rated their belief...
RESULTS: It worked!!
-Participants reduced their conspiracy belief by more than 20%
-25% of participants, who all initially believed their chosen conspiracy, didn’t believe (were below 50 on 0-100 scale) after the conversation
-Effect was very durable: undiminished after 2mo!
The effect size did not significantly differ based on which conspiracy was being debunked
-Worked for classic conspiracies like JFK, govt hiding evidence of aliens, moon landing hoax, Illuminati
-Worked for modern conspiracies like 2020 election fraud, COVID, 9/11 inside job
Remarkably, the treatment worked even for “true believers”: those who strongly believed the conspiracy, felt it was very important for their identity, and/or had a conspiratorial mindset. Causal forest machine learning analysis finds a meaningful effect across *all* subgroups
The AI convo focused on the specific theory articulated by the participant, yet effect “spilled over” to reduce beliefs in unrelated conspiracies. It also affected behavioral intentions e.g. willingness to challenge others who espouse conspiracy & unfollow them on social media
How accurate was the AI?
-Fact-checker evaluated 128 claims made by AI: 99.2% true, 0.8% misleading, none false! Conspiracy debunks prob well-represented in training data
-1.2% of ppl named *true* conspiracies (MK Ultra) and AI confirmed rather than reducing belief in those.
So what was the AI actually *doing* in those conversations? Offering non-conspiratorial explanations + counterevidence, encouraging critical thinking, NOT relying on psychological approaches. In-prep exp shows that telling it not to use evidence kills effect. It’s the facts/evidence.
SUMMARY: Evidence + arguments CAN change beliefs about conspiracy theories, and many ppl appreciated it! Needs + motives do not totally blind you once you’ve gone down the rabbit hole; it just requires detailed, tailored evidence to help pull you back. Intervention is possible!
Of course, w/o guardrails, LLMs might also be able to convince people to *believe* conspiracies or other falsehoods. Our findings emphasize both the potential positive impacts of genAI, and the importance of minimizing opportunities for this technology to be used irresponsibly.
Our methodology, where LLMs are integrated into survey experiments, has a HUGE variety of applications. We are on the edge of a massive revolution in experimental methodologies and it is VERY exciting... Want to try talking to the AI yourself? Check out http://DebunkBot.com! Pass the link on to your conspiratorial friends and family members, have it ready at Thanksgiving.
Read the study here. And try out Debunk Bot here.
The post Send the conspiracy theorists in your life to the Debunk Bot appeared first on Boing Boing.