Should AI be used to fight conspiracy theories?
In every dusty corner of the internet, there is a conspiracy theory to be found.
Whether it is the hotly contested claim that Avril Lavigne was replaced by a doppelganger, Melissa Vandella, in 2003, or that Paul McCartney was swapped with Scottish orphan Billy Shears after the original star died in 1966, there are urban legends aplenty to keep the masses entertained.
Yet conspiracy theories are not without their dangers, proliferating even at the highest levels of government. Between 2017 and 2019, Donald Trump promoted posts on X (formerly Twitter) drawing on elements of QAnon conspiracy ideas 265 times. QAnon conspiracists had previously acted violently, leading to hundreds of arrests and convictions. Similarly, theories surrounding medical practices can be harmful, such as the negativity surrounding Covid-19 vaccines.
Thomas Costello, a professor of psychology at American University in Washington, DC, launched an experiment to see whether an AI chatbot could dissuade conspiracy theorists from some of their beliefs and promote objective evidence.
Almost 2,200 people were asked whether they believed in any conspiracy theories and, if so, which ones. They were also asked to explain why they believed them. A generative AI model distilled each answer into a single sentence, which the participant then rated their agreement with on a scale of zero to one hundred.
After this, each participant entered a conversation with GPT-4 Turbo, which had been prompted with exactly what they had said. It was instructed to persuade them, effectively and politely, that the conspiracy theory was incorrect.
After an eight-minute conversation, participants again rated their agreement with the sentence. On average, belief in the chosen conspiracy fell by about 20%. One in four people who had believed their conspiracy theory was true left with a score below 50, meaning that, on balance, they no longer thought it was true. Follow-ups at 10 days and two months showed the change in belief had persisted.
Yet why was the chatbot more persuasive than an informed person? Research by MIT, published in the journal Psychological Bulletin, showed that conspiracy theorists are motivated by a need to understand and feel safe in their environment. Those who strongly believe in conspiracy theories are more likely to be insecure, emotionally volatile, and antagonistic.
Dealing with a chatbot could alleviate these traits and anxieties, making the theorist more open-minded. Research from The Ohio State University found that when people are worried about being judged, they feel less embarrassed with a chatbot than with a human. While that finding related to online shopping, a parallel can be drawn: conspiracy theorists are often met with indignant counterarguments to their theories, leading them to double down and feel defensive, embarrassed, and judged, which only sharpens their antagonism and volatility.
Chatbots are perceived as less able to feel emotions or pass judgment on people, which may allow for a less defensive state of mind. For conspiracy theorists, whose chief interest is uncovering the truth, a chatbot appears aligned with that interest rather than motivated by putting them down or making them look foolish. The chatbot used in the experiment was designed to be polite and non-confrontational.
There is something endearing about promoting the simplicity of kindness, openness, and politeness in persuasion. It is timely that these findings should arise in tandem with the release of Paddington 3, the furry protagonist of which famously states, “If you’re kind and polite, the world will be right.” The chatbot was kind and polite and stirred the conspiracy theorists to the ‘right’, or more objective, view of the world.
But the experiment itself raises a moral debate about truth, and questions whether we should really be persuading people away from certain beliefs. Surely our frenzies over the moon landing being faked or the dead internet theory are rather a sign of the richness of human activity and an opportunity to exercise our skills of scepticism?
Often, conspiracy theorists may simply be indulging in debate. Would AI be stripping away part of what it means for them to be human? Yet the fact remains that many conspiracy theories stir up dangerous attitudes and must be treated cautiously. Perhaps there lies a future in which those debating whether Coca-Cola really did switch its formula can continue in frivolity, while AI becomes a means to curb offensive and damaging conspiracy theories. Or perhaps, no matter how sophisticated AI becomes, the spirit of the conspiracy theorist will never be dampened.
“Should AI be used to fight conspiracy theories?” was originally created and published by Verdict, a GlobalData owned brand.