How an AI ‘debunkbot’ can change a conspiracy theorist’s mind

In 2024, online conspiracy theories can feel almost impossible to avoid. Podcasters, prominent public figures, and leading politicians have breathed oxygen into once-fringe ideas of collusion and deception. People are listening. Nationwide, nearly half of adults surveyed by the polling firm YouGov said they believe there is a secret group of people that controls world events. Nearly a third (29%) believe voting machines were manipulated to alter votes in the 2020 presidential election. A surprising number of Americans think the Earth is flat. Anyone who’s spent time trying to refute those claims to a true believer knows how challenging a task that can be. But what if a ChatGPT-like large language model could do some of that headache-inducing heavy lifting?

A group of researchers from the Massachusetts Institute of Technology, Cornell, and American University put that idea to the test with a custom-made chatbot they are now calling “debunkbot.” The researchers, who published their findings in Science, had self-described conspiracy theorists engage in a back-and-forth conversation with a chatbot, which was instructed to produce detailed counterarguments to refute their position and ultimately try to change their minds. In the end, conversations with the chatbot reduced participants’ overall confidence in their professed conspiracy theory by an average of 20%. Around a quarter of the participants disavowed their conspiracy theory entirely after speaking with the AI.

“We see that the AI overwhelmingly was providing non-conspiratorial explanations for these seemingly conspiratorial events and encouraging people to engage in critical thinking and providing counter evidence,” MIT professor and paper co-author David Rand said during a press briefing.

“This is really exciting,” he added. “It seemed like it worked and it worked quite broadly.”  

Researchers created an AI fine-tuned for debunking

The experiment involved 2,190 US adults who openly claimed they believed in at least one idea that meets the general description of a conspiracy theory. Participants ran the conspiratorial and ideological gamut, with beliefs ranging from older classic theories involving President John F. Kennedy’s assassination and alien abductions to more modern claims about Covid-19 and the 2020 election. Each participant was asked to rate how strongly they believed in one particular theory on a scale of 0-100%. They were then asked to provide several reasons or explanations, in writing, for why they believed that theory.

Those responses were then fed into the debunkbot, which is a customized version of OpenAI’s GPT-4 Turbo model. The researchers fine-tuned the bot to address each piece of “evidence” provided by the conspiracy theorist and respond to it with precise counterarguments pulled from its training data. Researchers say debunkbot was instructed to “very effectively persuade” users against their beliefs while also maintaining a respectful and patient tone. After three rounds of back and forth with the AI, the respondents were once again asked to provide a rating on how strongly they believed their stated conspiracy theory.
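The paper doesn’t reproduce the bot’s code, but the setup described here maps onto a fairly standard chat-completion loop: seed the model with the participant’s belief and written reasons, instruct it to rebut them politely, and alternate turns. The Python sketch below shows roughly how that could be wired up against OpenAI’s API; the prompt wording, the run_debunk_dialogue helper, the model name, and the round structure are illustrative assumptions, not the researchers’ actual implementation.

```python
# Illustrative sketch only, not the researchers' code: the prompt wording,
# model name, and round count are assumptions based on the article's description.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def run_debunk_dialogue(belief: str, reasons: str, follow_ups: list[str],
                        model: str = "gpt-4-turbo") -> list[str]:
    """Hold a short multi-round dialogue that rebuts a stated conspiracy belief."""
    messages = [
        {
            "role": "system",
            "content": (
                "A participant believes this conspiracy theory:\n"
                f"{belief}\n\n"
                "Very effectively persuade them that the belief is mistaken. "
                "Address each piece of their stated evidence with specific, "
                "factual counterarguments, in a respectful and patient tone."
            ),
        },
        # The participant's written reasons open the conversation.
        {"role": "user", "content": reasons},
    ]
    replies = []
    for round_number in range(3):  # the study used three rounds of back and forth
        response = client.chat.completions.create(model=model, messages=messages)
        reply = response.choices[0].message.content
        replies.append(reply)
        messages.append({"role": "assistant", "content": reply})
        if round_number < len(follow_ups):  # participant's next turn, if provided
            messages.append({"role": "user", "content": follow_ups[round_number]})
    return replies
```

The moon-landing exchange described later in this article, for instance, would amount to calling run_debunk_dialogue with that belief, the three talking points as the written reasons, and the “corrupt government sources” question as a follow-up.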

Overall ratings supporting conspiracy beliefs decreased by 16.8 points on average following the back and forth. Nearly a third of the respondents left the exchange saying they were no longer certain of the belief they had going in. Those shifts in belief largely persisted even when researchers checked back in with the participants two months later. In instances where participants expressed belief in a “true” conspiracy theory—such as efforts by the tobacco industry to hook kids or the CIA’s clandestine MKUltra mind control experiments—the AI actually validated the beliefs and provided more evidence to buttress them. Some of the respondents who shifted their beliefs after the dialogue thanked the chatbot for helping them see the other side.

“Now this is the very first time I have gotten a response that made real, logical, sense,” one of the participants said following the experiment. “I must admit this really shifted my imagination when it comes to the subject of Illuminati.”

“Our findings fundamentally challenge the view that evidence and arguments are of little use once someone has ‘gone down the rabbit hole’ and come to believe a conspiracy theory,” the researchers said. 

How was the chatbot able to break through? 

The researchers believe the chatbot’s apparent success lies in its ability to access stores of targeted, detailed, factual data points quickly. In theory, a human could perform this same process, but they would be at a disadvantage. Conspiracy theorists often obsess over their issue of choice, which means they may “know” many more details about it than a skeptic trying to counter their claims. As a result, human debunkers can get lost trying to refute various obscure arguments. That can require a level of memory and patience well suited to an AI.

“It’s really validating to know that evidence does matter,” Cornell University Professor and paper coauthor Gordon Pennycook said during a briefing. “Before we had this sort of technology, it was not straightforward to know exactly what we needed to debunk. We can act in a more adaptive way using this new technology.” 

Popular Science tested the findings with a version of the chatbot provided by the researchers. In our example, we told the AI we believed the 1969 moon landing was a hoax. To support our argument, we parroted three talking points common among moon landing skeptics. We asked why the photographed flag seemed to be flowing in the wind when there is no atmosphere on the moon, how astronauts could have survived passing through the highly irradiated Van Allen belts without being harmed, and why the US hasn’t placed another person on the moon despite advances in technology. Within three seconds the chatbot provided a paragraph clearly refuting each of those points. When we followed up by asking the AI how it could trust figures provided by corrupt government sources, another common refrain among conspiracy theorists, the chatbot patiently acknowledged those concerns and pointed us to additional data points. It’s unclear if even the most adept human debunker could maintain their composure when repeatedly pressed with strawman arguments and unfalsifiable claims.

AI chatbots aren’t perfect. Numerous studies and real-world examples show some of the most popular AI tools released by Google and OpenAI repeatedly fabricating or “hallucinating” facts and figures. In this case, the researchers hired a professional fact checker to validate the various claims the chatbot made while conversing with the study participants. The fact checker didn’t review all of the AI’s thousands of responses. Instead, they looked at 128 claims spread across a representative sample of the conversations. Of those AI claims, 99.2% were deemed true and 0.8% were considered misleading. None were considered outright falsehoods by the fact checker.

AI chatbots could one day meet conspiracy theorists on web forums

“We don’t want to run the risk of letting the perfect get in the way of the good,” Pennycook said. “Clearly, it [the AI model] is providing a lot of really high quality evidence in these conversations. There might be some cases where it’s not high quality, but overall it’s better to get the information than to not.”

Looking forward, the researchers are hopeful their debunkbot or something like it could be used in the real world to meet conspiracy theorists where they are and, maybe, make them reconsider their beliefs. The researchers proposed potentially having a version of the bot appear in Reddit forums popular among conspiracy theorists. Alternatively, researchers could run Google ads on search terms common among conspiracy theorists; rather than getting what they were looking for, the user would be directed to the chatbot. The researchers say they are also interested in collaborating with large tech platforms such as Meta to think of ways to surface these chatbots on their platforms. Whether people would willingly agree to take time out of their day to argue with robots outside of an experiment, however, remains far from certain.

Still, the paper’s authors say the findings underscore a more fundamental point: facts and reason, when delivered properly, can pull some people out of their conspiratorial rabbit holes.

“Arguments and evidence should not be abandoned by those seeking to reduce belief in dubious conspiracy theories,” the researchers wrote.

“Psychological needs and motivations do not inherently blind conspiracists to evidence. It simply takes the right evidence to reach them.” 

That is, of course, if you’re persistent and patient enough.

Mack DeGeurin

Contributor

Mack DeGeurin is a tech reporter who’s spent years investigating where technology and politics collide. His work has previously appeared in Gizmodo, Insider, New York Magazine, and Vice.
