Artificial Intelligence Separates Conspiracy Theory From Conspiracy Fact


Facts don’t matter to people who believe in debunked conspiracy theories. At least, that’s the prevailing assumption. But that theory itself might prove untrue, according to psychology researchers.

Evidence delivered by an AI chatbot convinced a significant number of participants in a study to put less faith in a conspiracy theory they previously said was true, according to a study published today in Science. Researchers at MIT and Cornell University, led by Thomas Costello, an assistant professor of psychology at American University in Washington, D.C., concluded that chatbots excelled at delivering information that debunked the specific reasons participants believed in conspiracy theories.

“Many people who strongly believe in seemingly fact-resistant conspiratorial beliefs can change their minds when presented with compelling evidence,” the study’s authors wrote.

Current psychological research posits that conspiracy theorists resist facts because their beliefs serve some internal need, such as belonging to a group, maintaining a sense of control over their circumstances or feeling special. The researchers started instead from the hypothesis that conspiracy theorists could be swayed from their positions by clear, specific facts refuting the erroneous evidence each participant cited.

While many people may believe in a given conspiracy theory, the researchers said, the evidence they rely on varies among individuals.

“People have different versions of the conspiracy in their head,” Costello said in a press briefing.

Can a chatbot convince a conspiracy theorist?

To measure the chatbot’s effectiveness, the researchers sought out participants who endorsed theories including the belief that the 11 September 2001 attacks were an inside job and that certain governments have funneled illegal drugs into ethnic minority communities. They defined a conspiracy theory as a belief that certain events were “caused by secret, malevolent plots involving powerful conspirators.”

The chatbot reduced participants’ confidence in a conspiracy theory by an average of 20 percent, as rated on a scale of 0 percent to 100 percent by the participants themselves before and after the conversations. In follow-up queries, the reduction in belief persisted at 10 days and again at 2 months. The chatbot was powered by GPT-4 Turbo, a large language model from OpenAI that gave it a wide range of information to draw on in response to the participants’ remarks. Participants were told the study was investigating conversations about controversial topics between AI and humans.
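
As a rough illustration of that measurement (the numbers below are hypothetical, not the study’s data or the authors’ analysis code), the reported average is simply the mean drop between each participant’s before and after confidence ratings:

# Hypothetical before/after confidence ratings on the 0-100 scale
# described in the article; illustrative arithmetic only.
before = [100, 80, 90, 70, 100, 60]  # confidence before the conversation
after = [70, 65, 75, 50, 85, 40]     # confidence after the conversation

drops = [b - a for b, a in zip(before, after)]
mean_drop = sum(drops) / len(drops)
print(f"Average reduction in confidence: {mean_drop:.1f} points")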

The chatbot wasn’t prompted by the researchers to refute conspiracies that are real. For example, the chatbot wouldn’t discredit the well-documented MKUltra program, in which the CIA tested drugs on human subjects in the mid-20th century. Fact checkers reviewed the evidence given by the chatbot and found it accurate 99.2 percent of the time; the remaining 0.8 percent of claims were misleading. None were found to be false or biased.

In one example presented in the paper, a participant explained to the chatbot why they believed the 11 September attacks were planned by the U.S. government. At the start of the conversation, they said they were 100 percent confident in this theory. The chatbot requested more information about the evidence the participant found convincing, then responded by summarizing research that disproved those erroneous or misconstrued claims.

“Steel does not need to melt to lose its structural integrity,” the chatbot said, drawing on an investigation from the National Institute of Standards and Technology to correct the participant’s reliance on the misleading claim that jet fuel doesn’t burn hot enough to melt a building’s steel girders. “It begins to weaken much earlier,” the chatbot added.
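
For readers curious about the mechanics, an exchange like the one above can be approximated in a few lines of code. What follows is a minimal sketch, not the authors’ implementation: the system prompt and conversation flow are assumptions based on the article’s description, and it presumes the openai Python package and an OPENAI_API_KEY environment variable are available.

from openai import OpenAI  # assumes the openai Python package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Assumed prompt, paraphrasing the behavior the article describes:
# ask for the participant's specific evidence, then rebut it with
# accurate, well-sourced counter-evidence.
SYSTEM_PROMPT = (
    "The user endorses a conspiracy theory. First ask what specific "
    "evidence they find most convincing. Then respond with accurate, "
    "well-sourced information that directly addresses that evidence. "
    "Be respectful and factual."
)

messages = [{"role": "system", "content": SYSTEM_PROMPT}]

def chatbot_turn(user_text):
    # Append the participant's message and request the model's reply.
    messages.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # the model family named in the article
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

print(chatbot_turn("I'm 100 percent sure the attacks were an inside job."))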

Are facts the only thing that matters?

Psychologists have theorized that when people form part of their identities around a conspiracy theory, they are more likely to ignore or reject information that could debunk their beliefs. But if chatbots can move the needle with facts, humans may simply not be skilled enough in presenting the right evidence, the researchers said.

Instead of being reluctant to discuss these topics with non-believers, conspiracy theorists are often eager to go over the evidence they say supports their positions. In fact, the amount of information they’ve absorbed, while incorrect or misleading, “can leave skeptics outmatched in debates and arguments,” the researchers wrote.

Sander van der Linden, a professor of social psychology in society at the University of Cambridge who was not involved in the study, said the results were impressive, especially the amount of time the reduction in belief persisted. He also said several questions are left to explore.

For one, while the study relied on a control group of participants who had a neutral conversation with the AI chatbot, another approach would have been to have humans try to convince a separate group of conspiracy theorists. That comparison would have shown more definitively that the chatbot was the reason people responded the way they did.

There’s also the psychological impact of talking to a chatbot instead of another person. It’s not known, for example, whether participants felt less judged by the chatbot or trusted it more as a source. It’s possible that the chatbot helped participants on an emotional level in addition to a factual one, van der Linden said.

“It’s important to avoid a false dichotomy,” van der Linden said. “I suspect both needs and evidence play a role in persuading conspiracy believers.”

The researchers acknowledged the open questions at the press briefing and said they’ve already begun to explore some of them. Upcoming research will look at whether it’s necessary for the chatbot to be polite and build rapport with statements like, “Thank you for sharing your thoughts and concerns.”
