How Deepfakes Fuel Conspiracy Theories


In March 2022, a video appeared purporting to depict Ukrainian president Volodymyr Zelenskyy advising the country’s soldiers to surrender to the invading Russian military. If the video’s creators wished to erode Ukrainian morale, they met with limited success. But videos like this may be responsible for a more insidious type of erosion: undermining trust in media at large.

A study of tweets concerning AI-generated deepfake videos related to the Russo-Ukrainian war suggests that, even if the deepfakes themselves aren’t convincing, their very presence helps to spread doubt and conspiracy theories in the online imagination. Even nonmalicious deepfakes may contribute to the problem by fueling such conspiracy theories.

“These had been very loosely theorized before, but there was no empirical evidence of their existence,” says John Twomey, a graduate student in psychology at University College Cork, in Ireland, and the study’s lead author.

“Deepfake” isn’t a technical term, but Twomey and his colleagues defined a deepfake video specifically as one “generated using deep-learning technology.” The team analyzed 1,392 tweets from the first seven months of 2022, all of which pertained to such deepfakes. Applying a qualitative method known as reflexive thematic analysis, the researchers identified tweets expressing deepfake-related skepticism.
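The paper’s thematic coding was done by the researchers themselves, not by software, but the first step in any study of this kind is narrowing a large tweet corpus down to the relevant candidates. As a rough illustration only, here is a minimal Python sketch of that filtering step; the file name, field names, and keyword list are assumptions for the example, not the authors’ actual pipeline.

```python
import json
from datetime import datetime, timezone

# Hypothetical export of tweets: a JSON list of objects with "text" and
# "created_at" (ISO-8601) fields. The format is assumed for illustration.
KEYWORDS = ("deepfake", "deep fake", "deep-fake")  # assumed search terms
WINDOW_START = datetime(2022, 1, 1, tzinfo=timezone.utc)
WINDOW_END = datetime(2022, 8, 1, tzinfo=timezone.utc)  # first seven months of 2022


def mentions_deepfake(text: str) -> bool:
    """Return True if the tweet text contains any deepfake-related keyword."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in KEYWORDS)


def in_study_window(created_at: str) -> bool:
    """Return True if the timestamp falls within the study period."""
    posted = datetime.fromisoformat(created_at)
    return WINDOW_START <= posted < WINDOW_END


with open("tweets_export.json", encoding="utf-8") as f:  # hypothetical file
    tweets = json.load(f)

candidates = [
    t for t in tweets
    if mentions_deepfake(t["text"]) and in_study_window(t["created_at"])
]
print(f"{len(candidates)} candidate tweets selected for manual thematic coding")
```

The filtered tweets would then be read and coded by hand, which is where the reflexive thematic analysis itself happens.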


The bulk of that skepticism was unhealthy. Tweets that correctly flagged actual deepfakes were outnumbered more than fivefold by tweets accusing genuine videos of being deepfakes. Some pointed to video artifacts as “proof” of forgery; others cited deepfakes as a reason to doubt any information about the conflict, even from reputable news sources. Still others used deepfakes as a launching pad for claims that journalists or governments were complicit in a nefarious agenda.

And at the most extreme, skeptical tweets delved into the language of conspiracy theories. “We are being deceived by Ukraine, they are probably laughing with Putin over our dollars. The war is a deepfake,” said one tweet. “This is a western media deepfake. These journalists are under the globalists thumb. We know what will happen next. Praise the lord,” said another.

Misinformation researchers sometimes speak of the “liar’s dividend”: the idea that certain figures benefit from an information environment poisoned by misinformation. In such an environment, for example, a politician can deflect critics simply by labeling their criticism as fake news. The findings suggest that even seemingly innocuous deepfakes can contribute to the liar’s dividend. For instance, the Ukrainian government’s official Twitter account posted a video, clearly labeled as nongenuine, of Vladimir Putin walking around the war-torn city of Mariupol. If videos like this come from official sources, they may backfire and kindle doubt in those very sources.

The obvious dilemma, then, is how to prudently tackle deepfakes. “The current approach to deepfake interventions seems to be based on simply identifying whether or not a video is a deepfake,” Twomey says. “It is my belief that interventions must also be concerned with how deepfakes can increase the false positive.”

How to go about that is unclear. For one, not all deepfakes are alike: deepfakes created for political misinformation call for a different response than those created for harmless fun, or those created for harassment or nonconsensual pornography. For another, the technology used to create deepfakes is new and constantly evolving, and its effects are poorly understood.

“It would be really nice to show what is the actual impact of, for example, deepfakes on memory over a long-term period,” says Nils Köbis, a behavioral scientist studying human-AI interactions at the Max Planck Institute for Human Development, in Berlin, who was not involved in the paper. “Long-term studies would be really useful to better understand the impact that deepfakes have.”

For example, some research on text-based fake news suggests that, while misinformation may kindle mistrust and suspicion, it might also increase faith in, say, trusted news outlets. “You just don’t trust unverified sources very much anymore, but the ones that are verified, where you know these are reputable sources—we might trust them relatively more than we used to,” Köbis says. “I do think that it’s a bit of a nuance that we need to understand.”

Deepfake video tied to specific governments is not a problem confined to one region. Earlier this year, unknown actors used commercial software intended for producing AI-generated corporate training videos to instead produce propaganda supporting Burkina Faso’s military junta, which had seized power in a September 2022 coup. More recently, the Israel-Gaza conflict has spawned a new wave of deepfakes, though fewer than some observers feared.

Twomey and his colleagues’ work studied only deepfakes from one part of the world and on one social media platform. “There really needs to be more empirical work done in this area,” Twomey says.

Their work was published in the journal PLoS ONE on 25 October.

