Saturday, November 23, 2024

conspiracy resource

Conspiracy News & Views from all angles, up-to-the-minute and uncensored

Elections

Image-Based AI Tools Can Help Sow Mistrust in Elections, Report Finds

With the 2024 U.S. presidential election looming, artificial intelligence, or AI, might make it easy to sow distrust of elections.

Image-based generative artificial intelligence tools are at risk of being used to generate fake evidence in support of misinformation and disinformation about elections, a recent report found, adding to growing concerns globally about how AI can be used and abused to produce propaganda.

In the study, the image-based generative AI tools Midjourney, DALL-E 2 and Stable Diffusion accepted more than 85% of prompts (a starting point or an idea inserted in the AI tool to kickstart a project) seeking to generate fake evidence that would support false claims.

The companies did not reply to VOA’s emails requesting comment.

“These platforms don’t have a significant amount of moderation” to identify and block prompts “that create things that could potentially pollute the information space,” the report’s author, Kyle Walter, told VOA. “For the most part, there appears to be a major blind spot when it comes to generating evidence of these different disinformation narratives.”

The report focused on three image-based AI tools: Midjourney, created by the San Francisco-based research lab Midjourney Inc.; DALL-E 2, created by OpenAI, which also launched the AI chatbot ChatGPT; and Stable Diffusion, created by the company Stability AI.

False allegations of election fraud were rampant around the U.S. 2020 presidential election.

By entering prompts related to claims of a “stolen election,” Logically was able to create images of people appearing to stuff election ballot boxes on all three platforms. Logically also used prompts related to people “meddling” with voting machines to generate images on all three platforms.

Another common narrative is that election staffers have been involved in stealing elections. Logically successfully generated images of election staffers carrying ballot boxes on all three platforms.

These results were concerning to Walter, Logically’s head of research, who is based in Washington.

“Different malicious actors can obviously come in and produce content via these image- based platforms that can be harmful online,” he said.

In the study, Logically also was able to generate false evidence of phony claims related to elections in the United Kingdom and India.

This report is the latest in a spate of recent developments about how AI may risk being manipulated ahead of the 2024 election.

In late June, VOA reported that the Federal Election Commission, which enforces rules for U.S. elections, will not regulate AI-generated deepfakes in political advertising ahead of the 2024 election.

When Polygraph.info asked OpenAI’s chatbot ChatGPT what it considered the most impactful deepfakes, it cited the 2018 case in which actor Jordan Peele worked with Buzzfeed to create a deepfake of him impersonating former President Barack Obama talking about the dangers that deepfakes pose.

“It’s essential to remember that deepfakes can have serious implications, as they can be used to spread misinformation, create fake news, or damage the reputation of individuals,” ChatGPT said.

Some groups are already working to combat the potential harm caused by deepfakes. For example, the Massachusetts Institute of Technology’s Media Lab launched the Detect Fakes research project in an effort to develop strategies for combating AI-generated misinformation.

In July, major AI companies promised the U.S. government that they would try to mitigate the potential harms caused by their technologies. But WIRED reported last week that those voluntary commitments — like adding watermarks to AI-generated images — likely won’t be enough to combat disinformation.

Walter said the three platforms studied in the Logically report should enhance their content moderation policies so they’re less susceptible to facilitating the spread of propaganda.

“Not necessarily to say that every single thing should be moderated on these platforms. But just to say that there are specifically harmful narratives and information spaces out there that can cause real world harm that they need to be aware of,” he said.

Ahead of the 2024 U.S. presidential election, Walter said, “What I’m most concerned about is the unpredictability of it all.”

***
This article has been archived for your research. The original version from Polygraph.info can be found here.