Despite this apparent foreign disinformation blitz, people actually saw nine times as many posts coming from politicians and 25 times as many posted by news organisations. Even the content that did reach people’s screens had little impact as far as electoral politics is concerned. It mostly reached partisan Republican voters – those “least likely to need influencing” – and there was “no evidence of a meaningful relationship between exposure (…) and changes in attitudes, polarisation, or voting behaviour.”
So while it may be concerning that some overseas agents could be trying to influence US voters, their mission seems largely ineffective and journalists could do a better job of reflecting that. “There’s a lot of secondary coverage of misinformation in the mainstream media that I don’t think adequately characterises how little it’s seen,” says Allen. “There needs to just be much more careful reporting and contextualisation of efforts.”
Bringing attention closer to home, some scholars argue that an insular right-wing arm of the domestic news ecosystem is more effective at promoting “disinformation, propaganda, and just sheer bullshit” than social media or Russian influence campaigns.
The authors of Network Propaganda, which looked at media coverage in the years either side of the 2016 election, described the US right-wing ecosystem of hyper-partisan sites as publishing “decontextualised truths, repeated falsehoods, and leaps of logic to create a fundamentally misleading view of the world.” In an interview with Boston Review, lead author Yochai Benkler said: “Its defining characteristic is pushing content that reinforces identity and political in-group membership.”
Whether stories are true or false is not really the point, he argues. This ecosystem is therefore operating to a different standard from the rest of the American news media, which largely adheres to traditional journalistic practices and standards.
Does AI pose a new misinformation risk?
The 2024 presidential election will be the first in which generative AI applications are available at scale, prompting fears that the technology is “supercharging the threat of election disinformation” and that “anyone can create high-quality ‘deepfakes’ with just a simple text prompt” to deceive voters.
A contributor to a recent podcast by the Brookings Institution said of the threat of AI: “We could end up in a situation of an election being decided [emphasis added] based on false narratives.” Fears often centre around the quantity of misinformation that can be generated using AI, the increasing quality as the technology improves and the ability to create personalised misinformation targeted at recipients.
In a recent piece published by Harvard’s Misinformation Review, a group of researchers including Altay and the Reuters Institute’s Felix Simon argued that, overall, “current concerns about the effects of generative AI are overblown” and in keeping with other “moral panics surrounding new technologies” that emerged in the past.
Could individuals be hyper-targeted with AI-generated content? Possibly, but AI-generated content doesn’t distribute itself and there is little evidence from political campaigns that direct, personalised messages are very persuasive. As with the rise of existing technologies, argues Altay, AI will have “a huge influence in what we do. But it doesn’t mean it’s going to determine who we vote for.”
The US election is only one of the dozens of electoral processes taking place in 2024. During the Indian election, generative AI was used widely, often to inject an air of fun into campaigns, including by ‘resurrecting’ famous singers or cloning politicians’ voices to make personalised campaign calls.
Consistent with the wider tenor of Indian politics, violent rhetoric, particularly against Muslims from Prime Minister Narendra Modi’s campaign, was also in force, some of it generated using AI. However, say Harvard Kennedy School’s Vandinika Shukla and Bruce Schneier, “the harm can be traced back to the hateful rhetoric itself and not necessarily the AI tools used to spread it.”
This sentiment was recently echoed by Ritu Kapur, co-founder of Indian news outlet The Quint, who said: “We didn’t need AI for misinformation in the Indian elections. We have plenty coming from politicians.”
How journalists unwittingly assist disinformation campaigns
When the Department of Justice announced its indictment against Tenet Media in September, it became a major news story: it involved Russian money, a shady plot and one of the country’s most popular YouTubers, just two months before election day. However, given the limits of the influence operation, journalists should have offered “much more careful reporting and contextualisation,” argues Allen. This often means showing that the amounts of money involved and the overall reach were relatively small.
Another case shows how the news media’s tendency to overemphasise the scale or success of such campaigns may play into the hands of those running them.
The 2022 ‘Doppelganger’ campaign created a network of fake websites posing as real ones to push information in line with Russia’s war aims. Thomas Rid, an expert in information warfare who saw leaked campaign documents, wrote: “The biggest boost the campaigners got was from the West’s own anxious coverage of the project” and that “far more people likely read the secondary coverage of the exposed forgery campaigns than ever viewed the primary disinformation.”
One of the company’s stated achievements was “the publication of a number of journalistic and industry investigations into Russian disinformation campaigns,” including by Meta and the Washington Post. In other words, mainstream media coverage was itself one of the goals the campaign pursued.
It’s much more difficult to understand exposure to online misinformation today than it was even two years ago, says Allen: Twitter removed its free API, Facebook replaced CrowdTangle with something less useful, and TikTok “is not transparent” about what happens on its platform. Another concern is that much of the evidence on the spread and influence of misinformation comes from Western Europe and the US, so researchers cannot draw strong conclusions about the influence of misleading content in the Global South.
What is lost in pointing the finger at online misinformation
Even if the evidence suggests that social media or AI-driven misinformation is not persuasive enough to swing an election, this doesn’t mean misinformation is not problematic in other ways. Donald Trump’s frequent false claims about the reliability of the 2020 election result circulated among angry crowds outside vote-counting centres and culminated in the storming of the Capitol on 6 January 2021. In this election, with a potential photo finish, genuine uncertainty around the results and possible delays in vote counting could leave space for false claims to flourish at a time when they could be most harmful.
But focusing on online misinformation as the cause of unease or even unrest can distract from larger questions as to why people accept something that isn’t true or that is so at odds with prevailing beliefs. It can also let more powerful people off the hook, Altay says: “This focus on social media diverts our attention from politicians, from older institutions, or even from some media organisations that are much more powerful and influential and sometimes spread misinformation that I think is more consequential than what’s spread online by regular users.”
After all, it was misinformation coming from the very top that spread the lie about the 2020 election being stolen, and even four years later the public’s beliefs about the scale of election fraud are “wildly exaggerated”. As Professor Nyhan says, “the claim about Trump winning was of course spread widely by him and via the mainstream media.” The extent of this misperception would have been very different if only ordinary people had shared those claims on their social media channels.
Because Trump was using every channel to promote these baseless claims, “people still would have likely learned about and believed [it] without the help of social media,” Allen suggests. Would the assault on the US Capitol have happened without Trump’s encouragement? The evidence suggests it’s very unlikely.
Evidence shows that millions of Americans hold false beliefs, including in the arena of politics. Reporters have also shown that overseas actors are trying to influence what people believe and that AI has become a major campaigning tool in elections. But the evidence from previous elections suggests such fears about the actual impact of online misinformation are largely unwarranted, at least for now.
Being interested in the pursuit of truth inevitably means journalists are concerned when they repeatedly see misinformation themselves and they may want to sound the alarm in their coverage. But it is in large part due to the work of journalists, fact-checkers and the dominance of mainstream news outlets that online misinformation does not gain as much of a foothold as we might think. The US election might be more immune to these false narratives than many outspoken pundits say.