How to stop the spread of conspiracy theories and build societal resilience against fake news | Media@LSE – EUROPP

The pandemic and associated lockdowns have given rise to an acceleration in ‘fake news’ around the world over the last year. The phenomenon of mis- and disinformation takes root, evolves and proliferates, and can cause real-world harm. In this blog post, Pratik Dattani, of consulting group Economic Policy Group, delves deeper into how public paranoia helps conspiracy theories go viral through narrative localisation, and explores the measures that law enforcement agencies can take to prevent the spread of fake news.


Creating the ‘feeling’ that something is fishy

Mis- and disinformation stories found online are often particularly compelling. A little paranoia and urgency helps them go viral. They tend to suggest a sense of higher purpose, and that their authors somehow have access to information that the mainstream does not. Because it is easy to stay anonymous and create a new identity for oneself on the internet, such stories – whether created by micro-influencers or by coordinated state activity – are consistently self-referential. In other words, they all link to each other, boosting individual pages’ credibility on search engines and making it appear to a new visitor that the articles are well researched.

For example, an article entitled Coronavirus Bioweapon – How China Stole Coronavirus From Canada and Weaponized It, run by a little-known Indian conspiracy website – Great Game India – with alleged links to Russian disinformation networks, went viral in February 2020. It achieved this by distributing the speculative Covid-19 origin story across a variety of portals, publishing numerous articles implying or alluding to the conspiracy before posting the piece in question. It also inserted backlinks to older articles and published around 30 supplementary articles to construct a timeline for the conspiracy. The theory was then popularised globally by the right-wing blog Zero Hedge. Anyone doing their own fact-checking through a search engine would see similar stories on multiple seemingly unconnected platforms and therefore be more likely to be convinced.

It is the argument that “if so many people think something is fishy, they can’t all be wrong” which seems to convince many of the ‘legitimacy’ of the conspiracy theory. This points to the need for expert oversight to keep track of a volatile information ecosystem.
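The self-referential cross-linking described above can be illustrated with a small link-graph sketch. The snippet below is purely hypothetical (the page names and link structure are invented, not Great Game India’s actual links); it simply shows that a cluster of pages that all link to one another earns higher PageRank-style scores than pages with only a single organic inbound link, which is roughly the signal that makes such stories look well researched to a casual searcher.

```python
# Illustrative only: a made-up link graph showing how a cluster of
# mutually linking pages inflates each member's PageRank-style score.
# Node names and edge structure are hypothetical, not real link data.
import networkx as nx

g = nx.DiGraph()

# A "conspiracy cluster" of five pages that all link to one another.
cluster = [f"conspiracy_page_{i}" for i in range(5)]
for a in cluster:
    for b in cluster:
        if a != b:
            g.add_edge(a, b)

# Five standalone pages that each receive a single organic inbound link.
for i in range(5):
    g.add_edge(f"organic_referrer_{i}", f"standalone_page_{i}")

scores = nx.pagerank(g, alpha=0.85)
for node, score in sorted(scores.items(), key=lambda kv: -kv[1])[:7]:
    print(f"{node:22s} {score:.3f}")
# The cluster pages consistently outrank the standalone pages, even though
# none of them has any genuinely independent endorsement.
```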


Drawing on the analogy of a bee, when human ‘pollinators’ take such content from one social media platform and post it on others such as WhatsApp, Parler, Telegram, MeWe, Gab, BitChute, BrandNewTube, Rumble and the dark web, it can lead to a more pernicious evolution of the narrative. Groups on such platforms, often with hundreds of members, become echo chambers congenial to the spread of misinformation: potential propagators ready to embrace content supporting their common worldview.

For example, in the UK the 5G coronavirus theory, which claimed that 5G masts caused Covid-19, dwarfed the bioweapon theory. The 5G theory spiked on the same day that UK lockdown measures were announced; arsonists burnt down 80 mobile phone masts and recorded themselves harassing telecoms engineers. According to AI-driven fact-checker Logically.ai, one of the leading sources of disinformation on Covid-19 vaccines in the UK is former banker Brian Rose, who started a lucrative fundraising campaign to build a “digital freedom platform”, raising over a million dollars. He then ran as a candidate in the 2021 Mayor of London election, but lost.

In the US, the Center for Countering Digital Hate (CCDH) analysed more than 812,000 Facebook and Twitter vaccine-related posts and found that 65% of anti-vaccine posts came from just twelve people, including Robert F Kennedy Jr, a nephew of former US President John F Kennedy. Current legislators like Marjorie Taylor Greene have consistently amplified this content. When those in positions of influence amplify conspiracy theories, it lends legitimacy and helps them go viral.
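The CCDH finding is, at its core, a concentration measurement: group posts by originating account and compute how much of the total the most prolific accounts produced. The snippet below sketches that arithmetic on invented data; it is not CCDH’s dataset or methodology.

```python
# Hypothetical data and a generic concentration calculation; this is not
# CCDH's dataset or methodology, only an illustration of the arithmetic.
from collections import Counter

posts = [
    {"author": "account_a", "text": "..."},
    {"author": "account_b", "text": "..."},
    {"author": "account_a", "text": "..."},
    # ... in practice, hundreds of thousands of collected posts
]

by_author = Counter(p["author"] for p in posts)
top_12 = by_author.most_common(12)
share = sum(count for _, count in top_12) / len(posts)
print(f"The most prolific accounts produced {share:.0%} of the posts")
```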

Conspiracy theories morph and adapt through narrative localisation

When conspiracy theories adapt to survive in different environments, this is called narrative localisation. Conspiracy theories tend to localise their narratives by discarding irrelevant elements for each context and adding in more appropriate ones.

Demographic localisation takes advantage of existing emotive subjects. For example, recent research from BBC Monitoring found considerable overlap between anti-vaccine and anti-establishment sentiment in France. The number of followers of pages sharing extreme anti-vaccine content in French grew from 3.2m to nearly 4.1m last year.

Indian lawyer Vibhor Anand adapted a US-based QAnon conspiracy to accuse Indian movie star Salman Khan of being involved in child trafficking and keeping the trafficked children at his farmhouse. The conjecture drew on an item circulating elsewhere on social media, itself based on the fictionalisation of the psychotropic drug adrenochrome in the book Fear and Loathing in Las Vegas. He accused Khan (and even Queen Elizabeth II and Prince Philip) of being ‘consumers of adrenochrome’, which he claimed can only be derived from ‘the brains of children subjected to extreme degrees of torture’. His only ‘proof’ – a red eye. His post went viral.

According to Logically, while covering the case of the death by suicide of actor Sushant Singh Rajput, Indian television news channels indulged in speculative reporting. Everyone “felt” something had to have gone wrong, which provided a conducive environment for the spread of problematic content. Khan was accused of bullying Rajput, and this is believed to have sparked the spread of disinformation perpetrated by Anand. Logically identified coordinated inauthentic behaviour coupled with bot-like activity that amplified the hashtags #justiceforsushant and #cbiforssr (a demand for an investigation into the case by India’s premier agency, the Central Bureau of Investigation). In other words, the public’s feeling of “wrongdoing” in the case was whipped up by a ratings-hungry media and paid online activity.
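Coordinated amplification of this kind is often surfaced by looking for distinct accounts posting near-identical text on the same hashtag within a short time window. The sketch below is a deliberately simplified heuristic on invented posts, with made-up thresholds; it is not Logically’s actual detection pipeline.

```python
# Simplified heuristic for spotting bot-like hashtag amplification:
# many accounts posting near-duplicate text in a short time window.
# Hypothetical data and thresholds; not Logically's actual pipeline.
from difflib import SequenceMatcher

posts = [
    {"account": "user1", "minute": 0,   "text": "#justiceforsushant demand CBI probe now"},
    {"account": "user2", "minute": 1,   "text": "#justiceforsushant demand CBI probe now!"},
    {"account": "user3", "minute": 1,   "text": "#justiceforsushant demand cbi probe now"},
    {"account": "user4", "minute": 240, "text": "Thinking of his family #justiceforsushant"},
]

def near_duplicates(posts, window_minutes=10, similarity=0.9):
    """Find pairs of distinct accounts posting near-identical text close together in time."""
    flagged = []
    for i, a in enumerate(posts):
        for b in posts[i + 1:]:
            close_in_time = abs(a["minute"] - b["minute"]) <= window_minutes
            alike = SequenceMatcher(None, a["text"].lower(), b["text"].lower()).ratio() >= similarity
            if close_in_time and alike and a["account"] != b["account"]:
                flagged.append((a["account"], b["account"], a["text"]))
    return flagged

for pair in near_duplicates(posts):
    print(pair)  # pairs of distinct accounts pushing near-identical text
```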

This narrative localisation is driven by the emergence of disinformation micro-influencers: users who have built a significant following among like-minded people. They are not constrained by Facebook’s content policy for verified pages, so they can carve out a niche for themselves. If Facebook’s content moderation flags an item of their content as problematic, the response is: “see, big tech and media are muzzling us; they don’t want the truth to get out.” In the case of Anand, his suspension from Twitter and arrest by the Mumbai police for spreading conspiracy theories only turned him into a “hero” and a “brave warrior”.

Building societal resilience against fake news

In low- or middle-income countries, tackling fake news at a wider scale is difficult. First, the level of digital literacy, particularly in rural areas, remains low. Second, the average citizen is still relatively new to the overabundance of unreliable information online. Third, the institutional mechanisms to prevent fake news from spreading are not yet sufficiently developed.

In India, there is a growing understanding amongst law enforcement agencies that external support is required to prevent online harms turning into real-world harm. I’ve worked with law enforcement agencies and governments across the world on building some of that institutional capacity. Some of my recommendations to them on disrupting the spread of conspiracy theories and building societal resilience against fake news include:

  • Combine technology with human intelligence: The menace of fake news can’t be completely resolved, but technology can help supplement and scale up human efforts (see the sketch after this list).
  • Educate citizens: Build public awareness of fake news and empower everyone, from individual citizens to national governments, to identify and disarm damaging and misleading information being shared online.
  • Build knowledge of available countermeasures amongst government and law enforcement: Police action is just one such option. Others include takedown requests, terms of service violations, detailed investigation or simply ongoing monitoring.
  • Balance freedom of expression and privacy rights with security requirements: This may be contextual to each country, but it is important to have a society-wide conversation on this.
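As flagged in the first recommendation, one way technology can supplement human effort is a simple triage layer: an automated risk score orders incoming content, and only the highest-scoring items reach a human analyst. The sketch below assumes a hypothetical classifier score and thresholds; it is not any specific agency’s or vendor’s system.

```python
# Illustrative human-in-the-loop triage: an automated score prioritises
# content, a human reviewer makes the final call. Hypothetical scoring
# function and thresholds; not a specific agency's or vendor's system.
from dataclasses import dataclass

@dataclass
class Post:
    url: str
    text: str
    risk_score: float  # e.g. from a trained classifier, 0.0 to 1.0

def triage(posts, review_threshold=0.6, monitor_threshold=0.3):
    """Split posts into a human review queue, a monitoring list, and ignores."""
    review_queue, monitor, ignore = [], [], []
    for p in sorted(posts, key=lambda p: p.risk_score, reverse=True):
        if p.risk_score >= review_threshold:
            review_queue.append(p)      # escalate to a human analyst
        elif p.risk_score >= monitor_threshold:
            monitor.append(p)           # keep watching, no action yet
        else:
            ignore.append(p)
    return review_queue, monitor, ignore

posts = [
    Post("https://example.org/a", "5G masts cause the virus, burn them down", 0.92),
    Post("https://example.org/b", "vaccine rollout dates announced", 0.05),
]
queue, monitor, ignore = triage(posts)
print([p.url for p in queue])  # only the high-risk item reaches a human reviewer
```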

For example, Harssh Poddar, a police officer of Commandant rank serving in the Indian state of Maharashtra, recently rescued a family from a lynch mob prompted by fake news about kidnapping gangs in the town of Malegaon. He later led his force to take the highest number of actions in the state against fake news and hate speech during the first wave of the Covid-19 pandemic.

As conspiracy theorists, anti-vaxxers and Covid-19 sceptics move from larger social media platforms to more closed-access forums, their views become further radicalised because they are less exposed to opposing perspectives. This makes policing more difficult. Combating it requires collaboration between multiple stakeholders, including government, law enforcement, technology platforms, civil society and the public.

This article gives the views of the author and does not represent the position of the Media@LSE blog, nor of the London School of Economics and Political Science.
