Will Conspiracy Gain Momentum With Less Moderation?

As conspiracy theories, fake news, and concerning waves of disinformation started to typify the COVID-19 pandemic, social media platforms stepped up with a wave of moderation and warnings, seeking to help the public distinguish fact from fiction. While the response has been divided, with some decrying the “nanny state” and others arguing that it doesn’t go far enough, it is undeniable that the presence of these warnings has had at least some impact on slowing the spread of targeted misinformation and disinformation. Now, these same platforms are rolling back their moderation systems in a move they say will support crowdsourced information and stop the “mission creep” the platforms had been accused of.

What are the changes?

Meta, which owns the Facebook and Instagram platforms, has made the most recent announcement. The company had previously introduced independent fact-checkers to verify the validity of content but will now replace them with “community notes,” where commenting on the accuracy of a post is left to users. Meta says the changes will remove the “political bias” of the third-party fact-checkers and return the platforms to Meta’s roots of “free expression.”

Aligned with the new Trump administration’s policies on free speech, Meta platforms look set to embrace a significantly wider range of political opinions, allowing users to take a “personalized approach” to politics on their feeds. The social media giant sees this as making way for a considerably broader spread of opinions, with individuals free to consume whatever they see fit. While many support the changes, there is also widespread concern that more freedom, combined with algorithms designed to show you more of what you “like,” will inevitably create an echo chamber that goes on to fuel political and social extremism.

Does it matter?

In theory, no. In practice, most definitely.

If we assume a society with attitudes of honesty, open-mindedness, tolerance, and truth-telling, then there is the potential that a system like this can effectively and accurately crowdsource fact-checking. There have even been some test instances where the new system has been effective. The problem comes, however, not only from the divisiveness of individual opinion and the regularity with which opinion is presented as fact but, more pressingly, from threat actors who deliberately exploit these mechanisms to divide and confuse.

This exploitation is the very reason that systems like this fail, particularly when the algorithms are designed not to present a balanced or rational viewpoint but to respond to the user’s own experience and engagement with the platform, showing them more of what they are reading most. You can explore more on this concept in our post “How Algorithms Change How We Think.”
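
To make that feedback loop concrete, here is a deliberately simplified sketch of an engagement-weighted feed ranker. It is a hypothetical illustration, not any platform’s actual code, and every name and number in it is invented, but it shows the core dynamic: engagement raises a topic’s weight, and that weight alone decides what surfaces next.

```python
from collections import defaultdict

# Hypothetical sketch of an engagement-weighted ranker; not any real
# platform's algorithm. Note that accuracy never enters the score --
# only past engagement does, which is how the echo chamber forms.

def record_engagement(interests, topic, weight=1.0):
    """Each click/like/dwell nudges the user's weight for that topic up."""
    interests[topic] += weight

def rank_feed(posts, interests):
    """Order candidate posts purely by the user's accumulated topic weights."""
    return sorted(posts, key=lambda post: interests[post["topic"]], reverse=True)

interests = defaultdict(float)
posts = [
    {"id": 1, "topic": "miracle-cures"},
    {"id": 2, "topic": "local-news"},
    {"id": 3, "topic": "miracle-cures"},
]

# One engagement with a dubious topic...
record_engagement(interests, "miracle-cures")

# ...and similar posts now crowd the top of the feed.
print([post["id"] for post in rank_feed(posts, interests)])  # [1, 3, 2]
```

Each pass through this loop strengthens the same weights, which is why the narrowing compounds over time rather than correcting itself.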

Oxford Dictionaries said it best in 2016 when they chose “post-truth” as their word of the year, referring to the era we are in as one where objective facts have lost their significance in modern society and politics. It is representative of the reality that facts don’t matter as much in a world pursuing engagement and likes at the cost of tolerance and truth.

Looking at the potential impact, what is clear is that while, in theory, individuals should employ critical thinking skills, our brains are hardwired by default to fail at objectivity. This is because of:

1. Truth Bias refers to our tendency to believe that people are telling us the truth, even when we have been told that what they said is a lie. The term was originally coined in 1984 by McCornack and Parks during the development of their model of deception detection, and the phenomenon is also discussed in the scientific literature as metacognitive myopia. The field of study that specifically explores truth bias in the context of communication is “Truth Default Theory.”

A study by Pantazi (Oxford University), Klein, and Kissine (Free University of Brussels) highlighted just how powerful truth bias is. They conducted an experiment based on a simulated jury and found that when people received negative information about the defendant, they were influenced by it, even when they had been explicitly told it was not true. Even more strikingly, they identified that memory also plays a role: participants misremembered false evidence as being true simply from having read or heard it.

2. The Illusory Truth Effect — coined in 1977 by Hasher et al. in their research paper about the accuracy of spotting true and false statements, the illusory truth effect, also known as the illusion of truth, highlights that the more times we encounter a piece of information, the more valid and entrenched it becomes. This is terrible for combatting misinformation but ideal for serving the algorithm.

For the most part, this comes down to “familiarity,” whereby our heuristic processing favors things that feel familiar in order to facilitate quick and efficient decision-making. It’s for exactly this reason that propaganda machines are so effective, as the toy simulation below illustrates.
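
As a back-of-the-envelope illustration of that familiarity effect, the sketch below simply assumes perceived truth rises by a fixed amount with every exposure. The starting plausibility and the per-exposure boost are invented numbers for illustration, not measurements from the literature.

```python
# Toy model of the illusory truth effect: perceived truth is assumed to
# grow by a fixed boost per exposure, whether or not the claim is true.
# The 0.2 baseline and 0.1 boost are illustrative values only.

def perceived_truth(base_plausibility, exposures, boost=0.1):
    """Crude familiarity model: each repeat adds a fixed boost, capped at 1.0."""
    return min(1.0, base_plausibility + boost * exposures)

implausible_claim = 0.2  # starts out unconvincing
for n in (0, 3, 8):
    print(f"after {n} exposures: {perceived_truth(implausible_claim, n):.2f}")
# after 0 exposures: 0.20
# after 3 exposures: 0.50
# after 8 exposures: 1.00
```

Under even this crude model, repetition alone carries a false claim from implausible to fully “believed,” with no new evidence required.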

3. Confirmation Bias refers to an individual’s tendency to search for, interpret, favor, and/or recall information in a way that confirms or supports their existing views or values. This process is rarely conscious but stems from a physiological predisposition to favor the familiar.

There are several interesting studies evaluating the strength of confirmation bias. Shafir (1993) demonstrated how confirmation bias can be manipulated to steer people toward specific outcomes based on the language used, and Westen et al. (2006) conducted an experiment based on political leaning and found that participants were more likely to interpret a candidate’s statements as contradictory if they politically opposed that candidate.

4. Our Poor Understanding of Statistics — many misinformation manipulations rely on psychologically triggering language and techniques to make them clickable and “sticky,” as well as infinitely sharable. Statistics—or, more accurately, big percentages—are one of these techniques. In Western education, when we learn statistics, we also learn percentages, and we often see big numbers in those percentages; 50 percent and 60 percent feel high, and 80 percent or 90 percent feel almost factual.

That’s fine when we’re talking simple, descriptive statistics, such as 70 percent of people having a certain characteristic, but it becomes a lot harder when the same percentages describe a positive or negative change. For example, if something increases by 80 percent, we automatically assume the resulting number is large. But an 80 percent increase on a 1-in-100,000 chance (0.00001) only takes it to 0.000018, or about 1.8 in 100,000, an absolute change of just 0.000008. The number is actually tiny, but the 80 percent increase seems mighty intimidating, as the short calculation below shows.
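
Here is that example worked through in a few lines, making the gap between the relative headline figure and the absolute change explicit:

```python
# The headline "80 percent increase" versus the actual absolute change.

baseline = 1 / 100_000          # 0.00001, i.e. a 1-in-100,000 chance
relative_increase = 0.80        # the scary-sounding headline figure

new_risk = baseline * (1 + relative_increase)
absolute_change = new_risk - baseline

print(f"new risk:        {new_risk:.6f}  (1.8 in 100,000)")
print(f"absolute change: {absolute_change:.6f}  (0.8 in 100,000)")
```

The relative figure is genuinely 80 percent, yet the absolute change is less than one in 100,000, which is why quoting percentages without the baseline is such an effective manipulation.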

People and organizations are worried, and it is easy to see why. Amnesty International is one of many organizations raising concerns over the ultimate impact of these changes. There is little doubt that the changes will, at minimum, fuel hoaxes and conspiracy theories and, more detrimentally, political and social extremism, too.

We leave you with a final thought raised by the research of Ophir et al. at Stanford University (2009). They found that heavy media multitaskers have an overall diminished capacity for critical thinking, applying less and less dedicated attention to any single task—the antithesis of critical thinking, if you will.
