Rate, Subscribe, and Libel: A New Approach to Deterring Conspiracy Theories on Social Media – Georgetown Public Policy Review

Conspiracy theories are spreading on social media platforms, and algorithms that promote engagement may contribute to the problem. This article proposes a public policy option that would give social media companies and their users reasonable incentives to restrain our natural attraction to conspiracy theories.

Under this proposed policy, a new kind of defamation lawsuit targeting online falsehoods could compel social media companies to suspend users who spread misinformation. The resulting incentives should lead these companies to diminish engagement with conspiracy theories and should encourage their users to be careful about sharing such falsehoods. While standard defamation lawsuits award money damages to remedy the harm caused by communicating falsehoods, this new type of defamation lawsuit would focus on deterring social media companies and their users from spreading misinformation.

A Different Kind of Political Conspiracy Theory

At the height of the 2020 election cycle, an Indiana University study found that 43.6% of surveyed Americans believed the QAnon conspiracy theory was “likely true” or “definitely true.” This finding should have been worrisome at the time: a redacted 2019 FBI memo judged such “fringe” political conspiracy theories “very likely” to motivate violence by domestic extremists. After seeing QAnon play a prominent role in the January 6th attack on the U.S. Capitol, social scientists and policymakers are warning against the future growth of political conspiracy theories.

However, QAnon differs from other American political conspiracy theories, such as those claiming the moon landing was faked or that 9/11 was an “inside job.” Those debunked theories try to explain large, unexpected events as conspiracies arguably consistent with the political motivations of their alleged architects (e.g., winning the Cold War or justifying wars in the Middle East). In contrast, the QAnon theory holds that Donald Trump’s opponents form a cabal of Satan-worshiping, cannibalistic pedophiles. QAnon appears to be less an attempt to explain abnormal events and more a product of social media users vying for attention with inflammatory assertions. And, in fairness, some anti-Trump social media users also suggest and spread conspiracy theories.

The origins of QAnon and other conspiracy theories may reveal something about the current incentives in social media. Rabbit holes of misinformation pandering to human biases drive engagement with these platforms. Content suggested by algorithms attracts users and binds them together into unchallenged information bubbles. As a result, false news tends to spread more rapidly than real news. 

The Link Between Social Media and the New Conspiracy Theories

The dissemination of a unique type of conspiracy theory may be linked to the spread of a certain model of social media. Conspiracy theories arise because people search for meaning in patterns, finding causation and intention behind adverse, unexpected events that in reality result from an accumulation of small or random causes. These ideas are seductive because they appeal to our biases and make people feel special or in control. Though anyone is susceptible to such thinking, conspiracy theories particularly appeal to people with unmet psychological needs who feel anxious, isolated, or powerless.

An examination of social media platforms reveals their many connections with the rise of conspiracy theories. The core business model of the major social media companies involves attracting users’ attention and thereby attracting funding from advertisers. To grab that attention, these companies use statistical algorithms to find and present content that maximizes engagement. Because these platforms tend to increase loneliness and anxiety, they may also heighten users’ susceptibility to conspiracy thinking. And because conspiracy theories thrive on biases and in insulated information-feedback communities, policy analysts have accused social media companies of using the attention-grabbing nature of these ideas to drive engagement and thus profits. Peer-reviewed social science research has repeatedly found a correlation between using social media for information and believing in conspiracy theories.

Especially because the new conspiracy theories threaten our democracy, policymakers should act to shift the incentives of social media companies away from amplifying these ideas.

The Policy Solution: A New Type of Defamation Action

Policymakers on both sides of the political aisle are calling for something to be done about social media companies functioning as platforms for conspiracy theories. Some call for these companies to be held accountable or sued, but the remedies being sought are not precisely defined. Traditional defamation lawsuits require an identifiable plaintiff (absent in QAnon’s unnamed members of the “deep state”), and social media companies enjoy protection against civil liability under 47 U.S.C. § 230. Other solutions, such as public education campaigns and ad-hoc moderation of social media platforms, are slow and resource-intensive.

Suspending users, as Twitter and Facebook have done with QAnon theorists, has proved somewhat effective in limiting the spread of conspiracy theories. However, this post-hoc approach often comes too late, applied only after the groups have become identifiable threats, rather than deterring their formation in the first place. Because social media companies depend on large numbers of users to attract advertisers, a policy requiring the suspension of users who share provably false information presents better incentives.

As a better solution, “online public defamation” could serve as a new type of legal action against online misinformation. If this policy became law, a plaintiff could file a lawsuit claiming that a social media company has allowed users to spread a specific piece of misinformation. The fact-finder would then determine whether a significant number of users have spread the falsehood while presenting it as objective reality. If the claim meets the civil standard of proof, the court would order the platform to temporarily suspend users who spread the falsehood more than a certain number of times.

In contrast to online public defamation, a traditional defamation action seeks to prove that a false statement was presented as fact, published to others, and resulted in measurable monetary damages. Because the purpose of existing defamation actions is to compensate for monetary harm resulting from false statements, and not to punish people who publish false statements, plaintiffs must prove an identifiable injury. The proposed online public defamation action is different in that it does not require proof of monetary damage suffered by an identifiable plaintiff. Instead, it prevents social media companies and their users from benefiting—in terms of receiving attention or profits—from the spread of provably false information.

Policy Implications

This policy would improve the incentives facing social media companies and their users. Instead of benefiting from rabbit holes of misinformation, social media companies would need to proactively prevent this activity so that their users are not suspended. Effective preventative measures could include tweaking content-recommendation algorithms to deter the dissemination of false information and reducing the amplification of stories with unusual velocity spikes until they can be verified. Furthermore, users would be more motivated to research the validity of a story before sharing it, lest they risk suspension of their profiles.
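To make the “velocity spike” idea concrete, below is a minimal, hypothetical sketch of how a platform might throttle the algorithmic reach of fast-spreading, unverified posts. All names and thresholds (Post, SPIKE_RATIO, AMPLIFICATION_CAP) are illustrative assumptions, not any platform’s actual system.

```python
from dataclasses import dataclass

SPIKE_RATIO = 5.0        # share-rate growth treated as an abnormal spike (assumed threshold)
AMPLIFICATION_CAP = 0.1  # fraction of normal algorithmic reach while unverified (assumed)


@dataclass
class Post:
    shares_last_hour: int
    shares_prev_hour: int
    fact_checked: bool = False


def amplification_factor(post: Post) -> float:
    """Return the weight a ranking algorithm might apply to this post.

    Unverified posts whose share rate jumps abnormally fast are throttled
    until a fact check clears them; everything else keeps full weight.
    """
    baseline = max(post.shares_prev_hour, 1)  # avoid division by zero
    velocity_ratio = post.shares_last_hour / baseline
    if velocity_ratio >= SPIKE_RATIO and not post.fact_checked:
        return AMPLIFICATION_CAP
    return 1.0


# Example: a post jumping from 40 to 800 shares per hour is cut to 10% reach
# until verified.
print(amplification_factor(Post(shares_last_hour=800, shares_prev_hour=40)))
```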

This particular social media defamation action would allow our legal institutions to serve their role in balancing free speech against the harms of false statements and ensuring our democracy operates on a common set of facts. Existing incentives allow some social media companies and users to attract attention by spreading conspiracy theories. As recent events indicate, this dynamic holds grave consequences if left unchecked.

Photo by Blink O’fanaye.
