Facebook, COVID-19, and Disinformation: The Paradox of Unlimited Global Information
Results from Q2 2020, shared in July, show that Facebook has 2.7 billion monthly active users sending 100 billion-plus messages each day, making it the largest social network in the world. This year, with the COVID-19 crisis and presidential politicking, Facebook has been in the news quite a bit. The social media site is available in more than 100 languages across the globe, with 300,000-plus users accessing translations. The company’s mission is to “[g]ive people the power to build community and bring the world closer together.” However, it faces increasing criticism and scrutiny.
In April 2018 testimony before Congress, CEO Mark Zuckerberg apologized for “a big mistake,” which “was in fact several big mistakes, including exposing up to 87 million users’ data to Trump campaign consultants (a privacy breach for which Facebook would be fined $5 billion); concealing an organized Russian effort to interfere in the 2016 election; and continually failing to regulate genocidal hate speech,” The New York Times notes.
As CNN reported in September, Facebook “has allowed political advertisers to target hundreds of misleading ads about Joe Biden and the US Postal Service to swing-state voters ranging from Florida to Wisconsin in recent weeks, in an apparent failure to enforce its own platform rules less than two months before Election Day.” CNN continued, “The ads containing false or misleading information, primarily by a pro-Republican super PAC led by former Trump administration officials, have collectively been viewed more than 10 million times and some of the ads remain active on the service, according to an analysis of Facebook’s ad transparency data by the activist group Avaaz.”
CNN went on to describe the company’s perspective as allowing “politicians to make false claims in their ads—arguing that voters deserve an unfiltered view of what candidates and elected officials say” while “advertisements by super PACs and other independent groups are subject to the company’s policies on misinformation.”
Beyond its problems with election misinformation, Facebook continues to chip away at other glaring misinformation problems, notably those related to COVID-19 and climate change. Even so, the company does not want to censor or remove false posts. In September, for example, Facebook launched its new Climate Science Information Center, which, like similar efforts regarding COVID-19 misinformation and political comments or ads, is intended to address “misleading and false content related to global warming,” as a Vanity Fair article says, “with hard science more or less lumped together with lies, industry statements, and conspiracy theories.”
Facebook describes the Climate Science Information Center as meeting the company’s need to defend science: “Climate change is real. The science is unambiguous and the need to act grows more urgent by the day. As a global company that connects more than 3 billion people across our apps every month, we understand the responsibility Facebook has and we want to make a real difference.”
The Vanity Fair article notes that “while Facebook has appeared unusually willing to combat pandemic misinformation by removing or labeling inaccurate posts—though it’s not clear its efforts have been effective—it seems Mark Zuckerberg and Co. plan to continue their typical hands-off approach to bunk climate claims outside the information center. While it will label misleading content, it’ll still largely avoid removing false posts.”
COMMUNICATIONS PLATFORM OR RUMORMONGERS?
As a communications medium, Facebook must find a way to mix facts with the opinions of its members while continuing to grow and develop its base. That balancing act forces the company to moderate the very free speech and open communication that are the hallmark of its mission.
The company’s “fact-checking rules dictate that pages can have their reach and advertising limited on the platform if they repeatedly spread information deemed inaccurate by its fact-checking partners. The company operates on a ‘strike’ basis, meaning a page can post inaccurate information and receive a one-strike warning before the platform takes action. Two strikes in 90 days places an account into ‘repeat offender’ status, which can lead to a reduction in distribution of the account’s content and a temporary block on advertising on the platform.”
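To make the mechanics of that policy concrete, here is a minimal sketch in Python of the two-strikes-in-90-days rule as the company describes it. The names (`Page`, `record_strike`) and the structure are invented for illustration; Facebook’s actual enforcement system is not public.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Two strikes inside this window trigger "repeat offender" status.
REPEAT_OFFENDER_WINDOW = timedelta(days=90)

@dataclass
class Page:
    name: str
    strikes: list = field(default_factory=list)  # timestamps of fact-check strikes

    def record_strike(self, when: datetime) -> str:
        """Record a post rated false by a fact-checking partner."""
        self.strikes.append(when)
        recent = [t for t in self.strikes if when - t <= REPEAT_OFFENDER_WINDOW]
        if len(recent) >= 2:
            # Repeat offender: reduced distribution, temporary ad block.
            return "repeat offender: reduce distribution, block advertising"
        return "warning issued"  # first strike inside the window

# Example: two strikes 30 days apart trigger repeat-offender status.
page = Page("Example Page")
print(page.record_strike(datetime(2020, 9, 1)))   # -> warning issued
print(page.record_strike(datetime(2020, 10, 1)))  # -> repeat offender: ...
```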
Regarding climate misinformation, Facebook explains, “As with all types of claims debunked by our fact-checkers, we reduce the distribution of these posts in News Feed and apply a warning label on top of these posts both on Facebook and Instagram so people understand that the content has been rated false.”
RESEARCHERS FIND PROBLEMS WITH SOCIAL MEDIA TRUTHFULNESS
An August 2020 article in The American Journal of Tropical Medicine and Hygiene suggests that a single piece of coronavirus misinformation led to as many as 800 deaths. The authors “followed and examined COVID-19–related rumors, stigma, and conspiracy theories circulating on online platforms, including fact-checking agency websites, Facebook, Twitter, and online newspapers, and their impacts on public health,” concluding, “Misinformation fueled by rumors, stigma, and conspiracy theories can have potentially serious implications on the individual and community if prioritized over evidence-based guidelines.”
A major study by Avaaz found clear evidence that “health misinformation is a global public health threat.” Its research shows that “global health misinformation spreading networks on Facebook … reached an estimated 3.8 billion views in the last year,” yet “[o]nly 16% of all health misinformation analysed had a warning label from Facebook. Despite their content being fact-checked, the other 84% of articles and posts sampled in this report remain online without warnings.”
In an effort to contain misinformation, Facebook began using independent fact-checkers in 2016 to flag questionable posts, including one claiming that abortion is never medically necessary. That post’s author “quickly launched a petition protesting what she alleged was bias by Facebook’s fact-checking partner, a nonprofit called Health Feedback. Soon, four Republican senators, including Josh Hawley of Missouri and Ted Cruz of Texas, wrote a letter to Zuckerberg condemning what they called a ‘pattern of censorship.’ … Soon, the fact-check labels were gone.” As two of the fact-checkers reflect, “promulgating misinformation about when abortion is medically necessary is dangerous.” However, Facebook never reinstated the fact-check labels on that particular post, leaving fact-checkers to wonder about their value and role.
PREPARING FOR THE NEXT PANDEMIC?
“The spread of false and malicious content about the coronavirus has been a stark reminder of the uphill battle fought by researchers and internet companies,” The New York Times reported in March. “Security researchers have even found that hackers were setting up threadbare websites that claimed to have information about the coronavirus. The sites were actually digital traps, aimed at stealing personal data or breaking into the devices of people who landed on them.” The article continues, “Even when the companies are determined to protect the truth, they are often outgunned and outwitted by the internet’s liars and thieves.”
A research article in Nature Human Behaviour states the following:
Social networks can amplify the spread of behaviours that are both harmful and beneficial during an epidemic, and these effects may spread through the network to friends, friends’ friends and even friends’ friends’ friends. The virus itself spreads from person to person, and since people centrally located in networks come into contact with more people, they are often among the first to be infected. But these very same central people may be instrumental in slowing the disease because they can spread positive interventions like hand washing and physical distancing by demonstrating them to a wide range of people. Some research suggests that a larger proportion of interventions can come not from direct effects on people who receive the intervention, but from indirect effects on their social contacts who copied the behaviour. We may therefore leverage the impact of any behaviour change effort by targeting well-connected individuals and making their behaviour change visible and salient to others.
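The targeting strategy the researchers describe can be illustrated with a short sketch using Python and the networkx library. The graph, the 5% seeding threshold, and the copying probability here are all invented for illustration; the point is simply that seeding the best-connected people lets one round of social copying reach far more of the network.

```python
import random
import networkx as nx

random.seed(0)

# Toy social network; in practice this would be an observed contact graph.
# Barabasi-Albert graphs have hubs, much like real social networks.
G = nx.barabasi_albert_graph(n=200, m=3, seed=42)

# Degree centrality: the fraction of the network each person is connected to.
centrality = nx.degree_centrality(G)

# Seed a visible behavior-change intervention (e.g., hand washing)
# with the best-connected 5% of individuals.
k = max(1, int(0.05 * G.number_of_nodes()))
seeds = sorted(centrality, key=centrality.get, reverse=True)[:k]

# One round of social copying: each contact of a seeded adopter copies
# the behavior with probability p_copy (the indirect effect above).
p_copy = 0.3  # illustrative only
adopters = set(seeds)
for person in seeds:
    for contact in G.neighbors(person):
        if random.random() < p_copy:
            adopters.add(contact)

print(f"Seeded {k} well-connected people; "
      f"{len(adopters)} of {G.number_of_nodes()} adopt after one round of copying")
```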
In February 2020, the World Health Organization’s Tedros Adhanom Ghebreyesus declared that COVID-19 is not the only public health emergency the world is facing: We are also suffering from an “infodemic” of fake medical news. Controlling the pandemic may prove easier than controlling the global communication about it. In October 2020, Kang-Xing Jin, Facebook’s head of health, blogged, “Facebook is supporting the global public health community’s work to keep people safe and informed during the coronavirus public health crisis.”
According to an article on Sprinklr, there were more than 19 million mentions of COVID-19 on social media, blogs, and online news sites across the globe on a single day (March 11, 2020). “It’s clear that coronavirus is the first global pandemic that is unfolding on social media with unprecedented volumes of conversations happening every second,” it notes.
The research article “Who to Trust on Social Media: How Opinion Leaders and Seekers Avoid Disinformation and Echo Chambers” says, “As trust in news media and social media dwindles and fears of disinformation and echo chambers spread, individuals need to find ways to access and assess reliable and trustworthy information.” Information professionals can show them a better way.