The coronavirus ‘infodemic’: truth and conspiracy online

The internet has always loved a conspiracy theory. The Earth is flat. Avril Lavigne was secretly replaced with a clone, à la Paul McCartney. Elvis faked his own death and lived out a quiet retirement in Graceland. Drinking disinfectant will kill coronavirus.

Some may be a little more harmful than others. 

The spread of false information online has been high on the government’s radar in recent years. This encompasses both misinformation, which covers inaccuracies shared inadvertently, and disinformation, which is deliberately deceptive and propagated for political, ideological, or financial purposes.

In 2018 and 2019, the House of Commons Digital, Culture, Media and Sport (DCMS) Committee published reports from its inquiry into ‘Disinformation and fake news’, and the ‘Online Harms’ white paper of April 2019 laid the groundwork for the possible appointment of Ofcom as an ‘online harms regulator’.

Following a considerable increase in online misinformation surrounding the Covid-19 pandemic, the House of Commons DCMS Sub-committee on Online Harms and Disinformation launched an inquiry into what the World Health Organization (WHO) has termed the “infodemic” – the surge of misleading information relating specifically to the pandemic.

The committee appealed for evidence from the public, and was able to speak with representatives from Facebook, Google, YouTube and Twitter. Evidence was also received from social media platform TikTok, BT, government ministers, front-line health workers in both the UK and US, academics, and Ofcom.

Coronavirus misinformation takes many forms online. Evidence presented to the committee ranged from snake oil ‘treatments’ and phoney accounts of hospital negligence to claims that 5G encourages the spread of the virus and ‘studies’ stating that certain medicines – such as painkillers – are dangerous.


70 – verbal or physical attacks against 5G workers
40 million – impressions on the government’s coronavirus campaign on Google’s homepage
2018 – the year in which the Rapid Response Unit was established in the Cabinet Office to rebut misinformation
2.5 billion – the approximate number of monthly active Facebook users


The false claims are hugely varied, and are passed around by a similarly varied cast of characters, which can make disputing them en masse impossible. 

“Recall [Donald Trump’s] public musings about using bleach or rubbing alcohol or UV light inserted into the body?” wrote Irwin Redlener, director of the National Center for Disaster Preparedness and professor of public health and pediatrics at New York’s Columbia University, in a letter submitted by the campaign group Avaaz containing evidence from front-line health workers.

Among the figures said to have used their public standing to profit from the pandemic was a registered nurse who allegedly mis-sold health products over Facebook to her 11,000 followers.

Not all of the misinformation originates from perpetrators seeking personal gain, though. A significant portion of Covid-19 misinformation appears to benefit no-one directly.

Part of the appeal of these conspiracies is that they alleviate the sense of helplessness felt in the face of a disease with no available cure or vaccine. As portions of the populace lose faith in institutions, conspiracies offer a glimpse at so-called “truth”.

Dr Claire Wardle, director of anti-misinformation project First Draft News, noted that many people are therefore “inadvertently sharing false information believing they are doing the right thing”. 

A GP, who provided evidence anonymously, noted that viral messages spreading on the Facebook-owned WhatsApp were passing fictional accounts of hospital misdemeanours around predominantly Asian communities – already at higher risk from the virus. The messages claimed that hospitals were deliberately infecting patients, or else leaving them to suffer without aid.

“This has led to patients refusing to go into hospital and deterred them from seeking medical help,” they said. “It has also made dealing with family members who have loved ones in hospital difficult to manage as they feel adamant that the doctors are actively trying to harm them or discharging them without treating them.”

The aversion to seeking help from hospitals has been a widespread response to certain fake news stories. 

Cassie Hudson, a student paramedic in the UK writing in the same Avaaz letter, stated that she had seen patients with COPD or cardiac disease, “just refusing to seek suitable levels of care because they’re afraid they’ll die of Covid-19 in the hospital.” 

Away from hospitals, false claims about the role of 5G in the spread of the virus led to almost 70 attacks against EE staff and subcontractors. 

In written evidence submitted to the committee, BT stated that these incidents “included threats to kill and vehicles driven directly at staff”.

The inquiry found that “loss of trust in institutions” has been seen as both “an aim and opportunity for hostile actors”.

It found that during the course of the infodemic, “both state (Russia, China and Iran) and non-state (such as Daesh and the UK and US far right) campaigns have spread false news and malicious content”.

Fact versus fiction 
One of the government’s primary responses to the spread of misinformation has been to simply raise the profile of fact-checked resources. 

According to one announcement, “ensuring public health campaigns are promoted through reliable sources” was a priority, alongside removing and rebutting harmful content on social media. 

All of the social media companies who gave evidence to the sub-committee stressed their attempts to promote WHO-approved information to their users. 

Searches for ‘coronavirus’ on Twitter, TikTok and Google aim to direct users towards accurate information, often by linking directly to WHO, NHS and GOV.UK sources, or through adjusted search results and “other platform-specific features”.

Facebook, for example, launched a dedicated ‘Facts about COVID-19’ section in its Coronavirus Information Centre in July, which seeks to debunk popular claims such as “medical masks cause oxygen deficiency”.

Twitter, TikTok and Facebook also stressed that they had provided the government with pro bono advertising credits, which it could then use to promote verified sources of Covid-19 information, or to push public-health messaging such as the ‘Stay Home, Save Lives’ campaign.

“We have seen more than 3.5 million visits to the Covid resources on NHS and UK government websites from Facebook and Instagram since January as a result of us directing to those resources,” Facebook’s head of product policy and counterterrorism, Monika Bickert, told the committee. 


£1.4m – value of the contract for a social media analytics firm the government hopes can identify ‘key influencers’
90 million – pieces of information flagged as false by Facebook fact-checkers in March and April
70 – pieces of misinformation a week to be refuted by the RRU
11 March – date on which the DCMS committee launched its inquiry and wrote to department secretary Oliver Dowden to express concern


Government public health messaging on Google’s homepage also reportedly reached over 40 million impressions. 

As part of this social media push, the Government Communication Service (GCS) is currently hunting for an analytics firm to help identify ‘key influencers’ to push coronavirus public health messaging.

The contract notice states that the chosen firm will assist with “gathering social media insights to build strategic communications strategies and public information campaigns.” 

Correcting the record 
While the government has been actively pursuing relationships with social media platforms in order to further the visibility of public health messaging, there is concern that simply promoting accurate information is not enough to quell the threat of viral misinformation. 

“I can speak to one person for 10 minutes and have an influence on that one person’s experience of healthcare,” explained advanced paramedic practitioner Thomas Knowles. 

Meanwhile, a pandemic documentary that was circulating on YouTube gathered 40 million views within a 48-hour period. 

“That is 25,000 people in 10 minutes. I cannot speak to 25,000 people in 10 minutes,” Knowles said.  

Offline, solutions such as encouraging better digital literacy have also been floated as potential aspects of a multi-faceted response. 

The government’s ‘Don’t Feed the Beast’ campaign – which has the tagline “Just because a story appears online, doesn’t mean it’s true” – encourages the public to approach online claims through the lens of a five-point ‘SHARE’ checklist, designed to alert people to false information before they spread it further.

Of course, these measures only discourage the misinformation shared accidentally by the public, and not more sinister offerings from scammers and those seeking to capitalise on the fear and confusion caused by the pandemic. 

For that, the government’s Rapid Response Unit (RRU), which was first established in the Cabinet Office in 2018 and is now housed between the Cabinet Office and 10 Downing Street, has been tasked with responding to 70 pieces of misinformation a week. 

During the pandemic, the Counter Disinformation Unit – which brings together existing analytics teams from various departments – has also been launched by DCMS and tasked with tackling coronavirus-related falsehoods.

For many of the medical professionals spoken to over the course of the inquiry, such measures do not go far enough to re-educate the public. There is rising pressure on social media companies not only to flag and delete misleading claims, but also to issue specific, targeted corrections. 

These “correct the record” tools, explained campaign group Avaaz, would mean alerting every user who had encountered misinformation online, both to inform them of the falsity of the initial post and to provide corrective resources.

“This means alerting and notifying every single person who has seen or interacted with health misinformation on their platforms, and sharing a well-designed and independently fact-checked correction – something shown to help prevent users believing harmful lies,” the group said.

Since implementing a similar ‘correction’ tool, Facebook has stated that “100% of those who see content already flagged as false by our fact-checkers” would see a warning screen. This was applied to 40 million pieces of content in March and 50 million pieces of content in April.

Those who share, like, or comment on misinformation are sent a ‘correct the record’ notification. However, the platform does not notify every user who has simply seen such information, for fear of “diluting” the effectiveness of the notification.

The attention economy 
The committee examined whether social media companies are incentivised to “correct the record”.

Many would argue that the opposite is true, and that a complete removal of “fake news” would damage their business models. Billed as the “attention economy”, these models rely on user engagement – without necessarily differentiating between positive and negative – to push content, generate users, and create more data and advertising opportunities.

Stacie Hoffmann, digital policy and cybersecurity consultant at specialist firm Oxford Information Labs, told the committee that due to the strong reaction guaranteed by misinformation and disinformation, “the algorithms are rewarding negative reactions.”

The report published by the committee argues that “this is opposite to the corporate social responsibility policies espoused by tech companies relying on this business model”.

This is unlikely to change overnight. 

Day-to-day policing of users on social platforms is still the responsibility of the platforms themselves. Each has its own set of rules surrounding the use of the site, and imposes penalties, based on its own judgement, on users who do not act in accordance with these policies.

It is these codes of conduct which dictate which content risks deletion – and it is not necessarily enough for the information presented to simply be incorrect. 

YouTube’s community guidelines, for example, explicitly ban “hate speech, predatory behaviour, graphic violence, malicious attacks and content that promotes harmful or dangerous behaviour”.

The inquiry found that, “prior to the pandemic, many of the tech companies did not have robust policies against harmful misinformation and have also often been slow in adapting their policies to combat it.”

This subjective implementation of existing policies has predictably led to confusion when attempting to establish a common response to misinformation. 

John Nicolson, SNP MP for Ochil and South Perthshire, summarised the issue in one oral evidence session by stating: “the problem is that Mark Zuckerberg sets the tone, doesn’t he?”

Addressing a representative from Facebook directly, he added: “You will not take [misinformation] seriously until we, as parliamentarians, hit you with serious financial penalties. It worked in Germany, and I think we need to do it here.” 

German law states that any social network which fails to remove “fake news”, hate speech and other illegal content can be fined up to €50m (£43m).

With or without a financial incentive to help counter the spread of fake posts, paramedic Knowles spoke of the “moral obligation” big tech platforms ought to feel to address the spread of damaging content, especially in cases where such content is actively monetised by the host site.

The cost of battling such misinformation “cannot be borne by the public purse when we are looking at organisations that are turning over billions of pounds a year,” he said. 

In this way, the spread of misinformation is not only a logistical issue, but a financial one as well, with both sides standing to lose money in the fight to counter it. 

While their algorithms rely so heavily on engagement, it seems unlikely that these platforms will endeavour to remove incendiary content completely.

For this reason, one possible solution being considered is a change in the role of Ofcom. In February 2020 the government suggested it was “minded” to name Ofcom as a new ‘Online Harms Regulator’ – a role in which it would have further powers over the actions of social media companies.

This would mean it would be empowered “to request explanations about the way an algorithm operates, and to look at the design choices that some companies have made and be able to call those into question”.

But the proposed online harms legislation – put forward by the government with the aim of holding companies to their own policies and community standards – was found to be too weak by the DCMS committee.

“The government must empower the new regulator to go beyond ensuring that tech companies enforce their own policies, community standards and terms of service,” it said. “The regulator must ensure that these policies themselves are adequate in addressing the harms faced by society.” 

In order to do this, it suggests that the regulator should have the power to: standardise policies across platforms, thereby ensuring minimum standards; hand out fines for non-compliance; disrupt the activities of businesses that do not comply; and ensure custodial sentences as sanctions where required.

As the battle against coronavirus rages on, its online counterpart continues to eat into the time of those on the front lines. 

“We are working around the clock to save lives. The wealth of misinformation increases our work; it places our lives in danger; and adds additional stress and emotional and mental toll to all of us,” said Dr Meenakshi Bewtra of the University of Pennsylvania. 

“We would appreciate social media companies standing behind us in this fight.”