Are extremism and “fake news” social media’s fault?
Members of Congress will be grilling Big Tech CEOs tomorrow at a hearing forebodingly titled “Disinformation Nation: Social Media’s Role in Promoting Extremism and Disinformation.”
This comes as no surprise.
The last two federal elections and the Capitol Hill siege, as well as the proliferation of conspiracy theories around COVID-19, have certainly highlighted how social media users spread misinformation and propaganda on the platforms. But is the problem really all Big Tech’s fault?
That’s a question worth keeping in mind, as lawmakers prepare to use this hearing as a stage on which to push for reforms that’ll inevitably force social media platforms to censor innocent users and constrain free discourse online.
Social media platforms have become dynamic vectors for information. They let information spread rapidly and expose users to an unprecedented range of news sources consolidated in a single place, one that enables real-time engagement with a broad community of fellow users, including friends, family and acquaintances.
These features have delivered immense benefits for users and media companies alike, with the latter relying on them to stay afloat amid traditional print media’s decline. Equally, social media can help spread misinformation, backed by the false legitimacy of endorsement by otherwise trusted friends, colleagues and family members who share, like or comment on posts.
And while platforms have enabled the formation and growth of user communities that rally behind common interests and opinions, this also gives nefarious elements room to spread messages of hate, conspiracy theories or extremist ideologies.
To combat these problems, large companies like Facebook have employed content moderation teams and independent “fact-checkers,” as well as advanced algorithms designed to catch fake news. However, eliminating the problem entirely is a pipe dream.
It’s simply not possible to vet every piece of information or viewpoint that billions of users post in real time. Even if some algorithm made that possible, it would likely also capture innocent or innocuous speech, along with controversial opinions that remain important to public discourse as understanding of fast-moving topics shifts with new information. Over-moderation would also likely undermine platforms’ basic functionality by preventing real-time discussion.
Consider COVID-19. Even trusted expert agencies, like the Centers for Disease Control and Prevention, have been forced to amend or withdraw their own social distancing and masking recommendations in light of new information and experience. Taking social media companies to task for permitting users to question the official positions of these experts may prevent conspiracy theorists from spreading unfounded, fear-driven claims that could undermine public health. But it could equally suppress legitimate questions that are eventually vindicated.
Bizarrely, even social media “fact-checking” can hurt those who aren’t making statements of fact or spreading misinformation, like artists and satirists, as existing misinformation filters have already shown. Similarly, attempts to crack down on extremist content through algorithms have inadvertently captured and censored journalists sharing videos that expose human rights abuses worldwide.
In reality, “fake news” and slanted propaganda predate social media, and the psychological traits that make them appealing and help them spread can’t be muted by overregulating platforms. During the Civil War, the New York Herald was accused of deliberately inflaming tensions between the North and South to sell more newspapers by falsely claiming that George Washington’s body had been taken from its tomb to the Virginia mountains.
The tendency of social media engagement to reward and highlight information that confirms someone’s pre-existing views, rather than information that is accurate or unbiased, is a consequence of human nature, not of the sites or apps themselves.
The difference between the print era and the digital platform era is that users aren’t just exposed to partisan news that plays into their biases. If they wish, they can easily find opposing sources and viewpoints aggregated in the same place, whether on a pure news aggregation platform like Google News or a social media site like Facebook.
Indeed, when Google News withdrew from Spain in 2014, the overlap between the audiences of different outlets declined significantly — the kind of fragmentation that prevents narratives and biases from being challenged.
And just as social media could be doing more good than harm by exposing people to sources they wouldn’t otherwise seek themselves, security experts also credit public social media with assisting in monitoring and policing extremism. This becomes harder if extremists are pushed off public platforms onto encrypted messaging applications or the dark web. These confined spaces act as echo chambers for extremism by amplifying talking points and reinforcing ideologies.
Public resentment over the perceived censorship of conservative viewpoints is likely to worsen if the primarily liberal-driven crusade for more censorship to combat “fake news” gains traction.
Sure, companies could tune their algorithms so highlighted comments and stories aren’t primarily those that appeal to users’ preconceptions or emotions. But even this isn’t a guaranteed solution and could have unintended consequences.
A better idea is to simply raise future generations of users to be skeptical and think critically without expecting Big Brother or tech giants to be perfect arbiters of falsehood. As Abraham Lincoln famously said, “Don’t believe everything you read on the internet.”
Satya Marar is a senior contributor and tech policy fellow at Young Voices. His writings on technology and innovation have been featured in Washington Examiner, Washington Times, The Hill and RealClearPolicy.