How Big Tech Helps Promote Disinformation—And How It Can Help Stop It

From 1949 until 1987, when the Federal Communications Commission (FCC) repealed it, the Fairness Doctrine dictated that news outlets using the public airwaves must devote airtime to opposing viewpoints. This forestalled completely partisan editorializing and guaranteed at least some balance in broadcasting, probably in part by keeping extremist shows off the air, since a network that aired them would then have been required to provide time for rebuttal. After the Fairness Doctrine was repealed, one of the most popular radio shows to enjoy the new freedom was Rush Limbaugh's, which went into national syndication in 1988. This opened the floodgates for Fox News, which first aired on television in 1996, and all that followed.

One of the driving forces behind the repeal of the Fairness Doctrine was the idea that it interfered with freedom of speech. Despite calls to reinstate it—in light of some of the harms that became obvious a few years later—even then-candidate Barack Obama felt in 2008 that, rather than reinstate it, it was more important to focus on "opening up the airwaves and modern communications to as many diverse viewpoints as possible." In a free market of ideas, wouldn't the truth win out?

Actually, no. More speech across diverse outlets does not balance out disinformation, because if no individual network has to be "fair," there is an incentive to build news silos devoted to skewed content, which for some viewers is all they ever watch. As we have learned over the last decade, when it comes to factual information (and not just opinion or editorial content), balance across media sources is not nearly as effective at preventing disinformation as balance within media sources.

Obviously, any reimplementation of the Fairness Doctrine would have to be done carefully. First, remember that the original version applied to over-the-air broadcasts on the public airwaves; with so much of today's news content on cable TV, which is paid for by individual subscribers to private corporations, would it even apply? And even if it did, there is always the problem of unintended consequences. If we insist that Fox News allow time for opposing viewpoints, wouldn't this also apply to MSNBC and CNN? And might this not exacerbate the problem of false equivalence? The saving grace here is to remember that, as originally conceived, the Fairness Doctrine was concerned with opinion-based editorial content, not facts. If implemented correctly, it wouldn't require equal time for climate deniers (or election deniers) any more than for Flat Earthers every time we had a moon launch. Yet who would get to decide what is fact and what is not?

A second idea might be to revise Section 230 of the Communications Decency Act, which shields website platforms from liability for third-party content posted on their pages. In contrast to book, magazine, and newspaper publishers in the United States—which can be sued if they intentionally publish false information—the big tech companies are exempt. In their defense, Facebook, Twitter, and the like tend to say that they are "news aggregators, not publishers," despite the fact that over 70 percent of Americans today get their news from social media platforms. Perhaps it is time for these companies to admit that they are media empires (or at least publishers) and so should be held responsible for the content they amplify on their platforms, even if it is written by others. Behavioral economists have shown that nudges and incentives often get individuals to change their behavior; the same might presumably work for companies. If they could get sued for sharing disinformation—as other publishers can—just watch it dry up. As of this writing, the US Supreme Court has agreed to take up precisely this question in Twitter, Inc. v. Taamneh during its current term, which should be decided by the time you are reading this.

A third possibility, which the social media companies could implement even without any legal or regulatory incentive, would be to get more aggressive about policing not just false content but also the known individuals who are most active in amplifying it. In a recent interview, Clint Watts—a counterterrorism expert, FBI analyst, and former member of the Joint Terrorism Task Force—argued that the number one way to fight disinformation is to "focus on the top 1 percent of disinformation peddlers, rather than trying to police all false content. If you know who they are, removing the worst offenders or moderating their ability to deliberately broadcast or publicize false content will create an outsized reduction in public harms. We did this in crime and terrorism and other things. Just focus on those that are putting out the most and most prolifically." Remember the study that identified the "disinformation dozen," who spread most of the anti-vax propaganda on Twitter? Why not just deplatform all of them? Election disinformation dropped 73 percent just a week after Twitter and a few other platforms cut off Trump. It just makes sense: taking away the microphone from the top disinformers—as ruthlessly as most social media companies police pornography, beheadings, and terrorism—might have an enormous effect.
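The arithmetic behind Watts's advice is worth spelling out: if the volume of false content per account is heavily skewed, then removing a tiny fraction of accounts removes a disproportionate share of the content. Here is a minimal toy sketch (ours, not Watts's or any platform's), assuming a hypothetical Pareto-shaped distribution of posts per account:

```python
import random

# Toy model: draw "false posts per account" from a heavy-tailed Pareto
# distribution, then measure what share of total volume comes from the
# most prolific 1 percent of accounts. All parameters are hypothetical.

random.seed(42)  # reproducible toy run

NUM_ACCOUNTS = 100_000
ALPHA = 1.2  # assumed tail exponent; smaller alpha means a heavier tail

# Posts per account, sorted from most to least prolific.
posts = sorted((random.paretovariate(ALPHA) for _ in range(NUM_ACCOUNTS)),
               reverse=True)

top_1_percent = posts[: NUM_ACCOUNTS // 100]
share = sum(top_1_percent) / sum(posts)

print(f"In this toy model, the top 1% of accounts produce {share:.0%} of all posts.")
```

Under assumptions like these, the top 1 percent accounts for a large share of total volume, which is why moderating a short list of superspreaders can beat trying to police every false post.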

A fourth and final strategy might be to focus more attention not just on the behavior of the big three social media platforms—Facebook, Twitter, and YouTube—but also on the stack of other companies that run the Internet, without which the giant user platforms could not do business. According to Joan Donovan—research director at the Shorenstein Center on Media, Politics and Public Policy at Harvard University, and an expert on disinformation—all the Internet companies sit on top of one another like a layer cake. "Online hate and disinformation spread because an entire ecosystem supports them," she is reported to have said. So why not also put pressure on these "gatekeepers"—the web hosts, web traffic controllers, content delivery networks, and financial service providers (Amazon Web Services, Apple's App Store, GoDaddy, WordPress, Akamai, PayPal, Venmo, and the like)—without which the big three platforms would be powerless?

Of course, some would hesitate to do any of this because they think that any attempt to regulate what gets said on social media is an intrusion on free speech. This ignores the fact that the First Amendment protects individual speech against government censorship; it does not bind private companies, which can deplatform anyone they like. But such hesitation is also ridiculous because it makes it sound as if, in order not to interfere with free speech, we are required to do everything we can to give an immediate, free, and powerful platform to known liars. As if we not only should allow Ku Klux Klan members to get a permit for a public rally but must volunteer to help them hand out their leaflets too.

In an article entitled “The First Amendment Is Not a Suicide Pact,” Jack Snyder writes:

Many Americans across the full spectrum of opinion—from the progressive left to libertarians to Trump-supporting conspiracy theorists—talk as if the First Amendment guarantees their right not only to voice their opinions in public, but to have instant, unfiltered, global access to an audience of millions, regardless of how ill-founded, incoherent, and misleading those opinions might be.

But that is a ridiculous view, and Snyder exposes precisely why. Refusing to amplify disinformation is not the same thing as censorship. Just as you cannot falsely yell "Fire!" in a crowded theater, there should be reasonable limits on platforming hate speech, election disinformation, pandemic disinformation, and the like, which are the twenty-first-century equivalent of yelling "Fire!" in a public space. But what about the marketplace of ideas? Isn't a free flow of information how truth rises to the surface? Not really. Consider Wikipedia before it took its platform back from the trolls and wreckers; now that it has better content moderation—and is more reliable—some have even called Wikipedia a model for the Internet.

Although it may sound cheering and patriotic, it is not necessarily true that the best solution to "bad speech" is "more speech," on the theory that truth will inevitably win out over lies. Recent empirical research has shown that, at least with scientific disinformation, lies are quite salient: once an audience hears disinformation, a predictable percentage will simply believe it, no matter what corrective information is offered later. Although there are steps one can take to mitigate this effect, we cannot debunk our way out of an infodemic. One doesn't fix a polluted information stream simply by diluting it with truth. You have to remove the source of the pollution.

Yet perhaps the best solution of all doesn't have to involve "censorship" so much as transparency. One of the most intriguing ideas for fighting the amplification of disinformation on the Internet is to make the social media algorithms available to academic researchers. In keeping with a recent proposal put forward by cognitive scientist Stephan Lewandowsky, why not let cognitive scientists and others study Facebook's and Twitter's algorithms and offer a more independent assessment of their potential for public harm? Personal user information could be shielded, and other safety precautions could be taken. As it stands, these algorithms are locked up in the hands of the tech companies themselves, so evidence of public harm comes to light only when there is a whistleblower.

And, absent such scrutiny, what is the incentive for social media companies to implement the simplest suggestion of all—tell the truth? As Clint Watts has recently argued, if the social media algorithms are so good at directing people toward salient information, why not take advantage of that power to promote more truth? Right now, so far as we understand them, the algorithms are tuned to promote "engagement," which has the side effect of giving a strategic advantage to disinformers. But what if they were reprogrammed to lead people toward better, more reliable information that is also available on the same platform? When you watch an anti-vax video on YouTube, why do the next one (or twenty) recommendations all drag you down the same rabbit hole? Couldn't the algorithm be repurposed so that the next thing you saw pushed you back toward more credible information? Yes, of course this is possible…but why would the tech giants go for it? Wouldn't it mess up their business model? Perhaps. Yet if the alternative is regulation, forced transparency, or antitrust litigation to break these companies up, they might prefer a solution they could implement themselves. There are plenty of good, practical steps we could take to remove some of the most dangerous tools from the truth killers' hands.
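To make the re-ranking idea concrete, here is a minimal sketch (ours, not any platform's actual code) of a recommender that scores candidates by a blend of predicted engagement and source credibility rather than by engagement alone. Every name, field, and weight below is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_engagement: float  # 0..1, e.g., output of a watch-time model (assumed)
    credibility: float           # 0..1, e.g., from fact-checker ratings (assumed)

def rank(candidates: list[Video], credibility_weight: float = 0.6) -> list[Video]:
    """Order recommendations by a blend of engagement and credibility.

    With credibility_weight = 0 this reduces to pure engagement ranking;
    raising it steers viewers back toward more reliable content.
    """
    def score(v: Video) -> float:
        return ((1 - credibility_weight) * v.predicted_engagement
                + credibility_weight * v.credibility)
    return sorted(candidates, key=score, reverse=True)

# Hypothetical candidates queued up after an anti-vax video:
candidates = [
    Video("SHOCKING vaccine truth they won't tell you", 0.95, 0.05),
    Video("What the clinical trial data actually show", 0.60, 0.95),
]

for video in rank(candidates):
    print(video.title)
```

The sketch illustrates only that the ranking objective is a design choice: the same machinery that optimizes for engagement could optimize for a blend of engagement and reliability, with the weight set by policy rather than by the business model alone.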

But what if we do nothing? Or, almost as bad, what if we leave it up to Congress to catch up to this problem in time to make meaningful change? A 2021 Senate hearing entitled "Algorithms and Amplification: How Social Media Platforms' Design Choices Shape Our Discourse and Our Minds" featured testimony from several disinformation experts whose hair-on-fire alarm about algorithmic disinformation stood in stark contrast to Congress's misunderstanding of even the most basic issues surrounding the problem. Although we are now mercifully a long way from Republican Senator Ted Stevens's infamous 2006 description of the Internet as a "series of tubes," there is still, on both sides of the political aisle, a tremendous lack of understanding of what tech companies do (not to mention the stakes we are up against).

Senator Chris Coons, a Democrat and chair of the Senate subcommittee that sponsored the hearing, said that "there's nothing inherently wrong" with how Facebook, Twitter, and YouTube use their algorithms to drive user engagement, and made it clear that Congress was not weighing any legislation at this point. Contrast this with the apocalyptic stakes described by the experts who testified before the panel.

In her testimony, Joan Donovan put it this way: "The biggest problem facing our nation is misinformation-at-scale…. The cost of doing nothing is democracy's end."

__________________________________

Excerpted from On Disinformation: How to Fight for Truth and Protect Democracy by Lee McIntyre. Copyright © 2023. Reprinted with permission from The MIT Press.


