Does Twitter, under Elon Musk, Need Government Regulation?
“Fix your companies. Or Congress will.”
Senator Ed Markey (D-Mass.) minced no words in his tweet to Elon Musk after revelations that the billionaire’s latest acquisition, Twitter, may not be able to police disinformation. (Markey’s plural “companies” included Musk-owned Tesla and questions about its vehicles’ safety.) The most serious breach occurred when Twitter green-lighted a blue check mark of authenticity for an impostor account posing as drugmaker Eli Lilly, which tweeted, “We are excited to announce insulin is free now.” That fake news was retweeted thousands of times.
Markey (Hon.’04) has personal skin in this game. With his consent, a Washington Post writer impersonated the lawmaker on Twitter, waiting mere minutes before receiving Twitter’s (paid-for) blue check mark, which is supposed to verify a tweeter’s identity. Prior to Musk’s ownership, those blue checks were granted only after Twitter verified an account holder’s identity. The company suspended blue checks for new subscribers November 10 after hoaxers pretended to be everyone from George W. Bush to Nintendo’s Mario.
Under pressure last year to adopt stricter controls, Twitter removed tens of thousands of QAnon-associated accounts and banned former president Donald Trump for incendiary insurrection-related tweets. With Musk’s takeover, followed by mass firings and resignations at the company, is Markey right that government may have to step in? And how, without violating free speech?
BU Today asked two University experts, Gianluca Stringhini, a College of Engineering assistant professor of electrical and computer engineering, who studies mitigation of disinformation and other online maliciousness, and T. Barton Carter, a College of Communication professor of media science and an expert in communication law and technologies.
Q&A
With Gianluca Stringhini and T. Barton Carter
BU Today: How risky to the public are tweets like the phony Eli Lilly one?
Carter: False medical information can be very risky to the public. There are many examples of this, including the false statements regarding COVID that were posted on various social media platforms, including Facebook and Twitter. Recently, the FDA posted a warning about NyQuil chicken, a recipe posted on TikTok that involved marinating a raw chicken in NyQuil. [Editor’s note: The FDA warned that “boiling a medication can make it much more concentrated and change its properties in other ways. Even if you don’t eat the chicken, inhaling the medication’s vapors while cooking could cause high levels of the drugs to enter your body. It could also hurt your lungs.”]
Stringhini: Twitter’s blue checks used to mean something, and people used to trust any verified account as reputable. With the changes introduced in recent weeks, anyone willing to pay $8 can obtain a check mark and impersonate any entity. That has opened up opportunities for all types of scams, and we saw many examples in the past week.
Before this change, a malicious actor would have had to compromise a real, verified account (for example, by stealing its password) and send a tweet through that account, leveraging the real account owner’s reputation. That is a much higher bar for malicious parties, though not impossible to clear. It happened, for example, in 2013, when the Associated Press Twitter account was hacked and posted a tweet announcing a terrorist attack against the White House. That tweet caused a brief drop in the stock market.
BU Today: Can Twitter police itself with the workforce it has now after the resignations and mass firings in the wake of Musk’s takeover?
Stringhini: Content moderation is a very nuanced problem that can hardly be automated. For this reason, social media platforms have relied on human moderators to decide what should be blocked. Laying off a large fraction of Twitter’s workforce doesn’t help in that direction.
BU Today: Given the risks of misinformation circulating widely, is it appropriate for the federal government to consider some regulation of Twitter, and what regulation would be legally feasible? Professor Carter has talked about his preference for “structural regulation” (breaking up companies or applying antitrust limits to them) as opposed to content regulation.
Carter: Although structural regulation would help dilute the power of these companies, it can’t solve the issue of harmful content on social media. Solving that would require content regulation, but there are two major impediments to content regulation of social media.
One is Section 230 [of the federal Communications Decency Act], which gives these companies immunity for content posted on their platforms unless they created the content. Congress does have the power to change or even eliminate Section 230. However, the greater obstacle to regulating content on social media is the First Amendment. These companies have the same First Amendment rights as other media, so any restrictions on their content would have to fall into an exception to the First Amendment. Thus, obscenity could be regulated, as could false advertising or incitement. However, there is no general exception for false speech.
For example, a law prohibiting people from falsely claiming to have received any US military decoration or medal was struck down on First Amendment grounds. Congress could always pass restrictions on social media content, but a court would then have to find that the restrictions fell into an existing exception, or create a new one. Also, passing any laws regarding social media content would be difficult politically. Although many people believe social media companies should be regulated, some want laws requiring social media to engage in more content moderation to address harmful speech, while others want to prohibit social media companies from moderating content because they believe these companies favor some speakers over others based on political viewpoint.
BU Today: From a technology perspective, how could the government “fix” Twitter, as Markey threatened?
Stringhini: For content that is not strictly illegal, platforms should self-police and enforce their own terms of service. When this doesn’t happen, the platform stops being useful to legitimate users, and people move to alternatives. Governments have floated the idea of requiring anyone opening an online account to verify their identity, but besides being extremely challenging, this conflicts with the need to preserve anonymous or pseudonymous communication online, for example, to allow marginalized or oppressed people to safely express their views. I think Twitter’s previous approach, allowing anyone to create an account but requiring proof of identity to be verified, was a good trade-off.