Internet and Extremism Experts Predict More Hate Speech and Conspiracy Theories on Musk’s Twitter

When billionaire entrepreneur Elon Musk completed his purchase of Twitter and pledged that “the bird is freed” last week, Felix Ndahinda saw a threat rising on the horizon.

Ndahinda has trained in international law and works in Tilburg, the Netherlands, as a consultant on issues pertaining to conflict and peace in the African Great Lakes region. He has already seen what a ‘free’ Twitter can do. For years, he has been tracking the social-media hate speech that swirls amid armed conflict in the Democratic Republic of Congo. Much of that incendiary speech has gone undetected by the systems that platforms, including Twitter, use to identify harmful content, because it is shared in languages that are not built into their screening tools.

Even so, Ndahinda expects that Musk’s pledges to reduce Twitter’s oversight of social-media posts will add to the momentum and influence of hate speech in the Great Lakes and beyond. “A permissive culture where anything goes will always increase the trends,” says Ndahinda. “It will embolden actors and increase the virulence in their hate speech.”

All eyes are on Twitter as Musk’s plans for the platform come into focus. For now, it is unclear how far he will go towards his early pledge to be a “free speech absolutist”, which has raised concerns that he might reduce oversight of offensive or potentially harmful tweets. But past research offers some pointers as to what the impact of looser restrictions on tweeting could be.

“It’s a very complex ecosystem,” says Gianluca Stringhini, who studies cybersecurity and cybersafety at Boston University in Massachusetts. “But if you go and get rid of moderation on Twitter completely, then things will become much worse.”

All in moderation

Currently, Twitter uses a combination of automated and human curation to moderate the discussions on its platform, sometimes tagging questionable material with links to more credible information sources, and at other times banning a user for repeatedly violating its policies on harmful or offensive speech.
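To make that hybrid approach concrete, the sketch below is a highly simplified, hypothetical illustration of such a pipeline; it is not Twitter’s actual system, and every name and threshold in it is invented. The idea it shows is the one described above: an automated classifier scores each post, borderline cases are routed to human reviewers, confirmed violations get a label pointing to more credible information, and accounts with repeated violations are suspended.

```python
# Hypothetical sketch of a hybrid (automated + human) moderation pipeline.
# Not Twitter's real system; names and thresholds are invented for illustration.
from dataclasses import dataclass, field

AUTO_ACTION_THRESHOLD = 0.9    # assumed: score above which content is acted on automatically
HUMAN_REVIEW_THRESHOLD = 0.5   # assumed: score above which a human reviewer decides
STRIKES_BEFORE_SUSPENSION = 3  # assumed: repeat violations before an account is suspended

@dataclass
class Post:
    author: str
    text: str

@dataclass
class ModerationState:
    strikes: dict = field(default_factory=dict)       # author -> violation count
    review_queue: list = field(default_factory=list)  # posts awaiting human review

def classifier_score(post: Post) -> float:
    """Placeholder for an automated model rating how likely a post violates policy."""
    return 0.0  # a real system would return a learned probability

def moderate(post: Post, state: ModerationState) -> str:
    score = classifier_score(post)
    if score >= AUTO_ACTION_THRESHOLD:
        state.strikes[post.author] = state.strikes.get(post.author, 0) + 1
        if state.strikes[post.author] >= STRIKES_BEFORE_SUSPENSION:
            return "suspend_account"          # repeated violations lead to a ban
        return "label_with_credible_source"   # questionable material gets a context label
    if score >= HUMAN_REVIEW_THRESHOLD:
        state.review_queue.append(post)       # borderline cases go to human curators
        return "queued_for_human_review"
    return "allowed"
```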

Musk has repeatedly stated that he wants to loosen Twitter’s reins on speech. In the days following his purchase of the company, Twitter reported a surge in hate speech. By 31 October, the company said that it had removed 1,500 accounts related to such posts, and Musk says that, for now, its moderation policies have not changed.

How the company will proceed is still uncertain. Musk has met with civil-rights leaders about his plan to put a moderation council in charge of establishing policies on hate speech and harassment. He has also said that users banned before his takeover will not be reinstated until a clear process for doing so has been established.

Some of the users who have been banned from Twitter will have retreated to lesser-known platforms with fewer regulations on what can be said, says Stringhini. Once there, their social-media activity tends to become more toxic and more extreme. “We see a community that becomes more committed, more active — but also smaller,” he says.

Normally, these platforms are where false narratives start, says Stringhini. When those narratives creep onto mainstream platforms such as Twitter or Facebook, they explode. “They get pushed on Twitter and go out of control because everybody sees them and journalists cover them,” he says.

Twitter’s policies to restrict hate speech and misinformation about certain topics — such as COVID-19 — reduce the chances that such tweets will be amplified, so loosening those policies would allow them to find larger audiences.

Bad business

“When you have people that have some sort of public stature on social media using inflammatory speech — particularly speech that dehumanizes people — that’s where I get really scared,” says James Piazza, who studies terrorism at Pennsylvania State University in University Park. “That’s the situation where you can have more violence.”

But judging from other social-media platforms with loose restrictions on speech, a rise in extremism and misinformation could be bad business for a platform with mainstream appeal such as Twitter, says Piazza. “Those communities degenerate to the point to where they’re not really usable — they’re flooded by bots, pornography, objectionable material,” says Piazza. “People will gravitate to other platforms.”

And regulations on the way from the European Union could make Musk’s ‘free speech’ rhetoric impractical as well, says Rebekah Tromble, a political scientist at George Washington University in Washington DC. The EU’s Digital Services Act, due to go into effect in 2024, will require social-media companies to mitigate risks caused by illegal content or disinformation. In theory, Twitter and other platforms could try to create separate policies and practices for Europe, but that would probably prove difficult in practice, Tromble says. “When it’s fundamental systems, including core algorithms, that are introducing those risks, mitigation measures will necessarily impact the system as a whole.”

Tromble expects that the Musk era at Twitter will begin with a period of chaos as Musk and Twitter users test the boundaries. Then, she says, it is likely to settle down into a system much like the Twitter of old.

Over the coming weeks, Stringhini expects that researchers will launch studies comparing Twitter before and after Musk’s takeover, and looking at changes in the spread of disinformation, which user accounts are suspended, and whether Twitter users quit the platform in protest at new policies. Tromble intends to monitor campaigns of coordinated harassment on Twitter.

Whether changes in Twitter policies will have an impact on real-world behaviour is another open question: researchers have struggled to definitively disentangle the effects of social media from the many factors in a changing social environment. For example, a study of more than 1,200 US Republican and Democratic Twitter users in late 2017 found that exposure to accounts operated by the Russian Internet Research Agency had no significant impact on political attitudes and behaviours. “In much of our research, we’re measuring what kinds of narratives pick up and how they go viral,” says Stringhini. “The missing link is that we cannot really tell if this online messaging is really changing anyone’s actions and opinions in the real world.”

To Ndahinda, however, it is clear that the normalization of hate speech and conspiracy theories on social media could have contributed to violence in the Democratic Republic of Congo, even if academics have not yet been able to delineate its contribution clearly. “It is a very difficult thing to work out the causal link from a tweet to violence,” says Ndahinda. “But we have many actors making public incitements to commit crime, and then later those crimes are committed.”

This article is reproduced with permission and was first published on November 4, 2022.
