Monday, November 25, 2024

conspiracy resource

Conspiracy News & Views from all angles, up-to-the-minute and uncensored

COVID-19

Facebook groups are using the carrot emoji to hide anti-vax content from moderators

Anti-vax groups on Facebook are using emojis to get around the platform’s algorithms that quash misinformation — and their emoji of choice is a carrot. 

Several groups that share unverified stories of people being injured or killed by COVID-19 vaccines have reportedly begun using the carrot emoji as code for the word “vaccine,” and members are encouraged to use other coded language to avoid the words “COVID” and “booster.” 

“Use code words for everything,” the description of one group with 250,000 members reads. “Do not use the c word, v word or b word ever” (COVID, vaccine, booster). 

The group has been active since early in the pandemic, the BBC reports, and in August 2022 even rebranded as “banter, bets and funny videos,” replacing a previous name that referenced the topics its members actually discuss. 

Disinformation researcher Marc Owen Jones tweeted that he was invited to join the group, describing the emoji use as “very odd” and providing screenshots of posts and the moderators’ own description of how they avoid censorship. 

“Coding is important and carrots are to date not picked up by the AI censors,” the description says. “If anyone posts anything which will attract the AI censors, we delete it, no matter whether we may agree with it.” 

Facebook’s parent company Meta claimed that it had taken the group down, but the BBC reports the group could still be found through its searches. Fortune has not found the group through search. 

Why are they using emojis? 

Facebook’s misinformation-tackling algorithms have been trained largely on words and text, and therefore struggle with emoji, especially since a single emoji can carry multiple meanings. Members and moderators of such groups appear to have discovered that this makes emoji substitution an easy way to evade detection.
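
To see why word-based matching misses this, consider a toy keyword filter. This is a deliberately simplified sketch, not Facebook’s actual system, whose design is not public: the filter flags the banned terms, but no word-based pattern has an entry for the carrot emoji, so the substituted post passes.

```python
import re

# Hypothetical toy moderation filter: flags posts containing known
# keywords, the way a purely word-based system would. An illustration
# only, not Meta's actual (non-public) pipeline.
BLOCKED_TERMS = re.compile(r"\b(covid|vaccine|booster)\b", re.IGNORECASE)

def is_flagged(post: str) -> bool:
    """Return True if the post matches any blocked keyword."""
    return bool(BLOCKED_TERMS.search(post))

print(is_flagged("My cousin was injured by the vaccine"))  # True: keyword match
print(is_flagged("My cousin was injured by the 🥕"))        # False: the emoji
                                                           # matches no pattern
```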

Researcher Hannah Rose Kirk explained how tech giants use artificial intelligence to find and push down misinformation in a blog post for the Oxford Internet Institute. She is part of a research team that created HatemojiCheck, a tool designed to identify the emoji-based abuse that the usual AI systems can’t detect. 

“Despite having an impressive grasp of how language works, AI language models have seen very little emoji,” she writes. “They are trained on corpora of books, articles and websites, even the entirety of English Wikipedia, but these texts rarely feature emoji.”
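
A minimal sketch of the failure mode Kirk describes: if a model’s vocabulary is built only from ordinary, emoji-free English text, any emoji falls outside it and is reduced to an uninformative “unknown” token. The corpus and token ids here are purely illustrative, not any real model’s vocabulary.

```python
# Toy vocabulary built from emoji-free English text, standing in for
# the books/articles/Wikipedia corpora Kirk mentions.
corpus = "books articles websites discuss the covid vaccine and booster"
vocab = {word: i for i, word in enumerate(corpus.split(), start=1)}
UNK = 0  # id reserved for out-of-vocabulary tokens

def encode(post: str) -> list[int]:
    """Map each whitespace-separated token to its vocab id, or UNK."""
    return [vocab.get(tok, UNK) for tok in post.lower().split()]

print(encode("the covid vaccine"))  # [5, 6, 7]: known, informative ids
print(encode("the 🥕"))             # [5, 0]: the emoji is just UNK, carrying
                                    # no signal for a downstream classifier
```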

Facebook maintains that it is tackling the spread of misinformation on its platform, and last year said it had removed 20 million pieces of content containing false claims about COVID-19 or the vaccines since the pandemic began. 

Tech giants such as Meta could find themselves in trouble in the U.K. if the planned Online Safety Bill becomes law. The bill, which faces an uncertain future, would punish sites that fail to deal with harmful material quickly enough. 

Meanwhile, the EU is preparing its Digital Services Act, which “aims to increase accountability for online platforms regarding illegal and harmful content,” and U.S. senators introduced the Kids Online Safety Act in Congress in February 2022. The U.S. bill seeks to impose a duty on platforms to prevent harmful activity and behavior. 

Meta did not immediately respond to a request for comment.


***
This article has been archived for your research. The original version from Fortune can be found here.