Influence Networks in Russia Misled European Users, TikTok Says
The covert and coordinated campaign was disclosed in a new report that also addressed misinformation, fake accounts and moderation struggles.
Last summer, 1,704 TikTok accounts made a coordinated and covert effort to influence public discourse about the war in Ukraine, the company said on Thursday.
Nearly all the accounts were part of a single network operating out of Russia that pretended to be based in Europe and aimed its posts at Germans, Italians and Britons, the company said. The accounts used software to produce posts in local languages that amplified pro-Russia propaganda, attracting more than 133,000 followers before being discovered and removed by TikTok.
TikTok disclosed the networks on Thursday in an in-depth report that examined its handling of disinformation in Europe, where it has more than 100 million users, noting that the conflict in Ukraine “challenged us to confront a complex and rapidly changing environment.”
The social media platform compiled the findings to comply with the European Union’s voluntary Code of Practice on Disinformation, which counts Google, Meta and Twitter among its other signatories. TikTok offered the detailed look into its operations as it tried to demonstrate its openness in the face of continued regulatory scrutiny over its data security and privacy practices.
As a newer platform, TikTok is “in a unique position to innovate in the search for solutions to these longstanding industry challenges,” Caroline Greer, TikTok’s director of public policy and government relations, said in a blog post on Thursday.
The company did not say whether the accounts had ties to the Russian government.
TikTok, which is owned by the Chinese company ByteDance, has struggled with many of the same conspiracy theories, false narratives, manipulated media and foreign disinformation campaigns as its social media peers.
In its report, covering mid-June through mid-December 2022, TikTok said it took down more than 36,500 videos, with 183.4 million views, across Europe because they violated TikTok’s harmful misinformation policy.
The company removed nearly 865,000 fake accounts, which had more than 18 million followers between them, including 2.3 million followers in Spain and 2.2 million in France. Nearly 500 accounts were taken down in Poland alone under TikTok’s policy banning impersonation.
Early in the fighting in Ukraine last year, the company said, it noticed a sharp rise in attempts to post ads related to political and combat content, even though TikTok does not allow such advertising.
In response, the company said it began blocking Ukrainian and Russian advertisers from targeting European users. The company also hired native Russian and Ukrainian speakers to help with content moderation, worked with Ukrainian-speaking reporters on fact-checking and created a digital literacy program focused on information about the war.
The platform restricted access to content from media outlets associated with the Russian government — such as Russia Today and Sputnik — and said it expanded its use of labels identifying state-sponsored material. Amid an uptick in livestreamed videos coming from Russia and Ukraine since the conflict began, TikTok said it stopped recommending such content to European users.
The report underscored how some attempts to mitigate misinformation have had limited effect. When users saw a pop-up label warning of unverified content, fewer than 29 percent abandoned their attempts to share it. And of the 145.5 million times viewers saw a “learn more” tag on potential Holocaust denial content, fewer than half a percent clicked through to the page of authoritative resources it linked to.
TikTok said that in the coming months it would update its policies prohibiting deceptive synthetic content such as deepfakes, as a wave of generative artificial intelligence tools hit the market. It said it would focus on setting up fact-checking partnerships in Portugal, Denmark, Greece and Belgium and expanding its misinformation moderation teams. The company also said it was working on expanding researcher access to its data on disinformation and content moderation.