YouTube starts mass takedowns of videos promoting ‘harmful or ineffective’ cancer cures


The platform will also take action against videos that discourage people from seeking professional medical treatment as it sets out its health policies going forward.



YouTube logo in red over a geometric red, black, and cream background. Illustration by Alex Castro / The Verge

YouTube will remove content that promotes “cancer treatments proven to be harmful or ineffective” or that “discourages viewers from seeking professional medical treatment,” the video platform announced today. The enforcement comes as YouTube streamlines its medical moderation guidelines based on what it learned while tackling misinformation around topics like covid-19, vaccines, and reproductive health.

Going forward, Google’s video platform says it will apply its medical misinformation policies when there is a high public health risk, when there is publicly available guidance from health authorities, and when a topic is prone to misinformation. YouTube hopes that this policy framework will be flexible enough to cover a broad range of medical topics, while finding a balance between minimizing harm and allowing debate.

In its blog post, YouTube says it will take action against both treatments that are actively harmful and unproven treatments suggested in place of established alternatives. A video could not, for example, encourage users to take vitamin C supplements as an alternative to radiation therapy.

YouTube’s updated policies come a little over three years after it banded together with some of the world’s biggest tech platforms in a shared commitment to fight covid-19 misinformation. Although the video platform had previously taken action against vaccine misinformation, such as pulling ads from anti-vax conspiracy videos, it strengthened its approach in light of the pandemic, removing videos containing covid-19 vaccine misinformation in October 2020 and banning vaccine misinformation from the platform entirely in late 2021.

The platform has also taken action against other videos deemed harmful under its medical misinformation policy, including those that offer “instructions for unsafe abortion methods” or promote “false claims about abortion safety.”

While the major tech platforms stood united in early 2020, their exact approaches to covid-19 misinformation have differed since that initial announcement. Most notably, Twitter stopped enforcing its covid-19 misinformation policy in late 2022 following its acquisition by Elon Musk. Meta has also softened its moderation approach recently, rolling back its covid-19 misinformation rules in countries (like the US) where the disease is no longer considered a national emergency.

***
This article has been archived for your research. The original version from The Verge can be found here.