
How Israel-Hamas war disinformation is being spread online

The video shows a young boy in a black T-shirt apparently lying in a pool of blood on the ground. Above him is a camera, with a man shouting directions near him. Two men in kippahs, the Jewish skull caps, and men in green military fatigues similar to Israel Defence Forces (IDF) uniforms are gathered around him.

The clip has been viewed about 2m times on X, formerly known as Twitter. It was shared by a verified user with the caption: “Video showing Israel attempting to create fake footage of deaths.”

Screengrab from video that went viral on TikTok and X

In fact, the clip is a behind-the-scenes shot from a Palestinian short film, Empty Place, which focuses on the vacuum left by Palestinians who fled due to the Israeli occupation. It seems to have originated on TikTok before finding its way to X – but while the original TikTok post now appears to be unavailable, on X it has continued to circulate and gain traction.

The scene from Empty Place

The user behind the X post that attracted 2m views later acknowledged the clip may have been used out of context.

It is far from a one-off. Since Hamas launched an attack on Israel on Saturday morning, X has been flooded with disinformation and misinformation that has heightened tensions across the globe. Disinformation refers to the deliberate spread of false information, while misinformation is when someone unwittingly spreads or believes the false information to be true.

Another video, originating from TikTok but now unavailable there, has racked up 2m views on X claiming to show high-profile Israeli generals captured by Hamas fighters. In reality, the video was originally published last week by the official YouTube channel of Azerbaijan's state security service, and shows arrested former leaders of the breakaway Nagorno-Karabakh government.

A doctored document suggesting that Joe Biden gave $8bn in assistance to Israel appeared on X last week and was viewed 400,000 times. The faked memo was an edited version of the US president’s July memo where he announced $400m in aid to Ukraine. There is no such document on the White House website or social media. The White House confirmed to NBC that the document was fake.

Russia has been a longstanding culprit for spreading disinformation on X, and appears to have been capitalising on the Israeli-Palestinian conflict. On Monday the former Russian president Dmitry Medvedev, the deputy chair of the Russian Security Council, tweeted: “Well, Nato buddies, you’ve really got it, haven’t you? The weapons handed to the Nazi regime in Ukraine are now being actively used against Israel.”

Another video, apparently showing Hamas thanking Ukraine for the sale of weapons it plans to use against Israel, was posted by an X account linked to the Russian mercenary group Wagner. It has since been viewed more than 300,000 times and amplified by far-right accounts from the US.

In February, the Pentagon’s inspector general reported that there was no evidence to date of weapons and aid to Ukraine being diverted to third parties, while Ukrainian intelligence this week accused Russia of placing “trophy” western weapons seized from battlefields in Ukraine with Hamas to undermine support for Kyiv.

Eliot Higgins, the founder of the investigative outlet Bellingcat, also flagged up a fake video purporting to be from the BBC that claimed to feature a Bellingcat investigation showing Ukraine smuggled weapons to Hamas. He said it was being pushed by Russian social media users but added it was unclear if this was a Russian government disinformation campaign or a grassroots effort.

At the centre of growing concerns over fake news related to the Israel-Hamas conflict is Elon Musk, the owner of X and a self-proclaimed “free speech absolutist” who has faced serious accusations of disseminating conspiracy theories and antisemitism on the platform, which he denies.

Since his takeover of Twitter in 2022 and its subsequent rebranding as X, Musk has dissolved the platform's Trust and Safety Council, which was responsible for addressing global content moderation, hate speech and harassment. The company said it planned to reorganise this team, but those plans remain unclear.

In the past year, Musk has fired two heads of trust and safety on X, and is locked in a legal dispute with his co-lead of threat disruption at X, whose responsibilities included advising leadership on “vision and strategy for content moderation”.

X has introduced Community Notes, a programme for crowdsourced moderation that largely places the onus on users, rather than paid content moderators, to correct false claims.

In recent weeks, Musk has been criticised over the spread of antisemitic content on the platform. While he said in a post that he was “pro-free speech, but against antisemitism of any kind”, the director of the Anti-Defamation League, Jonathan Greenblatt, accused him of “amplifying” messages from neo-Nazis. The league said antisemitic posts on X increased sharply after Musk bought the site, prompting him to threaten to sue.

He has also engaged with or posted content targeting the Hungarian-American businessman and philanthropist George Soros, who has been a regular target for conspiracy theorists.

On Tuesday the European Commission issued a letter to Musk warning him over alleged disinformation on X about the Hamas attack on Israel, including fake news and “repurposed old images”. It is understood that X is cooperating with the EU’s request to provide information but it will be some time before Brussels takes any further steps.

Pat de Brún, deputy director at Amnesty Tech, said: “Social media platform companies like Meta and X have clear responsibilities under international human rights standards, such as the UN guiding principles on business and human rights, and these responsibilities are heightened in times of crisis and conflict.

“Social media firms are responsible for identifying and responding effectively to risks and taking effective measures to limit the spread of harmful content – the amplification of which can lead to human rights abuses. However, all too often, big tech companies have failed to step up in the face of such emergencies, enabling hate and misinformation to proliferate.”

X has been contacted for comment.

***
This article has been archived for your research. The original version was published by The Guardian.