BBC expert on debunking Israel-Hamas war visuals: “The volume of misinformation on Twitter was beyond anything I’ve ever seen”

Shayan Sardarizadeh is a senior journalist covering disinformation, extremism and conspiracy theories for BBC Monitoring’s disinformation team as part of BBC Verify. Since Hamas attacked Israel on 7 October, he’s debunked dozens of misleading visuals on social media and published his findings on widely read threads on Twitter, now known as X.

As the decades-long conflict between Israel and Hamas enters a new phase, I spoke with Sardarizadeh to learn more about his work and about the kind of misleading visuals he and his colleagues have encountered in the last few days. Our conversation covered the spike in misinformation on several platforms, the pernicious influence of the latest changes on X, and the marginal role AI-generated visuals have played in this context. The transcript has been edited for brevity and clarity.

Q. You’ve covered previous wars. What would you say is different this time?

A. There are similarities [with previous wars]. Today millions of people get their news online. They don’t necessarily sit in front of a TV set. They go on their social media feeds to find out what’s happening, particularly with breaking news situations of this magnitude. 

You can find good information on social media. But I can excuse anybody for feeling confused if they’ve been looking online in the last few days, because it’s been really difficult to sift genuine footage of what’s been going on in Israel and Gaza from what is clickbait, unrelated footage, or something being shared for clicks, engagement or some nefarious intent.

Part of my job as a journalist at BBC Verify and BBC Monitoring is to sift through that type of content in an accurate and impartial manner and make it easier for audiences to see what is real and what is not. Then they can make up their own mind about what’s happening. But we want them to know if a video they’ve seen online is related to the conflict or not. 

Q. How do you prioritise the visuals you verify?

A. My top priority is verifying content that has gone viral. I obviously can’t get through everything, so I try to get through as many viral examples as I can. But I’ve been doing this job for quite a while, and I know where to go and what to look at. When you have something like this, the volume of content posted is above and beyond the capability of any human being to keep across completely. And checking content is not always easy – verifying a video can take one hour, three hours or a couple of days.

Q. Can you walk us through the way you verify a piece of content?

A. If something I haven’t seen goes viral, my first question is, why haven’t I seen this? If it is this popular, I should have seen it [through my own reporting], so I usually start with these types of videos. 

First of all, I try to determine if the video is recent. Has it been first posted after 7 October? That’s the number one thing that I do, and I do it by a process called reverse image search. I take screen grabs of the video (as many as needed) and then I go on the internet and use reverse search tools, which are free and publicly available to everyone. A few months ago, I posted a long Twitter thread on how to do this.
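[For readers who want to try the screen-grab step themselves, here is a minimal Python sketch. It assumes the OpenCV library is installed (pip install opencv-python); the file names and the one-frame-every-two-seconds sampling rate are illustrative choices, not part of Sardarizadeh’s actual workflow.]

```python
# A minimal sketch of the first step described above: pulling still
# frames out of a video so they can be uploaded to reverse image
# search tools (Google Lens, Yandex, TinEye, Bing Visual Search).
import cv2

def extract_frames(video_path: str, every_n_seconds: float = 2.0) -> list[str]:
    """Save a screen grab every `every_n_seconds` and return the file names."""
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if metadata is missing
    step = max(1, int(fps * every_n_seconds))    # frames to skip between grabs
    saved, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % step == 0:
            name = f"frame_{index:06d}.png"
            cv2.imwrite(name, frame)
            saved.append(name)
        index += 1
    capture.release()
    return saved

if __name__ == "__main__":
    # Each saved frame can then be uploaded manually to a reverse image
    # search engine to look for earlier copies of the footage.
    for name in extract_frames("suspect_video.mp4"):  # hypothetical file
        print("grab saved:", name)
```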

The next step is trying to find examples of that video on other platforms such as YouTube or Facebook. If it is something that relates to an actual genuine incident, we see if any news outlets have reported it. 
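[Part of this cross-platform step can be automated. As a rough illustration, the sketch below uses the open-source yt-dlp library to search YouTube for copies of a clip and print their upload dates; the search phrase is a hypothetical example, and this is only a starting point, not a substitute for the manual verification described in the interview.]

```python
# A sketch of searching YouTube for earlier uploads of a clip.
# Assumes yt-dlp is installed (pip install yt-dlp).
from yt_dlp import YoutubeDL

def search_uploads(query: str, limit: int = 5) -> None:
    with YoutubeDL({"quiet": True}) as ydl:
        # "ytsearchN:" is yt-dlp's built-in YouTube search prefix
        results = ydl.extract_info(f"ytsearch{limit}:{query}", download=False)
    for entry in results.get("entries", []):
        # upload_date is YYYYMMDD; a date before 7 October 2023 would show
        # that the clip predates the current conflict
        print(entry.get("upload_date"), entry.get("title"), entry.get("webpage_url"))

search_uploads("viral airstrike night footage")  # hypothetical query
```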

If it is something that is false or misleading, sometimes it’s easy. You just take one screen grab, put it into Google reverse image search, and it comes up within 10 minutes. But sometimes it takes longer. You have to try different tools and sift through many web pages without immediately getting it. You might find a clue in a comment, in a subreddit, or somewhere else that gives you an idea. Once I’ve found an example of that video posted online two months earlier, I’ve established that it is an old video.

Q. What would you do next?

A. Then I try to find out what the actual context of the video is and give that context to audiences. For example, I’ve found videos from the Ukraine conflict for which I didn’t even have to go and look, because I’d actually seen that footage during the war in Ukraine.

But I’ve also seen videos from the war in Syria, and from past conflicts between Israel and Hamas. And even if a video you’re seeing does show fighting between Israel and Hamas, it’s still important to say it’s not from this current conflict. That’s normally what I do. 

For me, the most important thing before I put any sort of fact check or debunk on a BBC piece or on my own social media account is to be 100% certain. If I’m not 100% certain about the context behind the video and the fact that it is actually unrelated to the conflict – even if I’m 80% or 90% certain – I still decide to leave it. You have to be 100% certain, and then you have to show the audience why it’s false. Just saying something is false, just trust me, is not good enough.

Q. From your experience in the last few days, would you say that most of the false visuals that are circulating are actually from previous wars?

A. Yes. There’s been quite a lot of old content. We saw the same thing with Ukraine. In the first two months of the Ukraine war, there was a deluge of misinformation online and plenty of old videos. So that’s definitely the number one category. But there are also videos that are genuine but taken out of context.  

For example, on Wednesday evening there was a video taken in Haifa. There was a siren, and people were rushing out of their houses, probably to safety. [The post claimed] the people running were Hezbollah militants who had infiltrated northern Israel. That claim was false, but the video itself was actually genuine.

Q. Would you say that things like deep fakes or AI-generated visuals are pretty marginal to this conflict? 

A. Yes. I would say so from the examples I’ve seen, and I’m pretty sure I’ve seen all the viral ones. I have not seen a single deep fake. There have been a few AI-generated false images, but they were not that good. 

Q. We’ve seen influencers trying to use the reputation of established news organisations like the BBC or Bellingcat to share false content. Is this something that you’ve seen before? 

A. This is a major problem for us. The BBC is an established, trusted brand, and when you want to produce something with nefarious intent, putting the BBC logo or the BBC branding on it means you’re hoping to catch people’s attention and convince them of something that is actually not true.

We saw three fake videos using our brand, logo and style during the Ukraine war, made to mislead people with claims the BBC was not reporting. Two days ago, we saw another one claiming that the BBC and the investigative outlet Bellingcat were reporting that the Ukrainian government had supplied weapons to Hamas.

The BBC has not reported that, and neither has Bellingcat. It’s completely false. The intent was to mislead people using the BBC brand and logo as part of the information war.

Q. How can audiences know a video is actually produced by the BBC?

A. The easiest way to check those types of videos is to go to the video section on the BBC News website and to our social accounts and see whether that video has been posted. If that video has not been posted there, it hasn’t come from us. 

Q. Twitter, or X as it’s now called, has gone through many changes in the last few months, including boosting the content of paying users and giving them access to a revenue-sharing scheme. Do you think these changes have impacted the amount of misinformation we see on the platform? 

A. I do not have the data to tell you whether most of the misleading content we are seeing is coming from X Premium subscribers. But if you go through the examples that I’ve shared online in the last five days, you will see quite a lot of blue ticks. The reason is that posts from these accounts are boosted in people’s feeds, particularly in the ‘For You’ feed, which is what most people see. 

These paying accounts have access to revenue sharing, and social media algorithms are designed to make that type of content go viral. So, if you are a paying user and you’re not necessarily concerned about facts, you now have an incentive to post something from five years ago that’s really shocking. The post might get five million views and you might get paid for it. That’s a real problem, although it’s not for me to address.

Q. Are X’s Community Notes mitigating this problem?

A. In the first couple of days of the conflict, the volume of misinformation on X was beyond anything I’ve ever seen. But I think the platform recognised that and changed the way Community Notes works. 

For people who are not aware of this, Community Notes is a crowdsourced fact-checking feature within X. Any user can sign up for it and, after a period of time, rate any post and write notes on what they think is false or misleading about it. There’s a rating system amongst contributors: if people who have previously disagreed with each other agree that a post is false, that suggests the note is probably reliable.
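[X has open-sourced the ranking model behind Community Notes, which is based on matrix factorisation. The toy Python sketch below is not that algorithm; it only illustrates the “bridging” principle Sardarizadeh describes, with made-up contributor names and ratings: agreement between raters who usually disagree counts for more than agreement between like-minded raters.]

```python
# Toy illustration of bridging-based rating (not X's production code).
# Ratings are +1 (helpful) / -1 (not helpful); all data is invented.
from itertools import combinations

history = {  # past ratings by each contributor on four earlier notes
    "ana": [+1, +1, -1, +1],
    "ben": [-1, -1, +1, -1],  # ben has always disagreed with ana
    "cho": [+1, +1, -1, +1],  # cho is like-minded with ana
}

def disagreement(a: str, b: str) -> float:
    """Fraction of past notes on which two contributors disagreed."""
    pairs = list(zip(history[a], history[b]))
    return sum(x != y for x, y in pairs) / len(pairs)

def bridged_score(new_ratings: dict[str, int]) -> float:
    """Weight shared 'helpful' ratings by how often the pair disagreed before."""
    score = 0.0
    for a, b in combinations(new_ratings, 2):
        if new_ratings[a] == new_ratings[b] == +1:
            score += disagreement(a, b)  # cross-divide agreement weighs most
    return score

# Habitual opponents agreeing is strong evidence; like-minded agreement is weak.
print(bridged_score({"ana": +1, "ben": +1}))  # 1.0
print(bridged_score({"ana": +1, "cho": +1}))  # 0.0
```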

Most of the time this system works OK. But one of the criticisms I had was that you could see a really viral false post with suggested Community Notes, and it would take up to four days for a note to appear in people’s timelines. They’ve now made the process much faster. That’s definitely positive.

Q. What they haven’t done is throttle the distribution of something that’s not true. Is it better to diminish the distribution of something that’s false or is it better to show it with the note?  

A. As a BBC journalist I’m not in a position to make decisions for the platforms. My job is to see what’s being shared on those platforms and verify it. But I see myself as a defender of free speech, and I don’t want to live in a society where people’s rights are being suppressed.

What I do think is that it’s good to have some sort of fact-checking system. Meta has its own fact-checking system with professional fact-checkers who sign up to an agreement with Meta. Community Notes, in a sense, is similar, except it’s done not by professional fact-checkers but by X users themselves.

As long as the platforms have these systems, invest in them and take them seriously, and as long as the time gap between a false post going viral and people being shown that the post is false is short, that to me is a good thing.

Q. We’re seeing misleading posts on Telegram channels, and sometimes those posts are posted on Twitter without any references or links. How do you handle the content on these kinds of messaging apps? 

A. When a piece of content is posted on a platform, whatever platform, you expect it to travel across platforms, and it is part of my job to source content like that. Who filmed it? Where does it come from? What’s the full context? That’s the other problem: you see a piece of footage online, but you may not be seeing the full context of it. 

Our job is to establish the full context behind those five or 10 seconds you’ve seen. Has that video been edited? Is there a longer version? That can be quite complex and time-consuming. Content on a platform like Telegram is very difficult, sometimes impossible, to source. If we can’t source it, then we won’t report it.

Q. We’ve talked a lot about visuals. Do you also verify text-based misinformation?

A. We prefer to work with visual evidence because anyone can type anything into a platform and say, this happened here or there. It’s not that we don’t care about text-based posts – we would like to know who’s posting them.

Is it someone on the ground? Is it somebody that we trust? Is it just an eyewitness? We would like to get in touch with them and establish what they’ve actually seen. Do they have any footage?

So, yes, we do care about posts that are just text-based, but there’s a different threshold. With video and images, we can look at it ourselves and investigate based on what we’re seeing. With text-based posts, it’s a bit more difficult and it takes a bit more time. 

***
This article has been archived for research purposes. The original version was published by the Reuters Institute for the Study of Journalism.