How YouTube helps flat-earthers organize
It’s been a big month for conspiracy theories. Last week, Rep. Adam Schiff sent a strongly worded letter to Google and Facebook about the way their platforms recommend anti-vaccination content to parents, potentially putting healthy populations at risk. This week, new reports are taking a deeper look at the unintended consequences of YouTube recommendations, starting with the conspiracy theory that the earth is flat.
In the Guardian, Ian Sample has news of a recent presentation at the annual meeting of the American Association for the Advancement of Science, in which a researcher from Texas Tech University discussed her findings from interviews with 30 attendees of a recent flat-earther convention. The takeaway, Sample writes, is that YouTube brought them there:
Of the 30, all but one said they had not considered the Earth to be flat two years ago but changed their minds after watching videos promoting conspiracy theories on YouTube. “The only person who didn’t say this was there with his daughter and his son-in-law and they had seen it on YouTube and told him about it,” said Asheley Landrum, who led the research at Texas Tech University.
The interviews revealed that most had been watching videos about other conspiracies, with alternative takes on 9/11, the Sandy Hook school shooting and whether Nasa really went to the moon, when YouTube offered up Flat Earth videos for them to watch next.
Flat-earth videos could fit the definition of “borderline content,” which YouTube said last month it would stop recommending to users as suggested next videos to watch. (YouTube also rolled out new disciplinary procedures today.) But as Kevin Roose writes in a sharp column for the New York Times, YouTube’s efforts may be thwarted by the fact that many of its most popular creators are thriving precisely because they create borderline content. Now that they have tens of millions of followers, how much does changing the recommendation algorithm really matter?
Roose focuses on star YouTuber Shane Dawson, who has 20 million followers and recently posted a smash-hit, 104-minute documentary promoting various conspiracy theories. (Dawson has previously said of the flat-earth theory that it “kind of makes sense.”)
Innocent or not, Mr. Dawson’s videos contain precisely the type of viral misinformation that YouTube now says it wants to limit. And its effort raises an uncomfortable question: What if stemming the tide of misinformation on YouTube means punishing some of the platform’s biggest stars? […]
Part of the problem for platforms like YouTube and Facebook — which has also pledged to clean up misinformation that could lead to real-world harm — is that the definition of “harmful” misinformation is circular. There is no inherent reason that a video questioning the official 9/11 narrative is more dangerous than a video asserting the existence of U.F.O.s or Bigfoot. A conspiracy theory is harmful if it results in harm — at which point it’s often too late for platforms to act.
What makes this phenomenon insidious is that it can become dangerous even when no one believes the conspiracy theory being floated, at least not initially. In 2015, The Onion published a satirical op-ed by an infant who claimed he wanted to eat “one of those multicolored detergent pods.” This was followed in 2017 by a satirical video from CollegeHumor titled “Don’t Eat The Laundry Pods. Seriously (They’re Poison.)”
If you were online at all last year, you probably know what’s coming next. From the Washington Post’s history of the Tide Pods challenge:
Last year, U.S. poison control centers received reports of more than 10,500 children younger than 5 who were exposed to the capsules. The same year, nearly 220 teens were reportedly exposed, and about 25 percent of those cases were intentional, according to data from the American Association of Poison Control Centers.
So far in 2018, there have been 37 reported cases among teenagers — half of them intentional, according to the data.
The Tide Pods challenge was just a joke, until it wasn’t. Up to a certain point, videos about it were pure entertainment. So when did that change? Imagine that you’re working at YouTube. When do you flip the switch declaring the whole subject to be “borderline content”?
I don’t think this question is unanswerable. There was almost certainly a moment in the evolution of the Tide Pods story where it became clear that it had taken on a life of its own. But determining that moment in real time would require platforms to take on more of an editorial role than they have historically been comfortable with. (Forcing platforms to take such a role is, incidentally, one of the chief recommendations in the UK Parliament committee report I covered here yesterday.)
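For what it’s worth, the detection half of that problem isn’t exotic engineering. Below is a minimal sketch of the kind of trend heuristic a trust-and-safety team could run; the signal, window size, and threshold are all hypotheticals of mine, not anything YouTube has described.

```python
# Hypothetical sketch: flag a topic as "borderline" once real-world harm
# signals (say, poison-control reports mentioning it) spike well above
# their recent baseline. Every number here is invented for illustration.
from collections import deque

def make_spike_detector(window: int = 14, multiplier: float = 3.0):
    """Return a checker that flags a daily count far above the trailing mean."""
    history = deque(maxlen=window)

    def check(daily_count: int) -> bool:
        baseline = sum(history) / len(history) if history else 0.0
        history.append(daily_count)
        # Flag when today's count is `multiplier` times the trailing average,
        # provided the baseline is non-trivial (to avoid flagging pure noise).
        return baseline >= 1.0 and daily_count >= multiplier * baseline

    return check

# Toy run: weeks of joke-level chatter, then a real outbreak of incidents.
is_borderline = make_spike_detector()
for day, count in enumerate([1, 2, 1, 0, 2, 1, 3, 2, 1, 2, 12, 25]):
    if is_borderline(count):
        print(f"day {day}: {count} reports, escalate to policy review")
```

A heuristic like this wouldn’t make the editorial judgment for you. But it would turn “when do we even look?” into a tractable engineering question, which is the kind of question platforms have historically been much more comfortable answering.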
Generally I find it tedious when reporters slag platforms for “not admitting they’re a media company.” “Media company” is not a legal definition, after all, and Facebook has already acknowledged that it bears responsibility for what users post.
And so I don’t care whether tech platforms identify as media companies. But when it comes to policing the conspiracy theories that they help flourish, I do wish that they would act like media companies. When the next Tide Pods challenge arrives — and it will — a little editorial intervention could go a long way.
Democracy
FTC complaint accuses Facebook of revealing sensitive health data in groups
A new complaint filed with the Federal Trade Commission accuses Facebook of failing to prevent sensitive health data from being shared in its groups, Colin Lecher reports:
The complaint, filed with the agency last month and released publicly today, argues that the company improperly disclosed information on members of closed groups. The issue first came into the public eye in July, when members of a group for women with a gene mutation called BRCA discovered sensitive information, like names and email addresses of members, could be downloaded in bulk, either manually or through a Chrome extension.
Around that time, Facebook made changes to Groups that ended the practice, but said the decision was not related to the BRCA group’s concerns. The company also said at the time that the ability to view the data was not a privacy flaw, and noted that there was also an option for “secret” groups, which are more difficult to join but also have more limited discoverability.
Expanding transparency around political ads on Twitter
Twitter is bringing its ad transparency tools, which are based on ones Facebook developed after the 2016 election, to European Union member states, India, and Australia.
What Happens When Techno-Utopians Actually Run a Country
Here’s a compelling longread about how a populist web entrepreneur planted the seeds of democratic revolution in Italy, only to die as the party he helped found formed an alliance with the hard right. Before America had Trump, Italy had Berlusconi; we would do well to reflect on Italy’s recent experiments with direct democracy:
Five Star’s web portal now included a tool for subjecting important decisions to an online vote, and so the decision on whether to ally with UKIP was put to the movement: direct democracy in action. But in the days and weeks before the vote, Casaleggio published articles on the blog hailing Farage as a democratic crusader against a monolithic EU. “Farage Defends the Sovereignty of the Italian People,” read one headline. Another article, entitled “Nigel Farage, The Truth,” listed UKIP’s supposedly progressive credentials, such as being an “antiwar … democratic organization” where “no form of racism, sexism, or xenophobia is tolerated,” and which believes in “direct democracy.”
The post that finally teed up the online vote made it very clear that the proposed alliance with UKIP was the best and only solution. According to Zanni, this was Casaleggio Associates’ modus operandi when it came to online votes: Provide a “cosmetic” appearance of choice while pushing for a particular option. In the end, 78 percent of the members who voted opted to join Farage. After years of studying how to shape online consensus, Casaleggio had mastered the art.
Russia’s Network of Millennial Media
Bradley Hanlon and Thomas Morley write about the latest Russian influence campaign to pop up on American social networks:
The video is short, digestible, and catchy. A young Twitter personality dives into the latest news from Venezuela while trendy music plays in the background. According to the video, President Donald Trump and his “right-wing” allies in South America are supporting a “straight-up coup” in the country. She continues, “My god – one week it’s Syria, today it’s Venezuela, next week it’s Iran. Pick a damn country America!”
The video, which has over 300,000 views on Twitter and nearly 600,000 views on Facebook, has high production value and is clearly targeted at digital-oriented millennials. But this isn’t a BuzzFeed or Vice production. Instead a watermark for “Soapbox,” an online media company, sits in the right-hand corner. What many people watching and sharing the video probably don’t know is that – along with several other social media channels targeting young, digital consumers – Soapbox (@SoapboxStand on Facebook) is a product of Russia’s state-backed media.
Emoji are showing up in court cases exponentially, and courts aren’t prepared
Dami Lee explores the dilemma that emoji-based communication is posing for our courts:
Bay Area prosecutors were trying to prove that a man arrested during a prostitution sting was guilty of pimping charges, and among the evidence was a series of Instagram DMs he’d allegedly sent to a woman. One read: “Teamwork make the dream work” with high heels and money bag emoji placed at the end. Prosecutors said the message implied a working relationship between the two of them. The defendant said it could mean he was trying to strike up a romantic relationship. Who was right?
Emoji are showing up as evidence in court more frequently with each passing year. Between 2004 and 2019, there was an exponential rise in emoji and emoticon references in US court opinions, with over 30 percent of all cases appearing in 2018, according to Santa Clara University law professor Eric Goldman, who has been tracking all of the references to “emoji” and “emoticon” that show up in US court opinions. So far, the emoji and emoticons have rarely been important enough to sway the direction of a case, but as they become more common, the ambiguity in how emoji are displayed and what we interpret emoji to mean could become a larger issue for courts to contend with.
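The underlying problem is easy to demonstrate. What a court record actually preserves is a Unicode code point; the glyph a judge or juror sees depends on whichever platform renders it. Here’s a quick illustration in Python, using the emoji from the case above plus the best-known example of divergent rendering (Apple switched U+1F52B from a realistic handgun to a toy water gun in 2016, while other vendors kept the handgun for years afterward):

```python
# A message stores code points, not pictures. The same code point can be
# drawn very differently on different platforms: U+1F52B is named PISTOL
# in Unicode, but Apple has rendered it as a water gun since 2016.
import unicodedata

for char in ["\U0001F460", "\U0001F4B0", "\U0001F52B"]:
    print(f"U+{ord(char):X}  {unicodedata.name(char)}")

# Output:
# U+1F460  HIGH-HEELED SHOE
# U+1F4B0  MONEY BAG
# U+1F52B  PISTOL
```

The high heels and money bag from the pimping case are the first two entries. What the defendant’s phone displayed and what a jury saw in a printout may not have been the same image, which is exactly the ambiguity Goldman is warning about.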
Elsewhere
Even Without Amazon, Tech Could Keep Gaining Ground in New York
In the wake of Amazon bailing on NYC, there’s been much clucking that the city squandered a once-in-a-generation chance to become a tech powerhouse. But the tech jobs are likely still coming anyway, report Ben Casselman, Keith Collins, and Karl Russell:
Long before Amazon announced that New York had won a share of its second-headquarters sweepstakes, tech was a rising force in the local economy. Google, which already has thousands of workers in New York, plans to double its work force in the city and build a $1 billion campus just south of the West Village. Facebook, Apple, Uber and other companies are also expanding their presences, as is a rising generation of homegrown companies.
Even Amazon itself said Thursday that it planned to keep adding to its New York work force.
Even years later, Twitter doesn’t delete your direct messages
Today’s extremely on-brand Twitter story is this joint from Zack Whittaker and Natasha Lomas reporting that the company cannot reliably delete the messages that you, uh, delete. (OK, this story is four days old but I’ve got a lot going on right now!)
Twitter retains direct messages for years, including messages you and others have deleted, but also data sent to and from accounts that have been deactivated and suspended, according to security researcher Karan Saini.
Saini found years-old messages in a file from an archive of his data obtained through the website from accounts that were no longer on Twitter. He also reported a similar bug, found a year earlier but not disclosed until now, that allowed him to use a since-deprecated API to retrieve direct messages even after a message was deleted from both the sender and the recipient — though, the bug wasn’t able to retrieve messages from suspended accounts.
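If you want to audit your own archive for the same thing, it’s a few lines of scripting. Here’s a minimal sketch assuming a simplified JSON export; the file name and field names are my own inventions, and the real Twitter archive’s layout differs and would need adapting:

```python
# Hypothetical sketch: scan a simplified DM export for messages older than
# two years, i.e., messages you might reasonably have expected to be gone.
# The file name and fields are illustrative, not Twitter's actual format.
import json
from datetime import datetime, timedelta, timezone

CUTOFF = datetime.now(timezone.utc) - timedelta(days=2 * 365)

def old_messages(path: str):
    with open(path) as f:
        messages = json.load(f)  # assume a flat list of message objects
    for msg in messages:
        # assume ISO 8601 timestamps with an offset, e.g. "2014-06-01T12:00:00+00:00"
        created = datetime.fromisoformat(msg["created_at"])
        if created < CUTOFF:
            yield msg["conversation_id"], created

for convo, created in old_messages("direct_messages.json"):
    print(f"{convo}: message from {created:%Y-%m-%d} still present")
```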
How loot boxes hooked gamers and left regulators spinning
A loot box is a digital good, most commonly found in a video game, that gives the player a random reward. Some items can only be found in loot boxes, leading players to buy many of them in the hopes of getting the thing they want. This is basically how I collected basketball cards in the 1990s, but is this maybe also illegal gambling? (I run some quick numbers after the excerpt.) Makena Kelly takes a look:
Some countries in the European Union have already begun to act. Last September, the Gambling Regulators European Forum (GREF) put out a statement that was signed by regulators from 15 different EU countries that were concerned about the practice. Last May, the Belgian Gaming Commission decided that loot boxes fell under the jurisdiction of its gambling law, and studios like Blizzard, Valve, and EA all pulled loot boxes from their games in those countries. As the concern spread across Europe, it started to catch fire in the US, but that momentum has stalled, and the video game industry’s lobbying efforts over this $30 billion industry seem to have curbed any tangible progress to regulate the sale of loot boxes.
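The gambling comparison is easy to make precise, by the way. If the item you want drops with probability p, the number of boxes you need follows a geometric distribution: 1/p boxes on average, with a (1 - p)^n chance of still being empty-handed after n boxes. A quick sketch with made-up numbers (the 1 percent drop rate and $3 price are hypothetical):

```python
# Back-of-the-envelope loot box math with invented numbers: a 1 percent
# drop rate and a $3-per-box price. Expected spend follows the geometric
# distribution; the "still empty-handed" odds are (1 - p) ** n.
DROP_RATE = 0.01   # hypothetical chance a box contains the item you want
BOX_PRICE = 3.00   # hypothetical price per box, in dollars

expected_boxes = 1 / DROP_RATE
print(f"Expected boxes to get the item: {expected_boxes:.0f} "
      f"(about ${expected_boxes * BOX_PRICE:.0f})")

for n in (50, 100, 200):
    miss = (1 - DROP_RATE) ** n
    print(f"After {n} boxes (${n * BOX_PRICE:.0f}): "
          f"{miss:.0%} chance you still don't have it")
```

At those numbers, the average player spends about $300 chasing a single item, and more than a third of players who buy 100 boxes still come up empty. That expected-loss structure is the part regulators can actually measure, unlike my 1990s card shop.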
Launches
Slack off. Send videos instead with $11M-funded Loom
I always enjoy seeing social apps reborn as enterprise solutions. So here’s some sort of video-based … Slack competitor? Josh Constine reports:
If a picture is worth a thousand words, how many emails can you replace with a video? As offices fragment into remote teams, work becomes more visual, and social media makes us more comfortable on camera, it’s time for collaboration to go beyond text. That’s the idea behind Loom, a fast-rising startup that equips enterprises with instant video messaging tools. In a click, you can film yourself or narrate a screenshare to get an idea across in a more vivid, personal way. Instead of scheduling a video call, employees can asynchronously discuss projects or give ‘stand-up’ updates without massive disruptions to their workflow.
Takes
Global Britain can lead the world in confronting the dark side of big tech
Ben Scott, a former adviser to the secretary of state during the Obama Administration, says that Facebook succeeded in the rare feat of bringing together the major British parties in agreement about something:
Setting aside partisan battling for once, politicians of every stripe seem to agree that the titans of digital media should take responsibility for the harms their products can cause to public health, security, privacy and much more. And it would also be nice if they paid a fair share in tax.
Despite the relentless turbulence of the Brexit debates, select committees in Parliament (led by the Tories and Liberal Democrats) have conducted serious investigations in the last few months into the social problems caused by the tech industry. Both have proposed major changes to the law.
The Pentagon Needs to Woo AI Experts Away From Big Tech
Amy Webb says the Department of Defense is losing the fight to woo and retain talent from the tech giants:
The future of AI—and, by extension, the future of humanity—is already controlled by just nine big tech titans, who are developing the frameworks, chipsets, and networks, funding the majority of research, earning the lion’s share of patents, and, in the process, mining our data in ways that aren’t transparent or observable. Six are in the US, and I call them the G-MAFIA: Google, Microsoft, Amazon, Facebook, IBM, and Apple. Three are in China, and they are the BAT: Baidu, Alibaba, and Tencent.
The few government agencies built for innovation—the US Digital Service, the US Army’s Futures Command, the Defense Innovation Board, and the Defense Innovation Unit (DIU) initiatives—are brittle in their youth and subject to defunding and staff reductions as the revolving door of political appointees spins. In practical terms, there is too little strategic collaboration between the G-MAFIA and our government agencies or military offices—at least not without a lucrative contract in place. While the G-MAFIA usually lobby for huge tax incentives and breaks to do business, they also must agree to the arcane, outdated procurement requirement policies of the military and government. This doesn’t exactly accelerate AI in our national interest. If anything, it shines a bright light on the cultural differences between Silicon Valley and DC, and it slows down modernization.
And finally …
Instagram posts land former Trump confidant in deeper legal trouble
“Never tweet” has been a popular theme in this newsletter from the beginning. But this week Roger Stone gave us good reason to say: Never Instagram. From Makena Kelly:
Last night, former Donald Trump adviser Roger Stone posted an image on Instagram of the federal judge presiding over his case that displayed a crosshairs logo in the background near her head. Now, that same judge is calling for Stone to explain the posts in court this week.
The original post with the crosshairs near US District Judge Amy Berman Jackson’s head was deleted soon after Stone published it. After the post was perceived as a direct threat to Jackson by many social media users, Stone deleted it and posted the same image, cropping out the crosshairs.
Some Instagram posts really could use a filter. The kind that stops them from being posted in the first place.
Talk to me
Send me tips, comments, questions, and evidence that the earth is round: casey@theverge.com.