How Facebook and YouTube reacted to the New Zealand massacre

Facebook, YouTube reveal how they fought uploads of New Zealand shooting

The world is still reeling from the mass murder of 50 Muslims in Christchurch, New Zealand on Friday. Notably, the 28-year-old terrorist responsible made his attack even more horrifying by livestreaming his murderous rampage via social media.

Consequently, a video record of his senseless crime quickly spread across the internet. Now, YouTube and Facebook have revealed how they combated the spread of the livestream in the aftermath of the atrocity.

‘Unprecedented Volume’

In an interview with The Washington Post, Neal Mohan, chief product officer at YouTube, said that the video service faced a deluge of videos related to the crime that was “unprecedented both in scale and speed.” According to Mohan, YouTube content moderators worked through the night to take down the videos, but at one point new clips were being uploaded every second.

Some users attempted to evade YouTube’s automated flagging tools by slightly altering the footage. Eventually, the platform disabled some search features, like “sort by upload date,” to limit the visibility of the videos. It also expedited removals by cutting off some of the additional human review steps it normally applies.
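Neither platform has disclosed how its matching systems actually work, but the cat-and-mouse dynamic described above can be illustrated with a toy “average hash,” a simple kind of perceptual fingerprint. The frames and the brightness tweak below are hypothetical; the point is that an exact file hash changes completely under a minor edit, while a perceptual hash changes little, which is why platforms lean on fuzzy matching and why evaders keep altering footage:

```python
# Illustrative sketch only: platforms have not published their real matching
# pipelines. This toy "average hash" fingerprints an 8x8 grayscale frame and
# shows that a slight brightness edit, which defeats exact file matching,
# barely moves the perceptual fingerprint.

def average_hash(frame):
    """One bit per pixel: 1 if the pixel is above the frame's mean brightness."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# Hypothetical 8x8 grayscale frame with a simple brightness gradient.
original = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
# A slightly brightened copy, standing in for an evasive re-upload.
brightened = [[min(p + 10, 255) for p in row] for row in original]

distance = hamming(average_hash(original), average_hash(brightened))
# A small distance (under some threshold) would still count as a match,
# even though the raw bytes of the two "files" now differ everywhere.
print(distance)
```

Real systems fingerprint many frames per video and compare against a database of known bad hashes, but the same threshold-matching idea applies.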

Beyond removing videos of the shooting, YouTube prioritized videos from sources like the New Zealand Herald and USA Today. It’s a practice the company has undertaken previously during breaking news events. A YouTube spokesperson also said the company “terminated hundreds of accounts created to promote or glorify the shooter.”

20 Percent Failure Rate

As for Facebook, the firm said on Saturday it removed 1.5 million videos in the 24 hours following the attack. The company also noted it was able to remove 1.2 million clips at the point of upload. Mia Garlick of Facebook New Zealand also stated, “Out of respect for the people affected by this tragedy and the concerns of local authorities, we’re also removing all edited versions of the video that do not show graphic content.”

However, as TechCrunch notes, if 1.2 million of the 1.5 million videos were blocked at upload, then 300,000 got through. In other words, the corporation’s automated upload filters had a failure rate of 20 percent. Facebook and Twitter quickly shut down the shooter’s accounts.
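TechCrunch’s arithmetic can be reproduced directly from Facebook’s two published figures:

```python
# Reproducing the filter-rate arithmetic from Facebook's reported numbers.
total_removed = 1_500_000      # videos removed in the 24 hours after the attack
blocked_at_upload = 1_200_000  # of those, caught by automated filters at upload

slipped_through = total_removed - blocked_at_upload
failure_rate = slipped_through / total_removed

print(slipped_through)        # 300000
print(f"{failure_rate:.0%}")  # 20%
```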

Nevertheless, social media users around the world shared the terrorist’s livestream. TechCrunch reported finding multiple copies of the video on Facebook and Twitter 12 hours after the shooting.

Facebook hasn’t revealed the videos’ actual engagement figures: the number of views, shares, and reactions the clips received prior to their takedown. Some critics feel that by relying on such “vanity metrics,” Facebook is preventing the public from grasping social media’s role in the “distribution and amplification” of the shooting videos.

Free Speech vs. Political Extremism

No one disputes that the social media companies are trying their best to deal with these issues as they arise. Still, as others have noted, the New Zealand mass shooting is a case study in the challenges and limitations of online content moderation.

It has been confirmed that the New Zealand shooter was steeped in Internet culture before his slaughter of 50 people. In addition to writing a meme-laden manifesto and giving a pre-rampage shout-out to YouTube star PewDiePie, he also maintained a presence on 8Chan.

While free speech is a sacred right in America and most western democracies, it’s also becoming increasingly clear that political extremists—whether this shooter or the American terrorist who murdered 11 Jews in a Pittsburgh synagogue last fall—are becoming radicalized online.

Balancing the right to unfettered political expression with preventing militants from committing acts of violence remains one of the great challenges of the information age. And based on the tragic events of the last week, it may be an even bigger issue than we thought.