CNN reported that on March 15 at 1:45 pm NZDT, an unidentified suspect attacked two mosques in Christchurch, New Zealand. Details are still emerging, but it has been confirmed that 49 people were killed in the attack and 20 others were seriously injured. New Zealand Prime Minister Jacinda Ardern has said authorities have arrested four people in connection with the attacks.
Reports that a gunman live-streamed his killing spree have only compounded the horror. It has been alleged, but not officially confirmed, that an unidentified gunman broadcast a 17-minute helmet-camera video of himself opening fire on worshippers via Facebook. As news of the attacks circulated, clips from the harrowing livestream spread like wildfire through social media.
Authorities believe one attacker used Twitter to distribute a manifesto expressing anti-Muslim sentiment as context for the slaughter.
New Zealand police were contacted about the livestream right after the massacre happened. They subsequently asked Facebook to take it down. In response, the social network quickly took down the alleged attacker’s personal accounts.
The major video hosting sites have seemingly scrubbed the full video from their platforms. Depressingly, users have shared excerpts from the livestream on Twitter, YouTube, and Instagram.
In the coming weeks, countless politicians, commentators, and experts will weigh in on the shooters’ motives. However, one thing that shouldn’t be lost in the noise is the role Big Tech played in broadcasting an atrocity.
Social Media and the Broadcasting of Atrocity
The New Zealand massacre attracted global attention because its perpetrators staged it the same way a record company stages a surprise album release. In the immediate aftermath of the attack, New Zealand police asked social media users not to share the gunman's stream. Accordingly, major news outlets like CNN and the New York Times refused to link to or embed the alleged perpetrators' video or manifesto. Unfortunately, that plea did little to change the fact that the material in question is widely available.
Social media platforms currently aren't adequately equipped to deal with events like mass shootings. Last year, Facebook deployed new screening algorithms and was able to remove 99 percent of the content posted by ISIS and al-Qaeda. Tragically, those tools have proven far less effective at cracking down on content made by other violent criminals and terrorists.
Given the increasing prevalence of killers broadcasting their crimes, they should be.
Why Tech Companies Aren’t Serious About Content Moderation
The New York Times noted that the New Zealand shooters' purported posts have evaded content-screening algorithms because users are altering them before sharing. Consequently, machine learning alone doesn't seem to be the solution to content moderation. Human content moderation might be more effective, but it's a truly horrific job.
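One widely reported reason altered uploads slip through is that platforms rely partly on fingerprint matching against databases of known violating files. The toy sketch below (illustrative only; real systems use more robust perceptual hashes, not plain SHA-256) shows why exact fingerprinting fails: changing even a single byte of a re-encoded or trimmed copy produces a completely different digest.

```python
import hashlib

def sha256_fingerprint(data: bytes) -> str:
    """Return the SHA-256 hex digest, as an exact-match blocklist might store it."""
    return hashlib.sha256(data).hexdigest()

# Stand-in for the raw bytes of a banned video.
original = b"...video bytes..."

# A trimmed, re-encoded, or watermarked copy can differ by as little as one byte.
altered = original + b"\x00"

print(sha256_fingerprint(original) == sha256_fingerprint(altered))  # False
```

Perceptual hashing tolerates small alterations better, but determined uploaders can still mirror, crop, or re-film a video to defeat it, which is why altered clips kept resurfacing.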
So, how can tech firms prevent evil people from using their platforms to publicize their atrocities and causes?
Until artificial intelligence (AI) systems can encode significantly more nuance, a multifaceted solution is necessary. Given the sheer glut of content posted online every second, automated screening is a must. Moreover, the people who review algorithmically flagged content should be provided with robust mental health resources.
Social media services need to be held accountable for the content they host. It's no longer acceptable for corporations to hide behind safe-harbor provisions and terms-of-service agreements, especially since researchers have found that media coverage inspires copycat mass shootings. As the New Zealand massacre shows, even when traditional media pull back on giving shooters attention, social media and online video give these people the platform and attention they want.
Multinational technology corporations are always quick to make public pledges to do better when their services are misused. However, they've been slow to make real changes. The reason for Silicon Valley's inaction is that these companies make billions of dollars from users violating their policies. Whether it's personally identifiable information shared with third parties or copyright-infringing material, tech firms profit from looking the other way.
The Dark Side of Streaming
Relatedly, it’s long past time for there to be a serious conversation about the dark side of streaming. One of the most disturbing parts of the New Zealand shooter’s purported video is his reference to PewDiePie. While it would be asinine to ascribe any responsibility for the shooting to the Swedish YouTuber, it’s worth noting how massive his cultural imprint is.
PewDiePie, born Felix Kjellberg, has 89 million subscribers on YouTube. Attracting a vast audience has allowed the 29-year-old to earn what is estimated to be millions of dollars a year. Forbes reports other popular streamers like Markiplier and Ninja have become rich thanks to platforms like YouTube and Amazon-owned Twitch.
The wealth and fame attained by top broadcasters have prompted a streaming boom. In 2018, Twitch alone reportedly hosted more than 2.2 million unique monthly streamers. Sadly, fewer than 10 percent of those people actually made a profit from their passion. In fact, a recent piece in The Verge pointed out that many streamers spend years broadcasting without getting any views.
Likewise, a 2018 Vice article detailed the unsavory parts of China’s $4 billion livestreaming industry. While China’s streaming industry provides economic support and creative fulfillment for lots of people, it’s also having a negative effect on their quality of life. YY, the nation’s biggest livestreaming service, essentially forces broadcasters to form uncomfortably close relationships with patrons to get paid. It also requires them to produce so much content, they effectively do nothing else.
On a related note, webcamming is a billion dollar component of the international sex industry that is rife with exploitation. In addition to having a deleterious psychological effect on providers, sex-related streaming is also fostering new kinds of cybercrime.
Big Tech facilitates all of that exploitation, extortion, and psychological damage by providing platforms and support services.
The Need for Greater Accountability
Making laws to stop people from doing terrible things is a fool’s errand. People do awful things all over the planet every day even with a full understanding of the consequences. That’s just human nature. All we can do is create systems that enforce accountability.
The way things are now, big tech companies are never held accountable for the suffering they cause. These hugely influential corporations profit from massive unregulated platforms that highlight the actions of the depraved and influence others to try to top their body counts. As mass killings become increasingly commonplace, the tech corporations that shape our lives have repeatedly proven they're not interested in self-regulation.
Although free-market advocates won’t like it, the only way to curb the spread of disturbing content is through legislation. Again, government intervention isn’t going to stop the cruel and destructive from committing atrocities. What new accountability rules and regulations can do is force Silicon Valley to stop rewarding terrorists for filling graveyards.