When it comes to human content moderation on Facebook, AI provides few solutions

Will Facebook consider AI content moderators?

By now, we have all probably read about the lives of Facebook content moderators in the U.S. The TLDR version: It’s awful.

The pay is terrible. Employees are forced to sift through a sea of online offal that humans regularly produce, digesting endless currents of pornography and traumatic content day in and day out. Unsurprisingly, this content takes a toll on employees’ mental health. Beyond that, the conditions are punishing, marked by intense micromanaging, where missteps and errors lead to swift termination.

To cope with the hell of it all, employees offset the emotional toll by getting high on the job, having sex in the office (“trauma bonding”), and telling dark, offensive jokes in the workplace.

It’s a wild report. The details of these employees’ working conditions are crucial to fully understanding and exposing the beast that is Facebook. The report is also an opportunity to look at how technology, specifically artificial intelligence (AI), could step in and help those same content moderators. Or can it?

The Nuance Problem

For most big tech companies, artificial intelligence is always the solution, especially when it comes to content moderation. But can AI actually take over or relieve humans of their duty to scrub social media sites of digital excrement? In a word, no.

The way algorithms currently help human moderators is by acting as a stopgap. Algorithms can block some content on their own, and they can also flag content for human moderators to take a closer look. Broad categories trigger the AI: when confronted with “nudity” or “guns,” the AI will eliminate obviously offensive material (though it is far from perfect).
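
That stopgap pattern (automatically remove the obvious, escalate the ambiguous, let the rest through) can be sketched as simple confidence-threshold routing. The sketch below is a hypothetical illustration, not Facebook’s actual system; the classifier output, category names, and thresholds are all assumptions:

```python
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    REMOVE = auto()        # obvious violation: removed automatically
    HUMAN_REVIEW = auto()  # ambiguous: escalated to a human moderator
    ALLOW = auto()         # confidently benign: passes through


@dataclass
class Prediction:
    """Output of a (hypothetical) content classifier."""
    category: str  # e.g. "nudity" or "guns"
    score: float   # model confidence in [0, 1]


def triage(pred: Prediction,
           remove_threshold: float = 0.95,
           review_threshold: float = 0.60) -> Action:
    """Route content by classifier confidence (thresholds are made up)."""
    if pred.score >= remove_threshold:
        return Action.REMOVE
    if pred.score >= review_threshold:
        return Action.HUMAN_REVIEW
    return Action.ALLOW


# A borderline "guns" hit lands in the human review queue.
print(triage(Prediction(category="guns", score=0.72)))  # Action.HUMAN_REVIEW
```

Under a scheme like this, anything the model is unsure about still lands in front of a person, which is exactly where the nuance problem begins.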

Content that possesses a greater depth of nuance, however, trips up the algorithms and humans too. Imagine all the times you have had to explain the context and subtleties of a meme or piece of internet culture.

Now imagine a computer program trying to figure out why a woman holding Fiji water on a tray is funny or meaningful (it’s not, just an example). So it goes with the majority of content online. This extends beyond stupid memes, too. Algorithms have difficulty comprehending other complex forces like power dynamics, race relations, political systems, and cultural norms.

AI Makes Things Better Though, Right?

Though machine learning and AI technology are developing rapidly, they are failing to keep up with humans and online culture. Granted, algorithms are gaining a better grasp of context, but the day when AI completely takes over from human moderators is far off.

Where AI can make the greatest advances in content moderation is in the sheer speed at which algorithms can process text, images, and video. That’s about it, though.

In fact, experts are split about the ways that AI will develop in the coming years. Some argue that machines will completely take over content moderation while others remain skeptical, arguing that fully automated content moderation could lead to abuses of power and legal challenges.

How Does Social Media Go Forward?

Using AI in content moderation is a good and worthwhile pursuit. However, providing humans with decent wages, workplace support, and working conditions that do not induce acute PTSD symptoms in an office setting is, you know, also a decent solution.

The internet and technology move at such a fast pace that tech companies face pressure to fix issues at breakneck speed. There is an incentive to either implement machine-driven solutions or treat humans like machines, neither of which is a perfect answer.

Fair and accurate content moderation on social media is a problem without precedent in human history. How Facebook and its ilk go forward is anyone’s guess.