Facebook is using AI to speed up content moderation

It’s no secret that Facebook is a hot mess when it comes to misinformation and harmful content. Moderating all of it manually is simply impossible. After all, more than 3.21 billion people actively use Facebook and the company’s other apps.

To help its moderation team keep up with the flood of misinformation, Facebook is turning to artificial intelligence (AI). The social media giant recently shared how machine learning is making content moderation easier.

Although AI has played a significant role in moderation for some time, Facebook is now using it to prioritize the content that human reviewers need to comb through.

Tweaking the System

When a Facebook user shares something that violates the company’s guidelines, other users and machine learning algorithms can flag it. In some cases, responding to the violation is straightforward and can be handled automatically. Other cases aren’t so cut-and-dried.
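
To make that split concrete, here is a minimal sketch of what such a triage step might look like. The threshold, the names, and the idea of matching against previously removed content are assumptions for illustration; Facebook hasn’t published how its automated handling actually works.

```python
# Hypothetical confidence cutoff for automatic action; Facebook has not
# published its real thresholds or pipeline details.
AUTO_ACTION_THRESHOLD = 0.98

def triage(violation_score: float, matches_removed_content: bool) -> str:
    """Route a flagged post: act automatically on clear-cut cases,
    send everything else to the human review queue."""
    if matches_removed_content or violation_score >= AUTO_ACTION_THRESHOLD:
        return "auto_remove"        # near-certain violation, handled by the system
    return "human_review_queue"     # ambiguous case, needs a moderator

print(triage(0.99, False))  # -> auto_remove
print(triage(0.60, False))  # -> human_review_queue
```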

That’s why Facebook has a team of more than 15,000 human reviewers. The company has received criticism in the past for how it treats these moderators. Many ex-employees have come forward to describe working conditions that left them with psychological trauma after they were forced to sort through the worst of the worst.

Regardless, moderators are tasked with reviewing flagged posts to determine whether they need to be removed. This process has typically been carried out in something close to chronological order. However, Facebook wants the most sensitive content reviewed first so that it can be dealt with swiftly.

That’s where artificial intelligence comes in.

The social media giant is using various machine learning algorithms to sort through flagged posts and re-order them within the moderation queue. It says that factors like virality, severity, and likelihood of rule-breaking are all taken into account.

As of now, it isn’t clear how those factors are weighted or how much each one affects a post’s final position in the queue. That said, Facebook has made it clear that it aims to review the most harmful posts first.

This means a post that’s going viral will almost certainly be reviewed before one that isn’t. Likewise, posts dealing with things like terrorism, child exploitation, and self-harm will also be pushed to the top of the queue due to their real-world impact.
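
As a rough illustration of what this kind of re-ranking might look like, here is a short Python sketch. The field names, weights, and simple linear scoring are assumptions; the only confirmed detail is that virality, severity, and likelihood of rule-breaking all factor in, and Facebook’s actual system is almost certainly more sophisticated.

```python
from dataclasses import dataclass

@dataclass
class FlaggedPost:
    post_id: str
    virality: float              # e.g. normalized share/view velocity, 0 to 1
    severity: float              # how harmful the suspected violation is, 0 to 1
    violation_likelihood: float  # classifier confidence that rules were broken, 0 to 1

# Placeholder weights; Facebook has not disclosed how the factors are combined.
WEIGHTS = {"virality": 1.0, "severity": 2.0, "violation": 1.0}

def priority_score(post: FlaggedPost) -> float:
    """Collapse the three publicly named factors into a single ranking score."""
    return (WEIGHTS["virality"] * post.virality
            + WEIGHTS["severity"] * post.severity
            + WEIGHTS["violation"] * post.violation_likelihood)

def reorder_queue(queue: list[FlaggedPost]) -> list[FlaggedPost]:
    """Sort the moderation queue so the highest-priority posts are reviewed
    first, replacing the old roughly chronological ordering."""
    return sorted(queue, key=priority_score, reverse=True)
```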

Urgent Need

To be clear, Facebook isn’t abandoning its human review process. The new AI simply prioritizes the queue so that moderators can deal with sensitive content more quickly. Ultimately, that is a good thing for everyone.

Facebook previously noted that it took action against 9.6 million pieces of content in the first quarter of 2020. That was a huge jump from the prior quarter, which saw actions taken against 5.7 million posts.

In the wake of the presidential election and in the heart of the COVID-19 pandemic, Facebook is dealing with more harmful posts than ever. The next time it releases numbers regarding how many posts are being moderated, they will likely be record-shattering. Hopefully, the new AI approach will help the social media giant regain control of its platform.
