Instagram unveils new anti-bullying features for younger users

Two years ago, researchers found that 42 percent of Instagram users aged 12 to 20 had experienced bullying, a higher rate than on competing services such as Facebook, Snapchat, YouTube, and Twitter. On Monday, the social networking service unveiled two new features designed to protect its younger users.

However, the Facebook subsidiary’s new anti-bullying tools don’t involve banning users or censoring posts.

AI-Enabled Comment Moderation

Instagram’s most notable update is a prompt that asks users to reflect on potentially offensive comments before publishing them. The service uses an artificial intelligence (AI) program to scan the comments made by its more than one billion monthly active users. If a comment is flagged, the program asks its author, “Are you sure you want to post this?” At that point, the user can revise the text or post it as is.
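The flow described above can be sketched in a few lines. This is a toy illustration, not Instagram’s actual system: the classifier, the blocklist, and the function names are all hypothetical stand-ins.

```python
# A minimal sketch of the "Are you sure you want to post this?" nudge.
# looks_hurtful() is a toy stand-in for Instagram's AI model.

def looks_hurtful(comment: str) -> bool:
    # Hypothetical classifier: flag comments containing words
    # from a tiny illustrative blocklist.
    blocklist = {"stupid", "ugly", "loser"}
    return any(word in comment.lower().split() for word in blocklist)

def submit_comment(comment: str, confirm) -> str:
    """Post a comment, nudging the author first if it looks hurtful.

    `confirm` simulates the confirmation dialog: it receives the draft
    and returns the (possibly edited) final text, or None to cancel.
    """
    if looks_hurtful(comment):
        revised = confirm(comment)
        if revised is None:
            return "discarded"
        comment = revised
    return f"posted: {comment}"
```

The key design point mirrors the article: nothing is blocked outright. The author always gets the final say, either editing the draft, posting it unchanged, or discarding it.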
While the firm’s approach to cleaning up its platform may seem odd, Instagram head Adam Mosseri believes it’ll be useful. In a blog post, the executive noted that in early tests the feature’s interventions led users to post less “hurtful” language. Aware of the limitations of current AI technology, the company also lets users report false positives.

Moreover, Mosseri said Instagram deployed the feature because it takes the burden of reporting bullying comments off of teen users. He noted the service’s younger subscribers tend not to push back against online bullying because they fear retaliation.

Instagram’s desire to protect its users without putting them in the line of fire also informed the creation of its second new anti-harassment feature.

Restricted Interactions

Mosseri revealed the platform’s other new anti-bullying tool lets users restrict their interactions with others. Subscribers can now hide an individual’s comments from appearing underneath their posts. However, bullies won’t know that their hurtful commentary has been concealed; from their own accounts, the comments still appear as usual.
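The visibility rule above can be modeled in a few lines. This is an illustrative sketch of the behavior the article describes, not Instagram’s actual implementation; all names are hypothetical.

```python
# A toy model of the "restrict" rule: a restricted account's comments
# stay visible to that account, but are hidden from everyone else,
# so the restricted user never learns they have been concealed.

def visible_comments(comments, restricted, viewer):
    """Return the comments a given viewer sees under a post.

    `comments` is a list of (author, text) pairs; `restricted` is the
    set of accounts the post owner has restricted.
    """
    return [
        (author, text)
        for author, text in comments
        if author not in restricted or author == viewer
    ]
```

The single `or author == viewer` clause captures the feature’s subtlety: filtering depends not only on who wrote a comment but on who is looking at it.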

Moreover, users can now prevent other subscribers from seeing when they are active on the platform and whether they’ve read their direct messages.

At April’s Facebook F8 developer conference, Instagram announced that it would roll out updates designed to make its platform more hospitable. At the time, the company teased early versions of its newly released tools, as well as unreleased features that would encourage users to take periodic breaks from the service.

A Thoughtful Approach

In recent years, every major social media service has struggled to detoxify its platform, and companies have encountered significant backlash after banning famous but controversial figures or censoring posts. Instagram’s approach to the issue is more thoughtful than those of its competitors.

For one thing, the firm’s new moderation tool is clever. The corporation isn’t infringing upon individual users’ freedom of speech; instead, it is asking subscribers to consider the impact of their words. By requiring people to take a breath before posting, Instagram might make its platform less aggressive and tense.

Furthermore, the service’s introduction of restricted interactions suggests a significant change in the way social media companies approach harassment. The firm’s new tool limits the ability of bad actors to bully and harass others. As such, those users might decide to abandon the platform. Defying industry convention, Instagram is risking driving down its engagement rates to make its platform better.

If the company’s new features prove successful, it will be interesting to see if other social media giants adopt its methods.
