Twitter investigating racial bias in photo preview algorithm

Twitter is currently reviewing its photo preview algorithm for bias.
Image: Unsplash | Morning Brew

Racial bias in algorithms is a problem with no easy answer, and it isn’t always easy to catch. Twitter is learning that lesson firsthand. The social media platform is currently looking into why its photo preview algorithm seemingly chooses to show white faces more frequently than minority ones.

A multitude of Twitter users demonstrated the flaw over the weekend. On Sunday, Liz Kelley from Twitter’s communications team said that the company’s initial tests showed no signs of bias. However, she went on to say, “It’s clear that we’ve got more analysis to do. We’ll open source our work so others can review and replicate.”

Regardless, the dilemma raises an interesting question: how do we regulate algorithms (not just Twitter’s) that appear to contain bias? It is something the tech industry will need to address moving forward.


Pick One

Cryptographic engineer Tony Arcieri was one of the first Twitter users to start testing the platform’s photo preview algorithm. He attached two tall images to a tweet, each containing photos of both Barack Obama and Mitch McConnell. The Twitter algorithm chose McConnell’s face for the preview both times.

When Arcieri inverted the colors, removing skin tone as a variable, Obama was displayed once and McConnell was displayed once.

Other users tested the algorithm with various approaches of their own, including adding more contrast to the smile and tweeting from different platforms. Computer scientist Matt Blaze found that the behavior of image previews seems to depend on whether the tweet was sent from the official Twitter app or from a service like Tweetdeck. The latter appeared to produce more neutral results.

One user even discovered that the preview flaw seems to work with cartoon characters. When using photos of a yellow and a black “Simpsons” character, the algorithm chose to preview the yellow one both times.

Twitter first started using a neural network to automatically crop photos and generate an image preview back in 2018. The goal was to find the most interesting part of the photo for the preview pane.

The company initially tried to use facial recognition to crop the images, but found that approach problematic since not every photo contains a face. It remains unclear exactly how the current algorithm works. Perhaps third-party researchers will uncover more about it once Twitter open-sources the code.
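As a rough illustration of how a saliency-driven crop like Twitter’s might work, here is a minimal sketch. The saliency function below is a deliberately crude stand-in (local deviation from mean brightness), not Twitter’s actual model; in a real system, a trained neural network would predict where viewers are likely to look, and the crop window would be centered on that region.

```python
import numpy as np

def toy_saliency(image: np.ndarray) -> np.ndarray:
    """Placeholder saliency map: absolute deviation from the mean intensity.
    A production system would use a trained neural network here."""
    gray = image.mean(axis=2)
    return np.abs(gray - gray.mean())

def crop_to_salient(image: np.ndarray, crop_h: int, crop_w: int) -> np.ndarray:
    """Crop a fixed-size preview window centered on the most salient pixel,
    clamped so the window stays inside the image bounds."""
    saliency = toy_saliency(image)
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    top = min(max(y - crop_h // 2, 0), image.shape[0] - crop_h)
    left = min(max(x - crop_w // 2, 0), image.shape[1] - crop_w)
    return image[top:top + crop_h, left:left + crop_w]

# Example: a dark image with one bright patch; the crop centers on the patch.
img = np.zeros((100, 100, 3))
img[70:80, 70:80] = 1.0
preview = crop_to_salient(img, 40, 40)
```

The bias question lives entirely inside the saliency model: whatever regions the network scores highest win the preview, so any skew in what it learned to find "interesting" propagates directly to the crop.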

Bigger Issue

It’s clear that Twitter will need to conduct more scientific testing to determine if its algorithm truly does favor white faces in photo previews.

Even if further investigations do reveal bias, there is no guarantee of a quick fix. Bias in machine learning systems is rarely deliberate; it typically creeps in through unrepresentative training data. And because neural networks are largely opaque, determining why they make the decisions they do is notoriously difficult.

With that in mind, this issue serves as a great reminder of the dangers of algorithmic bias. Regardless of intent, it could do measurable harm before companies like Twitter are able to correct it.

Moving forward, companies must continue to innovate new ways of preventing algorithmic bias in the products they roll out.

