An algorithm can’t save Twitter’s white nationalism problem

Twitter has a white nationalism problem

Twitter has long been a destination where people run memes into the ground and where brands try to talk to you like they’re your best friends. But like most social media sites, Twitter has had its problems with far-right extremism and white nationalists.

At the end of last month, Motherboard reported that, according to a Twitter employee, the company has resisted implementing an algorithm to eliminate white supremacists from the site. Why? Because it would mean banning some Republican politicians, too.

Twitter has long adhered to its hateful conduct policy to weed out malicious tweets. But even with those efforts in place, harassment, violent tweets, and figures who espouse white nationalist viewpoints endure on the site. It’s a thorny issue that isn’t as clear-cut as it seems.


Twitter Is A Different Beast

As the Motherboard article points out, Twitter has largely eradicated the presence of ISIS on its platform. The article questions why Twitter hasn’t seen the same success when it comes to white nationalism.

Twitter can’t simply reuse that approach, because combating white nationalists is a different proposition. In an article for the New Yorker, Kate Klonick details the use of “hash” technology.

“Hashing works like fingerprinting for online content: whenever authorities discover, say, a video depicting sex with a minor, they take a unique set of pixels from it and use that to create a numerical identification tag, or hash,” she writes. “The hash is then placed in a database, and, when a user uploads a new video, a matching system automatically (and almost instantly) screens it against the database and blocks it if it’s a match.”
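The matching system Klonick describes can be sketched in a few lines. This is only an illustration, not Twitter’s actual implementation: production systems use perceptual hashes (such as Microsoft’s PhotoDNA) that tolerate re-encoding and small edits, whereas the cryptographic hash below only catches exact copies — which is also why the technique breaks down for text, as the next paragraph explains. All function names here are hypothetical.

```python
import hashlib

# Hypothetical in-memory "hash database" of known extremist content.
banned_hashes = set()

def fingerprint(content: bytes) -> str:
    # SHA-256 stands in for a real perceptual hash; it only matches
    # byte-for-byte identical uploads.
    return hashlib.sha256(content).hexdigest()

def register_banned(content: bytes) -> None:
    """Authorities add a known item's hash to the database."""
    banned_hashes.add(fingerprint(content))

def screen_upload(content: bytes) -> bool:
    """Return True if an upload matches a known banned item."""
    return fingerprint(content) in banned_hashes

register_banned(b"known-extremist-image-bytes")
print(screen_upload(b"known-extremist-image-bytes"))  # True: exact match blocked
print(screen_upload(b"slightly edited image bytes"))  # False: any edit evades an exact hash
```

The second lookup shows the core limitation: change a single byte and the exact hash no longer matches, which is why real deployments rely on fuzzier perceptual fingerprints for images, and why the approach fails entirely for the ambiguous language of tweets.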

She continues, noting that authorities have used “hash” technology to prevent the spread of extremist images like those used by ISIS and some white nationalist groups. However, on Twitter, where irony, sarcasm, and trolling win the day, many white nationalists express themselves accordingly. The complexity of language, especially in online discourse, makes it exceedingly difficult to detect sincerity and intent. When it comes to threats and malicious tweets, things get even more complicated.

What Could Twitter Actually Do?

Besides an algorithm that could potentially complicate an already complicated problem, Twitter has other avenues of recourse. Facebook made headlines last week when it banned far-right extremists from its platforms. Twitter has banned users like Laura Loomer and Alex Jones in the past. It could continue that course of action by kicking people off the platform who loudly express far-right extremist talking points.

Others have suggested limiting the reach of those who harbor far-right ideas, ensuring their tweets don’t wind up in a great number of timelines. The other, probably easier route would be to simply target and remove tweets that contain threats of violence or worse. Either way, fighting white nationalism on Twitter will not get any easier. And it will be fascinating to see how things unfold where algorithms and shortcuts are unable to cleanly deal with the depths of human behavior.