Clearview AI threatens personal privacy, searches Internet with faces of unsuspecting people

Clearview AI is threatening personal privacy like never before.

Of all the tech out there, one thing in particular makes people nervous: facial recognition. The AI-driven image scanning technology has grown far more capable in the past few years, and it is drawing more scrutiny than ever before.

People universally associate facial recognition with “Big Brother.” As such, the idea of a tool that can search the Internet for an individual’s private info with a casual photo of their face is terrifying. After all, at that point, consent becomes optional and privacy becomes a luxury, not a right.

Sadly, that is exactly what Clearview AI is. The tool comes from a startup shrouded in secrecy. Yet, government agencies, law enforcement, and more are quickly adopting it. Despite being untested and unsecured, the ultra-powerful tool could redefine what it means to be anonymous. It also raises the question of whether saving privacy is still possible.


What is Clearview AI?

There are plenty of facial recognition algorithms out there. So, what’s the big deal about this one?

Clearview AI has been in the works since 2016. It began as a program that scraped countless websites for images of people's faces, targeting news sites, educational sites, and platforms like Facebook, Twitter, YouTube, Instagram, and even Venmo. Even though these companies prohibit scraping, Clearview did it anyway.

With utter disregard for these policies, Clearview continues to use the scraped images as the database its facial recognition algorithms consult. When a photo is uploaded, the program converts the face into a vector, a numerical representation of its features, and compares it against the stored vectors to find a match.
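Clearview has not published its pipeline, so the details are unknown, but vector-based face matching generally works by measuring the similarity between a query embedding and each stored embedding. The sketch below illustrates the general idea with cosine similarity over a toy dictionary of made-up vectors; the names, threshold, and three-dimensional embeddings are purely hypothetical (real systems use learned encoders producing hundreds of dimensions).

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means identical direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_match(query_vec, database, threshold=0.9):
    """Return the label of the most similar stored vector,
    or None if nothing clears the similarity threshold."""
    best_label, best_score = None, -1.0
    for label, vec in database.items():
        score = cosine_similarity(query_vec, vec)
        if score > best_score:
            best_label, best_score = label, score
    return best_label if best_score >= threshold else None

# Toy "database" of face embeddings (hypothetical labels and values).
db = {
    "alice": np.array([0.9, 0.1, 0.2]),
    "bob":   np.array([0.1, 0.8, 0.5]),
}

print(find_match(np.array([0.88, 0.12, 0.2]), db))  # → alice
print(find_match(np.array([0.0, 0.0, 1.0]), db))    # → None (no close match)
```

At the scale of billions of images, a linear scan like this would be far too slow; production systems use approximate nearest-neighbor indexes to make the lookup fast, which is part of why large databases raise the misidentification risks discussed below.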

As the tool was perfected, its makers started seeking customers to sell it to. Naturally, they arrived at the doorstep of law enforcement. Unsurprisingly, the app became a perfect fit. It allows law enforcement officers and government agencies to identify people with a photo even if they aren’t registered in any official databases.

For example, police in Indiana used Clearview to identify a suspect based on a still image pulled from a smartphone video. The tool matched his face with a captioned video in which he also appeared, giving police his name.

However, Clearview also claims that its tool can identify more obscure persons of interest. A sales presentation says that it is even capable of identifying someone who appears in the mirror of someone else’s gym photo. It can also find people based on certain attributes like their tattoos. If that doesn’t sound like something from a dystopian world, then not much will.

Yet, that hasn’t stopped more than 600 law enforcement agencies from using the database of more than three billion images. Oddly, Clearview has declined to provide a list of which agencies utilize its tool.

Behind the Mask

Hoan Ton-That isn’t a household name like Steve Jobs or Jeff Bezos, but he is the developer behind Clearview AI. His eponymous company is small and trying to stay out of the spotlight.

Yet, with the massive impact its technology can have, that won’t be possible for long. The love/hate sentiment surrounding facial recognition will thrust Clearview into the middle of the debate.

Until now, Ton-That had no major software accomplishments to his name. He previously released an iPhone game and a social network, both of which fizzled out.

The Australian developer’s most recent innovation is the only thing that matters. More importantly, the shady nature of its development and deployment will take center stage in the facial recognition debate.

Adding to the concern are the backers behind the Clearview project. One of them, investor David Scalzo, said, “I’ve come to the conclusion that because information constantly increases, there’s never going to be privacy. Laws have to determine what’s legal, but you can’t ban technology. Sure, that might lead to a dystopian future or something, but you can’t ban it.”

With such lax attitudes guiding the powerful tool, the erosion of privacy has already begun. Although Scalzo is right that privacy is becoming ever harder to protect, that outlook only accelerates its destruction.

Questionable Accuracy

A huge reason for Clearview’s unlikely rise to fame is that Big Tech companies have largely avoided releasing a tool like it. In 2011, Google said that facial recognition is the one technology that it isn’t pursuing because it can be used “in a very bad way.” Meanwhile, San Francisco is one of many cities to ban the use of facial recognition by law enforcement.

Much of this scrutiny stems from the fact that the technology can be eerily inaccurate. One study found that the facial recognition systems used by British police are wrong up to 96 percent of the time. Likewise, multiple analyses have shown that many algorithms misidentify people of color at higher rates than others.

Currently, Clearview claims that its tool is correct 75 percent of the time. However, the company's secrecy has kept third parties from verifying that figure.

Clare Garvie, a privacy researcher at Georgetown University, says, “We have no data to suggest this tool is accurate… The larger the database, the larger the risk of misidentification because of the doppelgänger effect.”

Meanwhile, Al Gidari, a privacy professor at Stanford Law School, says, “It’s creepy what they’re doing, but there will be many more of these companies… Absent a very strong federal privacy law, we’re all screwed.”

Many industry experts claim that Clearview is the final straw for facial recognition. Until now, Big Tech companies and others have self-policed to keep the technology in check. That strategy is obviously not working anymore.

In the words of Woodrow Hartzog, a professor of law and computer science at Northeastern University, “I don’t see a future where we harness the benefits of face recognition technology without the crippling abuse of the surveillance that comes with it. The only way to stop it is to ban it.”

Grim Outlook

With a tool like Clearview out there, privacy is facing its biggest threat in history. Humanity has reached a turning point. We, as a society, must decide if we are willing to put the convenience of facial recognition ahead of personal privacy.

The decision will be both irreversible and world-changing. With Clearview unwilling to back down and people more concerned than ever, it will take a bold move to resolve the facial recognition crisis we now stand on the brink of.

