Clearview AI open about storing billions of our photos

Clearview is very open about its policy of keeping images of people.

Discussions about facial recognition technology have escalated in recent months. Last year, the Manhattan-based company Clearview AI deployed a research tool for law enforcement agencies to use in capturing perpetrators and protecting victims. After a New York Times article unveiled the extent of Clearview’s reach, the startup has come under heavy criticism.

Tech leaders like Google and Facebook sent cease-and-desist letters, as Clearview downloaded images in a way that violated company policies. New Jersey lawmakers banned the use of Clearview’s platform until experts can take a closer look at the technology. And the general public is concerned about what the company has in its possession.

Now we know for sure that Clearview has over three billion photos of people. CEO Hoan Ton-That has kept his cool under pressure, believing deeply in the value of what his company brings to the table. He argues that by selling access to Clearview’s database, the company leaves law enforcement agencies better equipped to catch criminals and exonerate the innocent more quickly.


Clearview AI’s Controversial Practice

Perhaps what is most concerning to people is that Clearview AI keeps photos in its databases even after they have been deleted or rendered inaccessible at their original sources. For example, even if users delete their social media profiles or make them private, Clearview still keeps any content it downloaded while those profiles were publicly accessible.

Today, over 600 agencies in the U.S. and Canada use Clearview AI. Several banks also use the platform, though no major players like Wells Fargo or Chase do so today. Ton-That says Clearview’s technology is 99% accurate even for people of color, a group for which facial recognition systems have long performed poorly.

The company’s website responds to specific concerns that the public has raised over the past few months. Clearview states outright that it only searches the public web. The company has no access to private or protected information. Additionally, Clearview highlights that its technology is intended for searches, not surveillance. Law enforcement groups can search for people based on uploaded images after the fact, rather than track digital footprints across the internet.
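Clearview has not published how its system works, so the sketch below is only a generic illustration of what “search, not surveillance” means in practice: a single query image is reduced to an embedding and compared against a fixed index of previously computed face embeddings, answering one after-the-fact lookup rather than tracking anyone over time. All names here (`cosine_similarity`, `search`, the example URLs) and the use of random vectors in place of a real face-embedding model are hypothetical.

```python
# Minimal, generic sketch of an after-the-fact image search against a static
# index of face embeddings. Nothing here tracks anyone over time; it only
# answers one lookup. The embeddings are random stand-ins for what a real
# face model would produce; this is not Clearview's implementation.
import numpy as np

def cosine_similarity(query: np.ndarray, index: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and each row of an index matrix."""
    query = query / np.linalg.norm(query)
    index = index / np.linalg.norm(index, axis=1, keepdims=True)
    return index @ query

def search(query_embedding: np.ndarray,
           index_embeddings: np.ndarray,
           index_urls: list,
           top_k: int = 5) -> list:
    """Return the top_k closest indexed photos to the query, with similarity scores."""
    scores = cosine_similarity(query_embedding, index_embeddings)
    best = np.argsort(scores)[::-1][:top_k]  # highest similarity first
    return [(index_urls[i], float(scores[i])) for i in best]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-ins for embeddings that a real system would compute from scraped photos.
    index = rng.normal(size=(1000, 128))
    urls = [f"https://example.com/photo/{i}" for i in range(1000)]
    # Query: a slightly noisy copy of one indexed entry, standing in for a new photo.
    query = index[42] + 0.05 * rng.normal(size=128)
    for url, score in search(query, index, urls):
        print(f"{score:.3f}  {url}")
```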

An independent group of experts also vetted Clearview’s approach. They validated that the startup complies with federal, state, and local laws. Despite the checks and balances, back in January an unidentified person filed the first lawsuit against the company in an Illinois district court. The plaintiff is seeking class-action status. The case could set a landmark precedent for the future regulation of facial recognition technology.

Facial Recognition Not New In Our Society

Facial recognition technology is already an important part of our lives today. Many of us unlock our smartphones by merely looking at our devices. Airports use it as a replacement for scanning boarding passes. In healthcare, tools like DeepGestalt use facial analysis to detect genetic disorders that would otherwise go unnoticed.

However, companies like Clearview AI have brought a new dimension to the mainstream conversation. More people are now aware that facial recognition technology could be used on them without their explicit consent. The debate going forward will center on whether this constitutes an abuse of personal privacy, even on the open web.
