Facial recognition fears grow: The modern surveillance crisis

China's social ranking system not so crazy after all

As facial recognition technologies continue to develop, so do their questionable, privacy-violating applications. The latest notable example of such practices comes from England.

Earlier this month, it was revealed that the owners of London’s King’s Cross estate were using live facial recognition technology to scan and surveil visitors without their knowledge or consent. Even worse, after initially denying their involvement, both the city’s Metropolitan Police and the British Transport Police admitted that they supplied the King’s Cross estate with images of individuals to help support its facial recognition database.

These actions stand in direct violation of the guiding principles of England’s Surveillance Camera Code of Practice. The code clearly dictates that “there must be as much transparency in the use of a surveillance camera system as possible, including a published contact point for access to information and complaints.”

It also states that such systems must have clear justifications to warrant their implementation.

Whether someone is for increased public surveillance or not, it’s evident that the concerns of tenants at King’s Cross are being willfully ignored. In response, political leaders and organizations throughout the U.K. have raised their voices in protest—and are calling for action.

A Unified Cry for Action

After this latest violation came to light last week, a letter began circulating between the U.K.’s politicians and several campaign groups.

Written by the aptly named privacy group Big Brother Watch, the letter calls for a halt to all facial recognition applications until proper discourse on the technology’s safe and ethical handling can take place.

At the time of writing, 18 British politicians and 25 campaign groups have added their signatures to the letter. Some of the biggest names on it include three Members of Parliament: David Davis, Diane Abbott, and Jo Swinson. A few of the campaign groups on board include Amnesty International, Liberty, and the Police Action Lawyers Group. Additionally, 14 technology experts and academics have signed the letter, along with four barristers, bringing the total signature count to 61.

Of course, right now, this is only a request to open a conversation in the U.K. on regulating facial recognition. There’s no guarantee that the letter, or the concerns driving it, will be taken seriously by the government anytime soon. In fact, considering that most countries using facial recognition aren’t officially addressing these issues yet, it probably shouldn’t be expected.

The Rationale Behind Facial Recognition Surveillance

There are some arguments that could be made for the validity of the King’s Cross estate using facial recognition.

After all, a Metropolitan Police spokesman has explained that these systems are meant “to assist in the prevention of crime”. Keeping the repeated terror attacks on international landmarks and populated hubs in mind, this kind of surveillance-based prevention makes sense. An ability to flag and observe those prone to cause harm could indeed improve emergency response times and save lives.

But the blatant lack of transparency demonstrated here, and the public lies that followed, make this a more sinister story. It shows a flagrant disregard for procedure, ethical conduct, and respect for the privacy of all individuals.

According to the British Transport Police, images of convicted and known offenders from the King’s Cross estate’s general area were only shared between 2016 and 2018. They still maintain that “This was a legitimate action in order to prevent crime and keep people safe”. At the same time, they go on to say “Understandably, the public are interested in police use of such technologies, which is why we are correcting our position.”

However, it’s a common trend of late to see organizations “correcting their data-privacy position” only after being called out. Then again, that’s only natural. When data-privacy laws remain loose, and surveillance technologies are developed and implemented faster than they are regulated, disingenuous apologies and repeat offenders will always run rampant.

Surveying What Surveillance Operators Are Getting Away With

The current pattern of “do it first, then apologize later” is cropping up around facial-recognition applications everywhere. Oftentimes, the “apologize” part never even comes.

Just take a look at what’s been happening in the U.S. Recently, it was discovered that the FBI has been covertly supplying ICE with access to state facial recognition databases to aid the agency’s controversially aggressive deportation initiatives. Using those biometric records, and the driver’s licenses of undocumented workers who legally obtained them, ICE effectively targeted thousands of people.

Since there are no federal laws governing the use of facial recognition, there are also no consequences for those agencies. However, it’s not just the FBI that’s overstepping its ethical boundaries. Major companies, like Amazon, are also engaging in their own brand of malicious surveillance tactics.

This July, through its subsidiary home security company, Ring, Amazon was helping police departments across the U.S. spread their surveillance systems into private homes. The company accomplished this by encouraging police to distribute its Ring cameras via free giveaways. Of course, during all of this, the public was conveniently never notified of the partnership between Amazon and the police.

There are some serious privacy concerns to raise when a company—that’s been known to actively listen in on its customers—essentially sneaks cameras into residences. Those concerns are amplified considering the way that local law enforcement helped it accomplish this. Furthermore, as Amazon also maintains a cooperative history with ICE and Palantir, the implications for undocumented immigrants become significantly bleaker.

Regulating Facial Recognition Technology Into the Future

At this point, there’s plenty of outcry growing against the increasing use of facial recognition technology.

Echoing the letter circulating in the U.K., over 500 Amazon employees and shareholders signed a letter of their own asking for ethical regulation. However, as seems to be the pattern with facial recognition offenders, the company never acknowledged the document with any seriousness.

Still, it is worth noting that some progressive municipalities and organizations are taking what action they can. For instance, both San Francisco, California, and Somerville, Massachusetts, have outlawed the use of facial recognition technology throughout their public agencies. Meanwhile, startups like D-ID are pioneering new ways to counter facial recognition efforts and retain some semblance of privacy in an increasingly surveilled future.

Basically, unless faced with direct legal consequences, it’s evident that self-serving entities are going to continue violating privacy rights. That’s why, at the very least, these letters stand as an important gesture (even if they’re not so effective). If facial recognition applications keep moving forward unchecked, it’s impossible to tell how far these privacy problems will spread.

Everyone enjoys talking about the surreal, even dystopian-sounding, social credit system that China’s nationwide facial recognition is empowering. However, unless some regulations are set on this rapidly spreading technology, who’s to say that similar systems won’t be implemented in the U.K., the U.S., or another unwary nation within a decade’s time?