Researchers tell Amazon to stop selling flawed facial recognition tech

Amazon under fire for Rekognition facial software

With facial recognition systems moving into the public spotlight, it is becoming evident that the technology may not be ready. When news broke in July 2018 that Amazon’s Rekognition facial recognition software falsely matched 28 members of Congress to mugshots of criminals, the tech world became uneasy.

Now, top artificial intelligence (AI) experts from Google, Facebook, Microsoft, and other organizations have weighed in on the faulty software.

Data scientists are urging Amazon to stop selling its technology to law enforcement until it can operate more reliably.


Unified Front

On April 3, 26 researchers signed an open letter voicing their concerns about the use of Amazon’s Rekognition program. The letter states that multiple studies have found flaws in the algorithms behind the Jeff Bezos-led company’s software: it makes a disproportionate number of errors when analyzing the faces of dark-skinned and female individuals. That discrepancy is unacceptable given the implications of the technology.

The researchers argue these shortcomings have the potential to cause racial discrimination, and the group believes the software’s use will result in innocent people being mistaken for criminal suspects. Compounding the problem, law enforcement agencies across the country are already using the technology despite its flaws.

Morgan Klaus Scheuerman, a Ph.D. student at the University of Colorado Boulder, is one of the researchers who signed the letter. On the issue of using AI for facial identification, Scheuerman says, “Flawed facial analysis technologies are reinforcing human biases.” Scheuerman also argues, however, that companies selling the software may not be aware of the problems their technology can cause.

Need for Verification

This is certainly not the first time Amazon’s facial recognition technology has faced public scrutiny. Last year, civil liberties groups criticized the program for its inability to accurately identify people of different races.

Despite this, the company defends its software by saying it has received no reports of law enforcement misusing it. While that sounds reassuring in theory, the statement is somewhat empty considering there are no laws in place regulating the use of AI-based technology.

Meanwhile, tech giant Microsoft has called for government regulation of AI facial recognition. Similarly, Google refuses to sell biometric identification tech. Unfortunately, there is no clear answer on how to regulate it. But according to Scheuerman, “These technologies need to be developed in a way that does not harm human beings.”

Researchers hope their open letter can spark dialogue about the subject. Moreover, they call for technical frameworks that outline how to regulate the use of AI facial recognition by law enforcement.

As AI continues to flourish, some form of legislation governing how it can be used is inevitable. When it will be enacted, and how much damage will be done in the meantime, remains unknown. Thanks to the efforts of the 26 researchers, though, society has moved one step closer to resolving the moral dilemma.