The rapid development of Internet of Things technology, low-power wide area networks, and artificial intelligence (AI) is making the dream of smart cities a reality. Soon, buildings, automobiles, and street furniture will be able to communicate to facilitate seamless transportation, decreased environmental damage, and lowered municipal maintenance costs.
However, it’s becoming increasingly clear that there’s a dark side to the advent of smart cities. Namely, the deployment of AI-enhanced surveillance tools in urban areas erodes privacy. The Next Web recently published a feature detailing how facial recognition programs are being launched nationwide without oversight or transparency.
The piece also outlines how AI tools have the capacity to turn emerging smart cities into dystopias.
False Positives and Wrongful Arrests
The oppressive potential of AI monitoring technology was on full display at the 2017 Notting Hill Carnival. The Metropolitan Police Service created a database of 500 suspects it hoped to arrest at the event. The authorities connected a cutting-edge facial recognition system to area CCTV cameras to achieve their goal.
Ahead of the festival, police representatives claimed the biometric technology they were using had a 95 percent accuracy rate. In practice, the facial recognition application made 35 false matches that resulted in five police interventions and one arrest. Liberty, a local human rights group, dubbed the service’s biometric program “painfully crude,” and the Metropolitan Police discontinued its use the following year.
However, Britain isn’t the only part of the world where crime-busting AI has fallen short. Last year, a deep learning-powered surveillance system pegged appliance company CEO Dong Mingzhu as a jaywalker. In reality, the executive hadn’t broken any traffic laws. The AI program had actually captured her image from an advertisement on the side of a bus as it passed through an intersection.
The local police department in charge of the system promised upgrades would be forthcoming.
Similarly, AI experts from Google, Facebook, and Microsoft recently asked Amazon to stop selling its flawed facial recognition program to law enforcement agencies across the country. The data scientists claim the tech giant’s biased software will lead police to make wrongful arrests. Despite the outcry, the firm hasn’t stopped selling its biometric program.
Accordingly, AI-driven wrongful arrests might become a common feature of life in a smart city.
The Perils of Government Facial Recognition
It’s worth noting that local law enforcement agencies aren’t the only authorities interested in utilizing AI-enhanced facial recognition surveillance. Last month, it was reported that United States Customs and Border Protection (CBP) was rushing to get biometric scanners installed in the country’s largest airports. The government wants the airport facial recognition network to serve as a kind of digital border.
Once up and running, the AI-enabled security solution will create a vast database of foreign nationals coming in and out of the U.S. The government system photographs travelers, determines their identity, and stores the biometric info of non-Americans for 75 years. The program deletes scans of American citizens after 12 hours, but privacy advocates worry the government will eventually lift that restriction.
Notably, the CBP deployed the airport scanning system without public hearings or even notification. As such, it’s not inconceivable that the federal government could decide to create a secret domestic facial recognition database. If a federal biometric surveillance program does go online, densely populated smart cities will become miniature surveillance states.
While such an outcome might seem farfetched now, it’s worth remembering that something like it already exists in China. The Asian superpower recently made international headlines by implementing a social credit system in some of its smart urban centers.
The program uses biometric data to restrict the access of low-scoring citizens to certain products and services. As a result, the AI-empowered system prevented more than 6 million people from buying plane tickets in 2017.
The airport scanning program also has unsettling implications for the commercial use of facial recognition data.
The Future of Biometric Marketing
When the CBP began implementing its biometric profiling program, it allowed airlines to independently set up their own scanning equipment. The agency also allowed them to use the facial recognition data they collected as they wished, including selling it to third parties. Therefore, smart cities might become suffused with interactive personalized ads like the shopping centers depicted in “Minority Report.”
Indeed, facial recognition-powered targeted advertising is already a reality. In 2018, a British firm won a contract to install 10,000 biometric-scanning targeted ad boards throughout South Korea. Similarly, Snapchat has inked advertising deals with Cadbury, Gatorade, and Taco Bell to develop branded lenses that hawked their products.
Currently, industry experts estimate that facial recognition marketing spending will reach $7.6 billion by 2022. In 2017, more than 4 billion passengers traveled the world by airplane. Correspondingly, America’s biggest airlines could generate massive revenues by selling biometric data to the world’s biggest brands. To get a return on their investment, large corporations will deploy facial recognition-enabled ad boards in every smart city.
Facebook has made billions using its wealth of multifaceted user data to sell micro-targeted advertisements. However, the corporation has also faced widespread scrutiny for sharing consumer information without consent. The loopholes in the airport face-scanning system mean that airlines can collect and sell personally identifiable information with impunity. Indeed, travelers will give away the rights to their biometric data simply by stepping into an airline terminal.
Avoiding the Inevitable
It should be noted that the dystopian smart city outcomes outlined above probably won’t come to pass in Europe. That’s because the European Union’s General Data Protection Regulation (GDPR) prohibits the unauthorized exploitation of personal data. Notably, the GDPR contains specific language regarding the ethical use of AI tools in profiling.
Unfortunately, the United States and China don’t have national laws regarding the use of AI. In particular, America offers no legal recourse for citizens who don’t want their biometric data commodified. AI is such a threat to personal privacy that even Mark Zuckerberg wants the government to issue new rules on its usage.
Because of their cost-saving potential, environmental impact, and transportation optimization capacity, smart cities are an inevitability in the U.S. However, there’s a strong possibility our urban centers will become digital panopticons without government intervention. While the implementation of wide-ranging regulations is always controversial, the privacy and liberty of the American people are at stake.
AI tools will have a transformative impact on society, and it’s one that should be directed by the people.