Last week, Vox posted a thought-provoking piece regarding the development of lethal autonomous weapons (LAWS). The article noted that recent advancements in uncrewed aerial vehicles (UAVs), facial recognition technology, and artificial intelligence (AI) have set the stage for the creation of real-life “Terminators.”

Stuart Russell, a computer science professor at the University of California, Berkeley, told the publication the world would likely have autonomous combat drones before it has self-driving cars: “People who work in related technologies think it’d be relatively easy to put together a very effective weapon in less than two years.”

The piece also notes that, unlike their fictional counterparts, real-life Terminators won’t look remotely human. Instead, they’ll be UAVs that can fire on targets without a human at the controls.


Furthermore, the article explores why the development and deployment of LAWS is a more nuanced topic than it might seem. But the question of whether the U.S. military should develop self-directed military drones isn’t that difficult to answer.

The Case for Lethal Autonomous Weapons

The best argument for the use of LAWS is their potential utility on the battlefield. As the Vox piece points out, AI-enabled UAVs would provide significant advantages over standard military drones. For instance, contemporary military drones are vulnerable to attackers who can disrupt the link to their human operators. However, a next-generation drone equipped with an onboard target list and a face scanner wouldn’t have that vulnerability.

Indeed, the human-machine connection is a long-established flaw of military drone tech. NBC News reports Russia has interfered with U.S. military operations in Syria by deploying drone jammers. Furthermore, American intelligence analysts believe the Russian military used jammers to prevent outside surveillance of its 2014 invasion of Crimea.

Another advantage of LAWS is that they don’t develop emotional problems after engaging in combat. American military expert Paul Scharre argues that since killer robots can’t get angry or crave revenge, they won’t commit war crimes. The former U.S. Army Ranger also noted, “Future weapons […] could outperform humans in distinguishing between a person holding a rifle and one holding a rake.”

Furthermore, LAWS used in tandem with human operators could make warfare more humane. For instance, if a UAV with machine learning capability were programmed not to attack noncombatants, it wouldn’t be capable of intentionally slaughtering civilians. Accordingly, a killer robot with robust algorithmic safeguards couldn’t be ordered by a human pilot to commit a massacre.

Conversely, a human operator could keep a self-directed military drone from making a potentially fatal misidentification.
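To make that tandem argument concrete, here is a minimal, purely illustrative sketch of what such a decision gate might look like. Nothing below comes from the Vox piece or any real weapons program; the function name, labels, and threshold are invented for illustration, and a genuine system would be incomparably more complex.

```python
# Hypothetical illustration only: an engagement gate that combines a hard
# algorithmic safeguard with a mandatory human confirmation step.
from dataclasses import dataclass


@dataclass
class Detection:
    label: str          # e.g., "armed_combatant", "civilian", "unknown"
    confidence: float   # classifier confidence between 0 and 1


def may_engage(detection: Detection, human_approved: bool,
               min_confidence: float = 0.99) -> bool:
    """Return True only if every safeguard passes."""
    if detection.label != "armed_combatant":
        return False          # hard rule: noncombatants are never valid targets
    if detection.confidence < min_confidence:
        return False          # uncertain identifications are always refused
    return human_approved     # the machine alone can never authorize force


# A low-confidence identification is blocked even if a human approves it.
print(may_engage(Detection("armed_combatant", 0.62), human_approved=True))  # False
```

The point of the sketch is the structure, not the specifics: the rule against engaging noncombatants sits outside the learned model, and the human operator is the final check against the kind of misidentification described above.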

The Case Against Killer Robots

The long-running “Terminator” franchise revolves around a murderous future AI called Skynet that sends autonomous hunter-killer androids into the past to kill human resistance leader John Connor. In five films and one TV series, Skynet’s weaponized drones fail because they can’t effectively counter human ingenuity.

In the real world, Terminators would fail because they could mistake the cyberpunk messiah for a bus ad.

The best argument against LAWS is that the technologies needed to make them work safely and effectively aren’t even close to being ready. For instance, British police have deployed AI-enhanced facial recognition to detect wanted criminals in public places, and U.K. authorities found the face scanners had a 96 percent failure rate. Using killer robots with that level of accuracy on the battlefield would be disastrous.

Furthermore, developing autonomous UAVs will be challenging. The biggest problem with using machine intelligence programs on the battlefield is that they lack the capacity for nuance.

As Paul Scharre points out, a human soldier knows it’s wrong to kill a child who is being used by an unscrupulous enemy. Conversely, a killer robot programmed to obey the laws of war wouldn’t consider age when detecting a possible combatant.

It’s also worth noting that cutting-edge autonomous vehicle programs still can’t manage tasks that humans can complete with ease. As an example, Tesla CEO Elon Musk recently told his shareholders his cars have trouble navigating parking lots. As such, it’s easy to imagine an AI-enabled military drone crashing because it didn’t notice a skyscraper during an urban engagement.

The Digital Insecurity Problem

Beyond the fact that the underlying technology isn’t ready, digital insecurity is another reason the United States shouldn’t develop LAWS.

Years ago, the National Security Agency (NSA) developed an exploit called EternalBlue that can break into Microsoft Windows systems with remarkable efficiency. In 2017, however, a hacking group calling itself the Shadow Brokers obtained the tool and published it online. As a result, criminals have since used the exploit in ransomware attacks that held U.S. cities hostage.

If the NSA couldn’t secure an infrastructure-crippling cyberweapon, the Department of Defense probably couldn’t lock down a military drone AI either. Consequently, China, North Korea, or Russia could steal the design for a U.S. autonomous military drone and fabricate their own.

Indeed, a Chinese defense contractor began selling knockoffs of the MQ-9 Reaper drone in 2018. However, purchasers of the CH-4B “Rainbow” didn’t deploy the aircraft because its design was fatally flawed. The notion of a single counterfeit autonomous military drone experiencing a catastrophic system failure is a nightmare scenario. Moreover, the idea of a fleet of self-directed military drones going rogue is downright apocalyptic.

When it comes to real-life Terminators, the United States should take the same approach it took with chemical and biological weapons. Ultimately, the risks associated with developing weaponry that cannot be adequately controlled or contained outweigh the potential gains.

Admittedly, America can’t stop hostile foreign powers from developing autonomous military drones. But as a leader in the field of AI, we owe it to future generations never to give them cause to look to the skies and feel a sense of terror.
