Autonomous vehicles, cars that operate without the need for human intervention, are unquestionably the wave of the future. There remains, however, the ongoing difficulty of building a fully automated driving system that can earn public trust. With recent incidents calling the technology's limitations into question, consumers are left wondering whether they are ready to trust their lives (and potentially the lives of others) to a machine.
There has been a lot of talk about self-driving cars and the fatal accidents they are bound to cause. But would it surprise you to learn that driverless cars have been operating safely on public streets since 2009?
Google’s self-driving car project, now known as Waymo, has been using artificial intelligence to develop and advance autonomous vehicles for nearly a decade, and its reports suggest these cars are far less accident-prone than human drivers. The statistics alone are striking: over 90% of accidents in the United States are attributed to human error, and that doesn’t count the many that go unreported.
Meanwhile, Google’s testing shows that after one million miles of autonomous driving, its cars were involved in only 12 minor accidents, and in none of them was the self-driving car the at-fault party.
(To provide a frame of reference: the average U.S. driver is involved in one accident every 165,000 miles.)
But as the report is careful to note in its very first sentence, when it comes to autonomous vehicles, “we’re still learning.”
The Uber Incident
Following a fatal accident involving an experimental driverless Uber, discussion of the safety of autonomous vehicles has intensified.
The fatality occurred this March, when one of Uber’s driverless cars hit a woman walking her bicycle across the street. Despite the car’s sensors and artificial intelligence system, it failed to detect her accurately and struck her at 43 mph, killing her. This was the first reported instance of a driverless car killing a pedestrian on a public road.
Experts have since investigated why the crash happened and how to prevent it from occurring again. But understandably, the tragedy has raised serious concerns about the dangers self-driving cars may pose and has challenged Google’s early optimism.
After all, if the trial period for these driverless cars has already involved a range of minor accidents and one fatality, are they truly worth it?
Driverless Cars and More Fatal Accidents
If asked the question of whether self-driving vehicles are justifiable, many experts will respond somewhere along the lines of:
Yes, absolutely, even though this type of fatal accident will undoubtedly happen again.
It may sound shocking, but the unfortunate truth is that most of the safety-improving technologies we enjoy today came about through trial and error. From fatal accidents involving autonomous vehicles to the casualties caused by early traffic lights, developing safe technology is a process.
That isn’t to say the deaths caused by young technologies are any less devastating, or that they should be taken lightly. But it’s important to recognize that it usually takes unforeseen accidents to expose hidden flaws and oversights.
What this means is that we are likely to hear more stories like Uber’s in the years ahead. But the lessons learned from these tragedies should significantly reduce the overall rate of fatal accidents in the future. Given that vehicle collisions are among the leading causes of death worldwide, this trade-off could be massively beneficial for people everywhere in the long run.
Artificial Morality and The Trolley Problem
After recent events, a longstanding question regarding morality within artificially intelligent technology has resurfaced—especially in conversations about driverless cars.
In the studies of ethics and philosophy, this question is commonly referred to as The Trolley Problem.
To summarize, the trolley problem describes a situation in which a choice must be made to sacrifice one life in order to save others.
For example: a runaway trolley is hurtling out of control down a track toward 20 helpless people. You can divert the trolley onto a second track, but another person is stuck there. This creates a dilemma of choosing one life over many, and calls into question the comparative value of human lives.
The decision may at first seem simple, but suppose the singled-out person happens to be a close friend or family member. This is where the morality aspect of the problem comes into play. Does your relationship to that person change your decision? Would you readily let someone you care about die for a larger crowd of strangers?
Keep in mind, this is all before considering the time-sensitive nature of the impending collision: just like a driverless car in motion, you have hardly any time to make your decision.
As you can see, the trolley problem raises a series of challenging questions with no easy answers. This scenario perplexes most human participants, let alone vehicles with artificial intelligence programmed to take the least destructive route.
Understandably, there’s a vexing apprehension surrounding the idea of autonomous vehicles choosing who lives and who dies. If a collision is unavoidable, how does the car decide where the best place to crash is? Or if there’s a utilitarian choice between two different pedestrians, say, a uniformed police officer and a civilian, how should their roles influence the vehicle’s decision?
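To see why this unsettles people, it helps to look at what a "least destructive route" rule actually does when it is reduced to code. The sketch below is purely hypothetical and not any manufacturer's real logic: the `Outcome` structure, the harm weights, and the maneuver names are all invented for illustration, and a naive utilitarian planner simply picks the option with the lowest predicted harm.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible maneuver and the harm it is predicted to cause."""
    maneuver: str
    pedestrians_at_risk: int
    occupants_at_risk: int

def expected_harm(outcome: Outcome) -> int:
    # Naive utilitarian scoring: every life counts equally.
    # A real system would also weigh uncertainty, injury severity,
    # and many other factors this toy model ignores.
    return outcome.pedestrians_at_risk + outcome.occupants_at_risk

def choose_maneuver(options: list[Outcome]) -> Outcome:
    # Pick the option that minimizes total predicted harm.
    # Ties are broken arbitrarily, by list order.
    return min(options, key=expected_harm)

# Hypothetical unavoidable-collision scenario:
options = [
    Outcome("stay in lane", pedestrians_at_risk=2, occupants_at_risk=0),
    Outcome("swerve left", pedestrians_at_risk=1, occupants_at_risk=0),
    Outcome("swerve right", pedestrians_at_risk=0, occupants_at_risk=1),
]
best = choose_maneuver(options)
print(best.maneuver)  # → swerve left
```

Notice that "swerve left" and "swerve right" each put exactly one person at risk, yet the code silently prefers whichever appears first in the list. The trolley problem lives in exactly that tie-break: the algorithm has made a life-and-death distinction between a pedestrian and an occupant without ever reasoning about it.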
Perhaps better than any other example, the trolley problem highlights the kinds of difficulties we face in the pursuit of perfecting self-driving cars.
Autonomous Vehicles: A Brighter Future
It is easy to see, misgivings aside, that self-driving cars are here to stay.
Despite their relatively recent entrance onto the roadways and their controversial reputation, self-driving cars are our best chance to reduce automobile collisions overall. The promise of a safer future is something many find worth striving for, even if it means enduring further accidents along the way.
What is most important moving forward is that we continue doing exactly what we have been: asking the hard questions and tackling problems with self-driving vehicles head-on as they arise. We must keep calling for accountability and action when an accident occurs, and do everything we can to learn from our mistakes to build a better, and safer, way to travel.
We haven’t yet come close to seeing the last of our issues with driverless cars, but we’re barreling down the path of progress—and that’s something truly spectacular to witness.