AI is becoming more prevalent in everyday life, and soon it will hold our lives in its hands in a very literal sense: it will be driving for us, completely autonomously. Driverless cars are coming faster than you might think, and they promise substantial benefits over human drivers. Between 2035 and 2045, consumers are expected to regain up to 250 million hours of free time from behind the wheel, while reducing accidents caused by human error could save $234 billion in public costs. What's more, driverless cars could eliminate up to 90% of traffic fatalities, potentially saving the 1 million lives lost to human error every year. But if AI is driving, it will have to make the same hard decisions we make on the road. How will it go about this?
“Driverless cars must decide quickly, with incomplete information, in situations that programmers often will not have thought of, using ethics that need to be encoded all too literally,” said Noah J. Goodall, a senior researcher at the Virginia Transportation Research Council.
Autonomous vehicles will have to make split-second decisions with incomplete information in order to save lives, and they will also have to use judgment about which lives to save. So whom should AI save in the event of a crash? In a global study, most people preferred swerving over staying the course, sparing passengers over pedestrians, and saving as many lives as possible. But it's a little more complicated than that.
Participants were most likely to spare the life of a child, and least likely to spare animals or criminals. While 76% of people felt that driverless cars should save as many lives as possible in the event of a crash, very few were willing to buy a vehicle programmed to minimize overall harm; they preferred cars programmed to protect passengers above all other people and property. Driverless cars will save lives on the whole, but programming them to do so could slow their adoption and, in the process, cost many additional lives unnecessarily.
Real-life applications of AI driving can grow to shocking levels of complexity, because the real world is rarely predictable. In an accident causing injuries but no fatalities, should AI distribute harm evenly, injuring more people but each less severely? Should it weigh the likelihood and severity of potential injuries, and account for their long-term effect on quality of life? Or should AI decide whether it's better to hospitalize five people or kill one? As AI advances, it becomes responsible for ever more moral and ethical decision making.
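To make the trade-off concrete, one common framing is expected-harm minimization: weight each possible injury by its probability and severity, then choose the maneuver with the lowest total. The sketch below is purely illustrative; the function name, the 0-to-1 severity scale, and all the probabilities are assumptions for the example, not part of any real autonomous-driving system.

```python
# Hypothetical sketch of comparing two maneuvers by expected harm.
# All names, probabilities, and severity values are illustrative
# assumptions, not a real autonomous-vehicle API.

def expected_harm(outcomes):
    """Sum probability-weighted severity over everyone affected.

    outcomes: list of (probability, severity) pairs, where severity
    is on a hypothetical 0-1 scale (0 = unharmed, 1 = fatality).
    """
    return sum(p * s for p, s in outcomes)

# Maneuver A: stay the course -- one person faces a high chance
# of a severe injury.
stay = [(0.9, 0.8)]

# Maneuver B: swerve -- five people each face a moderate chance
# of a minor injury.
swerve = [(0.3, 0.2)] * 5

# Pick whichever maneuver minimizes total expected harm.
maneuvers = {"stay": stay, "swerve": swerve}
choice = min(maneuvers, key=lambda name: expected_harm(maneuvers[name]))
print(choice)  # the lower-harm maneuver under these assumed numbers
```

Under these made-up numbers, swerving (total expected harm 0.30) beats staying the course (0.72), which mirrors the survey result that people prefer spreading risk to minimize total harm. Of course, the hard part in practice is not the arithmetic but choosing the probabilities and severity values, which is exactly where the ethical questions above re-enter.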
Find out when AI has gone wrong and how ethical AI is developing here.