Artificial intelligence (AI) is advancing, though perhaps not as fast as the hype cycles would have us believe. But we are starting to see AI applied to tasks typically done by humans, including more intelligent chatbots that can answer first- and even second-level customer service requests, smart software bots that automate business process tasks, and pilot projects involving autonomous vehicles.

It is these vehicles that have captured the popular imagination. Proponents envision a world in which deaths caused by speeding, distracted driving, fatigue, drunk driving and other human errors can be dramatically reduced. This is a world of ride-sharing, a world where elderly and disabled populations gain mobility, a world where many people, particularly in urban centers, won’t even buy cars anymore.

On the other hand, skeptics point to recent, well-publicized accidents involving autonomous vehicles and argue the technology will never succeed in replacing a human behind the wheel.

These discussions always seem to come back to the scenario in which AI-based technology is asked to make split-second, life-or-death ethical decisions. The brakes fail, and the AI has to choose which to dodge: an elderly person or a mom pushing a baby carriage in the car’s path. The gas pedal sticks; should the AI save the passenger or save others? There is no end to the doomsday scenarios we can create in our minds.

But what’s the reality? Where are we today in terms of the technological and ethical aspects of AI-based, Society of Automotive Engineers (SAE) Level 5 fully autonomous vehicles?

The technology is the easy part

First off, from a purely technological perspective, autonomous driving is making progress. Just as humans become better drivers as they accumulate more time behind the wheel (think about your first driving lessons versus your driving ability today), AI-based autonomous driving systems have been racking up thousands of miles of real-world driving, and all of that accumulated data drives continuous improvement in the self-driving systems. The better these systems become, the more they will behave exactly as human drivers would, only better: they can steer more accurately, brake more smoothly, and handle emergencies better than people can, because AI systems don’t panic. They don’t freeze under pressure.
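
To make that feedback loop concrete, here is a minimal sketch, in Python, of how a fleet-learning cycle might be structured: vehicles log real-world driving, the logs are folded into the training data, and the driving model is periodically retrained. The class and function names here are hypothetical illustrations, not any manufacturer’s actual pipeline.

```python
# Minimal, illustrative sketch of a fleet-learning loop; the classes and
# functions below are hypothetical, not any vendor's real API.
from dataclasses import dataclass
from typing import List


@dataclass
class DrivingLog:
    miles: float                 # real-world miles covered in this log
    hard_braking_events: int     # a simple proxy for edge cases worth learning from


@dataclass
class DrivingModel:
    training_miles: float = 0.0  # how much fleet experience the model has absorbed

    def update(self, logs: List[DrivingLog]) -> None:
        # In a real system this would trigger a large-scale retraining job;
        # here we only record how much new experience was ingested.
        self.training_miles += sum(log.miles for log in logs)


def fleet_learning_cycle(model: DrivingModel, new_logs: List[DrivingLog]) -> DrivingModel:
    """One iteration of the loop: ingest the latest fleet logs and retrain."""
    model.update(new_logs)
    return model


if __name__ == "__main__":
    model = DrivingModel()
    weekly_logs = [  # simulated log drops from a small pilot fleet
        [DrivingLog(miles=1200.0, hard_braking_events=3)],
        [DrivingLog(miles=1500.0, hard_braking_events=1)],
    ]
    for logs in weekly_logs:
        model = fleet_learning_cycle(model, logs)
    print(f"Model has now absorbed {model.training_miles:.0f} fleet miles.")
```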

The real question may not be whether autonomous vehicles are ready for humans, but whether humans are ready for autonomous vehicles. At this point, people are not prepared to share the road with them. Whether it’s a driver on the highway who sees a driverless car pull up alongside, a pedestrian crossing a busy street, or a bicyclist riding with traffic, people don’t yet trust that an autonomous vehicle will behave the way we’ve become accustomed to seeing human-driven vehicles behave.

So, for at least the next 3 to 5 years, the deployment of autonomous vehicles will be limited to use cases such as trucks traveling in the right-hand lane of major interstate highways and ride-sharing vans on college or corporate campuses, environments where people can become accustomed to driverless vehicles.

Addressing the ethics

When it comes to the ethics associated with autonomous vehicles, the key issue is: Who should be making those life-or-death driving decisions? Should the algorithm be written by the auto manufacturers and be standard in every vehicle? Or should it be the insurance companies? Or should it be the government? Or should individual drivers be allowed to set policies for their own vehicles?

While it’s still early days, the general outlines of a framework for tackling these issues are starting to emerge, with governments setting the rules, carmakers applying them, and insurance companies extending their incentive-based model to the world of self-driving vehicles. For example, insurers could reward safe drivers who set their vehicles to follow speed limits and add surcharges for drivers who turn off self-driving features and then get a speeding ticket.
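
As a rough illustration of how that incentive model could work, here is a small Python sketch of a premium adjustment. The 10 percent discount and 25 percent surcharge are assumptions made up for the example, not any insurer’s actual rates.

```python
# Illustrative premium adjustment for self-driving-related behavior.
# The 10% discount and 25% surcharge are assumed figures, not real rates.

def adjusted_premium(base_premium: float,
                     follows_speed_limit_setting: bool,
                     overrode_autonomy_and_ticketed: bool) -> float:
    """Return an annual premium adjusted for how the owner uses the self-driving features."""
    premium = base_premium
    if follows_speed_limit_setting:
        premium *= 0.90   # reward: vehicle is configured to obey posted speed limits
    if overrode_autonomy_and_ticketed:
        premium *= 1.25   # surcharge: owner turned off self-driving and got a speeding ticket
    return round(premium, 2)


# Example: a $1,000 base premium with the speed-limit setting enabled.
print(adjusted_premium(1000.0,
                       follows_speed_limit_setting=True,
                       overrode_autonomy_and_ticketed=False))  # 900.0
```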

Today, in the United States, more than 20 states have enacted rules that permit autonomous vehicles, but those rules also require a human safety driver behind the wheel. When manufacturers conducting pilot projects want to move from one level of automation to the next, they need to apply to the state for permission.

In 2017, the U.S. National Highway Traffic Safety Administration (NHTSA) released new federal guidelines entitled Automated Driving Systems 2.0: A Vision for Safety. These guidelines represent the agency’s latest guidance on automated driving systems for industry and the states.

And the German Ethics Commission on Automated and Connected Driving recently published the first set of ethical guidelines on self-driving cars, which other countries are expected to use as the basis for their own rules. The commission identified 20 key propositions, including the following (see the sketch after this list for one way such rules might be encoded in software):

  • The avoidance of personal injury must take precedence over the avoidance of damage to property. In hazardous situations, the protection of human life must always have top priority.
  • In the event of unavoidable accident situations, any distinction between individuals based on personal features (age, gender, physical or mental constitution) is impermissible.
  • In every driving situation, it must be clearly regulated and apparent who is responsible for the driving task: the human or the computer.
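
As a rough sketch of how propositions like these could be turned into hard constraints in a vehicle’s decision software, consider the Python example below. The Outcome data model and the selection rule are illustrative assumptions, not the commission’s (or any manufacturer’s) design; leaving personal features out of the data model entirely is one way to honor the second proposition by construction.

```python
# Illustrative encoding of the propositions above as decision constraints.
# The data model and selection rule are assumptions made for this sketch.
from dataclasses import dataclass
from typing import List


@dataclass
class Outcome:
    """One possible result of a maneuver in an unavoidable-accident situation."""
    description: str
    persons_harmed: int       # protection of human life has top priority
    property_damage: float    # property counts only after persons are protected
    # Deliberately no fields for age, gender, or other personal features:
    # the propositions forbid using them, so the model cannot represent them.


def choose_maneuver(outcomes: List[Outcome]) -> Outcome:
    """Minimize harm to persons first; use property damage only as a tiebreaker."""
    return min(outcomes, key=lambda o: (o.persons_harmed, o.property_damage))


# Example: swerving into a parked car harms no one, so it is preferred
# no matter how expensive the property damage is.
options = [
    Outcome("continue straight", persons_harmed=1, property_damage=0.0),
    Outcome("swerve into parked car", persons_harmed=0, property_damage=15000.0),
]
print(choose_maneuver(options).description)  # prints: swerve into parked car
```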

As with many other breakthroughs, such as drones, the technology bursts onto the scene and the regulations quickly catch up. With self-driving cars, we expect to see human safety drivers behind the wheel for the next 3 to 5 years. Over time, the technology and the regulations surrounding safety and ethical issues will continue to mature and evolve.