With the rise of Artificial Intelligence, a technological race toward autonomous cars has started among car makers, raising the excitement of the public. But we are still far from using them.

We are just a million miles away from fully autonomous cars

In the last few years, self-driving cars have become the main challenge the automotive industry is trying to meet, sometimes even eclipsing technologies that could actually be valuable in the short term, such as connected cars or multimodal transportation. Indeed, this technology appears key to the next generation of car companies that have chosen to focus on a “car-as-a-service” strategy.

For the last few months, we have actually been in a transition phase, with semi-autonomous cars driving under the watch of humans. In total, 20 manufacturers have already obtained permits to test autonomous cars on California roads and, once these tests are deemed conclusive, fully autonomous vehicles are expected to be officially released to the market by 2020. But real adoption might take much longer: a report from consulting firm McKinsey estimates that self-driving cars will make up only around 15% of all vehicles sold in 2030. Such estimates remain extremely delicate to make, as regulatory authorizations are likely to take years, if not a decade.

 

Why can’t we just test the safety of an autonomous car?

One of the problems delaying self-driving cars from entering the market is that it is extremely hard to determine how many miles they must be tested before they can be considered safe. As of today, Tesla’s semi-autonomous cars have covered 780 million miles with 70,000 vehicles, while Google’s autonomous cars have accumulated 2 million miles with 60 vehicles. But recent reports suggest that hundreds of billions of miles would be needed to perform a relevant statistical safety comparison. Dr. Gill Pratt, CEO of the Toyota Research Institute, explained in January that “up to now, our industry has measured the on-road reliability of autonomous vehicles in the millions of miles, but to achieve full autonomy we actually need reliability that is a million times better. We need trillion-mile reliability.”
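To make the scale of those figures concrete, here is a rough back-of-envelope sketch in Python. It is an illustrative assumption on my part, not a calculation from any of the cited reports: it treats fatalities as a Poisson process, assumes a US human-driving baseline of roughly one fatality per 100 million miles, and the helper name miles_to_demonstrate is made up for this sketch.

import math

def miles_to_demonstrate(baseline_rate_per_mile, confidence=0.95):
    # With zero fatalities observed over m miles, the one-sided upper
    # confidence bound on a Poisson fatality rate is -ln(1 - confidence) / m.
    # Solve for the mileage m at which that bound falls to the human baseline.
    return -math.log(1 - confidence) / baseline_rate_per_mile

human_rate = 1 / 100_000_000  # assumed baseline: ~1 fatality per 100 million miles
print(f"{miles_to_demonstrate(human_rate):,.0f} fatality-free miles needed")
# Roughly 300 million fatality-free miles just to match the human baseline
# with 95% confidence; demonstrating a rate several times better, or comparing
# two noisy accident rates with statistical significance, pushes the
# requirement into the billions of miles.

On these assumptions, even Tesla’s 780 million fleet miles would barely clear the bar for matching human drivers, which helps explain why the reports cited above speak of hundreds of billions of miles.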

 

In the meantime, to determine whether self-driving cars are safe for passengers and pedestrians, manufacturers like Tesla seem to have chosen to rely instead on a statistical argument, comparing the probability of a crash caused by a human driver with that of one caused by a machine. Another of their arguments is that they can now drive safely in the vast majority of situations, without mentioning that the remaining few may contain the most challenging and unpredictable scenarios, which cannot remain unsolved at commercialization. But researchers remain unyielding, arguing that there is currently no single test that can guarantee complete safety for passengers and pedestrians. It is therefore hard to estimate how much time would be needed to finish training the car.

Will we ever get through the transition phase without proper training?

Moreover, as we get closer to the fully automated car, the transition phase might take longer than expected due to the “human factor” of the technology. The truth is, as automation gets better and better, the human drops further out of the loop while still being expected to take over the car when unexpected difficulties arise. But monitoring a situation for a very long period and then quickly getting back in the loop to take control is hard, even for highly trained people.

The study of automation in aviation is revealing on this question. The machine is in charge during most of the flight, but it needs the pilot in the few critical moments. Pilots are continually trained and tested to be able to face all kinds of scenarios, practicing in simulators twice a year. And sometimes even they cannot get mentally back on track quickly enough to execute the optimal procedure. What is also important to consider is that in aviation, the available reaction time is a few minutes. In a car, it is at most a few seconds.

As years of study have shown, the irony is that the more automated a field becomes, the more skilled and trained the human operator has to be. Applied to self-driving cars, this means passengers will be required to maintain extended vigilance and to handle the systems properly. That will make the transition period extremely difficult to get through, as these cars won’t be easily and quickly accessible to regular drivers.

Following the widely publicized Tesla accident last year, many regulators stated that semi-automation should simply not be allowed until drivers are properly trained, because, as of now, nobody can be held responsible for a lack of vigilance. That is why the National Highway Traffic Safety Administration’s comment that “Autopilot requires full driver engagement at all times” does not sound as conclusive as it was meant to be. One solution would be to run the automation on cars that silently train themselves under the supervision of drivers who still make all the decisions. That is the ultimate goal of Tesla’s shadow mode, but for now its only purpose is to measure the safety of Autopilot.

No self-driving car without insurance

Furthermore, the whole enterprise rests on the hope that regulators and the public will accept the statistical argument comparing accidents caused by humans with those caused by automated cars. But we cannot assume they will in the next few years. Autonomous cars raise a lot of delicate questions that will need to be answered before any permit can be granted. The most delicate one is passenger insurance. Regulators need to settle a liability and insurance framework before any commercialization. Who will be held responsible in the event of an accident?

The UK has recently released measures that could serve as an example for the rest of the world with its new Vehicle Technology and Aviation Bill. In it, the government says insurance for autonomous vehicles will need to cover both manual and machine control. Insurers will ensure that victims involved in a collision with an automated vehicle have quick access to compensation without having to file a private liability claim against the car manufacturer. Insurers would then be free to try to recover the cost of damage pay-outs from the manufacturer. While the idea behind these measures is to accelerate self-driving car adoption in the UK, it appears doubtful that insurers will agree to cover them under those conditions. In response, some automakers have said they are ready to take full responsibility in the event their self-driving car gets into a crash. But without the support of insurers, the cases where accidents are caused by an external factor remain blurry. And it is not an issue regulators are willing to set aside in the name of progress.

The difficulty of machine ethics

In the tech bubble, the common belief is that people will quickly embrace automation and champion self-driving cars. But their tolerance for machine mistakes cannot be taken for granted. Even if we manage to reduce the risk of a car crash to a hundredth or a thousandth of today’s level, robot errors are still judged far less acceptable than human ones. Programmers are not granted the right to be wrong, since they have the time to analyze and prevent any bad outcome, even though we know it is impossible to anticipate everything.

Consider roller coasters: the chances of a crash are minuscule, but the consequences for the park are huge. The public expects a robot approved by regulators to be absolutely perfect; otherwise, they turn against the manufacturer. How many errors are acceptable? Should we answer in proportions or in absolute numbers? In 2015, almost 3,000 motor vehicle deaths were recorded in the US every month. Are we going to accept even 3 monthly deaths caused by machines?

On top of that, the human lives lost under human driving and under machine driving are unlikely to be the same ones. Autonomous cars will change the causes of accidents and therefore the identity of the victims, which raises questions about their legitimacy to do so. Extreme examples like the no-win scenarios of MIT’s Moral Machine (choosing between endangering the people on the road or those in the car) are often impossible to solve satisfyingly, even for consequentialists. A moral distinction also exists between killing and letting die, as studies of classic dilemmas like the Trolley Problem show. This is the main struggle machine ethics researchers are still dealing with.

It would be unreasonable to let programs decide by themselves without frameworks, especially in an insurance context. And it is uncertain that a few years will be enough to educate the public and reassure regulators. When it comes to the commercialization of self-driving cars, technology is only the tip of the iceberg. Driver’s licenses still have many days ahead of them.

 

 

By Ramy Ghorayeb
Strategic Analyst