Smart city

Zenuity working to prove self-driving vehicles are safe

  • 14 Mar 2018
  • 2 min

Volvo-Autoliv JV Zenuity is both developing technology and building precise arguments to convince people that self-driving vehicles are really safe.

Volvo has just announced the launch of an in-house venture capital fund called Volvo Cars Tech Fund, whose purpose is to invest in startups with the potential to help the Swedish carmaker keep up with new ideas and developments in the world of road transport. A clear signpost to Volvo’s intentions in this field is provided by Zenuity. The mission of this joint-venture company, set up last year in tandem with automobile safety equipment specialist Autoliv, is to develop technology to ensure that autonomous vehicles are safe – at least demonstrably safer than a traditional vehicle with a human driver at the wheel. In an article on the subject published on the news site of the Institute of Electrical and Electronics Engineers (IEEE), Zenuity vehicle and driver safety experts Jonas Nilsson and Erik Coelingh set out the criteria for assessing whether a driverless car can be considered safe. They argue that an autonomous vehicle must be able to handle any situation within its scope, including negotiating unexpected obstacles on the road, in order to avoid danger.

They also stress that, quite apart from actually constructing a safe car, passengers have to be persuaded to trust the technology. Their approach combines testing in realistic situations with what they call ‘divide and conquer’ – breaking problems and objections down into four components: the human-machine interface, perception, decision-making and vehicle control. Each component is then tested using the most appropriate procedure, whether computer simulation, a run on a test track, or driving in real conditions.

The idea is to pare down the huge complexity of events into a series of simple questions about on-board safety, each of which must be answered: can the car tell when its sensors are blocked by snow, or when hardware has failed in cold temperatures? And if so, can the vehicle adjust its decision-making accordingly? By explaining, logically and precisely, the processes through which the carmaker verifies that passengers in these artificial intelligence-driven vehicles are in good hands, Zenuity aims to reassure future users. This would appear to be a necessary step, given that – according to a survey conducted by audit and consultancy group Deloitte – 74% of US residents polled do not yet believe this type of vehicle is safe.
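The "simple safety question" pattern described above – check sensor health, then adjust decision-making – can be sketched in a few lines of code. This is a purely illustrative toy, not Zenuity's actual software: the `SensorStatus` fields, the majority-vote threshold and the three driving modes are all assumptions made up for this example; a real system would weigh sensor redundancy, confidence scores and fail-operational requirements.

```python
from dataclasses import dataclass

@dataclass
class SensorStatus:
    # Hypothetical fields, not a real vehicle interface
    blocked_by_snow: bool   # e.g. camera or lidar occluded
    hardware_ok: bool       # e.g. no cold-induced failure detected

def choose_driving_mode(sensors: list[SensorStatus]) -> str:
    """Pick a driving mode from sensor health (illustrative only)."""
    healthy = [s for s in sensors if s.hardware_ok and not s.blocked_by_snow]
    if len(healthy) == len(sensors):
        return "nominal"       # full autonomy within the operational domain
    if len(healthy) > len(sensors) // 2:
        return "degraded"      # e.g. reduce speed, widen safety margins
    return "minimal_risk"      # e.g. pull over or hand control back safely

# With all three sensors healthy the car stays in nominal mode;
# one snow-blocked sensor drops it to degraded operation.
fleet = [SensorStatus(False, True), SensorStatus(False, True), SensorStatus(False, True)]
print(choose_driving_mode(fleet))  # nominal
fleet[0].blocked_by_snow = True
print(choose_driving_mode(fleet))  # degraded
```

The point of such a decomposition is that each question ("can the car detect a blocked sensor?", "does it then adapt?") becomes individually testable, by simulation, on a test track or on the road.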

By Sophia Qadiri