Recent advances in artificial intelligence and the growing role AI is coming to play in our lives are arousing enthusiasm tinged with apprehension. A number of AI experts are now working to steer the technology in a direction that benefits the human race.

Is there any way to ensure that AI will remain safe?

Who’s afraid of… artificial intelligence? Well, in fact, lots of people are. Just one example among many is a recent survey conducted by YouGov on behalf of the British Science Association, in which a third of the respondents said they saw artificial intelligence as a threat to the human race. Some 60% of those polled also feared that AI would have a negative impact on employment. It is hard to fault these people for their views, given that media coverage of the topic is often calculated to arouse public anxiety. Articles announcing imminent layoffs due to progress in AI are legion, for instance.

Many well-known entrepreneurs and scientists have expressed the fear that AI will soon surpass the capabilities of the human brain, with potentially disastrous consequences for the future of humanity. “The development of full artificial intelligence could spell the end of the human race,” warned eminent British physicist and cosmologist Stephen Hawking in 2014. That same year Elon Musk added a warning during an interview at MIT: “I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that.” The following year, Bill Gates expressed similar concerns, writing: “I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though, the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned.”


The risks are already there

While some people are coming out with these apocalyptic statements, AI is regularly pulling off new feats, to the point where its progress seems unstoppable. In the space of two decades, the technology has shown itself capable of beating human champions at chess, at the TV quiz show Jeopardy!, at the complex board game Go, and very recently even at poker, the bluffer’s game par excellence. AI is also coming to play an increasing role in our daily lives: virtual assistants such as Siri and Cortana are gaining in functionality and becoming more widely used, humanoid robots appear to be stepping out of science fiction, and self-driving cars are on the horizon.

Some experts argue that, far from being a hypothetical threat to the survival of the human race at some distant future date, artificial intelligence is already raising a range of issues right now, as the technology becomes ever more prevalent. One of them is Andra Keay, Managing Director of Silicon Valley Robotics, who has warned: “The more we develop robots that resemble humans and behave like humans, the sooner the issue of identity theft will arise. What will happen if anyone and everyone is able to build a robot that looks like you and get it to behave exactly as they want? Moreover, the more robots resemble people, the more we trust them. However, we don’t know who’s behind any given robot. Often it’s large corporations, which are not acting solely with philanthropic intent.”


Enabling machines to be aware of their own actions

Many people also have the feeling that there is an AI race going on right now: everyone is concentrating on creating the best software programme possible, without taking the time to weigh the consequences. So why not call a temporary halt in order to think hard about what is happening? This is what OpenAI, a San Francisco-based non-profit AI research company founded by Elon Musk and Sam Altman, with Peter Thiel among its backers, is suggesting. OpenAI, whose stated goal is to steer artificial intelligence in a direction that will benefit the entire human race, makes its patents and research results freely available worldwide.

In France, Grégory Bonnet, an assistant professor at the University of Caen in Normandy, leads a research group of scientists, philosophers and sociologists specialising in the ethics of AI. Their goal is not so much to create an ethical AI as to enable machines to justify their actions after the fact. “The philosophers and sociologists we’re working with never stop telling us that ethical issues are always tied to a particular context. So they argue that it’s very hard to draw up a set of broad ethical rules that can be applied everywhere and work in any context. If you’re trying to define ethical rules for self-driving cars, for example, the situation in India is completely different from France, given that driving is regulated very differently in the two countries. This is why we would prefer not to lay down any broad general principles. On the other hand, we hope that an ‘autonomous agent’ would always be able to account for the decisions it has taken, and to say: ‘I acted this way based on such and such an ethical principle’, or ‘because any other decision would have had such and such a consequence’, and so on.” Grégory Bonnet’s team is thus pushing for a highly pragmatic approach: making autonomous machines accountable for their own actions, so that those actions can be explained and the systems’ underlying algorithms subsequently corrected if the robots do not behave as predicted.
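
To make this last idea concrete, here is a minimal sketch in Python of an agent that records the ethical principle behind each decision, so the log can be audited and the underlying rules corrected afterwards. All the names here are hypothetical illustrations; the article does not describe how Bonnet’s group actually implements this.

    from dataclasses import dataclass, field

    @dataclass
    class Justification:
        # One audit-trail entry: what was done, and why.
        action: str
        principle: str             # the ethical rule invoked
        consequence_avoided: str   # what the alternative risked

    @dataclass
    class AccountableAgent:
        log: list = field(default_factory=list)

        def decide(self, action, principle, consequence_avoided):
            # Record the justification alongside the action itself.
            self.log.append(Justification(action, principle, consequence_avoided))
            return action

        def explain(self):
            # Replay the agent's reasoning after the fact.
            for j in self.log:
                print(f"I chose '{j.action}' based on the principle "
                      f"'{j.principle}', because any other decision risked "
                      f"{j.consequence_avoided}.")

    car = AccountableAgent()
    car.decide("brake", "avoid harm to pedestrians", "a collision")
    car.explain()

The point of the sketch is not the decision logic itself but the audit trail: because every action is stored together with its stated principle, a human reviewer can later check whether the rules invoked were appropriate and correct them if not.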

Laying down rules for robot creation

Meanwhile, other commentators want to build ethics into the design process itself, to ensure that robot development proceeds responsibly. Alain Bensoussan, a lawyer specialising in digital technology, has been involved in setting up an AI ethics committee whose aim is to draw up a set of rules and principles with which robot-builders will be required to comply. Explains Bensoussan: “In my view, robots ought to be benevolent as a matter of course. Clearly, we must not design robots that are capable of violence towards humans, as laid down in the first of Asimov’s Three Laws of Robotics. Then we need to draw up simple, straightforward rules, which can vary according to the type of robot in question, determining how a given robot should behave in given circumstances. Ensuring an ethical approach to design also means making the algorithms transparent, and being able to monitor and trace them. Robots that can kill must not be allowed. Last but not least, robots must be built and programmed to be sincere and honest; they must not be able to mislead their human interlocutors.” Initiatives like these should help to reassure us on the subject of artificial intelligence.

By Guillaume Renouard