Blog.Telekom

Caroline Neuenfeld

Artificial intelligence doesn't drink

Autonomous vehicles are moving from fiction to fact. An "Ethics Commission for Automated Driving," established by the Federal Minister of Transport and Digital Infrastructure, has been examining the ethical issues raised by autonomous vehicles, and it presented its report a few days ago. Beyond certain very concrete problems, such as questions of data sovereignty and liability, the basic question is this: Are we willing to accept the risks inherent in every complex technological development, or should we reject them and accept the resulting loss of progress?

It is obvious that the artificial intelligence used in self-driving vehicles will not replace human drivers overnight. Nonetheless, automated transport will radically change our society.

Studies suggest that automated vehicles will bring great gains in welfare. They will be safer than today's vehicles, and they will be cleaner and more fuel-efficient, and thus will reduce air pollution. Most importantly, however, they will save countless human lives, because autonomous driving is likely to be much safer than human-controlled driving. Every year, some 1.25 million people die in traffic accidents worldwide, and statistics attribute about 90 percent of all traffic accidents to human fallibility. Artificial intelligence, on the other hand, is rational and consistent. It never gets tired, impatient, hectic, or distracted, and it doesn't drink.
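
Purely to put those figures in perspective, here is a back-of-the-envelope calculation in Python. The prevention rate is an invented placeholder, since nobody can say today what share of human-error accidents automation will actually avoid.

# Rough, purely illustrative arithmetic based on the figures cited above.
annual_traffic_deaths = 1_250_000  # worldwide deaths per year, as cited above
human_error_share = 0.90           # share of accidents attributed to human fallibility
assumed_prevention_rate = 0.5      # hypothetical placeholder, invented for illustration

lives_saved = annual_traffic_deaths * human_error_share * assumed_prevention_rate
print(f"{lives_saved:,.0f} lives per year")  # -> 562,500 lives per year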

But no complex technology is ever infallible, and autonomous vehicles are no exception: they will not be 100 percent safe.

The Ethics Commission looked carefully at the issue of the "dilemma situations" in connection with autonomous driving. It concluded that autonomous systems are of no real additional use in inescapable life-or-death situations.

But how realistic and practical are such hypothetical considerations? Very few drivers, even frequent ones, ever face a situation that forces them to make a terrible moral decision, such as having to choose between driving into a group of adult pedestrians and driving into a group of playing children.

What's more, whether a human driver or an artificial intelligence has to make such a choice, there can never be a "correct" one. Yet whereas a human driver's decision in such an inescapable, no-win scenario would be accepted, even excused, a robot driver's decision would immediately be scrutinized for potential fault and liability. In other words, we judge artificial intelligence by different standards.

Behind this lies a deep mistrust of artificial intelligence. But are there rational grounds for such mistrust? Every person who drives a car down a street poses a potential risk to the other road users around them. And even very careful drivers can suddenly encounter unforeseeable events: a playing child runs into the street, or a cyclist takes a spill. As drivers, we trust our instinctive ability to react correctly and quickly in such situations. Self-driving vehicles, by contrast, make decisions on the basis of data collected by numerous high-precision sensors, cameras, and radar units, and they can react in a fraction of the time a human needs.

Our mistrust of artificial intelligence behind the wheel also stems from valid questions, however. Do we want to place life-and-death decisions in the hands of software programmers? How can programmers be sure that the code they develop covers every conceivable, and inconceivable, danger scenario? How should autonomous vehicles be programmed to make socially optimal decisions in the case of unavoidable accidents? Should a vehicle protect its passengers at all costs, or should it seek to minimize the risks to other road users?
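
To make that last question concrete, here is a deliberately simplified sketch, invented for illustration and not drawn from any real vehicle's software, of how the two competing rules could be expressed in code. The maneuvers and risk numbers are hypothetical.

# Hypothetical sketch: two competing decision rules for an unavoidable accident.
# All maneuvers and risk estimates below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    passenger_risk: float   # estimated probability of serious harm to the passengers
    bystander_risk: float   # estimated probability of serious harm to other road users

def protect_passengers(options):
    # Rule 1: protect the passengers at all costs.
    return min(options, key=lambda m: m.passenger_risk)

def minimize_total_harm(options):
    # Rule 2: minimize the combined expected harm to everyone involved.
    return min(options, key=lambda m: m.passenger_risk + m.bystander_risk)

options = [
    Maneuver("swerve into tree", passenger_risk=0.3, bystander_risk=0.0),
    Maneuver("brake in lane", passenger_risk=0.1, bystander_risk=0.4),
]

print(protect_passengers(options).name)   # -> brake in lane
print(minimize_total_harm(options).name)  # -> swerve into tree

The same situation and two defensible rules yield two different outcomes, which is precisely why such questions cannot be left to programmers alone.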

The answers to these ethical questions matter because they will strongly influence society's acceptance of self-driving vehicles. Would we buy a vehicle if we knew it was programmed to drive into a tree in order to protect a pedestrian who suddenly, and unexpectedly, stepped onto the street?

We're still at the very beginning of this development. But self-driving vehicles are becoming a reality, and they may become the norm in the not-so-distant future. The industries participating in this effort have a responsibility to prepare society. They need to take concerns seriously and to raise awareness, transparently, openly, and honestly, of both the benefits and the possible risks.

And they have a responsibility to continue improving self-driving vehicles – to make them ever more intelligent and safe, and to produce vehicles that minimize, or even eliminate, the risks.

Technology knows no ethics; what we do with it is up to us. It's good and appropriate to discuss the ethical issues raised by new and innovative technologies. To avoid doing that would be like driving a car while wearing a blindfold. Technological innovations such as artificial intelligence are not an end in themselves. They bring progress that serves human beings and brightens our future.

Letting ethical concerns prevail, such as those arising in "dilemma situations", will not prevent a single traffic death. The development of autonomous vehicles, however, will save lives. The sooner we accept this, the sooner our roads will become safer.
