
Interview with Melinda Lohmann, University of St. Gallen

We spoke with Melinda Lohmann, assistant professor at the University of St. Gallen, about the illusion of objectivity, transparency, and legal frameworks for artificial intelligence (AI).

Video interview with Prof. Dr. Melinda Lohmann

Professor Lohmann, may I ask you to complete the following sentences? 
My greatest hope for artificial intelligence is ...

Lohmann: ... that it helps us solve the urgent problems of humankind. I am thinking of the health sector, or of road traffic, where 1.3 million people die every year as a result of accidents. And when you consider that 90 percent of these accidents are caused by human error, there is enormous potential for innovation here.

And the greatest fear?

Lohmann: The greatest fear... If you take the concerns about superintelligence seriously, the idea that we could be surpassed by a highly intelligent artificial intelligence, then that is a threatening scenario we should take seriously. In the short to medium term, I am worried that we are reinforcing certain social inequalities through algorithmic decision-making systems that are made by people and operated with data containing these prejudices.

Artificial intelligence is based on algorithms. Are the decisions made by algorithms fairer than those made by humans?

Lohmann: Yes, that is an important question. I think we are subject to an illusion, an illusion of objectivity. We think that because a decision comes from a machine, from a system, it is objective. That is not true, of course. These systems are made by people, who in turn have their own prejudices and limitations. And the systems use data that can also contain certain prejudices. That's why I think you have to be careful not to put too much trust in this illusion of objectivity. But I believe that if we set the right course, we could develop systems that are genuinely more objective. In any case, more objective than we humans are.

Do we even notice when an algorithm decides? And against this background: what do you think of a labelling obligation, for example "Artificial intelligence at work" or "This decision was made with AI"?

Lohmann: I think that's a very good idea and very welcome, because it's important that consumers in particular know when they are interacting with artificial intelligence. I also think it would be a sensible measure to build trust.

Artificial intelligence is being used in more and more areas of our daily lives. Who is actually liable if a robot learns the wrong things or makes wrong decisions?
Do we need something like a robot law?

Lohmann: Yes, that's an exciting research question that occupies me a lot, and it all depends on the context. What is important to remember is that we always have a legal framework. A new technology does not emerge in a legal vacuum; the existing legal framework has to be taken into account. For self-driving vehicles, for example, most European countries already have a fairly sensible liability regime, with liability that is independent of fault. There, I think only selective adjustments are needed. But, as you have just mentioned, systems that are capable of learning and that may be used in settings where such a liability regime does not exist will need new rules, and at the European level we are already thinking about what such a legal framework might look like.

Who actually knows today what robots will be able to do in ten years? Do we already need a legal framework?

Lohmann: Yes, that is the fundamental question. When we lawyers deal with technical phenomena, we always lag behind; technology is always ten steps ahead. That's why it makes sense to think proactively about these questions. At the same time, we must not lapse into blind, hasty action. It makes no sense to regulate right now, and to regulate excessively, because we simply cannot yet foresee certain developments. But it is important in any case that we stay on top of the issue.

Thank you very much for the interview.

Lohmann: Thank you very much.

The image of Justitia illustrates the question: are algorithms objective?

Are algorithms objective?

Are decisions made by algorithms more objective than those made by human beings? Melinda Lohmann, University of St. Gallen, says “no.”
