Video interview with Prof. Dr. Sarah Spiekermann on the development of artificial intelligence, its limits, and our conception of humanity.
Prof. Spiekermann, may I ask you to complete the following sentences: When I think of artificial intelligence, my greatest hope is...
Sarah Spiekermann: ...that artificial intelligences do not make decisions for us, but only help us learn more about ourselves and about our world, so that we make better decisions ourselves.
And your greatest fear?
Spiekermann: My greatest fear is the misconception that artificial intelligence is intelligent in the human sense, because it is not. And that we are beginning to think artificial intelligence is so intelligent that it may patronize humans. That is my greatest fear: that we could be patronized by technology.
If we now look at the development of this technology, what should we keep in mind in order to use artificial intelligence positively, that is, for the benefit of humanity?
Spiekermann: The development of all complex information systems must follow an ethics-by-design approach. We need value-based system design. This means asking, at a very early stage in the development of a technology: Why do we want this technology? Which values does it foster for people and for society? Where must we be careful not to undermine existing values? If you ask these questions very early in the development process, you can get a lot of good things off the ground.
So how do we make technology ethical? Can there even be ethical algorithms?
Spiekermann: No, there can be no ethical algorithms. Ethics cannot be built into software. You can try, but all approaches in this direction are actually very suboptimal compared to the human being as an ethical decision-maker.
Where do you see the limits to the use of artificial intelligence?
Spiekermann: A hard limit is that artificial intelligence must not be used to make decisions about people. Humans must always have the last word.
And what does it actually say about us if we believe machines can do much better than we can? What does that say about our own image of humanity?
Spiekermann: Yes, unfortunately we have far too poor an image of ourselves, and one can trace historically where that comes from. It has a long history in modernity and in intellectual history. It starts with Hobbes, who stated that man is a wolf to other men, and one can see throughout history that, despite humanism, our image of humanity is in principle very questionable. We must get away from this poor self-image and learn again that we are social, political animals, as Aristotle said; that we are trusting beings; that we are highly intelligent, intuitive systems; and that we must regain self-esteem, this appreciation of ourselves. This is central if we are actually to see progress together with artificial intelligence.
So if we really do use artificial intelligence in a beneficial way, will it make us happier? Will it make us happier people?
Spiekermann: Only if we succeed in building systems that help people know and learn can we become happier. By knowing more and learning more, because human happiness depends on our ability to know the world and interact with it. This knowing interaction with the world, for example being able to cook or being able to explain something, these are the things that make us humans happy. And perhaps machines could contribute to making this knowledge, which we have somewhat lost over the last decades, present to us again.
Thank you very much for the interview.