Interview with Tom Hillenbrand, author


Mr. Hillenbrand, human beings often see themselves as the pride of creation. But our world today is full of war, hunger and persecution. In that light, shouldn't we be entrusting our important decisions to some algorithm that could make them without any prejudice or ulterior motives?

Tom Hillenbrand: Well, yes and no. I would agree that the data we are now able to collect, with all our sensors, drones and other tools, are very valuable indeed. If we want to make the world a better place, then we certainly need to be able to proceed on a basis of quantifiable data. There is no doubt about that. What we need to remain aware of, however, is that our algorithms are made by people. And that the ways in which algorithms make decisions and classify things are thus all based on human assumptions. And that human assumptions can turn out to be very wrong, thus making algorithms do crazy things. In sum, we're probably going to have to save ourselves, unfortunately; there's just no getting around that.

In your novel, anything that is technically possible is permitted, and socially and politically accepted. Do you see a future in which everything that is technically feasible will actually become a reality at some point?

Tom Hillenbrand: Probably not everything! Drohnenland is of course a dystopia, truly a worst-case scenario. On the other hand, I think we're reaching a point where we as a society need to be talking – with regard to all our data and all our surveillance resources – about what it is we want and what it is we don't want. After all, total surveillance, with prediction of people's actions, is not something we would need another 50 years to develop and perfect. Give us just three, four or five more years, along with the right algorithms, and we'll be watching people amazingly closely.

The only ingredient that's missing right now is the political will to go in that direction. I can't believe that we – at least here in Germany – want that kind of surveillance, but we're not having enough discussion about this issue. I think we definitely need to talk about how much we want to allow and accept.

You've been talking about the near future, and about predictive algorithms that could call attention to likely events. Such likely events could include crimes, meaning the algorithms would be able to predict crimes and thus help us prevent them. While that sounds like a desirable situation, it also brings up a dilemma: Suppose an algorithm told us that a murder was going to be committed, and it identified the culprit in advance. Would it then be acceptable to punish that person prophylactically, i.e. in advance, while he was still innocent? If not, does that mean we would have to allow the murder to take place?

Tom Hillenbrand: Under our current ethical understanding, we can only punish people for things they have actually done. Such predictions – like any predictions – come with a probability of their turning out to be correct. In our example, even if the probability of his committing the murder were 96 or 97 percent, there would still be a chance of his not committing it.

That is basically how we would look at the matter from an orthodox legal perspective. Needless to say, however, in any case where a murder occurred that could obviously have been reliably predicted – and thus prevented – our tabloids would cry bloody murder, as it were.

But I think we're getting closer to these sorts of situations. In connection with our efforts to fight terrorism here in Europe, for example, we're currently talking about setting up Europe-wide databases of potential terrorists, i.e. databases that would include just about anyone who seemed suspicious. And such predictive algorithms would of course be a great asset for such databases. Without them, tens of thousands of police officers would have to be watching the potential threats. Computers would certainly be much better at that kind of surveillance. As we consider this option, however, we have to ask ourselves the following question: is it ethically justified to keep such people under complete surveillance, even though they haven't done anything?

That is a question we'll need to be discussing very soon – probably within the next one or two years. In other words, we're not talking about a distant future.

And the discussion will not be easy.

Tom Hillenbrand: It will be very difficult, in part because it will have to confront the obvious populist answer: "Better to lock up one too many than one too few." The answer under a constitutional rule of law is of course very different: "It's always a great miscarriage of justice to lock up an innocent person." Over time, the populist approach would likely have the effect of eroding away our entire legal system as we know it today. So we need to be thinking very carefully about these things. It will be a difficult process – these issues are very complex and difficult to fathom.
