Can a digitized world ever be safe or are the risks too high?
Isaac Ben Israel: I don't think it has anything to do with a digitized world. The world is not safe, with or without digitization. It's not safe because there are always bad people: sometimes hostile countries, sometimes terrorists, sometimes criminals. There are always bad guys, and bad guys will always use new technology, including computer technology, for their own benefit and not for the benefit of society.
The way I look at cyber‑threats is that they are really the dark side of computer technology. If you use computer technology for the benefit of society, that's the bright side. If you use it for your own benefit against the benefit of society, that's the dark side. Cyber‑security is about reducing the dark side of computer technology. There is no way to make it fully safe. Like any other threat in this world, we should treat cyber‑attacks and cyber‑threats the same way we treat crime. That is, we have to fight crime and do a lot to lower the crime rate, but we never expect the rate to be zero.
History has shown us that highly developed societies can come to a rapid end. Are we nearing such an inflection point?
Isaac Ben Israel: As I said, cyber-security is really about the dark side of computer technology. And computer technology carries some risks. The most important one may be the growing use of artificial intelligence: deep machine learning, algorithms that learn and therefore change themselves. By giving the machine such a capability, we may lose control of the machines. There are movies about this, "The Age of Machines" and "Terminator" and a lot of well-known movies, but the problem is real. I mean, if we don't constrain ourselves in using and applying algorithms that are under no external control once they are set to work, we may in the end create what we call a golem, a monster that may turn on us.
What must be done to ensure that we do not reach such a point, and remain on a positive trajectory?
Isaac Ben Israel: In certain areas we limit the research; perhaps the most convincing example is life science. We have so‑called Helsinki committees: before we start certain research related to life science, we ask ourselves what the risks are. I mean, we don't want someone at a university to develop a kind of virus that somehow leaks out of the laboratory and in the end kills humanity. Therefore, before we start, we have to ask ourselves what the risks are and what we are doing to prevent such cases. I think we should apply similar measures to computer technology as well.
With biological viruses, everyone understands that they represent some danger to humankind. Computers, we think, are only making our lives better and faster, et cetera. But because of the introduction of artificial intelligence and machine learning techniques, we should take the same care with virtual, non‑biological viruses as we do with biological ones. The danger is more or less on the same level.
What would be your advice to other states, based on Israel's experience in dealing with this?
Isaac Ben Israel: Well, I can tell you a lot about how Israel dealt with this problem, but it doesn't mean I can apply it to other states. At Tel Aviv University, I run the Interdisciplinary Cyber‑Research Center. We call it interdisciplinary because we believe that although cyber‑problems usually have technological solutions, the problems themselves are never purely technological. You have to take into account legal problems, the psychology of individuals, social behavior, business considerations, politics, and things like that. If you don't understand these softer dimensions of the problem, you don't understand the problem, and therefore you don't know what technology should be applied.