An article by Thomas Kremer, Board member for Data Privacy, Legal Affairs and Compliance.
Today, criminal hackers (black hats) are waging war against security experts (white hats) on the internet and both sides are increasingly using AI to achieve their goals. The science fiction classic "Transformers" predicted this to an extent: peaceful Autobots are fighting against power-hungry Decepticons. Both sides are autonomously thinking machines – artificial intelligence. But in contrast to the movie, the real battle isn't taking place in major cities, but rather in cyberspace. And many factors will be important in determining whether this reality will have a happy ending.
Artificial intelligence (AI) is a broad field. Machine learning is the decisive keyword when it comes to cyberattacks and cyberdefense: algorithms are trained on large datasets to identify patterns and adapt accordingly. An example: you've surely been asked to prove that you're not a robot when entering a password. These challenges are intended to increase security. Captchas ask you to identify letters and numbers in a distorted image, for example, or to click the tiles that contain cars or some other object. Sometimes captchas really drive me to distraction, but it usually works out in the end and you can rest secure in the feeling that you're a human after all. Nowadays, though, the result of this method isn't so unambiguous anymore: if you feed an artificial intelligence with user responses to captchas, the AI bot will learn over time to identify the characters or images autonomously and use that ability for its own purposes.
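The principle at work here is ordinary supervised pattern learning. As a minimal illustration (not any specific bot's method), the sketch below trains a standard classifier on small labeled character images, using scikit-learn's bundled digit images as stand-ins for captcha glyphs:

```python
# Illustrative sketch: supervised pattern learning, the principle behind
# bots that learn to read captcha characters from labeled user responses.
# The digits dataset stands in for captcha glyphs; this is an assumption
# for demonstration, not a real captcha-solving pipeline.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()  # 8x8 grayscale images of the digits 0-9

# Split into examples the model learns from and examples it has never seen
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=0)

clf = SVC()           # support-vector classifier learns the visual patterns
clf.fit(X_train, y_train)

accuracy = clf.score(X_test, y_test)
print(f"Recognition accuracy on unseen characters: {accuracy:.2f}")
```

With enough labeled responses, such a model reads the "obscured" characters about as reliably as a human, which is exactly why classic text captchas have lost much of their value.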
Attackers also use pattern recognition to find vulnerabilities in IT systems. To this end, they analyze published vulnerabilities, learn which features characterize them, and then exploit this knowledge to find previously undiscovered vulnerabilities. The malware used to exploit these undiscovered vulnerabilities is also getting more intelligent. Smart malware can detect when it is discovered in an attacked system and can then hide and change its attack pattern. In other words, it learns from the system it is attacking.
Malicious social bots can copy a user's identity and act like that user in social networks. Hackers can use these social bots to launch more targeted phishing attempts, which are ultimately more successful. And they can also be used in election campaigns, for example, to spread fake news and manipulate democratic processes.
There is reason to hope
These are frightening scenarios, but they are only one side of the coin. Artificial intelligence can also help defend against cyberattacks. Security experts have a decisive advantage over the hackers: the ABCs of AI – Algorithms, Big data, and Computing power. For the algorithms to learn, they need the right data and enough computing capacity. That's why it's so important – and I repeat this constantly – for cybersecurity experts to form networks. Networks consisting of companies, government agencies, and institutions, in which data from successful attacks is shared so everyone can learn from them. Internationally and on an equal footing. Cybercrime doesn't respect national borders.
But defense isn't the only goal in the war against hackers. We want to "get ahead of the situation," as my experts from our Cyber Defense Center call it. That means we want to identify planned attacks before they take place. Make predictions. And we use artificial intelligence to do this at Deutsche Telekom.
With our "Fraud and Security Analytics System", we analyze network data to prevent abuse of telephony and detect cyberattacks. We analyze data that we obtain from our Deutsche Telekom network and have to collect for our business operations anyway – connection data for billing, for example. Up to a billion data records are created each day, which we feed into our analytics system in compliance with our high data privacy standards. The system's algorithms identify patterns and anomalies. In cooperation with Telekom Laboratories (T-Labs) and our long-standing research partner, Israel's Ben Gurion University, we are using AI to develop new algorithms that enable us to spot fraud and cyberattacks more quickly, or even predict them. We can then deploy countermeasures before the attack even takes place. This offers a major opportunity for more security in a digitized world.
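To give a feel for what "identifying patterns and anomalies" means in practice, here is a deliberately simplified sketch of anomaly detection on connection-record-like data. The field names, values, and model choice are my illustrative assumptions; this is the general technique, not Deutsche Telekom's actual system:

```python
# Hedged sketch: unsupervised anomaly detection on synthetic call records.
# An isolation forest learns what "normal" traffic looks like and flags
# records that deviate strongly, e.g. burst dialing typical of toll fraud.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "connection records": [call duration in seconds, calls per hour]
normal = rng.normal(loc=[180.0, 4.0], scale=[60.0, 2.0], size=(500, 2))
fraud = rng.normal(loc=[5.0, 200.0], scale=[2.0, 20.0], size=(5, 2))
records = np.vstack([normal, fraud])

# Assume roughly 1% of traffic is anomalous (an illustrative parameter)
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(records)  # -1 marks records judged anomalous

n_flagged = int((labels == -1).sum())
print(f"Flagged {n_flagged} of {len(records)} records for review")
```

In a real deployment the records would carry far more features, the model would be retrained continuously on fresh traffic, and flagged records would feed human analysts rather than trigger automatic action.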
We need rules for using artificial intelligence
Artificial intelligence is still in its infancy. The examples described above are just the beginning. That's why it's so important for us to talk about the rules right now. A central aspect for me: we need a clear legal framework. After all, a particular legal challenge for learning systems will be clarifying the question of responsibility. Who will be responsible for decisions, particularly wrong decisions? The more autonomously a system can act, the more difficult it becomes to assign clear responsibility to the operator, manufacturer, or user.
AI will enable cybercriminals to create even more targeted, more efficient attacks. That's why the current regulatory focus on operators of critical IT infrastructures – in the domains of information and communication technology, healthcare, and energy, for example – is no longer sufficient. In the Internet of Things, increasingly AI-based IT systems will be used whose criticality may only emerge in a given situation. Take autonomous driving, for example: if a car is hacked, it could quickly create chaotic traffic conditions and a risk to all road users. This entails growing responsibilities for hardware and software vendors to report and close security vulnerabilities and to take preventive technical protection measures.
AI decisions are based on data. It must be ensured that this underlying data cannot be manipulated: a high level of data quality is an essential prerequisite for proper decisions by the AI. But the legislature must also ensure that analysis of the available data is permitted. The European General Data Protection Regulation provides a good framework for this. We must, however, prevent the achievements of European data privacy law from being undermined for some market players through regulations such as the ePrivacy Regulation. That would lead not to more security, but to less.
Last but not least, we also need an ethical approach to artificial intelligence. That's why I'm pleased that the German government's AI strategy will be based on European values. But we also have to step up as a company here. We have to follow ethical rules in our development and use of AI. That's why we at Deutsche Telekom have developed guidelines for dealing with artificial intelligence. I hope that many companies will follow us here and not just wait for the lawmakers.