An article by Dr. Claus-Dieter Ulmer, Global Data Privacy Officer and Senior Vice President Group Privacy
Data fuels artificial intelligence (AI) applications, which means that proper data processing is critically important. The consequences can be far-reaching and swift if something goes wrong or the data is used more extensively than users realize.
Here are just a few examples: The German computer magazine c't reported that when a customer requested the data Alexa had stored about him, he received another user's data instead. Online retailers tailor offers to customers based on past purchasing behavior, and prices also vary to some extent from user to user. China has gone much further with its 'Citizen Score' system: if you drive through a red light, for example, your name and photo immediately appear on a big screen at the intersection, branding you as a traffic offender. All of this is underpinned by artificial intelligence. And that is why, when discussing artificial intelligence, we must consider data protection.
In May last year we became one of the very first companies to provide guidelines for the development of ethical AI. Now it is a matter of putting these guidelines into practice. When drawing up the guidelines, the Group Privacy department was asked in particular to develop practicable data protection specifications for Deutsche Telekom. Our aim was to clarify the requirements of the General Data Protection Regulation and make them more tangible. The resulting guidelines (pdf, 410.9 KB) give product managers and developers clear instructions on how to take account of data protection when developing AI systems.
Two points were particularly important to us:
- How to ensure transparency on the use of AI and user rights.
- How to make AI decisions transparent and monitor and control AI systems.
The guidelines, which have now been published, ensure that we remain true to our 'Privacy and Security by Design' principle even with artificial intelligence. As a result, data protection and data security are considered upstream, during the development of new AI systems, ensuring those systems comply with data protection rules. In our view, an AI system complies with data protection rules only if:
- safeguards prevent unlawful processing of any data (sources);
- the use of artificial intelligence is transparent to all parties involved;
- there is recourse against alleged wrongful decisions;
- the decision-making processes used by AI systems are monitored regularly; and
- safeguards are in place to ensure all decisions taken by AI comply with the Group's Digital Ethics Guidelines.
This presupposes a clear definition of responsibilities for AI systems and that a specific contact person is always appointed. In addition, AI solutions must be designed and developed with the customer in mind. This is the only way for them to be useful and to avoid undesired developments. Users of AI solutions must also put in place an appropriate and effective process to monitor the decisions taken by their AI systems. Immediate action should be possible if an AI system fails a monitoring check. Of course, these principles also apply to any AI systems that Deutsche Telekom purchases from third parties. Such systems must also comply with our ethical and legal requirements; otherwise, they cannot be used.
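To make the monitoring requirement above more concrete, the following is a minimal sketch of what such a process could look like in code. All names here (AISystem, MonitoringProcess, the example checks) are hypothetical illustrations, not Deutsche Telekom tooling: each decision an AI system takes is run through a set of checks, and a failed check triggers immediate action by suspending the system and flagging the appointed contact person.

```python
# Hypothetical sketch of a decision-monitoring process for an AI system:
# a defined contact person, regular checks on decisions, and immediate
# suspension when a check fails.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class AISystem:
    name: str
    contact_person: str   # a specific contact person is always appointed
    active: bool = True   # the system may only run while it passes checks


@dataclass
class MonitoringProcess:
    system: AISystem
    checks: List[Callable[[Dict], bool]] = field(default_factory=list)

    def review(self, decision: Dict) -> bool:
        """Run every check on a decision; suspend the system on failure."""
        for check in self.checks:
            if not check(decision):
                self.system.active = False  # immediate action
                print(f"Check '{check.__name__}' failed for "
                      f"{self.system.name}; notify "
                      f"{self.system.contact_person}")
                return False
        return True


# Illustrative checks mirroring the principles above: a lawful basis must
# be recorded, and the decision must carry an explanation users can contest.
def lawful_basis_present(decision: Dict) -> bool:
    return decision.get("lawful_basis") is not None


def decision_is_explainable(decision: Dict) -> bool:
    return "explanation" in decision


system = AISystem(name="example-recommender", contact_person="privacy-team")
monitor = MonitoringProcess(system, [lawful_basis_present,
                                     decision_is_explainable])

ok = monitor.review({"lawful_basis": "contract",
                     "explanation": "based on past purchases"})
bad = monitor.review({"explanation": "based on past purchases"})
```

Here the first decision passes both checks, while the second lacks a recorded lawful basis, so the system is suspended and the contact person is flagged. A real process would of course involve logging, escalation paths, and human review rather than a simple boolean gate.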
In this way, we can be sure that artificial intelligence will not become a data protection offender. This technology offers many opportunities and our goal is to harness them to the benefit of our customers and employees. That would not be possible without safeguarding data protection. The new guidelines set us on the right path. We are pleased to put the guidelines up for discussion and welcome constructive technical feedback.