Verena Fulde


We need a “Digital Ethics” policy: Deutsche Telekom defines its own policy for the use of artificial intelligence

The impact of artificial intelligence (AI) should not be underestimated – its effects can be positive as well as negative. Many people may hardly notice the relevance of AI in their daily lives, but in reality just about everyone encounters it, for example, in navigation systems, Google searches, purchase recommendations and apps that tell us which bus or train to take.
Deutsche Telekom also relies on AI: as a chatbot in service, for maintenance support at T-Systems and in products such as the Connect App and the Smart Speaker.

Digital Responsibility

As with all new technologies, AI opens the door to many opportunities, but it also brings challenges. Just think of AI systems adopting human prejudices, or the question of whether a bot should be recognizable as such. Another example is the black-box problem: you put data in and get a computed result, but you don't actually know how the AI arrived at it.

Deutsche Telekom addresses these and many other questions in public discussions about the advance of digitalization. The catchphrase is “Digital Responsibility.”

That is why Manuela Mackert, our Chief Compliance Officer, and her team contacted all units at Deutsche Telekom that use AI and contribute to the development and design of this technology. These groups comprise specialists from Technology and Innovation, the Telekom Innovation Laboratories, IT Security, Data Privacy, Finance and Service, as well as T-Systems. Ms. Mackert asked these groups to develop a digital ethics policy governing the use of AI.

What do the guidelines mean for us?

These guidelines are based on our business models and are binding for us all – they define how we at Deutsche Telekom should use AI and how we should develop our AI-based products and services in the future. The basic idea is that AI is initially just a tool which is inherently neutral. It's up to us to use AI in positive ways. 

In order to ensure that this new technology is indeed applied positively, we have conducted many extensive discussions within the Group and with other business enterprises, experts and institutions involved in AI in Germany, the USA and Israel. Our contacts have included Facebook, Microsoft, Amazon, Google, Stanford, the Open AI Initiative, Allen Institute and the Partnership on AI.

We are proud to be one of the first business enterprises worldwide to define self-binding AI policies and principles that will guide our actions. 

For example, these guidelines require that a responsible person be named for every AI system and every AI function – right from the start. We are also committed to full transparency. That’s why we make clear what types of customer data we are using, and we let our customers know precisely when or if they are communicating with an AI system. We maintain full control and are in a position to stop or switch off our AI systems at any time.
And these are just a few examples. All nine guidelines are available here.

Of course, we do not claim that our guidelines are some kind of silver bullet, but we do want to develop our policy further through continuing discussions that will help us achieve a broad consensus.

We look forward to your feedback and insightful discussions.
