A contribution by Manuela Mackert, Chief Compliance Officer (CCO) and Head of Group Compliance Management at Deutsche Telekom AG until July 2021.
Responsibility and ethics in artificial intelligence must be considered from the outset and must therefore be 'programmed in' and overseen by humans.
The aim of any technological development should not only be to optimize processes and achieve economic efficiency, but also to improve people's living conditions and preserve their autonomy. This applies especially to artificial intelligence (AI). A central role is played not only by algorithms and the large algorithmic families, the so-called neural networks, but also by data and transmission paths.
In addition, people must develop a substantive understanding of the possible effects of the new intelligent systems, because without knowledge there can be no digital (data) sovereignty.
Artificial intelligence has made the leap into reality - and is already influencing our lives more and more today, be it in navigation, purchasing decisions or online games. For our European value system (European Human Rights Charter) and our ethical standards to keep pace with these developments, they must be part and parcel of all self-learning algorithms and decision-support systems - starting in the development phase.
AI connects people and expands everyone's scope of action. In other words, we no longer merely consume content; we actively shape it through our actions and practices. Each individual therefore bears responsibility for his or her behavior and attitude in the digital world. Given the increased use of algorithms in social media, it is no longer enough to know how to use the media; it is also a MUST to understand the technology behind them. If digitization is to serve the good of society, we must build and strengthen these skills through broad education for everyone, from kindergarten children to - increasingly - senior citizens. Without knowledge there is no digital sovereignty: those who do not understand the technology cannot deal with it in a self-determined way.
We at Deutsche Telekom took these considerations as our starting point, and in the spring of 2018 we became one of the first companies in the world to create and adopt self-binding guidelines on digital ethics in dealing with artificial intelligence (a voluntary self-commitment). As a navigation tool, digital ethics is intended to help us carry the values of the analog world into the digital world. To maintain certainty of action in everyday work, we need a framework within which we can move. Our guidelines are based on Deutsche Telekom's business model, and it was important to us to define them in such a way that they can be integrated into the work of developers, programmers, engineers and project managers. For us, digital ethics is neither a passing trend nor a marketing exercise. We have a vested entrepreneurial interest in ensuring that our products and services are trustworthy. For this reason, we are actively involved in shaping the guidelines and monitoring their implementation within the Group.
It is important that the programmers and developers who provide these technologies do so responsibly and within a specified framework. Self-learning algorithms and decision-support systems require a defined scope of action, maintained by the developer, within which they may make decisions. Our aspiration is that developers must be able to understand how and why the AI system for which they are responsible arrived at a particular insight or decision. The human being is and remains responsible. If there is a risk that they will no longer be able to do so, it must be possible to stop the system - we need an emergency stop button for every AI system, no matter whether the AI is meant to detect tumor tissue in humans or necessary maintenance work on networks. Our AI guidelines, for example, stipulate that it must be defined who is responsible for which AI system. From today's expert perspective, this is an almost impossible undertaking in practice. As a company, however, we are facing up to this challenge in order to make the responsibilities in the development and use of AI consciously transparent.
In this context, it is also important to bring the entire organization along: for example, we have developed a Digital Ethics eLearning program, participated in the AI Roadshow (national and international) and given numerous talks on digital ethics in AI, both inside and outside the company. We have conducted, and will continue to conduct, face-to-face training sessions with data scientists and will successively expand these with new findings. In the end, AI is 'only' software developed by human beings. It is therefore essential for us to start with the human being - with our employees - in order to determine what we understand by human-centered technology.
To ensure that our AI guidelines are also complied with in our operational business, we have developed a code of professional ethics for our developers (programmers, software designers and data scientists). It 'breathes' - in other words, it can be adapted to the specific requirements of an AI system's scope, but the ethical and value-based framework always remains the same. Furthermore, the digital ethics requirements have been included as a test step in our Privacy and Security Assessment and as an internal seal of approval for DT's AI projects. We advise and support AI projects directly in person or on the basis of examination matrices. We have also incorporated our AI guidelines into the Supplier Code of Conduct in order to take the supply chain into account, since AI products and services can never be viewed in isolation. This means that our AI suppliers commit to complying with the requirements of our AI guidelines.
Through our operational strength in implementation - characterized, among other things, by our understanding of business and technology - and the early publication of our AI guidelines, we have set a trend at both national and international level. Our expertise is sought after by business associations and political organizations, and we actively contribute our technical expertise in the interests of our company.
In addition, we need an active interdisciplinary discourse with civil society, other companies, policymakers and educational institutions in order to better understand their concerns and needs. We took this as an opportunity to open the Forum for Digital Ethics in March of this year. Here we want to involve society in shaping the future world of AI through the exchange of knowledge, an understanding of technology and socio-political debate, and to learn from each other in a participatory way. We see our guideline "Share and enlighten" as an obligation to communicate regularly through conferences, talks, expert interviews and much more. We live our digital responsibility by sharing our knowledge and demonstrating the possibilities of new technology and a trustworthy technology culture without neglecting its risks. These risks must be made transparent in a responsible manner so that we can all decide for ourselves how we want to deal with intelligent technologies such as AI.