How to teach an artificial intelligence ethical behaviour?

How are Google, Facebook, and the other tech giants answering this question and what can we learn from them?

The key message for anyone who doesn’t have the time to read the entire blog post: ethics and artificial intelligence (AI) are key topics for all companies, at all levels of management. Just ask Facebook about Cambridge Analytica or Google about their experiences with Google Duplex.

An article by Manuela Mackert, who until July 2021 was Chief Compliance Officer (CCO) and Head of Group Compliance Management at Deutsche Telekom AG.

There is no question that artificial intelligence harbors enormous positive potential, but at the same time it places major demands on programmers and on ethically correct usage. We at Deutsche Telekom aren't the only ones who have realized that AI needs ethical "guardrails".

As such, I flew to where the big players are located: Silicon Valley in the U.S. and Silicon Wadi in Israel.

The subject of ethical AI has attracted much attention lately from the digital industry, politics, and the media. After all, it's ultimately all about how we can use AI to give our customers an excellent service and product experience, while at the same time retaining their trust and ensuring the security of their data. That's why guidelines are so important. We were one of the first companies to establish binding internal guidelines and submit them for debate.

In the drafting of these guidelines, my team and I intentionally sought interaction with other companies and organizations at an early stage. We do not claim to have the ability to develop a "philosopher's stone" on our own, to say nothing of already having done so.

How do the big players think about ethics and AI?

The first appointment was with Amazon. An impressive building – inside and out!

We wanted to know what the major players in this field – Google, Facebook, Microsoft, and Amazon – as well as organizations like OpenAI and the Partnership on AI, universities, and startups think about ethics and AI. How much weight do they attach to this topic, and how are they rising to the challenges?

Before my trip to the U.S. and Israel, I admit that I wondered how open these companies would be to my questions and how honest our exchange would be. Needless to say, I was positively surprised.

All of our appointment requests were granted, high-ranking representatives everywhere opened their doors – as the saying goes, this subject obviously has "management attention". It isn't just relevant for programmers, but for top corporate managers, too.

We conducted intensive talks and had an open, constructive, and honest exchange, which we plan to continue. Here are a few examples:

There was broad consensus that the ethically correct handling of AI isn't just "nice to have"; it's decisive for gaining and retaining customers' trust in a company and its products.

I found my talks with the developers of Microsoft's voice assistant Cortana and chatbot Tay in Israel to be particularly fascinating and insightful. You might remember how quickly Twitter users turned Tay into a racist before Microsoft took it offline. This example is a sobering reminder of how fast a company's reputation can be damaged and how much we still have to learn when it comes to using and deploying AI.

The lesson Microsoft learned was that – just like with humans – a bot's behavior cannot be predicted with 100% reliability. That's why they developed a kind of emergency off-switch that deactivates the bot immediately if it gets confused and, for example, starts to use inappropriate language.

In this clear-cut case, the definition of the purpose of an emergency off-switch is fairly simple. That's not always the case, however, which means there is also debate about where such control points could be placed. We also have to define these places internally.
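To make the idea concrete, here is a minimal sketch of what such a control point could look like. The bot interface, the blocklist, and the naive content check are purely illustrative assumptions on my part; in practice a real toxicity classifier would be used, and this is not Microsoft's actual implementation.

```python
# Illustrative sketch of an "emergency off-switch" around a chatbot.
# All names here are hypothetical; a real system would use a proper
# toxicity classifier instead of a word list.

BLOCKLIST = {"slur1", "slur2"}  # placeholder for a real content classifier

def is_inappropriate(text: str) -> bool:
    """Very naive content check standing in for a real classifier."""
    return any(word in text.lower() for word in BLOCKLIST)

class GuardedBot:
    def __init__(self, bot):
        self.bot = bot          # the underlying chatbot (hypothetical interface)
        self.active = True      # the state of the "off-switch"

    def reply(self, user_message: str) -> str:
        if not self.active:
            return "This bot has been deactivated pending review."
        answer = self.bot.reply(user_message)
        if is_inappropriate(answer):
            self.active = False  # pull the emergency off-switch immediately
            return "This bot has been deactivated pending review."
        return answer
```

In this sketch the control point sits directly in front of the user; the open question described above is exactly where else such checks should be placed.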

Microsoft has established the Aether committee (which stands for "AI and Ethics in Engineering and Research") to create a master framework for implementing ethical AI. The committee is chaired by Harry Shum and Brad Smith, the heads of AI research and the Legal department, respectively.

"The last mile will always remain human"

Manuela Mackert and Amit Keren visiting Google.

My "travel companion" was Amit Keren from Deutsche Telekom's partnering team at our Israel office. He opened many doors for us and set up a majority of the appointments.

Google has trained several thousand employees in fairness for machine learning. Facebook is developing "Fairness Flow" software to check AI systems for diversity and bias, and is also seeking interaction and feedback from outside the company to determine how best to handle digital ethics.
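To illustrate the kind of check such fairness tooling performs, here is a generic sketch of a demographic parity comparison. It is not Facebook's Fairness Flow; the group labels, data, and threshold are assumptions chosen only for the example.

```python
# Illustrative sketch of a generic bias check (not Facebook's Fairness Flow).
from collections import defaultdict

def approval_rates_by_group(predictions, groups):
    """Share of positive model decisions per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Largest difference in approval rates between any two groups."""
    rates = approval_rates_by_group(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Example with assumed data and an assumed review threshold of 0.2.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
if demographic_parity_gap(preds, groups) > 0.2:
    print("Potential bias detected: route the model to human review.")
```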

Nearly all of these companies are thinking about establishing "safety ethics teams". The underlying principle: decisions made by an AI must not be trusted blindly; humans must always have the final say and bear responsibility for them. This is something we also intend to implement at Deutsche Telekom. As Claudia Nemat put it, "The last mile will always remain human."
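What "the last mile will always remain human" could mean in practice is sketched below: the AI only recommends, and a named person confirms the decision and is recorded as responsible. The data structures and function are hypothetical examples, not a Deutsche Telekom system.

```python
# Illustrative sketch of a human-in-the-loop final decision (hypothetical names).
from dataclasses import dataclass

@dataclass
class Recommendation:
    customer_id: str
    action: str        # e.g. "approve contract change"
    confidence: float  # model confidence, shown to the reviewer

def final_decision(rec: Recommendation, reviewer: str) -> dict:
    """The human reviewer makes, and owns, the final call."""
    print(f"AI suggests '{rec.action}' for {rec.customer_id} "
          f"(confidence {rec.confidence:.0%}).")
    answer = input(f"{reviewer}, confirm this action? [y/n] ")
    approved = answer.strip().lower() == "y"
    return {
        "action": rec.action,
        "approved": approved,
        "responsible_human": reviewer,  # accountability stays with a person
    }
```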

Another pleasant result – and one I didn't expect – is that the others also want to learn from us and see us as a counterpart on equal footing in Europe. Stanford University, for example, wants to intensify its efforts in the area of digital ethics and asked us for support in preparing dilemma cases for testing smart technologies.

And to keep focusing public attention on AI together, we are striving to become a member of the "Partnership on AI" and, if the organization thinks we are up to it, to establish a European arm. I can hardly wait.

Best regards, 
Manuela Mackert
