
Blog.Telekom

Verena Fulde


Would you trust an algorithm? (Part 2)

In the first part of this blog post, I took a closer look at the question of how "human" so-called artificial intelligence (AI) really is, and argued that creativity is no longer a feature that distinguishes us humans, since machines have already become quite creative themselves.

[Image: robot with butterfly]

But it is difficult to say whether it is a good or a bad thing for computers to be creative – and, thus, to be somehow human. Perhaps a comparison with human relationships can help. When do we trust a person? When we are familiar with their views and values, and we have found them to be reliable. In other words, when we feel we can be sure of their intentions.
At what point could we apply these criteria to human-machine relationships?

To begin with, we need to remember that computers are not per se better or worse than we are. They can only process the things they "see" – that is, they can only act on the basis of the data "fed" to them, the data with which they are "socialized." These limitations emerged very clearly in connection with Microsoft's "Tay," an AI chatbot. Tay started out as a friendly interlocutor, but within a few short hours users had turned "her" into a racist by sending her racist messages. So we share responsibility for the things we create.

Trust and transparency are further important factors that determine the degree to which AI is accepted. To earn human trust, an AI system has to produce decisions that seem reasonable and transparent. Yet the examples of artificial intelligence we've seen lately, especially in the category known as "deep learning" – i.e., computers that learn and, in the process, effectively program themselves – come across as "black boxes": no one, not even their programmers, completely understands how they arrive at their decisions.

This state of affairs is drawing increasing criticism. An article in the journal "MIT Technology Review" recently quoted Tommi Jaakkola, a professor at MIT (the Massachusetts Institute of Technology), as follows: "Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method."

When the context is a game, idle curiosity might lead us to wonder how a particular result was arrived at. But when the context is something that affects us personally and can have a real impact on our lives, curiosity is no longer idle – we need answers. Surely we have a right to know, don't we? If I were refused a certain medical treatment, for example, I would want to understand why the alternative is supposed to be better for my chances of recovery.

The EU says we do: in Article 22 of its General Data Protection Regulation (GDPR; Regulation (EU) 2016/679), it upholds the right to challenge decisions made solely by algorithms. This is a first step in the right direction, even if its implementation will entail a number of challenges – you can't simply gaze into an AI system and read out the answers you're looking for in plain language.

Support in dealing with such challenges is coming from unexpected quarters. Last year, DARPA (the Defense Advanced Research Projects Agency), the U.S. military's research agency, launched a program for "Explainable AI" (XAI). In short, some promising efforts to improve human-machine relationships are being made.

It will be fascinating to see where these efforts take us.
