Birds of a feather flock together – or do they?

An article by Manuela Mackert, Chief Compliance Officer (CCO) and Head of Group Compliance Management at Deutsche Telekom AG until July 2021.

We look for our alter egos, our clones, our spiritual twins. We think the more similar we are, the better we'll fit together. This assumption was a dogma of relationship research for more than half a century. But as it turns out, it's completely wrong. 

Psychologists have since disproved the promise of happiness that the principle of similarity held out. One could even conclude the opposite: the more similar we are, the more likely we are to reject each other. 

Paul Eastwick, a psychology professor at the University of Texas, believes that people interpret their observations and information on similarity or difference based on their individual gut feelings. The more two people like each other, for example, the more similarities they notice and the more important they think these similarities are. As such, the similarities are perceived, not actual, which means they are not a prerequisite for attraction, affection, or love. Instead, people fall in love first and then believe they are similar as a result. 

The fascination of the similar: fear of the imitator

Similarity fosters feelings of security and happiness. Is that why developers are building humanoid robots? Because they imitate us? Like the robot from Boston Dynamics that opens a door and lets another robot go through first? 
Studies show, however, that we reject machines that are too humanoid – the well-known "uncanny valley" effect. In contrast, we accept robots that don't have human faces, arms, or legs, and that don't imitate our physiognomy with rigid movements and cold gestures. 

Yet this science fiction scenario is pervasive. Scientists and developers warn of the risk that people might make themselves superfluous. We are still dominated by the fear of the similar – by the frightening moment in which we recognize that the being before us is a humanoid android that talks and behaves like us, and we expose it as mimicry, as a fake. 

What makes humans human?

This inevitably raises the question of what separates humans from robots. What attributes and capabilities are needed for "human existence"? And once again, the distinction seems to be more important than what unifies and integrates us. 

Does that mean we need a process, a quick test, to make humans identifiable? A test that can tell what constitutes a person, in contrast to a machine? Could it be irony, empathy, and experience? How can we measure these factors, and how can we distinguish them from voice output, algorithms, and data storage? 

And are humans, with all their weaknesses, really the better model compared with a perfect machine? Does our evolutionary development come with an expiration date? Leaving the old behind has always produced something new – and, first and foremost, fear of the new. 

Visionaries like the late physicist Stephen Hawking warned that computers will overtake us within 100 years, while Tesla founder Elon Musk never tires of reminding us of the destructive power of artificial intelligence (AI). Warnings to keep AI at bay are nothing new: more than 75 years ago, science fiction master Isaac Asimov first presented his "Laws of Robotics" and warned against the coexistence of AI and people on the same planet.

Intelligent governance by robots

But the same Elon Musk also says that autonomous driving, a focus of Tesla's research, won't work without AI. A self-driving car has to think ahead. 

We all know the ethical issues associated with AI. Can AI make ethical decisions? Should the car steer into a group of preschoolers instead of running over an elderly woman? Or should it take a third option: driving the car into a wall, thus endangering the driver and passengers? 

What does the algorithm that analyzes the available data look like? Which data is needed – and how much? What responsibilities do the developers have? And the purchasers? Can AI be ethical? Or must the developer be the guardian of ethics (and if so, based on which code of ethics)? What role do lawmakers have to play? And what about society? How much ethics can we actually tolerate? 

Why are the answers to these questions so important when it comes to AI? 
We don't ask these questions of ourselves. We don't demand answers, or an ethics test, before giving someone a driver's license. Why not? Because we can do it better? According to Germany's Federal Statistical Office, the country's police recorded more than 302,000 accidents involving personal injury and 3,180 traffic deaths in 2017 alone. 

Do we humans believe that decisions based on data and algorithms are worse and less moral than human reactions – which can be impaired by alcohol, distractions, and fatigue? Or do we fear the exact opposite: that autonomous decisions by AI make our human weaknesses even more apparent? 

Ultimately, we want to have an ethical world. Ethics should serve as the infrastructure of AI. However, we are fully aware of what makes us human – we repeat our mistakes and don't always derive the proper responses from our experiences. We are unpredictable. We are alive, and life means change. We know that our climate is changing at a dramatic pace; we know about the famines that are triggered by war and the injustices that have caused millions of people to flee their homes. And yes, we also know the consequences. What remains is the "but".

Might intelligent governance by robots – free of emotion and based strictly on facts – not be an "unpredictable" reign of terror after all, but rather the solution to our problems? Yet we are not prepared to pay the price: the loss of our self-determination, our individualism, and our freedom to make decisions, no matter how wrong they might be. 

Any reasonable person should really applaud this ideal world. But would such a puritanical order still be a world with culture, dignity, and joie de vivre? Would it still have profound, exciting human relationships, with human warmth and care? If people were no longer allowed to make non-ideal decisions because those decisions might be driven by emotions, would we still be human? Or would we just be objects – the poorer machines? An algorithm can't calculate that. We sometimes harm our fellow humans, yet we accept our humanity and thus our fallibility.

People underestimate themselves

Are the prophecies of doom, the warnings, and our fears signs of our tendency to underestimate ourselves? We fear the new, the unknown. And nowadays, that often means AI and robots. These new things have the power to replace the existing order. But what about the power of humans to create new things, to evolve and reinvent ourselves? Why do we underestimate our own capabilities so much when we talk about AI? Humans are creative and have emerged stronger from every change. In his book Homo Ludens, Johan Huizinga describes humans as playful, creative beings whose curiosity is the factor that formed our culture. If we truly give AI intelligence and train it with ourselves as role models, it will be playful and curious, too. AI will learn to appreciate us as a mental sparring partner instead of trying to exterminate humanity.

Professor Oliver Bendel, author and expert in business informatics, believes that AI will relieve humans of work and, to a degree, of thinking as well, but cannot replace our ingenuity. "AI can't be Shakespeare", he said in the July 2017 issue of National Geographic magazine. Instead, he sees AI as a mirror that helps people see themselves better than ever before.

But what can we learn from AI? What attributes will augment us as humans? We can leave the memorization and processing of information to AI and use the time we gain to become curious again, to question and scrutinize. While knowledge in our parents' day largely meant memorizing and retrieving information, the younger generation expresses it through creative search engine queries. This collaboration model gives us new freedoms. We have to learn how to learn again – how to ask questions and how to challenge the results. AI will give us the time to do so. As humans, we are responsible for maintaining the dignity and sovereignty of humanity. We created the smart technology of AI; that's why we have to define its ethical framework as well. We have to take the leading role in this collaboration and implement control mechanisms that ensure the last mile remains human. 
If we place this clear demand on ourselves, we can mitigate, or even eliminate, concerns and fears about this new technology. After all, good cooperation between man and machine will give us – people and society – benefits and added value: just think of possible advances in cancer research or the fight against Alzheimer's.

Against this backdrop, we have formulated the following in our AI guidelines: 
"We know and believe in the human strengths like inspiration, intuition, sense making and empathy. But we also recognize the strengths of AI like data recall, processing speed and analysis. By combining both, AI systems will help humans to make better decisions and accomplish objectives more effectively and efficiently."
