When it comes to robots, opinions quickly diverge. There is broad agreement that they can be genuinely helpful: they lift and carry heavy loads, weld and paint without needing breaks, and require neither clean air nor radiation protection. We're grateful when we can send a robot into a burning building to rescue people instead of a human firefighter.
In contrast, the physical appearance of robots is a controversial subject. Should they look appropriately machine-like, or should they look more human?
David Hanson, the inventor of Sophia, probably the most famous humanoid robot, has a clear opinion: he believes the socialization of robots will be a decisive factor. We should consider them members of the family and treat them accordingly. As he sees it, this approach is also a pragmatic form of self-protection against beings that might one day become more intelligent than us and superior to us. If that happened, would we really want them to give tit for tat and treat us as badly as we once treated them? Words like "master" and "slave" should be set aside quickly. "Most robots and AI don’t look human. My concern is also that they won’t grow up in a human family, in fact. They won’t really understand us, so making robots look human allows us to teach them to understand us better, for more valuable AI that can truly help us."
Resolutely following this line of thought, David Hanson plans to roll out "Little Sophia", a small humanoid robot. He wants it to teach children how to code and spark their interest in robotics. A Kickstarter funding round is currently underway.
Other scientists are also exploring the question of how robots should be designed to enable the best possible human-machine interaction. According to robotics psychologist Martina Mara, for example, it isn't beneficial for robots to be especially cute, with large eyes and a snub nose, for instance. She recommends that they be clearly identifiable as machines. Robots with clear functions, she says, would help de-emotionalize the societal debate – away from fearmongering science-fiction scenarios and toward the question of which features robots need in order to help us.
The philosopher Julian Nida-Rümelin is against humanoid robots, "so we're not even tempted to start projecting (author's note: onto a human-like counterpart), which could ultimately endanger all further development." Robots should be built in a way that lets them perform their functions well, so that people never get the impression they could be human in any way, he said on German radio. As I interpret it: if the function requires stable footing, the robot should have eight legs, not two. In other words, a spider, not a person.
Robotics ethicist Oliver Bendel is just as critical. People tend to develop feelings for such machines quickly, he said on Swiss television. "We love robots. But they don't love us. They might act like it, but they don't care about us at all." (See the full interview here in German.)
Realistically speaking, we have to admit that we're still far from developing truly human-like robots, or androids. One could therefore dismiss the debate over their appearance as a mere thought exercise, as art for art's sake. I disagree. It's important to think this through now and define a framework.
The European Union agrees. Its draft ethics guidelines state: "Androids can be considered covert AI systems, as they are robots that are built to be as human-like as possible. Their inclusion in human society might change our perception of humans and humanity. It should be born[sic] in mind that the confusion between humans and machines has multiple consequences such as attachment, influence, or reduction of the value of being human. The development of humanoid and android robots should therefore undergo careful ethical assessment."
In my interpretation, this means that no matter how humanoid future robots might be, they must always be identifiable as machines. Deutsche Telekom arrived at a similar formulation in its self-binding guidelines for using and dealing with AI, published last year. Guideline number 4 concerns transparency: people must be able to tell when they are communicating with a machine. That amounts to a clear "no" to indistinguishable androids, even if technological advances allow us to build perfect ones some day.
I think that's the right call.