
Blog.Telekom

Caroline Neuenfeld


Hello, my name is Bot. James Bot.

I like to know who wants to sell me something over the phone. Who will repair my car or laptop. Who wants to inspect my driving documents. Anonymity isn't my cup of tea. Identity means accountability and responsibility. That applies to increasingly intelligent robots, too.


The idea that powerful, intelligent robots will take over and subjugate the world has fired the imagination of authors and directors since the earliest days of science fiction. Robots have long since made the leap from book covers and movie screens into real life. Fortunately, however, a physical robot invasion remains the stuff of blockbuster movies and fantasy. Instead, robots are now at work virtually everywhere as helpers.

We live in a world of robots. More than half of all data traffic on the Internet, just under 52 percent, is generated by automated programs.

That is not a bad thing as such.

"Good" robots have many potential uses. In many sectors and companies they are an efficient way of dealing with routine tasks. Things that people can't or don't want to do. Internet search engines just wouldn't run without them. In chats and forums they carry out checks and clean up content. They are the worker bees of the Internet.

But just like any technology, robots can be used in harmful ways. They jam our email accounts with spam, harvest massive numbers of email addresses for commercial purposes, probe for security gaps in software and sometimes even carry out devastating DDoS attacks. Last year's most notorious rogue bots included the Mirai botnet, which hijacked poorly secured Internet-of-Things devices to launch exactly such attacks.

We know all of this and are going to have to learn to live with it. There will never be one hundred percent security in cyberspace.

But more and more often, we don't know whether we are communicating with a human of flesh and blood or with a bot. "James" on the phone may well be a machine. That's what is eerie about bots and, unfortunately, not science fiction: more and more bots are highly successful at passing themselves off as "people". Their behavior partly reflects the functions their programmers embedded in the software, and partly emerges from algorithms that learn in reaction to a wide array of input.

The existence of projects like the Botometer shows how successful programmers have become at developing human-like bots. This software can figure out whether a Twitter account is run by a real person or by code. A study by the University of Southern California and Indiana University estimates that up to 15 percent of all Twitter accounts are controlled by bots, not people.
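To illustrate the principle, here is a minimal, purely hypothetical sketch of the kind of behavioral signals such detectors weigh, for example account age, posting rate and follower ratios. The features and thresholds below are illustrative assumptions; Botometer itself uses a trained classifier with far more features.

```python
# Hypothetical sketch: feature-based scoring of how bot-like an account is.
# The three signals and their weights are made-up illustrations, not
# Botometer's actual method.
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int         # days since the account was created
    posts_per_day: float  # average posting frequency
    followers: int
    following: int

def bot_likelihood(acc: Account) -> float:
    """Return a crude 0..1 score; higher means more bot-like."""
    score = 0.0
    if acc.age_days < 30:
        score += 0.3      # very young accounts are more often throwaway bots
    if acc.posts_per_day > 50:
        score += 0.4      # superhuman posting rates
    if acc.following > 10 * max(acc.followers, 1):
        score += 0.3      # follows far more accounts than follow it back
    return min(score, 1.0)

# A week-old account posting 120 times a day scores as highly bot-like.
print(bot_likelihood(Account(age_days=7, posts_per_day=120, followers=3, following=900)))
```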

Should we care whether we are communicating with a person or with software? Or whether information in mass circulation, for example in the run-up to an election, was generated by propaganda bots or by professional journalists? Absolutely! Once again, it is a matter of reliability and trustworthiness, truth and deception on the Internet. And definitely a question of accountability and responsibility.

This applies in many cases, not just to political propaganda bots intended to influence the democratic decision-making process, and thus erode the foundation of our democracy.

What about bot-generated product reviews, whether positive or negative, designed to influence what we buy, that can potentially deceive customers or sabotage products? And what about users who ask sensitive questions, for example about their health or finances, on Internet portals and need to know who they are talking to? Let's take a specific example: will a question about an insurance claim be answered by a human expert there to help customers exercise their rights and get their money, or by a bot programmed to prevent or minimize the claim?

Many websites use captchas to differentiate between people and machines, and to determine whether, for example, a questionnaire has been filled out by a bot. We've all seen those annoying requests: please indicate how many of the following images show cars or street signs, or correctly type in some twisted, light-gray combination of letters and numbers.
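The underlying mechanism is simple, as this minimal sketch suggests: the server remembers a random challenge and accepts the form only if the answer matches. The function names are illustrative assumptions; real captchas additionally render the string as a distorted image, a step omitted here.

```python
# Minimal sketch of a text captcha's server side. Real systems would also
# draw the challenge as a warped image (e.g., with an image library).
import secrets
import string

def new_challenge(length: int = 6) -> str:
    """Generate a random letter/number combination to show (distorted) to the user."""
    alphabet = string.ascii_uppercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def verify(challenge: str, answer: str) -> bool:
    """Accept the submission only if the user typed the string correctly."""
    return secrets.compare_digest(challenge, answer.strip().upper())

challenge = new_challenge()
print(verify(challenge, challenge))   # True: a correct answer passes
print(verify(challenge, "WRONG1"))    # False: a wrong answer is rejected
```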

All this does is shift the problem to service providers and users. And when in doubt, it won't stop a single bot pretending to be a flesh-and-blood human.

Transparency about who is behind Internet content, human or machine, is absolutely essential. It's far more important than just "nice to know". It's about the foundation of the democratic process, about users' trust, and about the success of digital business models.

So what should be done?

The IT industry's job, as with other aspects of cyber security, is to look for new ways of identifying and stopping "fake humans", a.k.a. bots. But perhaps lawmakers will also have to take action in light of this new threat in cyberspace.
A threat not of a physical invasion of robots, as imagined by science fiction, but of an invasion of machine-generated information whose origin is concealed from users.

Why shouldn't there be a statutory obligation to declare when processes and information have been generated automatically: "I, Robot"?
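What could such a declaration look like in practice? Here is a purely hypothetical sketch in which a chat service labels every machine-generated reply before sending it; the field names and disclosure text are illustrative assumptions, not an existing standard.

```python
# Hypothetical sketch of a mandatory "I, Robot" declaration: every
# machine-generated reply carries an explicit, machine-readable origin label.
import json

BOT_DISCLOSURE = "I, Robot: this reply was generated automatically."

def machine_reply(text: str) -> str:
    """Wrap a bot-generated answer in a disclosure envelope."""
    return json.dumps({
        "generated_by": "bot",         # explicit origin flag for client software
        "disclosure": BOT_DISCLOSURE,  # human-readable label shown to the user
        "body": text,
    })

print(machine_reply("Your insurance claim is being processed."))
```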

Of course, self-regulation is always preferable to statutory obligations. But it is difficult to imagine that those who profit from a socially and economically undesirable technological development will readily give it up. Society and government will have to find an answer to this new threat. Otherwise, in the end, science fiction authors' and directors' apocalyptic visions of a robot invasion will be proven right, not through physical violence, but through the power of deception, illusion and deceit.
