Are decisions made with the help of algorithms fairer or more objective than those made by human beings? Melinda Lohmann, University of St. Gallen, says “no.” She claims that we are influenced by an “objectivity illusion” when it comes to algorithms. “We think that something is objective simply because it comes from a machine.”
Robots can be controlled through computer programs in the form of algorithms, which are the basis of artificial intelligence (AI). Algorithms define how a particular task is to be performed; they provide computers with step-by-step instructions. They tell machines on assembly lines how specific parts should be put together, or they evaluate application forms and recommend the best candidates. But are these decisions fairer or more objective because they are made by an algorithm rather than by a human being?
Melinda Lohmann of the University of St. Gallen warns that we could be fooled by something she calls the “objectivity illusion.” Just because a decision is made by a machine does not mean that the decision is objective. Quite the contrary: Lohmann fears that algorithmic decision systems could soon reinforce existing social inequalities. Why? Because these systems are built by people who are themselves biased, whether consciously or unconsciously, and AI is trained on data that may already reflect those prejudices. Nevertheless, Lohmann is optimistic: “If we take the right steps, I believe we could develop systems which are indeed more objective. And I mean more objective than we are as human beings,” she says with confidence.
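The mechanism Lohmann describes can be made concrete with a toy example. The data, the scoring rule, and the groups below are all hypothetical and invented for illustration; the point is only that a model fitted to historically biased records reproduces that bias, even when it seems to be a neutral calculation:

```python
# Minimal sketch (hypothetical data): a naive "learned" hiring score
# inherits bias from historical records.

# Historical records: (years_experience, group, hired).
# Group "A" was historically favored over group "B" at equal experience.
history = [
    (5, "A", True), (5, "B", False),
    (3, "A", True), (3, "B", False),
    (7, "A", True), (7, "B", True),
    (2, "A", False), (2, "B", False),
]

def hire_rate(group):
    """Fraction of past applicants from `group` who were hired."""
    outcomes = [hired for _, g, hired in history if g == group]
    return sum(outcomes) / len(outcomes)

def score(years, group):
    """Naive score: experience weighted by the group's past hire rate.
    The rule looks like plain arithmetic, but by reusing the historical
    hire rate it encodes the old discrimination directly."""
    return years * hire_rate(group)

# Two equally experienced candidates receive very different scores:
print(score(5, "A"))  # 3.75 (group A's past hire rate is 0.75)
print(score(5, "B"))  # 1.25 (group B's past hire rate is 0.25)
```

Nothing in the scoring formula mentions prejudice, which is exactly why the result can look objective while it is not.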
Transparency builds trust
Another decisive factor is transparency: you have to know whether or not a decision has been made with the help of an algorithm. This should be made absolutely clear whenever institutions or business enterprises rely on algorithms. When asked about mandatory labeling for AI, Lohmann says: “I think it is a very good idea – because I believe it is important for consumers to know when they are interacting with artificial intelligence, and I think that would be a meaningful step for bolstering trust.”
Lohmann is not alone here: in February the Bertelsmann Stiftung published a study (in German) that examined what we as Europeans actually know about algorithms. According to this study, three-quarters of those surveyed believe that it should be made easier for people to recognize algorithmic decisions. The study revealed that a better understanding of the workings of algorithmic decision-making promotes a positive attitude toward AI and, at the same time, raises awareness of the potential risks involved – which in turn leads to more widespread acceptance of algorithms.
Do we need a set of laws to govern robots?
In her role as assistant professor of economics, Lohmann focuses on the legal infrastructure relevant to artificial intelligence. Who bears liability if a robot learns incorrectly or makes the wrong decision? In most cases existing laws apply, but with learning systems the situation could be quite different. That is why steps should be taken at the European level to start thinking about how a legal infrastructure for the age of AI could be established. The Committee on Legal Affairs of the EU Parliament has been discussing this for some time now, and plans call for adopting a law that would comprehensively define how robots and AI should be governed. Even though we cannot know what capabilities robots will have ten years from now, we should nevertheless start looking into these matters today – carefully, thoughtfully and with a sense of proportion. “We should not be overcome by blind activism. Overregulation would not be meaningful or effective now because we simply cannot know what kinds of technological developments await us in the future. However, it is very important for us to stay on the ball and constantly monitor these developments as they happen,” says Lohmann.
The complete interview with Melinda Lohmann is available here.