
Machine Intelligence: Blessing or Curse? It Depends on Us!

An Article by Dirk Helbing (ETH Zurich and TU Delft)

Artificial Intelligence (AI) can help us in many ways. Particularly when combined with robotics, AI can make our everyday life more comfortable (e.g. by cleaning our home). It can perform hard, dangerous, and boring work for us. It can help us save lives and cope with disasters more successfully. It can support patients and elderly people. It can assist us in our everyday activities, and it can make our lives more interesting. I believe that most of us would like to benefit from these unprecedented opportunities. So far, however, every technology has come with side effects and risks. As I will show, people may lose self-determination and democracy, companies may lose control, and nations may lose their sovereignty, if we do not pay attention. In the following, I describe a worst-case and a best-case scenario to illustrate that our society is at a crossroads. It is crucial now to take the right path.


For a long time, research in Artificial Intelligence (AI) made frustratingly little progress. However, such progress is exponential: for decades it remains slow and hard to notice, but then everything happens very quickly and at an accelerating pace. According to Ray Kurzweil, a Silicon Valley technology guru working at Google, computers will surpass the capacity of the human brain before 2030 and the capacity of all human brains combined before 2060. Such forecasts have long been considered science fiction. But now there are deep learning algorithms, and AI can learn by itself, making explosive progress.

For decades now, computers have been better than humans at chess. In the meantime, they are better at almost all strategy games people like to play. Now, IBM's Watson computer is able to win game shows, and not only that: Watson is also better at coming up with many medical diagnoses. Moreover, about 70 percent of all financial trades are performed by autonomous computer algorithms. Soon, we may use self-driving cars that drive better than humans. Algorithms are also coming increasingly close to human abilities in recognizing handwriting, understanding spoken language, translating it, and identifying patterns. As 90 percent of today's jobs are based on such abilities, it will soon be possible to replace all routine jobs with computer algorithms or robots, which will perform better, never get tired, never complain, and require no social insurance contributions or taxes.

In my contribution to John Brockman's book "What to think about machines that think",[1] I summarize the situation as follows: "The explosive increase in processing power and data, fueled by powerful machine learning algorithms, finally empowers silicium-based intelligence to overtake carbon-based intelligence. Intelligent machines don't need to be programmed anymore, they can learn and evolve by themselves, at a speed much faster than human intelligence progresses."

Jim Spohrer at IBM thinks that while AI will be our tool in the beginning, robots may soon become our teammates, and then our coaches. Therefore, I believe that AI will eventually give rise to new intelligent species, new forms of "life". However, will highly advanced AI continue to serve humans, will it enslave us, or will it simply be disinterested in us (as the deep learning expert Jürgen Schmidhuber assumes)? At present, nobody can answer this question. Steve Wozniak, the co-founder of Apple, put it like this:[2] "... I agree that the future is scary and very bad for people. If we build these devices to take care of everything for us, eventually they'll think faster than us and they'll get rid of the slow humans to run companies more efficiently. [But:] Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don't know ..." The technology visionaries Bill Gates and Elon Musk have recently raised similar concerns about superintelligence. What makes them so nervous?

In "What to think about machines that think" I analyze: "Humans weren't very good at accepting that the Earth was not the center of the universe, and they still have difficulties accepting that they are the result of chance and selection, as evolutionary theory teaches us.

Now, we are about to lose the position of the most intelligent species on Earth. Are people ready for this?"

In short: "No, we are not ready for this, but we need to get ready as quickly as we can." Let me start with a worst-case scenario, before I discuss a best-case one.

A worst-case scenario

In the past, whenever people raised concerns that AI might take over the world, experts said we could always pull the plug, and this would solve the problem. Unfortunately, this is not true. First, many parts of our economy and society would stop working if we turned intelligent machines off. This includes our money and communication systems, increasingly many critical infrastructures, and their protection. Second, in some sense, AI has already escaped into the real world: since Google released its machine learning software "TensorFlow" as open source,[3] anyone can use it as they like, including criminals and terrorists. As a consequence of this AI proliferation, we can expect a further rise in cybercrime and increasing cyberwar threats. Note that cybercrime already causes economic losses of 3 trillion dollars annually, and it is growing exponentially.

Nevertheless, my main concern is not that AI might take over the world. It is rather that a few people might try to use AI technology to take over the world. Let me briefly summarize the related developments here: In the 1970s, Chile was experimenting with a third political system besides communism and capitalism: the cybernetic society, inspired by the work of Norbert Wiener (1894-1964). In this system, factories had to regularly report their production numbers to a control center, which told them how to adapt their production. Given the state of information technology at the time, the system worked surprisingly well. In particular, it helped the government to cope with a general strike. However, the CIA supported a military coup in the country, which ended Chile's government on September 11, 1973, with the suicide of its president, Salvador Allende.

It seems that, in the meantime, new kinds of cybernetic societies have been built, which use mass surveillance data. Singapore, for example, considers itself a social laboratory.[4] Justified by terror threats, large amounts of personal data are collected about every single citizen. These data are now being fed into AI systems, which learn how every citizen behaves; China's brain project is an example of this. In other words, using our personal data, digital doubles of ourselves are being created, which are intended to replicate our own decisions and behaviors. However, this replication is not perfect, and so methods have been developed to manipulate people's decisions and to control their behaviors.

These approaches build on the work of Burrhus Frederic Skinner (1904-1990). He put animals such as rats, pigeons, and dogs into so-called "Skinner boxes" and applied certain stimuli to them. By means of rewards (such as food) and punishments (such as electric shocks), he managed to condition the animals to perform certain kinds of behaviors. Today, we are literally all experimental subjects of companies such as Google and Facebook,[5] which perform millions of automated experiments on us every day. Our Skinner box is the "filter bubble"[6] created around us, i.e. the personalized information we receive about the world. In this way, we are exposed to certain kinds of stimuli. AI systems learn how we respond to them and how these stimuli can be used to trigger certain behavioral responses.
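To make this conditioning loop concrete, here is a minimal sketch in Python of how a platform might learn the most effective stimulus by trial and error (a simple "epsilon-greedy bandit"). All names, stimuli, and response rates below are purely illustrative assumptions, not data from any real platform:

```python
import random

# Conditioning as a simple epsilon-greedy "bandit": the platform learns
# which stimulus most reliably triggers the desired behavioral response.
def condition(stimuli, respond, trials=1000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    counts = {s: 0 for s in stimuli}
    total = {s: 0.0 for s in stimuli}
    avg = lambda s: total[s] / counts[s] if counts[s] else 0.0
    for _ in range(trials):
        # Mostly exploit the best-known stimulus, sometimes explore others.
        s = rng.choice(stimuli) if rng.random() < epsilon else max(stimuli, key=avg)
        counts[s] += 1
        total[s] += respond(s)  # reward: how often the desired behavior occurs
    return max(stimuli, key=avg)

# Hypothetical average response rates: "outrage" content triggers clicks
# far more often than "neutral" content (illustrative numbers only).
rates = {"neutral": 0.2, "outrage": 0.7}
best = condition(["neutral", "outrage"], respond=lambda s: rates[s])
print(best)  # the stimulus the learner converges on
```

The point is not the algorithm's sophistication but its scale: run millions of times per day and per person, even such a crude learner reliably finds each user's individual triggers.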

In other words, the trend goes from programming computers to programming people.[7] This manipulation is often so subtle that we would not even notice it. What we perceive as our own decisions is now, in fact, often decided by others and secretly imposed on us. This technology, developed to personalize advertisements and make them more effective, has now become a tool of politics, too. "Big nudging",[8] the combination of the "nudging" approach from behavioral economics with "big data" about all of our behaviors, is being used to influence public opinions and election outcomes.

However, the nudging approach is not powerful enough to steer an entire nation's population toward healthy and environmentally friendly behavior.[9] For such reasons, more effective feedback mechanisms such as personalized prices are being developed. The "citizen score",[10] as it is currently being implemented in China, is a first step. Here, everything people do earns plus or minus points: shopping behavior as well as the links clicked on the Internet. One's political opinion is evaluated, as is the behavior of one's social network.

The citizen score will determine credit conditions, the jobs one can get, and whether one is allowed to travel to certain countries or not. In other words, the citizen score is being used to create a modern "caste system". In case of resource shortages, the score would determine who is entitled to a share of the scarce resource and who is not. In effect, the "citizen score" is a mechanism that allows one to play "judgment day" based on arbitrary criteria chosen by "big government" or "big business".
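To illustrate the mechanics (not the actual Chinese implementation, whose rules are not public), a citizen score of this kind can be sketched as a simple point-accounting scheme. All actions, point values, and weights here are hypothetical:

```python
# Hypothetical sketch of a "citizen score": each recorded action adds or
# subtracts points, and a fraction of one's friends' scores spills over.
ACTION_POINTS = {  # illustrative values, not real policy
    "paid_bill_on_time": +5,
    "clicked_blocked_link": -10,
    "shared_official_news": +3,
}

def citizen_score(actions, friend_scores=(), friend_weight=0.1):
    base = sum(ACTION_POINTS.get(a, 0) for a in actions)
    # The behavior of one's social network is evaluated as well.
    social = friend_weight * sum(friend_scores)
    return base + social

score = citizen_score(
    ["paid_bill_on_time", "clicked_blocked_link"],
    friend_scores=[40, -20],
)
print(score)  # 5 - 10 + 0.1 * (40 - 20) = -3.0
```

Note how the social-network term makes everyone responsible for everyone else's behavior, which is precisely what turns such a score into an instrument of conformity.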

Most likely, big nudging and citizen score technologies have not only been deployed in Singapore and China. According to "nudging pope" Richard Thaler, no less than 90 countries have established "nudging units" in recent years. So far, little is publicly known about these units. It must be assumed, however, that they involve powerful IT infrastructures, which are fed with personal data collected by mass surveillance and profiling activities of private companies. Such infrastructures are used to run autocratic countries such as Saudi Arabia.

The likely goal of big nudging and citizen scores is to control a society similar to the Singaporean or Chinese model. The fundamental idea is that of a data-driven cybernetic society, which is controlled by a "benevolent dictator". Such an approach is certainly not compatible with democratic principles and constitutional rights. Should the Singaporean or Chinese model be applied in the aforementioned 90 states, democracies worldwide might be in great danger.[11]

The problem is that a digital power grab is easily possible and irreversible. For example, whoever has access to a big nudging infrastructure may be able to determine the result of an election.[12] Furthermore, terror attacks or other events that traumatize the public may be used to restrict democratic principles. This happened in France, where mass surveillance is already being used as an instrument to keep the country's own citizens in check and to suppress the opposition, as criticized by a high-level UN committee.[13] The example of Poland also shows how easy it is to demolish democratic institutions such as the constitutional court and the freedom of the press. We can see similar developments in Hungary, where the constitution is about to be changed,[14] and in Turkey, where both the opposition and the Kurdish minority are already being suppressed.

Given the incredible potential to inflict harm and violate basic human rights created by the confluence of technologies as discussed above, we urgently need initiatives to implement the following measures as quickly as possible:

  • The above instruments should be democratically controlled by parliament and not be an exclusive tool of the chancellor or president, the government, the military, or the secret services.
  • It also makes sense to give opposition parties access to such information systems in order to ensure a reasonable balance of power. (Remember that complex systems such as our society require pluralistic perspectives to be well understood and controlled.)
  • The use of these tools should be based on democratic mandate and scientific principles. They should be operated by interdisciplinary teams of leading scientists (including psychologists, sociologists, economists, computer scientists, and complexity scientists). These groups need to be open to international exchange and report about their activities at public international conferences.
  • Ethical oversight should also be ensured.
  • Personal data should be anonymized and breaches of privacy punished.
  • Transparency about ongoing activities is important: it must be recorded who uses the system in what way, and these uses need to be regularly reported to the public in comprehensive documents.
  • Opt-out (at least from scoring and big nudging) should be offered to ensure informational self-determination. (Note that this will also promote trustworthy uses of these methods.)
  • If social experiments have caused undesirable side effects, victims should be properly compensated.

Secret services would probably want to have separate access to these information systems, but some principles should nevertheless apply:

  • The use of these tools should be recorded. Large-scale nudging should be forbidden. Individual-level nudging in a limited number of cases may be acceptable.
  • Mass surveillance should also be forbidden. Deanonymization should be limited to a small number of people and be democratically controlled.

Private companies would have to follow the new European General Data Protection Regulation, and the government would have to enforce compliance not only of big IT companies, but also of the often relatively unknown companies trading with our personal data.

Democracy certainly deserves a "digital upgrade",[15] but it would be a disaster for the future of our planet if democracy became extinct, i.e. if we no longer had competition between different political systems. It has been found that, in the long term, only democracies can live peacefully with each other, thanks to the effectiveness of their principle of balancing different interests (through subsidiarity, federal organization, separation of powers, and citizen participation). It should be remembered that these institutional designs, as well as human rights and our justice system, are lessons learned over hundreds of years, including two World Wars.

Why am I certain that a democratic, data-driven approach will be superior? Because we have had a similar historical case: the competition between centralized communist regimes, controlled from the top down, and federally organized capitalist systems, controlled to a greater extent from the bottom up. Capitalism won because innovation mainly happens from the bottom up. (Most of the richest people on Earth have made their money within just a few decades with entirely new business models. Some of these businesses were started in garages by students who never finished their degrees.)

In fact, comprehensive analyses of empirical data show that the transition from autocracy to democracy can yield a boost in economic growth. A transition in the opposite direction, however, leads to a loss of sociopolitical capital in the medium term and a loss of economic growth in the long term.[16] Thus, the price of losing democratic principles is high.

I am also involved in a scientific project analyzing virtual (gaming) worlds. There, we find that worlds using automatic penalty mechanisms similar to the citizen score are not only less attractive, but also less innovative.

I certainly don't want to criticize Singapore or China here. It is possible that their governance models are the best solutions for their societies in their current historical situation. However, I seriously doubt that this governance approach is suited as a model for the rest of the world, and it should not be copied by democratic states. We need to develop a different model for them (see the best-case scenario discussed below).

Note that Singapore's success does not rest on its data-driven approach alone. It has also been a tax haven, and it imports innovations from the US, Germany, Switzerland, etc. to a much larger extent than other countries. Without these imports, it would be much weaker in terms of innovation. I am certainly not questioning the import of innovations from elsewhere, but I want to point out that it takes more liberal settings in other countries to produce these innovations in the first place. Singapore knows this,[17] and that's why the country is now trying to grow spaces that allow for some degree of "creative disorder".

China, too, has often been proposed as a political model to copy. However, despite China's impressive development, the average living standard is still lower than in many democratic countries. Moreover, the country currently faces acute environmental problems and huge market turbulence. So it increasingly feels the limitations of centralized control, and it knows that it needs to become more pluralistic and democratic to enable further development.

Finally, it is noteworthy that none of the IT superpowers such as the USA, China, or Singapore has a city in the top-ten list of the world's most livable cities. How, then, can we expect these governance models to lead to societies with the highest quality of life? If companies like Google could create paradise on Earth, why is San Francisco not the most attractive city on the planet? Instead, the most attractive cities are all located in countries that make sure to balance the interests of all stakeholders, including civil society.

In conclusion, the self-determination of people is currently at stake, which is a big concern. Big nudging, citizen scores, and implants could lead to digitally enabled slavery. However, this endangers not only the freedom of people; it also endangers the sovereignty of companies and entire countries. This is not just due to mass surveillance and espionage. AI systems can be used to spot the weaknesses of IT systems and people by sending certain stimuli and recording the responses. In this way, it is not only possible to learn how to manipulate people (as discussed above); it is also possible to learn how to control IT systems and critical infrastructures. In fact, even autonomous AI systems can be externally steered: since they respond to their information inputs, these can be used to manipulate their outputs. As a consequence, whoever has the most powerful AI system might be able to control all other AI systems and, in this way, all the companies, institutions, and people manipulated by them. The explosive evolution of technologies such as quantum computing, memristors, and light-based LiFi communication implies a race for global dominance.

In other words, technologies intensify the race to control the world and its resources. Today, 62 people are said to control as much capital as the poorer half of the world's population.[18] The ranking is led by Bill Gates (Microsoft, USA), Amancio Ortega (fashion, Spain), Warren Buffett (finance, USA), Jeff Bezos (Amazon, USA), Carlos Slim Helú (telecommunications, Mexico), Larry Ellison (Oracle, USA), Mark Zuckerberg (Facebook, USA), Charles and David Koch (oil and various products, USA), Liliane Bettencourt (L'Oréal, France), Michael Bloomberg (financial data, USA), Larry Page (Google, USA), and Sergey Brin (Google, USA). We see that business with data, software, and information and communication technologies is outpacing most classical business models, and I expect that we may see a further rapid concentration process, until the world is controlled by very few people.

In fact, while I am generally not opposed to free trade agreements, it is to be expected that the impending TTIP and TISA agreements will accelerate this concentration process. In the end, most money, power, and resources would end up in the hands of very few people (most likely not in Europe). These people could decide the fate of the planet like dictators. Would they pay a basic income or support other solutions that would allow the many unemployed to survive, whose jobs will have been taken by robots and artificial intelligence? Or would we face a global war, until only, say, a billion highly qualified people remain, needed to run a data-driven brave new world? Several IT companies have started to build their own rockets; their ambition to control the universe is hard to ignore.

However, I don't think that any kind of global dominance would be good for our planet or for humanity, and I am not alone: more than 20,000 people have recently signed a petition demanding that AI not be used as a weapon against humans.[19] I believe we should instead engage in a cooperative AI paradigm and distance ourselves from "big nudging", "citizen scores", and other approaches that may be used to control millions, perhaps billions of people in a centralized, top-down way, which could easily end in the most totalitarian regime ever.

A best-case scenario

Fortunately, there are positive perspectives, too. We are just about to step into a new era of history — the digital society and economy 4.0. If we want this transformation to succeed, it is important that we create opportunities for everyone: business, politics, science, and citizens alike. With new information and communication technologies, this can now be accomplished more easily than ever. The good news is that the digital economy is not a "zero-sum game". It allows us to overcome the exclusive competition we have had in the material world and the old economy. Now, competition becomes compatible with cooperation. Therefore, "co-opetition" will be the new paradigm, and if we manage to create a suitable legal framework, everyone could have a prosperous life.

The benefits of open information exchange are becoming increasingly evident. More and more people understand that sharing information often increases the value of information, inventions, and companies. If properly organized, the digital economy provides almost unlimited possibilities, because intangible goods can be reproduced as often as we like and used in zillions of different ways. For example, more and more money will be earned in virtual worlds. This relates not just to computer games; Bitcoin has even shown that bits can be transformed into gold. Almost nobody believed that this was possible.

In fact, if we want to master the challenges humanity is faced with, our economy and society will (have to) be organized in entirely new ways. Without any doubt, in the three decades to come, the world will see disruptive times, characterized by problems such as the digital revolution, financial and economic crises, climate change (with extreme weather and loss of biodiversity), the energy transformation, demographic challenges (such as ageing and migration), and unstable peace.

I do not think the use of Big Data and Artificial Intelligence will be the one and only solution to the above problems (see the Appendix). Due to the complexity of the world, Big-Data-driven "crystal balls" to predict the future will often fail; predictive analytics and control may even be counterproductive. They may keep us from exploring innovative paths, because AI systems trained on data of the past tend to repeat solutions of the past (in extreme cases, this may even mean war).

Instead we need a resilient design and operation of our society. This requires diverse system components, modular system designs, and decentralized control paradigms, i.e. bottom-up participation. Such an approach would allow for a flexible adaptation to unexpected events. We also need to increase innovation rates dramatically. Therefore, I expect the new organizational principles of our future society to be collective intelligence and co-evolution in a highly diverse networked economy — an emerging participatory society, which may be viewed as an "innovation ecology".

If we want to boost innovation, the application of big nudging and citizen scores is quite counterproductive. It would promote opportunism and conformism, rather than increasing people’s readiness to take risks and to question existing solutions — something, which is absolutely necessary now.

We further need a fundamentally new approach to innovation that puts more emphasis on open innovation than today, in order to offer all the products and services that are currently not provided by large companies. Citizen science, so-called FabLabs (public centers for communities of digital hobbyists), and initiatives to mobilize civil society are becoming increasingly important. The key word is co-creation, which means that citizens can augment information, knowledge, services, and products in a largely open information and innovation ecosystem. Obviously, this does not preclude commercialization. On the contrary, it would create opportunities for everyone to earn money with data. Citizens and customers would become partners. The participatory society of the future will not only build on large global corporations. Businesses of all types and sizes, as well as self-employment, will play an even bigger role than today. This is a good thing, because monopolies are known to be comparatively uninnovative, and they rarely care about products and services that will not generate a significant return of, say, 20 percent. Nonetheless, big business could also benefit: a rich information ecosystem is like a rain forest, in which many trees grow much bigger than the few trees in a desert.

In this connection, the OpenAI initiative, which was recently started with a donation of 1 billion dollars, is quite remarkable. Initiator Elon Musk formulated the goals as follows: "AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible." In addition, however, we need to engage in a responsible innovation paradigm, oriented toward creating value-sensitive designs that fit the respective context and culture. In particular, we must design and teach AI systems to act morally and socially. This will change the paradigm of "human-machine interaction" to the paradigm of "human-machine symbiosis". In John Brockman's book "What to think about machines that think" I conclude:

“[In the long run,] intelligent machines would probably learn that it is good to network and cooperate, to decide in other-regarding ways, and to pay attention to systemic outcomes. They would soon learn that diversity is important for innovation, systemic resilience, and collective intelligence. Humans would become nodes in a global network of intelligences and a huge ecosystem of ideas.”

I also believe we need to generate knowledge in real time (as much as this can be done) and share our reflections, judgments and insights more adequately, faster, and worldwide. In the digital age, we must reinvent innovation, from research to publication to teaching, and this requires a new framework that I like to call "Plurality University".

We further need to think more about ways to foster the spirit of experimentation. Too many inventions are merely modest improvements of existing ideas, so-called linear innovation, which extend the life cycle of "old" products. Instead, we need to encourage radically new ideas, sometimes referred to as "disruptive innovations".

The question is how we can ensure that such innovations lead to sustainable products that do not harm our society and environment (given that the hope that our planet would recover from all stresses and strains by itself has not materialized). For this, we need to measure and price externalities, i.e. the external costs or benefits associated with products, services, and interactions. Interestingly, Big Data and the Internet of Things (IoT) make it increasingly possible to do this.

Note that the measurement and pricing of externalities would require less regulation, so it could help to sweep today's over-regulation out of the way. If we were to trade externalities like financial derivatives, this would create entirely new financial markets and unleash enormous economic potential. A multi-dimensional financial system would also allow entirely new applications such as self-organizing socio-economic systems, which require various incentive mechanisms. In many cases, the application of decentralization approaches and self-organization principles could increase resource efficiency by 30 to 40 percent.
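How the pricing of externalities might work can be sketched in a few lines: the measured external effects of a product are multiplied by externality prices and added to its market price. All figures below are illustrative assumptions, not real market data:

```python
# Sketch of full-cost pricing: the market price plus priced externalities.
# All prices and measurements are illustrative assumptions.
def full_cost(market_price, measured, externality_prices):
    """Negative quantities or prices would represent external benefits."""
    return market_price + sum(
        qty * externality_prices[name] for name, qty in measured.items()
    )

externality_prices = {"co2_kg": 0.05, "noise_db_h": 0.01}  # hypothetical
price = full_cost(
    market_price=10.00,
    measured={"co2_kg": 12.0, "noise_db_h": 5.0},  # e.g. from IoT sensors
    externality_prices=externality_prices,
)
print(round(price, 2))  # 10.00 + 12 * 0.05 + 5 * 0.01 = 10.65
```

Once such externality prices exist, they could be discovered by markets rather than set by regulators, which is the sense in which this mechanism substitutes for regulation.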

Therefore, it makes a lot of sense to empower citizens by means of information and communication technologies in order to allow them to make better decisions and contribute more to business and society, and to their digital transformation. If set up well, enabling users, customers, and citizens will lead to better services, better products, better businesses, better neighborhoods, smarter cities, and smarter societies.

For example, digital assistants can help people to behave in a healthier and more environmentally friendly way. A GPS-based route guidance system may serve to illustrate this: the user specifies the goal, and the digital assistant offers various alternatives to choose from, pointing out the advantages and disadvantages of each. After that, the digital assistant supports the user as well as it can in reaching the goal and in making better decisions. To encourage people to do more sports and eat healthier food, there is no need for the state or a health insurer to record everyone's personal information. One can also think of a social media platform that allows people to form their own "health circles". Friendly competition between friends could stimulate healthier behavior. To provide incentives without violating privacy, the state or a health insurer might reward health circles rather than individuals, but even this is perhaps not necessary.
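Such a digital assistant could, for instance, rank the route alternatives by the user's own priorities instead of deciding for the user. The following sketch is purely illustrative; all routes, attributes, and weights are hypothetical:

```python
# A route assistant that ranks alternatives by the user's own priorities
# (all routes, attributes, and weights are illustrative).
routes = [
    {"name": "highway",   "minutes": 25, "co2_kg": 3.0, "exercise_min": 0},
    {"name": "bike path", "minutes": 45, "co2_kg": 0.0, "exercise_min": 45},
    {"name": "bus",       "minutes": 35, "co2_kg": 0.8, "exercise_min": 10},
]

def rank(routes, weights):
    # Lower cost is better; crucially, the user sets the weights, not the platform.
    def cost(r):
        return (weights["time"] * r["minutes"]
                + weights["climate"] * r["co2_kg"]
                - weights["health"] * r["exercise_min"])
    return sorted(routes, key=cost)

# A user who values health and climate more than speed:
best = rank(routes, {"time": 1.0, "climate": 10.0, "health": 1.5})[0]
print(best["name"])
```

The difference to big nudging is who holds the objective function: here, the user does, and the system merely makes the trade-offs transparent.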

I believe that modern information technology can also help us to reduce conflict in the world, namely by mitigating the competition for scarce resources. This can be achieved by the combination of several measures. First, resources need to be used more efficiently, as discussed before. Second, recycling techniques could be considerably advanced. Third, the principles of the sharing economy could be applied to an increasing number of areas of social and economic life, including how urban space is managed and used. This would enable a higher standard of living for more people while decreasing the consumption of resources. In order to reduce war and terrorism, we certainly need to pay more attention to the living conditions in the rest of the world.

Furthermore, we must learn that, in a multicultural society, punishment mechanisms often do not cause social order, but rather escalation of conflict. This has been observed not only in the Middle East, but also in Ferguson, and many other places. Therefore, we need new mechanisms to promote coordination and cooperation in a multi-cultural world. Suitable reputation mechanisms are promising in this regard, but also qualification, competition, communication and matching mechanisms.

Last but not least, engaging in a "Cultural Genome Project" could achieve a better understanding of the success principles, on which different cultures are built. This would allow us to combine them in innovative ways and enable us to generate new social and economic value. The greatest potential of this approach lies directly on today's cultural fault lines. Some of these cultural success mechanisms will, for example, be built into the Nervousnet platform,[20] so that its "data for all" approach will lead to responsible use.

Nervousnet (see nervousnet.info) is an open and participatory, citizen-run Internet-of-Things platform that will support (1) real-time measurements of the world around us, (2) its scientific understanding, (3) awareness of the implications of various decision alternatives, (4) real-time feedback to support self-organization, and (5) collective intelligence. The project takes informational self-determination seriously. Its data storage is decentralized, and various procedures are used to anonymize, encrypt, and "forget" data. Users can decide for themselves which kinds of data they want to produce for themselves or share with others.

In addition, imagine that all the data you generate is sent to a personal data store, where it can be sorted and managed by category. Given appropriate technical solutions and legal regulations, you would then be able to decide what kind of data to share with whom, and for what purpose. Thus, more trusted companies would gain access to more data. This would stimulate competition for trust, and the data-driven society would be built on trust again.
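The personal data store described above can be sketched as a simple access-control structure: data is grouped by category, and the owner grants access per recipient and per purpose. All names in this sketch (the class, the categories, the recipients) are invented for illustration.

```python
class PersonalDataStore:
    """Toy personal data store: the owner controls access by
    (recipient, category, purpose) - nothing is shared by default."""

    def __init__(self):
        self.data = {}        # category -> list of records
        self.grants = set()   # granted (recipient, category, purpose) triples

    def add(self, category, record):
        self.data.setdefault(category, []).append(record)

    def grant(self, recipient, category, purpose):
        self.grants.add((recipient, category, purpose))

    def request(self, recipient, category, purpose):
        """Return data only if the owner granted this exact access."""
        if (recipient, category, purpose) in self.grants:
            return list(self.data.get(category, []))
        raise PermissionError(
            f"{recipient} may not read {category} data for {purpose}")


store = PersonalDataStore()
store.add("health", {"steps": 8500})
store.grant("trusted_insurer", "health", "premium_discount")

print(store.request("trusted_insurer", "health", "premium_discount"))
# Any other recipient or purpose raises PermissionError instead.
```

Because every grant names a purpose, a company trusted for one use (say, a premium discount) gains nothing for another (say, marketing) - which is exactly the "competition for trust" the text describes.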

It is high time that we start to work on this best-case scenario, for two reasons. First, it would force us to be much more innovative than we are today. Second, implementing these proposals would bring great benefits to everyone. The USA seems to have already started to invest in a new strategy, betting on reindustrialization on the one hand and on citizen science and combinatorial innovation on the other. Even Google has embarked on a new strategy with the founding of Alphabet, which aims to make the company less dependent on personalized advertising. And Apple has recognized the value of privacy as a competitive advantage.

Finally, people increasingly understand that the digital economy is not a zero-sum game. In the area of the Internet of Things, Google has engaged in open innovation, Tesla Motors has opened up many of its patents, and many billionaires have recently promised to donate large sums of money to good causes. So, we see many signs of change. The only question is when Europe will finally make use of the fantastic opportunities offered by the digital revolution. We are entering a digital age that increasingly frees itself of material limitations. This is absolutely fascinating!

Further Reading
D. Helbing, The Automation of Society Is Next: How to Survive the Digital Revolution (CreateSpace, 2015).
D. Helbing, Responding to complexity in socio-economic systems: How to build a smart and resilient society?, see http://papers.ssrn.com/Sol3/papers.cfm?abstract_id=2583391
D. Helbing et al., Eine Strategie für das digitale Zeitalter, Spektrum der Wissenschaft 1/2015, http://www.spektrum.de/news/eine-strategie-fuer-das-digitale-zeitalter/1376083
D. Helbing, Societal, Economic, Ethical and Legal Challenges of the Digital Revolution: From Big Data to Deep Learning, Artificial Intelligence, and Manipulative Technologies, Jusletter IT (2015), see http://papers.ssrn.com/soL3/papers.cfm?abstract_id=2594352
J. van den Hoven et al. (eds.) Responsible Innovation 1: Innovative Solutions for Global Issues (Springer, 2014)
J. van den Hoven et al. (eds.) Handbook of Ethics, Values and Technological Design (Springer, 2015)
S. Spiekermann, Ethical IT Innovation: A Value-Based System Design Approach (Auerbach, 2015)
More references at 
http://www.spektrum.de/news/wie-algorithmen-und-big-data-unsere-zukunft-bestimmen/1375933

Appendix: Some common pitfalls of data-driven technologies
In the past couple of years, the concept of Big-Data-driven and Artificial-Intelligence-based Smart Nations has spread around the globe. Without any doubt, these technologies offer interesting potential to improve political decision-making and the state of the world. However, a number of issues also need to be considered:[21]

1) Big Data Analytics

In classification problems, errors of the first and second kind (false positives and false negatives) will occur; this implies unfairness if decisions cannot be challenged and corrected. Current algorithms to identify terrorists are actually quite bad: they produce overly long lists of suspects, such that "one no longer sees the trees for the forest."
Using more data is not necessarily better: it may lead to over-fitting. Large datasets always contain some patterns and correlations that arise by coincidence. In many cases, these patterns are meaningless, or they do not imply causality. This can lead to wrong conclusions if statistical significance and causality are not established, which is often neglected today.
Some data-driven findings may lead to decisions that discriminate against people and thereby violate the constitution or the law. Suppose we let people pay different health insurance rates depending on what they eat. Then we would almost certainly end up with different rates for women and men, and for Christians, Muslims and Jews. Such implicit discrimination must be avoided, but common Big Data methods do not guard against it.
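The first point above, on errors of the first and second kind producing over-long suspect lists, can be made concrete with a short base-rate calculation. All numbers below (population size, number of actual targets, error rates) are illustrative assumptions, not figures from the text.

```python
# Base-rate illustration: even an accurate classifier yields a suspect
# list dominated by false positives when the target group is rare.

population = 1_000_000       # people screened (assumed)
actual_positives = 100       # true targets among them (assumed)
sensitivity = 0.99           # P(flagged | target): error of the 2nd kind is 1%
false_positive_rate = 0.01   # P(flagged | not a target): error of the 1st kind

true_alarms = actual_positives * sensitivity
false_alarms = (population - actual_positives) * false_positive_rate
flagged = true_alarms + false_alarms
precision = true_alarms / flagged

print(f"people flagged: {flagged:.0f}")            # 10098 suspects
print(f"share actually targets: {precision:.1%}")  # about 1.0%
```

With these assumed numbers, a classifier that is right 99% of the time in both directions still flags roughly ten thousand people, of whom about 99% are innocent - the "forest" of false alarms in which the relevant cases disappear.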
2) Artificial Intelligence (AI)
Such systems can handle huge amounts of information, but:

Errors may still occur due to irrelevant, inconsistent or incomplete information, ambiguity, context-dependence, etc.
The goal function may be specified improperly, and modifying it will often produce completely different results as a consequence of "parameter sensitivity"; this makes the results subjective, i.e. dependent on the person who controls the AI system.
If AI systems are not programmed as tools, but are able to learn and evolve, they may start to make unpredictable decisions and behave maliciously.
If people are involved in defining the training data, they may intentionally or unintentionally introduce biases that are not accounted for, as we currently lack suitable institutional checks and balances regarding such training. If people are not directly involved in selecting the training data, machine intelligence may run into problems similar to those we know from children who have not received proper moral education or coaching by adults.
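The "parameter sensitivity" of goal functions mentioned above can be illustrated with a toy optimization. The options, criteria and weights here are invented for illustration; real AI goal functions are far more complex, but the effect is the same in kind: shifting the weights flips which outcome the system recommends.

```python
# Toy goal function: a weighted sum over two criteria. The recommended
# option depends entirely on the weights chosen by whoever controls
# the system. All option names and scores are invented.

options = {
    "plan_A": {"efficiency": 0.9, "fairness": 0.2},
    "plan_B": {"efficiency": 0.4, "fairness": 0.8},
}


def best_option(w_efficiency, w_fairness):
    """Return the option maximizing the weighted goal function."""
    def score(o):
        return w_efficiency * o["efficiency"] + w_fairness * o["fairness"]
    return max(options, key=lambda name: score(options[name]))


print(best_option(w_efficiency=0.8, w_fairness=0.2))  # plan_A
print(best_option(w_efficiency=0.2, w_fairness=0.8))  # plan_B
```

Nothing about the data changed between the two calls; only the weights did. This is why the text calls such results "subjective": the apparently objective output encodes the priorities of whoever set the goal function.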
3) Big Nudging 
"Big nudging" uses Big Data of a population, AI, and methods from behavioral economics (such as "nudging") to manipulate people in their decision-making and behaviors.

These systems can be used to induce people to make costly mistakes (e.g. to spend their money on things they do not need, or to undermine the security of IT systems).
They can be used to manipulate public opinion and democratic elections by means of an almost unnoticeable kind of propaganda and censorship, employing principles from attention economics.
They amplify the power of those who are allowed to use the system to an extent that is hardly controllable. For example, they can be used for a digital power grab, i.e. to establish and/or stabilize autocratic regimes, which can then exploit the data asymmetry to weaken the rule of law or democratic arrangements.
Altogether, the problem with the three approaches above is that their validity is overrated. They give very few people an extreme amount of power, yet they are very hard to control. In principle, they can be misused as a "weapon" against one's own population. Using "big methods" implies the likelihood of making big mistakes; it is just a matter of time until they happen.
