
Now that it can play Go, can we let AI go play?


In October 2015, "AlphaGo", an artificial intelligence (AI) program developed by Google DeepMind, beat Fan Hui, the European Go champion. The score of the match was a perfect 5-0 for "AlphaGo", achieved without any handicap. Before that match, Go, a 2,500-year-old Asian board game, had been seen as one of the last remaining realms in which humans held a fundamental edge over AI. In recent years, AI programs have repeatedly beaten master human players in a variety of games, including chess, checkers and even Jeopardy!. The win over Fan Hui is therefore seen as a new milestone in the development of artificial intelligence. Now, in a match to be held in Seoul from March 9 to 15, 2016, and broadcast live on YouTube, "AlphaGo" will "go" against Lee Sedol, the reigning world champion.

The most interesting aspect of this latest cyber-human matchup is the technique that AlphaGo will use. To achieve its 1997 match win over world chess champion Garry Kasparov, IBM's "Deep Blue" computer used its prodigious computing power to carry out "brute force" tactics: it chose moves by working through enormous numbers of combinations of future moves. Kasparov wrote afterwards that "instead of developing a computer that could think and play chess like a human, with human creativity and intuition, they (the developers; editor's remark) created one that played like a machine. One that could systematically evaluate 200 million possible chessboard moves per second and thus win with raw, number-crunching force."
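The brute-force idea Kasparov describes can be illustrated with the classic minimax algorithm, which systematically evaluates every future line of play. The sketch below is purely illustrative, applied to a toy game of Nim rather than chess, and is not Deep Blue's actual code:

```python
# Minimax search over a toy game of Nim: each turn a player takes 1-3
# stones, and whoever takes the last stone wins. "Brute force" here
# means exhaustively evaluating every possible continuation.

def minimax(stones, maximizing):
    """Return the game value: +1 if the maximizer wins, -1 if not."""
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    values = [minimax(stones - take, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(values) if maximizing else min(values)

def best_move(stones):
    """Pick the move with the best minimax value for the player to move."""
    return max((t for t in (1, 2, 3) if t <= stones),
               key=lambda t: minimax(stones - t, False))
```

With 5 stones on the table, the search finds that taking 1 stone (leaving the opponent a losing position of 4) is the only winning move. Chess has vastly more positions, but Deep Blue's approach was conceptually the same exhaustive evaluation, pruned and accelerated by special hardware.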

At a crossroads

But the game of Go is fundamentally different. The number of possible board configurations in Go is larger than the number of atoms in the observable universe. In chess, a player has on average about 35 possible moves per turn; in Go, about 250. To enable the program to find and evaluate useful moves in such a vast space of possibilities, AlphaGo's developers first trained the program on 30 million moves from games between human players. Then they had AlphaGo play against itself. This self-play, which allowed the program to keep improving, relies on "deep learning," a technique that enables computers or networks to teach themselves, i.e., to learn. The same method is used in a variety of applications: Facebook uses it to recognize faces in photographs, while Microsoft uses it in its Skype Translator. A brain simulation developed by Google used a deep-learning algorithm to teach itself to recognize pictures of cats. AneedA, the operating system used in the Dial shown at the recent Mobile World Congress 2016 by i.am+, also comes with intelligent, context-aware software.
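The self-play loop described above can be sketched in miniature. The toy learner below plays the simple stone-taking game of Nim against itself and reinforces moves that led to wins; it uses a small lookup table rather than a deep neural network, so it is a conceptual stand-in for AlphaGo's training scheme, not the actual algorithm:

```python
import random

# Toy self-play learner for Nim (take 1-3 stones; taking the last stone
# wins). The program plays both sides, then strengthens the winner's
# moves and weakens the loser's -- a tabular caricature of learning
# from self-play.

def train(episodes=5000, start_stones=7, seed=0):
    rng = random.Random(seed)
    weights = {}  # (stones, move) -> learned preference
    for _ in range(episodes):
        stones, history, player = start_stones, [], 0
        while stones > 0:
            moves = [t for t in (1, 2, 3) if t <= stones]
            # Sample in proportion to learned weights (+1 keeps exploring).
            w = [weights.get((stones, t), 0) + 1 for t in moves]
            move = rng.choices(moves, weights=w)[0]
            history.append((player, stones, move))
            stones -= move
            player ^= 1
        winner = history[-1][0]  # whoever took the last stone
        for p, s, m in history:
            delta = 1 if p == winner else -1
            weights[(s, m)] = max(0, weights.get((s, m), 0) + delta)
    return weights

def greedy_move(weights, stones):
    """After training, play the move with the highest learned preference."""
    moves = [t for t in (1, 2, 3) if t <= stones]
    return max(moves, key=lambda t: weights.get((stones, t), 0))
```

AlphaGo replaces the lookup table with deep neural networks that generalize across positions, which is what makes the approach viable in a game whose state space could never be tabulated.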

Deep learning is already present in many different contexts; it is certainly not limited to the world of Go. Should this worry us? Although technology giants have published the underlying code, many people are concerned about where AI could be taking us. Elon Musk, CEO of Tesla Motors, has joined with other major players in technology research to found the non-profit organization OpenAI, which seeks to keep AI research in the public eye. "I want to gain a deep understanding of where we are now in terms of AI and of whether we could be heading in a dangerous direction," Musk stated as a reason for founding OpenAI. Musk, along with Microsoft founder Bill Gates and the physicist Stephen Hawking, believes that artificial intelligence has the potential to become an existential threat. "We've come to a crossroads. The important thing now is to choose the right road," noted our expert Dirk Helbing in a pertinent comment.
