
Ten years ago, an AI defeated a man. It wasn’t the end, but a beginning.
By Pier Luigi Pisa
Source: La Repubblica
In 2016, a machine defeated, against all odds, the greatest talent in Go, a millennia-old game that requires profoundly human intuition. That challenge, played in Seoul, changed the course of artificial intelligence. And perhaps even that of human history. We explain why through exclusive interviews with two scientists from Google DeepMind, the team behind that success.
When IBM’s Deep Blue supercomputer defeated chess champion Garry Kasparov in 1997, the New York Times produced a legendary headline: Machines 1, Men 0. Ten years ago, in a hotel room in Seoul, that score worsened. Today, the machines lead 2-0. But there’s a world of difference between the first and second points. And it took the world a long time to fully understand it.
Deep Blue won with brute force: millions of positions calculated per second. The AI system that challenged South Korea’s Lee Sedol in 2016—an eighteen-time world champion, the strongest Go player of his generation—prevailed with something different.
AlphaGo, as the AI was called, had been created by DeepMind, an AI research lab Google had acquired for over $400 million. What set it apart was a way of acting that resembled human intuition.
The defeat that opened the future of AI
Despite his triumphs, Lee Sedol—like Kasparov—will likely forever be remembered for a defeat.
But for humanity, which watched his confrontation with a machine, followed live by two hundred million viewers, that defeat was actually a victory. For science.
By beating the South Korean Go champion, AlphaGo opened a door to the future that the fathers of artificial intelligence had been trying in vain to open for sixty years.
The defeat of the century, seen through today’s eyes, was not the end. It was the beginning of the race towards artificial intelligence that is changing our lives.
Why Go Was an Unsolvable Problem for AI
To understand what happened in March 2016, we must first understand Go. It’s over four thousand years old, and despite its relatively simple rules, it has always been considered one of the most complex games in history. Two players take turns placing black and white stones on a wooden grid—called a goban—with nineteen lines on each side. The goal is to conquer territory by surrounding the opponent’s stones. A child can learn it in ten minutes. But the number of possible configurations on the goban exceeds the number of atoms in the observable universe: ten to the one hundred and seventieth power.
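The article's figure can be sanity-checked with a quick back-of-the-envelope bound: each of the 19×19 points of the goban can be empty, black, or white, giving at most 3^361 configurations, a number with roughly 172 digits (the count of strictly *legal* positions is somewhat smaller, around 10^170, as the article states). A minimal check:

```python
import math

# Crude upper bound on goban configurations: each of the 19*19 = 361
# intersections is empty, black, or white. Not all of these are legal
# positions, so this slightly overshoots the ~10^170 figure.
points = 19 * 19
digits = points * math.log10(3)  # number of decimal digits of 3^361
print(round(digits))  # → 172
```

For comparison, the number of atoms in the observable universe is usually estimated at around 10^80, which is why exhaustive enumeration was never an option.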
“Go is often compared to chess, but in reality, it’s a more interesting game in many ways,” Thore Graepel, Distinguished Research Scientist at Google DeepMind, who was one of the architects of that match, told us.
“It has a very long tradition; it has been played for thousands of years, it was invented in China, and it has been played throughout the millennia. Even today, there are many people, professional Go players, who dedicate their lives to playing this game at the highest level possible,” Graepel added. “When Deep Blue defeated Garry Kasparov in 1997, chess was in a sense solved [in the sense that the strongest human player had been surpassed by a machine, ed.], but Go remained completely unexplored territory. One might wonder why the same effective techniques weren’t applied to chess, but the reality is that Go possesses such a structural richness that that approach is completely inadequate. At every stage of the game, the move options are countless, and a match can last 200 to 300 turns: a complexity that was beyond any computational capacity imaginable at the time.”
Deep Learning and Neural Networks: AlphaGo’s Strategy
It was only with the advent of neural networks and deep learning, machine learning techniques inspired by the functioning of the human brain, that Google DeepMind found the tools necessary to tackle and solve Go as well.
DeepMind’s solution involved the use of two neural networks. One corresponded to slow thinking: it evaluated the position on the board. The other imitated fast thinking: it suggested the most promising moves, thanks to an algorithm that explored only the most fertile branches of the tree of possibilities.
AlphaGo, in short, didn’t calculate everything as Deep Blue would have attempted: it chose where to look. In this, it was more similar to a human being than a computer had ever been before.
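The idea of "choosing where to look" can be sketched in a few lines. The snippet below is a toy illustration, not AlphaGo's actual architecture: `policy` and `value` are hypothetical stand-ins for the two networks, and the tree is artificial. It only demonstrates the principle that pruning the search to the moves a policy prior deems promising collapses the number of positions examined.

```python
import math

# Hypothetical stand-ins for AlphaGo's two networks (illustrative only):
# `policy` returns a prior over candidate moves, `value` scores a position.
def policy(state, moves):
    # Pretend prior: prefer lower-numbered moves.
    weights = [1.0 / (1 + m) for m in moves]
    total = sum(weights)
    return [w / total for w in weights]

def value(state):
    # Pretend leaf evaluation in [-1, 1].
    return math.sin(state)

def search(state, depth, branching, top_k=None, counter=None):
    """Depth-limited negamax. With top_k set, only the moves ranked
    highest by the policy prior are explored: choosing where to look."""
    counter[0] += 1
    if depth == 0:
        return value(state)
    moves = list(range(branching))
    if top_k is not None:
        priors = policy(state, moves)
        moves = sorted(moves, key=lambda m: -priors[m])[:top_k]
    # Best reply, seen from the opponent's point of view.
    return max(-search(state * 7 + m + 1, depth - 1, branching,
                       top_k, counter) for m in moves)

full, guided = [0], [0]
search(1, depth=4, branching=8, counter=full)             # brute force
search(1, depth=4, branching=8, top_k=2, counter=guided)  # policy-guided
print(full[0], guided[0])  # → 4681 31
```

Even on this toy tree, the guided search visits two orders of magnitude fewer nodes than brute force; on a real goban, where each turn offers hundreds of moves, that pruning is the difference between intractable and playable.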
Fan Hui, the first sign of the revolution
The first to fall was Fan Hui, who in the mid-2010s was the European Go champion. In October 2015, five months before the Seoul challenge, DeepMind invited him to London for a secret test.
Fan Hui was a professional who had dedicated his life to the game. After five games, however, the verdict left no doubt: five to zero for the machine. AlphaGo was ready for the next challenge. Against the best. “It didn’t look like a computer,” Fan Hui later said. “It played with a beauty that frightened me. I felt naked.”
But Fan Hui was no Lee Sedol. And the team knew it. Demis Hassabis, head of Google DeepMind, called the South Korean champion a legend with a killer instinct, and warned the team: if AlphaGo had the slightest weakness, Lee Sedol would find it and destroy it. Fan Hui, who had since become a consultant on the project, was even more blunt. “Lee Sedol doesn’t play like me,” he said. “He’s a predator.”
Lee Sedol, for his part, had studied footage of AlphaGo’s matches against Fan Hui and hastily concluded that he himself was on another level. At the press conference that formalized the match, he stated categorically: “I don’t believe artificial intelligence can reach the level of human intuition in Go. I think it will end 5-0, or maybe 4-1, for me. I want to protect the beauty of the human game.”
He was right about the score. He was wrong about who would win.
….
