Computers have slowly started to encroach on activities we previously believed only the brilliantly sophisticated human brain could handle. IBM’s Deep Blue supercomputer beat grandmaster Garry Kasparov at chess in 1997, and in 2011 IBM’s Watson beat former human champions at the quiz show Jeopardy. But the ancient board game Go has long been one of the major goals of artificial intelligence research. It’s understood to be one of the most difficult games for computers to master due to the sheer number of possible moves a player can make at any given point. Until now, that is.
Researchers at Google DeepMind, the Alphabet-owned artificial intelligence research company, announced today that they had created an artificial intelligence system that had beaten a professional Go player at the game. The company’s research was published in the scientific journal Nature.
While winning at chess or checkers may seem like an intricate task, computers can essentially map out the consequences of every possible move and choose the one most likely to lead to a win. This style of processing, sometimes called “brute-force” computing—where the computer wins simply by computing as fast as it possibly can—wouldn’t work for Go.
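To get a feel for what “mapping out every possible move” means, here is a minimal sketch of exhaustive minimax search on tic-tac-toe, a hypothetical stand-in small enough to solve completely (real chess engines add far more machinery, and this approach is hopeless for Go):

```python
# Exhaustive "brute-force" game search: minimax on tic-tac-toe.
# Boards are 9-character strings of "X", "O", or "." for empty.

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Score for 'X' under perfect play: +1 win, 0 draw, -1 loss.
    Every reachable position is expanded -- no pruning, no heuristics."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    moves = [i for i, s in enumerate(board) if s == "."]
    if not moves:
        return 0  # board full, no winner: draw
    scores = []
    for i in moves:
        nxt = board[:i] + player + board[i + 1:]
        scores.append(minimax(nxt, "O" if player == "X" else "X"))
    # X maximizes the score, O minimizes it
    return max(scores) if player == "X" else min(scores)

print(minimax("." * 9, "X"))  # perfect play from an empty board: 0 (draw)
```

Tic-tac-toe’s full game tree has only a few hundred thousand nodes, so this finishes instantly; the same idea scaled to Go’s branching factor is what the “atoms in the universe” comparison below rules out.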
“There’s more configurations of the board than there are atoms in the universe,” Demis Hassabis, the CEO of Google DeepMind, said in a video released by Nature. That means that even the fastest supercomputer wouldn’t be able to work through every possible outcome for every possible move on the board in any sort of reasonable fashion. Instead, a computer system would need to work more like a human.
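The arithmetic behind Hassabis’s comparison is easy to check as a back-of-the-envelope sketch (the count of strictly *legal* Go positions is smaller than this upper bound, but still astronomically larger than the roughly 10^80 atoms in the observable universe):

```python
# Upper bound on Go board configurations: each of the 19x19
# intersections is empty, black, or white.
points = 19 * 19                 # 361 intersections
upper_bound = 3 ** points        # 3^361 possible colorings
print(len(str(upper_bound)))     # 173 digits, i.e. roughly 10^172
```

Even discounting illegal positions, the space dwarfs anything a move-by-move enumeration could cover.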
Players often choose moves, Hassabis said, because they “felt right”—which is not how a computer program typically operates. DeepMind’s solution was to build two neural networks—computer systems modeled after the human brain that can be trained on large data sets to perform certain tasks based on the knowledge they’ve accrued. One of the networks, called a “value network,” evaluates the computer’s position on the board, and the other, a “policy network,” decides where to move. Instead of evaluating every possible move, the system selects a handful of moves that it senses are likely to be good ones.
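The division of labor between the two networks can be sketched with hypothetical stand-in functions (AlphaGo’s real networks are deep convolutional nets trained on expert games and self-play, combined with Monte Carlo tree search; only the structure is illustrated here):

```python
# Illustrative sketch only: random stand-ins for trained networks.
import random

def legal_moves(board):
    """Indices of empty points (0 = empty) on a flattened board."""
    return [i for i, s in enumerate(board) if s == 0]

def policy_network(board):
    """Stand-in for the policy network: assign each legal move a score.
    A trained network would output a probability distribution over moves."""
    return {move: random.random() for move in legal_moves(board)}

def value_network(board):
    """Stand-in for the value network: estimate the chance of winning
    from this position."""
    return random.random()

def choose_move(board, top_k=3):
    # Rather than expanding every legal move, keep only the few the
    # policy network ranks highest...
    priors = policy_network(board)
    candidates = sorted(priors, key=priors.get, reverse=True)[:top_k]

    # ...then pick the candidate whose resulting position the value
    # network rates best.
    def value_after(move):
        nxt = list(board)
        nxt[move] = 1
        return value_network(nxt)

    return max(candidates, key=value_after)

print(choose_move([0] * 9))  # picks one of the 9 points of a toy board
```

The key point is the pruning: the policy network narrows the search to a few promising moves, so the expensive position evaluation only runs on a tiny fraction of the board’s possibilities.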
Last October, DeepMind invited the reigning European Go champion, Fan Hui, into its U.K. office to play its computer system, AlphaGo. Nature’s senior editor, Tanguy Chouard, was present when AlphaGo took on Fan Hui and served as a moderator for the games. AlphaGo beat Fan Hui in five straight games. DeepMind’s researchers said in their paper that AlphaGo “evaluated thousands of times fewer positions than Deep Blue did in its chess match against Kasparov.”
“It was one of the most exciting moments in my career,” Chouard said at a press briefing Jan. 25. But Chouard said the event prompted mixed feelings in him. While the technical achievement was worth celebrating, “one couldn’t help but root for the poor human being getting beaten,” he said.
In addition to beating Fan Hui, AlphaGo had a 99.8 percent win rate against other Go programs. And it will likely only get stronger with more training.
“Humans have weaknesses. They get tired when they play a very long match. They can play mistakes,” Hassabis said. “Humans have a limitation in terms of the actual number of Go games that they’re able to process in a lifetime. AlphaGo can play through millions of games every single day.”
Google is by no means the only company or research institution that has been working on solving the Go problem. Prior to today, scientists had only been able to create systems that could beat a human with a few moves’ head start. And in December, a prominent AI researcher, Rémi Coulom—who’s spent years trying to crack the game, and is even cited in DeepMind’s research paper—told Wired that he believed someone would crack the game in the next 10 years.
A little over a month later, Google has done just that. And, not to be upstaged, Facebook CEO Mark Zuckerberg posted this morning that his company’s AI researchers are also close to beating the game.
Given enough training and processing power, David Silver, a researcher at DeepMind, said in the video, it’s conceivable that AlphaGo could play the game at a level no human could ever attain. Hassabis said that games like Go are ideal stepping stones on the path toward creating something that could be considered a true artificial intelligence system. AlphaGo is slated to face off against the reigning world champion of Go, Lee Sedol, in Korea in March.
Google has now cracked what has long been held as one of the “grand challenges” in AI research, giving credence to the idea that true, general-purpose artificial intelligence may be possible.
“AlphaGo has finally reached a professional level in Go,” DeepMind said in its research paper, “providing hope that human-level performance can now be achieved in other seemingly intractable artificial intelligence domains.”