DeepMind Has a Bigger Plan for Its Newest Go-Playing AI

TV screens show the live broadcast of the Google DeepMind Challenge Match between Google's artificial intelligence program, AlphaGo, and South Korean professional Go player Lee Sedol, at the Yongsan Electronic store in Seoul, South Korea, March 2016. Ahn Young-joon/AP

AlphaGo was never just about board games.

Alphabet’s AI research arm, DeepMind, announced today in the journal Nature that it has built a final version of its prolific digital Go master: AlphaGo Zero. The software is a distillation of DeepMind’s previous systems: it’s more powerful, yet simpler, and it doesn’t require the computational power of a Google server to run. When pitted against the version of AlphaGo that beat world champion Lee Sedol, AlphaGo Zero won 100-0.

The most important change is how AlphaGo Zero learned to master the game. Unlike past versions, it doesn’t analyze data from Go games humans have played; it learns only by playing the game against itself. AlphaGo Zero isn’t the first algorithm to learn from self-play (Elon Musk’s nonprofit OpenAI has used similar techniques to train an AI to play a video game), but its capabilities make it one of the most powerful examples of the technique to date.

“By not using this human data, by not using human features or human expertise in any fashion, we’ve actually removed the constraints of human knowledge,” David Silver, who led the AlphaGo team, said. “It’s able to therefore create knowledge for itself.”

That’s how the system earned its name: zero human knowledge.

Critical to real-world problems

DeepMind CEO Demis Hassabis says this approach is crucial to making AlphaGo useful outside the lab, off the Go board. Because the algorithm can learn from a blank slate, the company says, it can now be applied to other real-world problems.

“For us AlphaGo wasn’t just about winning the game of Go, it was also a big step for us towards building general purpose learning algorithms,” he said.

Instead of Go moves, DeepMind claims the AlphaGo Zero algorithm will be able to learn the interactions between proteins in the human body to further scientific research, or the laws of physics to help create new building materials.

The idea of using AI to help mine the vast space of possible molecules to build a super-battery or some other futuristic device isn’t new; Hassabis has been saying it for years. But AlphaGo Zero represents the company’s first major vehicle that some researchers agree could help us get there.

How it works

Previously, AlphaGo was a combination of AI strategies. DeepMind’s first paper in Nature last year showed that the algorithm initially learned from how humans played the game, then played against itself to refine those skills. The “learning from humans” part is called supervised learning; the self-play is called reinforcement learning. DeepMind also built knowledge of the board into AlphaGo’s code, such as which pieces were its own and whether like pieces sat next to each other.
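To make that two-stage recipe concrete, here is a minimal sketch in Python, with a toy numerical “policy” standing in for AlphaGo’s deep neural networks. The synthetic “expert moves,” the reward, and every name in it are illustrative assumptions, not DeepMind’s code: the first loop imitates stand-in human moves (supervised learning), and the second refines the policy with a REINFORCE-style update from win/loss rewards (reinforcement learning).

```python
# Illustrative sketch of AlphaGo's original two training stages, not DeepMind's code.
# A tiny vector of move preferences stands in for the deep neural networks.
import numpy as np

rng = np.random.default_rng(0)
N_MOVES = 9                                  # pretend there are only 9 possible moves
policy = np.zeros(N_MOVES)                   # toy "policy": one preference score per move

def probs(p):
    e = np.exp(p - p.max())                  # softmax over preference scores
    return e / e.sum()

# Stage 1: supervised learning, imitating moves from (synthetic) human games.
human_moves = rng.integers(0, N_MOVES, size=1000)
for move in human_moves:
    grad = -probs(policy)
    grad[move] += 1.0                        # push probability toward the "expert" move
    policy += 0.1 * grad

# Stage 2: reinforcement learning, refining the policy from win/loss rewards.
for _ in range(1000):
    move = rng.choice(N_MOVES, p=probs(policy))
    reward = 1.0 if move == 3 else -1.0      # toy stand-in for winning or losing a game
    grad = -probs(policy)
    grad[move] += 1.0
    policy += 0.01 * reward * grad           # REINFORCE-style update

print("most preferred move after training:", int(np.argmax(policy)))
```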

“It left open this question: Do we really need the human expertise?” says Satinder Singh, a professor specializing in reinforcement learning at the University of Michigan, who wrote a review of the new paper in Nature. “[AlphaGo Zero] is a really pleasing, pure reinforcement learning success. Pure learning through experience.”

When building the first iterations of AlphaGo, the team explored a system like AlphaGo Zero, but at the time the technique didn’t work.

AlphaGo Zero doesn’t get the hints from humans that previous systems had, like which pieces belong to which player or how to interpret the board. It simply sees the black and white pieces, called stones, on the board. It plays against itself thousands of times, moving randomly at first, rewarded for winning and penalized for losing.
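For a rough sense of what “learning purely from self-play” looks like in code, here is a toy Python sketch that learns tic-tac-toe values only from games it plays against itself, nudging positions toward +1 when the side that produced them wins and toward -1 when it loses. The real system uses deep neural networks and Monte Carlo tree search on specialized hardware; the table, constants, and update rule below are illustrative assumptions, not DeepMind’s implementation.

```python
# Toy self-play learner for tic-tac-toe (illustrative only, not DeepMind's code).
# A value table stands in for AlphaGo Zero's neural network.
import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

values = {}                    # position -> estimated value for the player who just moved
EPSILON, LR = 0.1, 0.05        # exploration rate and step size (arbitrary choices)

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def place(board, move, player):
    return board[:move] + player + board[move + 1:]

def choose_move(board, player):
    moves = [i for i, s in enumerate(board) if s == "."]
    if random.random() < EPSILON:                       # explore with a random move
        return random.choice(moves)
    # otherwise pick the move leading to the position currently valued highest
    return max(moves, key=lambda m: values.get(place(board, m, player), 0.0))

def self_play_game():
    board, player, history = "." * 9, "X", []
    while True:
        board = place(board, choose_move(board, player), player)
        history.append((board, player))
        if winner(board) or "." not in board:
            return history, winner(board)
        player = "O" if player == "X" else "X"

for _ in range(20000):                                  # thousands of games against itself
    history, win = self_play_game()
    for position, player in history:
        reward = 0.0 if win is None else (1.0 if player == win else -1.0)
        old = values.get(position, 0.0)
        values[position] = old + LR * (reward - old)    # nudge the estimate toward the outcome

print("positions evaluated:", len(values))
```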

“If we learn the game of Go purely through supervised learning, the best you could hope to do would be as good as the human you’re imitating…” Singh said. “Through self-play you could learn something completely novel.”

AlphaGo Zero could beat the version of AlphaGo that faced Lee Sedol after training for just 36 hours, and it reached its 100-0 record after 72 hours. That previous version took “weeks” to train, according to DeepMind. The team says it doesn’t know AlphaGo Zero’s upper limit; it got so strong that further training didn’t seem worthwhile. Brute computing power isn’t what did the trick, either: AlphaGo Zero was trained on a single machine with four of Google’s specialty AI chips, called TPUs, while the previous version was trained on servers with 48 TPUs.

Beyond Go

DeepMind has big plans for this algorithm, which it claims can be applied to a wide variety of problems with minor modifications.

“Drug discovery, proteins, quantum chemistry, material design—material design, think about it, maybe there is a room-temperature superconductor out and about there,” Hassabis says, alluding to a hypothetical material that would conduct electricity without any resistance at room temperature. “I used to dream about that when I was a kid reading through my physics books. That would be the Holy Grail, a superconductor discovery.”

DeepMind said that it’s not releasing the code as it might for other projects. Hassabis says outside researchers will likely be able to replicate parts of it from the Nature paper.

Others in the field say the simplicity of the approach was surprising, which bodes well for applying the algorithm to other areas. Simple, general methods are prized in AI research because less effort is required to bring the same solution to other problems, Tim Salimans, an AI research scientist at OpenAI, told Quartz in an email.

“I think the characterization as ‘as general as it can be for today’s cutting edge’ is fair,” Salimans said. “Although it’s certainly not general enough to directly apply to these other problems it is not unreasonable to see it as a first step in potentially solving those other problems somewhere down the road.”