Google Mastered a Game That Vexed Scientists – and Their Machines – for Decades

Artificial intelligence took a historic step forward last week when a Google team announced that it taught a machine to master the ancient Chinese game Go, a feat researchers have chased for decades.

While computers learned to outclass humans at checkers and chess in the ’90s, Go – a 2,500-year-old game – was still vexing computer scientists. Because the game offers players an astronomical number of possible moves – and because positions are difficult to score in the middle of a match – it has proved to be the most difficult of the classic games to teach computers to play.

But that all changed last week as Google’s researchers brought a fresh approach and wealth of computing power to findings published in the scientific journal Nature.

“It’s a real milestone and surprise for me how quickly things have happened,” said Martin Muller, a professor at the University of Alberta and a longtime researcher of Go. A decade ago, his work helped computers draw closer to the caliber of human players, and Google built on that work in its approach. The company’s researchers “have these new ideas, and they showed they’re very effective.”

The Google team hopes that in the long term, the technology behind the breakthrough can be applied to society’s most challenging problems, including making medical diagnoses and modeling climates.

Such efforts are years away, the researchers admit. In the near term, they’re looking to integrate the work into smartphone assistants – think of iPhone’s Siri or Google’s voice assistant.

In Go, players place black and white stones on a grid, spreading across open areas and surrounding their opponent’s pieces. If you surround a group of your foe’s stones, it is removed from the board. The player controlling the most territory wins.

Google’s system swept the European Go champion, Fan Hui, 5-0, in a match refereed by the British Go Association. It is the first time a computer has beaten a professional player in a game on a full-size board, without a handicap. (The game is sometimes played on a smaller board with fewer intersections, which is easier for a machine to master.) Google’s technology relied on the strength of more than 1,200 cloud computers in warehouses around the globe.

Google’s system was trained on 30 million moves players made in actual games of Go. Then the system began to play games against itself, using trial and error, to recognize which moves work in a given situation and which don’t. While a human may master Go with thousands of games of experience, the computer system relied on millions of matches.
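In miniature, that trial-and-error loop might look like the following Python sketch – an invented toy that uses the much simpler game of Nim rather than Go, and not AlphaGo’s actual reinforcement-learning algorithm. Two copies of the same policy play each other; after each game, moves made by the winner are reinforced and moves made by the loser are discouraged:

```python
# Toy self-play sketch (not AlphaGo's method): Nim with 5 stones.
# Players alternately take 1-3 stones; whoever takes the last stone wins.
# After each game, every move the winner made is rewarded and every move
# the loser made is penalised -- "which moves work", in miniature.
import random

random.seed(0)
value = {}  # (stones_left, stones_taken) -> running score

def play_one_game(explore=0.3):
    stones, history, player = 5, {0: [], 1: []}, 0
    while stones > 0:
        moves = [t for t in (1, 2, 3) if t <= stones]
        if random.random() < explore:          # trial and error: explore
            take = random.choice(moves)
        else:                                  # otherwise play the best known move
            take = max(moves, key=lambda t: value.get((stones, t), 0.0))
        history[player].append((stones, take))
        stones -= take
        if stones == 0:
            winner = player                    # took the last stone
        player = 1 - player
    for p in (0, 1):
        reward = 1.0 if p == winner else -1.0
        for move in history[p]:
            value[move] = value.get(move, 0.0) + reward

for _ in range(5000):
    play_one_game()

# With 5 stones, the learned policy should take 1, leaving the opponent
# the losing position of 4 stones.
best = max((1, 2, 3), key=lambda t: value[(5, t)])
print(best)
```

AlphaGo’s self-play operated at a vastly larger scale, with deep neural networks in place of this lookup table, but the feedback loop – play, observe the outcome, adjust toward winning moves – is the same idea.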

Last week’s feat has drawn comparisons to when IBM’s Deep Blue computer beat chess champion Garry Kasparov in 1997. It also brings to mind IBM’s Watson system, which has trumped human champions on “Jeopardy!”

Like Deep Blue, Google’s system relies on its ability to process millions of scenarios. But Google’s computers do more than just memorize every possible outcome. They learn through trial and error, just like humans do. That makes the innovation more applicable to a wide array of tasks. Google showed the power of this approach last year when one of its systems taught itself to be better at Atari games than humans.

“My dream is to use these types of general learning systems to help with science,” said Demis Hassabis, who leads DeepMind, the London-based Google team behind the findings. “You can think of AI scientists or AI-assisted science working hand in hand with human expert scientists to help them in a complementary way to make faster breakthroughs in scientific endeavors.”

While Deep Blue’s defeat of Kasparov drew plenty of headlines, the science behind it has not had broad implications for humanity in the 19 years since.

“This feels like it could be different, because there’s more generality in the methods,” Muller said. “There’s potential to have applicability to many other things.” He also cautioned that just as Go was significantly tougher than mastering chess, making predictions in real-world situations will bring another challenge for Google’s researchers.

In March, the Google system – called AlphaGo – will take on Lee Sedol, arguably the top Go player in the world, in a five-game match in Seoul. That could be its “Kasparov moment.”

“I heard Google DeepMind’s artificial intelligence is surprisingly strong and getting stronger,” Lee said in a statement. “But I am confident that I can win at least this time.”

© 2016 The Washington Post
