Google's AlphaGo Artificial Intelligence Beats Go Champion 5-0 In Big Step Forward For AI
The Ancient Chinese Board Game Is More Complex Than Chess
Games have long been one of the key testing grounds for the development of artificial intelligence. Their complex but controlled systems are excellent platforms for testing the limits of an AI. To draw attention to the research, the programs are often pitted against human masters of particular games, from the chess computer Deep Blue playing Garry Kasparov in the late 1990s to more recent battles between top poker pros and AIs like Polaris, Cepheus and Claudico.
The latest breakthrough in the battle of man vs. machine took place this week when Google’s AlphaGo AI beat European Go champion Fan Hui 5-0. The 2,500-year-old Chinese board game is one of the most complex in the world in terms of strategy and in terms of “state space,” or how many total positions are possible in the game. According to an article on statistics website FiveThirtyEight, tic-tac-toe has a state space of 10^3 and has been solved. Checkers, with a state space of 10^20, is another of several less complex games that have been solved.
Chess is much more complex at 10^50. These exponents offer little intuition to the average person for just how many possible positions we are talking about, so to put it in perspective: the total number of atoms in the observable universe is estimated to be around 10^80.
Go, played on the standard 19-by-19 grid, has a state space of 10^171.
Written out that is 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000.
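The scale of that number is easy to sanity-check. Each of the 361 points on a 19-by-19 board can be empty, black, or white, giving a naive upper bound of 3^361 configurations; the legal state space quoted above (about 10^171) is somewhat smaller because the rules forbid some arrangements. A minimal Python sketch of the arithmetic:

```python
# Naive upper bound on Go board configurations: each of the 361
# points on a 19x19 board is empty, black, or white. Legality
# rules trim this to roughly 10^171.
positions = 3 ** (19 * 19)

digits = len(str(positions))  # number of decimal digits
print(digits)                 # 173, i.e. roughly 10^172

# Compare against the rough estimate of 10^80 atoms in the
# observable universe.
atoms = 10 ** 80
print(positions > atoms ** 2)  # True: squaring the atom count still falls short
```

Even the ratio of Go positions to atoms in the universe is itself astronomically large.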
Demis Hassabis of Google’s DeepMind artificial intelligence team explained a bit about how the complex game was approached.
“We trained the neural networks on 30 million moves from games played by human experts, until it could predict the human move 57 percent of the time (the previous record before AlphaGo was 44 percent). But our goal is to beat the best human players, not just mimic them. To do this, AlphaGo learned to discover new strategies for itself, by playing thousands of games between its neural networks, and adjusting the connections using a trial-and-error process known as reinforcement learning. Of course, all of this requires a huge amount of computing power, so we made extensive use of Google Cloud Platform.”
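AlphaGo's actual pipeline of deep neural networks and massive self-play on Google Cloud Platform is far beyond a snippet, but the "trial-and-error" self-play idea Hassabis describes can be illustrated on a toy game. The sketch below is an assumption-laden stand-in, not AlphaGo's method: it uses a simple take-away game (take 1-3 stones, the player taking the last stone wins) in place of Go, and plain Monte Carlo value updates in place of neural networks. The agent plays thousands of games against itself and adjusts its value estimates from wins and losses.

```python
# Toy self-play reinforcement learning on a take-away game
# (NOT AlphaGo's algorithm -- just an illustration of learning
# from trial and error via self-play).
import random

random.seed(0)
Q = {}  # Q[(stones, action)] -> estimated value for the player to move

def best_action(n):
    """Greedy move: take the action with the highest learned value."""
    acts = [a for a in (1, 2, 3) if a <= n]
    return max(acts, key=lambda a: Q.get((n, a), 0.0))

def train(episodes=30000, alpha=0.3, eps=0.2, start=10):
    for _ in range(episodes):
        n = start
        history = []  # (state, action) for the player to move at each turn
        while n > 0:
            acts = [a for a in (1, 2, 3) if a <= n]
            # Explore occasionally, otherwise play greedily (self-play).
            a = random.choice(acts) if random.random() < eps else best_action(n)
            history.append((n, a))
            n -= a
        # The player who took the last stone wins (+1); the reward
        # alternates sign walking back through the turns.
        reward = 1.0
        for (s, a) in reversed(history):
            q = Q.get((s, a), 0.0)
            Q[(s, a)] = q + alpha * (reward - q)
            reward = -reward

train()
# Optimal play leaves the opponent a multiple of 4 stones.
print(best_action(5), best_action(6), best_action(7))  # expect: 1 2 3
```

After training, the agent discovers the game's known strategy (always leave your opponent a multiple of four stones) purely from self-play outcomes, which is the same feedback loop, scaled down enormously, that Hassabis describes.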
Go, like chess and many variants of poker, is still far from being solved, but this was a huge step forward for artificial intelligence researchers.
Check out a video about the match released by Google below: