Approximately twenty years ago, the chess world witnessed something groundbreaking: Deep Blue, the IBM-built chess engine, defeated a reigning World Champion in a match at standard tournament time controls for the first time. That champion, of course, was Garry Kasparov, and it was the first time the chess world saw machine triumph over man. Some believed that would be the extent of the experiment, but the next couple of decades proved otherwise. Ever since that first defeat, computers have, by and large, kept growing stronger, and the gap between engines and humans has steadily widened.
The developers of the world's top engines are constantly making incremental improvements to their programs, each yielding single-digit gains in the engines' ratings. This is evident every year at the TCEC – the Top Chess Engine Championship – where engine ratings are almost always higher than the year before. In 2016, Stockfish came out of the tournament victorious and looked like the strongest engine on the planet.
However, it is now safe to say that is no longer the case. Meet AlphaZero, a newly developed algorithm created by Google's DeepMind. The algorithm is a generalization of AlphaGo Zero, a more specialized algorithm built for the purpose of playing Go. The remarkable point is that the only information fed to the algorithm was the rules of the game. From there, AlphaZero used self-play to rediscover the chess knowledge that humans have spent centuries, even millennia, accumulating. For those interested in the mechanism: essentially, the algorithm played games against itself, and upon arriving at a position at the end of a game, it scored that position as a win, a loss, or a draw; it then used these outcomes to train its neural network, so that it could judge whether steering into a given position would be favorable or unfavorable. In this way, within just four hours, AlphaZero strengthened into a super-engine capable of more than competing with the top engines of the day.
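To make that training loop a little more concrete, here is a toy sketch of the same idea: play games against yourself, score the final position, and feed that outcome back into every position along the way. To keep it tiny, this uses tic-tac-toe, purely random move selection, and a tabular running average instead of a deep neural network and guided search – so it illustrates the outcome-feedback principle only, not AlphaZero's actual training method.

```python
import random

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, sq in enumerate(board) if sq == "."]

# position -> (visit count, running average of final outcome from X's view)
values = {}

def self_play_game():
    board, player = "." * 9, "X"
    history = [board]
    while winner(board) is None and legal_moves(board):
        i = random.choice(legal_moves(board))       # stand-in for real move selection
        board = board[:i] + player + board[i+1:]
        history.append(board)
        player = "O" if player == "X" else "X"
    w = winner(board)
    outcome = 1.0 if w == "X" else (-1.0 if w == "O" else 0.0)
    # Reinforce: push every visited position's value toward the game's result.
    for pos in history:
        n, v = values.get(pos, (0, 0.0))
        values[pos] = (n + 1, v + (outcome - v) / (n + 1))

for _ in range(5000):
    self_play_game()

count, start_value = values["." * 9]
print(f"Empty board seen {count} times, estimated value {start_value:+.2f}")
```

After a few thousand games the starting position's value comes out positive, reflecting the well-known first-move advantage under random play – knowledge the program was never told, only the rules.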
In a 100-game match against Stockfish, AlphaZero won rather handily, finishing with 28 wins, 72 draws, and no losses. The researchers involved published a few of those games online, and I have two of them to share with you because of their complexity and their ability to fascinate us humans.
This game caught my attention because of AlphaZero's depth of calculation and piece maneuvering. After 18...g5, rather than saving the piece, AlphaZero calmly develops its rook; only a few moves later, the queen travels from a4 to h4 and down to the h1 corner before reappearing in the center with deadly effect.
This game also fascinated me, particularly its ending. Seeing the Black queen stuck in the corner after 45...Qh8, AlphaZero sacrifices an exchange so that it can plant its other rook on f6, sealing the queen in. This plan immobilized Black's kingside and allowed White to sit back while Black exhausted its useful moves and eventually had to give up material.
In general, both of these games show how smoothly AlphaZero was able to outplay Stockfish. In a way, all of this is somewhat jaw-dropping, since everything – from AlphaZero's first appearance to its match with Stockfish to its victory – happened so quickly. But if one thing is for sure, it is that neural-network-based engines hold promise (or doom, depending on your opinion of chess engines) for the future.
As always, thanks for reading, and I’ll see you next time!