Learning to play the Chess Variant Crazyhouse above World Champion Level with Deep Neural Networks and Human Data

08/19/2019
by Johannes Czech, et al.

Deep neural networks have been successfully applied to learning the board games Go, chess and shogi without prior knowledge by making use of reinforcement learning. Although starting from zero knowledge has been shown to yield impressive results, it comes with high computational costs, especially for complex games. In this paper, we present CrazyAra, a neural-network-based engine trained solely in a supervised manner for the chess variant crazyhouse. Crazyhouse has a higher branching factor than chess, and only limited data of lower quality is available compared to the data AlphaGo was trained on. We therefore focus on improving efficiency in multiple aspects while relying on low computational resources. These improvements include modifications to the neural network design and training configuration, the introduction of a data normalization step, and a more sample-efficient Monte-Carlo tree search that is less prone to blunders. After training on 569,537 human games for 1.5 days, we achieve a move prediction accuracy of 60.4%. Most notably, CrazyAra achieved a four-to-one win against the 2017 crazyhouse world champion Justin Tan (aka LM Jann Lee), who is rated more than 400 Elo higher than the average player in our training set. Furthermore, we test the playing strength of CrazyAra on CPU against all participants of the second Crazyhouse Computer Championships 2017, winning against twelve of the thirteen participants. Finally, for CrazyAraFish, we continue training our model on generated engine games. In ten long-time-control matches against Stockfish 10, CrazyAraFish wins three games and draws one.
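The engine described above couples a policy/value network with Monte-Carlo tree search. As a rough illustration only, and not the code of CrazyAra itself, the sketch below shows an AlphaZero-style PUCT selection step, in which the network's policy prior P(s, a) biases which child of a search node is explored next; the names Node and select_child and the constant c_puct are hypothetical placeholders.

import math
from dataclasses import dataclass, field

@dataclass
class Node:
    """Hypothetical MCTS node holding the policy prior, visit count and value sum."""
    prior: float                                   # P(s, a) from the policy head
    visit_count: int = 0                           # N(s, a)
    value_sum: float = 0.0                         # accumulated backed-up values W(s, a)
    children: dict = field(default_factory=dict)   # move -> Node

    def q_value(self) -> float:
        # Mean action value Q(s, a); 0 for unvisited children.
        return self.value_sum / self.visit_count if self.visit_count else 0.0

def select_child(node: Node, c_puct: float = 2.5):
    """Pick the child maximizing Q(s, a) + U(s, a), with
    U(s, a) = c_puct * P(s, a) * sqrt(N(s)) / (1 + N(s, a)).
    Assumes node.children is non-empty."""
    total_visits = sum(child.visit_count for child in node.children.values())
    best_move, best_score = None, -math.inf
    for move, child in node.children.items():
        u = c_puct * child.prior * math.sqrt(total_visits + 1) / (1 + child.visit_count)
        score = child.q_value() + u
        if score > best_score:
            best_move, best_score = move, score
    return best_move, node.children[best_move]

In such a scheme, a larger c_puct weights the network's prior more heavily against the values already observed in the tree, which is one of the knobs engines tune when trading off exploration against blunder avoidance.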


