
Polygames: Improved Zero Learning

Authors:
Cazenave, Tristan
Chen, Yen-Chi
Chen, Guan-Wei
Chen, Shi-Yu
Chiu, Xian-Dong
Dehos, Julien
Elsa, Maria
Gong, Qucheng
Hu, Hengyuan
Khalidov, Vasil
Li, Cheng-Ling
Lin, Hsin-I
Lin, Yu-Jin
Martinet, Xavier
Mella, Vegard
Rapin, Jeremy
Roziere, Baptiste
Synnaeve, Gabriel
Teytaud, Fabien
Teytaud, Olivier
Ye, Shi-Cheng
Ye, Yi-Jun
Yen, Shi-Jim
Zagoruyko, Sergey
Publication Year:
2020

Abstract

Since DeepMind's AlphaZero, Zero learning has quickly become the state-of-the-art method for many board games. It can be improved using a fully convolutional architecture (no fully connected layers). With such an architecture plus global pooling, we can create bots independent of the board size. Training can be made more robust by keeping track of the best checkpoints during training and by training against them. Using these features, we release Polygames, our framework for Zero learning, with its library of games and its checkpoints. We won against strong humans at the game of Hex on a 19x19 board, which was often said to be intractable for zero learning, and at Havannah. We also won several first places at the TAAI competitions.
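The board-size independence claimed above follows from using only convolutions (whose weights do not depend on input size) plus global pooling (which collapses any spatial extent to a fixed-length vector). The sketch below is a minimal NumPy illustration of that idea, not code from the Polygames repository; the function names and shapes are illustrative assumptions.

```python
import numpy as np

def conv3x3_same(x, kernels):
    # Naive "same"-padded 3x3 convolution with ReLU.
    # x: (C_in, H, W); kernels: (C_out, C_in, 3, 3). Weights are
    # size-agnostic: the same kernels apply to any H x W board.
    c_out = kernels.shape[0]
    _, h, w = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((c_out, h, w))
    for o in range(c_out):
        for i in range(h):
            for j in range(w):
                out[o, i, j] = np.sum(xp[:, i:i + 3, j:j + 3] * kernels[o])
    return np.maximum(out, 0.0)

def global_pool(feat):
    # Global pooling: per-channel mean and max over the board,
    # yielding a fixed-length vector whatever the board size.
    return np.concatenate([feat.mean(axis=(1, 2)), feat.max(axis=(1, 2))])

rng = np.random.default_rng(0)
K = rng.standard_normal((8, 2, 3, 3)) * 0.1  # shared weights, any board size

for size in (9, 13, 19):
    board = rng.standard_normal((2, size, size))  # toy 2-plane board encoding
    feat = conv3x3_same(board, K)
    policy = feat[0]             # per-cell map: scales with the board
    summary = global_pool(feat)  # always length 16, feeds a value head
    print(size, policy.shape, summary.shape)
```

The same weights `K` process 9x9, 13x13, and 19x19 boards: the per-cell policy map tracks the board shape, while the globally pooled summary stays a fixed 16-dimensional vector, which is what lets one network (and one value head) serve all board sizes.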

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2001.09832
Document Type:
Working Paper