
Policy Gradient Search: Online Planning and Expert Iteration without Search Trees

Authors:
Anthony, Thomas
Nishihara, Robert
Moritz, Philipp
Salimans, Tim
Schulman, John
Publication Year:
2019
Publisher:
arXiv, 2019.

Abstract

Monte Carlo Tree Search (MCTS) algorithms perform simulation-based search to improve policies online. During search, the simulation policy is adapted to explore the most promising lines of play. MCTS has been used by state-of-the-art programs for many problems; however, a disadvantage of MCTS is that it estimates the values of states with Monte Carlo averages, stored in a search tree, which does not scale to games with very high branching factors. We propose an alternative simulation-based search method, Policy Gradient Search (PGS), which adapts a neural network simulation policy online via policy gradient updates, avoiding the need for a search tree. In Hex, PGS achieves comparable performance to MCTS, and an agent trained using Expert Iteration with PGS was able to defeat MoHex 2.0, the strongest open-source Hex agent, in 9x9 Hex.
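The core idea described in the abstract, adapting a parametric simulation policy online with policy-gradient updates instead of accumulating Monte Carlo value averages in a search tree, can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the 3-move "game" is a hypothetical stand-in for a real simulator, and a plain REINFORCE update over a softmax policy stands in for the neural-network policy used in PGS.

```python
# Illustrative sketch of online policy-gradient search (assumed toy setup,
# not the authors' code): a softmax "simulation policy" over three moves is
# adapted during search with REINFORCE-style updates, so promising lines of
# play receive more simulations without any search tree being stored.
import math
import random

random.seed(0)
REWARDS = [0.1, 0.5, 0.9]  # hypothetical win probability of each move


def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]


def policy_gradient_search(n_simulations=2000, lr=0.1):
    theta = [0.0, 0.0, 0.0]  # simulation-policy parameters, adapted online
    for _ in range(n_simulations):
        probs = softmax(theta)
        # Sample a line of play from the current simulation policy.
        a = random.choices(range(3), weights=probs)[0]
        # Stochastic outcome of the rollout (win = 1, loss = 0).
        r = 1.0 if random.random() < REWARDS[a] else 0.0
        # REINFORCE update: grad of log pi(a) is one_hot(a) - probs.
        for j in range(3):
            grad_log = (1.0 if j == a else 0.0) - probs[j]
            theta[j] += lr * r * grad_log
    return softmax(theta)


final_probs = policy_gradient_search()
print(final_probs)  # probability mass concentrates on stronger moves
```

Over the course of the search the policy shifts probability toward the higher-reward moves, which is the role the adapted simulation policy plays in PGS; the full method additionally warm-starts from a trained network and is combined with Expert Iteration for training.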

Details

Database:
OpenAIRE
Accession number:
edsair.doi.dedup.....d091a5eb311d34ad71403709ed2bdaaf
Full Text:
https://doi.org/10.48550/arxiv.1904.03646