
Anytime Sequential Halving in Monte-Carlo Tree Search

Authors:
Sagers, Dominic
Winands, Mark H. M.
Soemers, Dennis J. N. J.
Publication Year:
2024

Abstract

Monte-Carlo Tree Search (MCTS) typically uses multi-armed bandit (MAB) strategies designed to minimize cumulative regret, such as UCB1, as its selection strategy. However, in the root node of the search tree, it is more sensible to minimize simple regret. Previous work has proposed using Sequential Halving as the selection strategy in the root node, as, in theory, it performs better with respect to simple regret. However, Sequential Halving requires a budget of iterations to be predetermined, which is often impractical. This paper proposes an anytime version of the algorithm, which can be halted at any arbitrary time and still return a satisfactory result, while being designed such that it approximates the behavior of Sequential Halving. Empirical results in synthetic MAB problems and ten different board games demonstrate that the algorithm's performance is competitive with Sequential Halving and UCB1 (and their analogues in MCTS).

Comment: Accepted by the Computers and Games 2024 conference
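To illustrate the fixed-budget requirement the abstract refers to, below is a minimal sketch of standard Sequential Halving on a multi-armed bandit (not the paper's anytime variant). The function name, arm representation, and budget-splitting details are illustrative assumptions, not taken from the paper.

```python
import math


def sequential_halving(arms, budget):
    """Standard Sequential Halving for a multi-armed bandit (a sketch).

    `arms` is a list of callables, each returning a stochastic reward when
    pulled. `budget` is the total number of pulls, which must be fixed in
    advance -- this is exactly the requirement the paper's anytime variant
    is designed to remove.
    """
    surviving = list(range(len(arms)))
    rounds = math.ceil(math.log2(len(arms)))
    means = [0.0] * len(arms)   # running empirical mean reward per arm
    counts = [0] * len(arms)    # number of pulls per arm
    for _ in range(rounds):
        # Spread the budget evenly over rounds and over surviving arms.
        pulls = max(1, budget // (len(surviving) * rounds))
        for i in surviving:
            for _ in range(pulls):
                reward = arms[i]()
                counts[i] += 1
                means[i] += (reward - means[i]) / counts[i]
        # Keep the better half of the surviving arms by empirical mean.
        surviving.sort(key=lambda i: means[i], reverse=True)
        surviving = surviving[: max(1, len(surviving) // 2)]
    return surviving[0]  # index of the recommended arm
```

In MCTS, the "arms" would be the moves available at the root node, with a pull corresponding to one search iteration through that child; the anytime version proposed in the paper approximates this halving schedule without committing to `budget` up front.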

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2411.07171
Document Type:
Working Paper