
Double Explore-then-Commit: Asymptotic Optimality and Beyond

Authors:
Jin, Tianyuan
Xu, Pan
Xiao, Xiaokui
Gu, Quanquan
Publication Year:
2020

Abstract

We study the multi-armed bandit problem with subgaussian rewards. The explore-then-commit (ETC) strategy, which consists of an exploration phase followed by an exploitation phase, is one of the most widely used algorithms in a variety of online decision applications. Nevertheless, Garivier et al. (2016) showed that ETC is asymptotically suboptimal as the horizon grows, and is thus worse than fully sequential strategies such as Upper Confidence Bound (UCB). In this paper, we show that a variant of the ETC algorithm can in fact achieve asymptotic optimality for multi-armed bandit problems, just as UCB-type algorithms do, and we extend it to the batched bandit setting. Specifically, we propose a double explore-then-commit (DETC) algorithm with two exploration and two exploitation phases, and prove that DETC achieves the asymptotically optimal regret bound. To our knowledge, DETC is the first non-fully-sequential algorithm to achieve such asymptotic optimality. In addition, we extend DETC to batched bandit problems, where (i) the exploration process is split into a small number of batches and (ii) the round complexity is of central interest. We prove that a batched version of DETC achieves asymptotic optimality with only constant round complexity. This is the first batched bandit algorithm that attains the optimal asymptotic regret bound and the optimal round complexity simultaneously.

Comment: 46 pages. This version improves the presentation and adds new algorithms and theoretical results: an anytime algorithm with an asymptotic optimality guarantee, and an extension to K-armed bandits.
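For orientation, the baseline single-phase ETC strategy that the abstract contrasts with DETC can be sketched as follows. This is a generic illustration, not the paper's DETC algorithm; the function name, the per-arm exploration budget `m`, and the reward-sampling callables are all illustrative assumptions:

```python
def explore_then_commit(arms, horizon, m):
    """Sketch of the classic single-phase ETC strategy (not DETC):
    pull each arm m times in round-robin, then commit to the
    empirically best arm for the remaining rounds.
    `arms` is a list of reward-sampling callables (one per arm);
    all names here are illustrative, not from the paper."""
    k = len(arms)
    counts = [0] * k
    sums = [0.0] * k
    total = 0.0
    t = 0
    # Exploration phase: round-robin, up to m pulls per arm.
    while t < horizon and t < m * k:
        i = t % k
        r = arms[i]()
        counts[i] += 1
        sums[i] += r
        total += r
        t += 1
    # Commit (exploitation) phase: play the arm with the
    # highest empirical mean for all remaining rounds.
    explored = [i for i in range(k) if counts[i] > 0]
    best = max(explored, key=lambda i: sums[i] / counts[i]) if explored else 0
    while t < horizon:
        total += arms[best]()
        t += 1
    return best, total
```

DETC, by contrast, follows the first commit with a second exploration/exploitation pair, which is what lets it recover the asymptotically optimal regret that this single-phase scheme misses.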

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2002.09174
Document Type:
Working Paper