
Parallelizing Contextual Bandits

Authors:
Chan, Jeffrey
Pacchiano, Aldo
Tripuraneni, Nilesh
Song, Yun S.
Bartlett, Peter
Jordan, Michael I.
Publication Year:
2021

Abstract

Standard approaches to decision-making under uncertainty focus on sequential exploration of the space of decisions. However, *simultaneously* proposing a batch of decisions, which leverages available resources for parallel experimentation, has the potential to rapidly accelerate exploration. We present a family of (parallel) contextual bandit algorithms, applicable to problems with bounded eluder dimension, whose regret is nearly identical to that of their perfectly sequential counterparts, given access to the same total number of oracle queries, up to a lower-order "burn-in" term. We further show that these algorithms can be specialized to the class of linear reward functions, where we introduce and analyze several new linear bandit algorithms that explicitly inject diversity into their action selection. Finally, we present an empirical evaluation of these parallel algorithms in several domains, including materials discovery and biological sequence design problems, to demonstrate the utility of parallelized bandits in practical settings.
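One common way to make batch selection diverse in the linear setting, consistent with the idea sketched in the abstract, is a LinUCB-style rule that "hallucinates" each chosen action into the design matrix before its reward is observed, so later picks in the batch favor less-explored directions. The sketch below is purely illustrative and is not the paper's exact algorithms; all names and parameters (`select_batch`, `alpha`, etc.) are hypothetical.

```python
import numpy as np

def select_batch(actions, A, b, batch_size, alpha=1.0):
    """Select a diverse batch of actions via a LinUCB-style rule.

    Within a batch, each chosen action's feature vector is added to the
    design matrix A before any reward is observed; this shrinks the
    exploration bonus along directions already covered, pushing later
    picks toward unexplored directions. Illustrative sketch only.
    """
    A = A.copy()
    theta_hat = np.linalg.solve(A, b)  # ridge estimate of the reward parameter
    batch = []
    for _ in range(batch_size):
        A_inv = np.linalg.inv(A)
        # UCB score: estimated reward plus an exploration bonus
        scores = [a @ theta_hat + alpha * np.sqrt(a @ A_inv @ a) for a in actions]
        best = int(np.argmax(scores))
        batch.append(best)
        a = actions[best]
        A += np.outer(a, a)  # "hallucinated" update induces diversity
    return batch

# Usage: 3 candidate actions in 2 dimensions, pick a batch of 2
actions = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.9, 0.1])]
d = 2
A = np.eye(d)   # ridge-regularized design matrix
b = np.zeros(d) # running sum of reward-weighted features
batch = select_batch(actions, A, b, batch_size=2)
```

With no rewards observed yet, the first pick is driven entirely by the exploration bonus, and the hallucinated update steers the second pick toward the orthogonal direction, so the batch covers both axes rather than selecting two nearly collinear actions.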

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2105.10590
Document Type:
Working Paper