
LP-based policies for restless bandits: necessary and sufficient conditions for (exponentially fast) asymptotic optimality

Authors :
Gast, Nicolas
Gaujal, Bruno
Yan, Chen
Publication Year :
2021

Abstract

We provide a framework to analyse control policies for the restless Markovian bandit model, under both finite and infinite time horizons. We show that when the population of arms goes to infinity, the value of the optimal control policy converges to the solution of a linear program (LP). We provide necessary and sufficient conditions for a generic control policy to be: i) asymptotically optimal; ii) asymptotically optimal with square-root convergence rate; iii) asymptotically optimal with exponential rate. We then construct the LP-index policy, which is asymptotically optimal with square-root convergence rate on all models, and with exponential rate if the model is non-degenerate in the finite-horizon case, or satisfies a uniform global attractor property in the infinite-horizon case. We next define the LP-update policy, which is essentially a repeated LP-index policy that solves a new linear program at each decision epoch. We provide numerical experiments comparing the efficiency of LP-based policies, benchmarking the LP-index and LP-update policies against other heuristics. Our results demonstrate that the LP-update policy generally outperforms the LP-index policy, and can have a significant advantage when the transition matrices are wrongly estimated.
Comment: Mathematics of Operations Research, 2023
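For context, the LP that the abstract refers to can be sketched in its standard finite-horizon occupation-measure form (the notation below is illustrative, not taken from the paper): with $y_t(s,a)$ the fraction of arms in state $s$ taking action $a$ at time $t$, reward $r(s,a)$, transition kernel $P(s' \mid s,a)$, activation budget $\alpha$, and initial state distribution $m_0$,

```latex
\max_{y \ge 0} \;\; \sum_{t=0}^{T-1} \sum_{s,a} y_t(s,a)\, r(s,a)
\quad \text{subject to} \quad
\begin{cases}
\sum_{s} y_t(s,1) = \alpha & \forall t, \\[2pt]
\sum_{a} y_{t+1}(s',a) = \sum_{s,a} y_t(s,a)\, P(s' \mid s,a) & \forall s',\, t, \\[2pt]
\sum_{a} y_0(s,a) = m_0(s) & \forall s.
\end{cases}
```

In this reading, the LP-index policy solves this program once and follows its solution, while the LP-update policy re-solves it at each decision epoch from the currently observed empirical state distribution.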

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2106.10067
Document Type :
Working Paper
Full Text :
https://doi.org/10.1287/moor.2022.0101