Approximation Algorithms for Restless Bandit Problems
- Author
GUHA, SUDIPTO, MUNAGALA, KAMESH, and SHI, PENG
- Subjects
OPERATIONS research, DECISION theory, STOCHASTIC processes, MARKOV processes, STOCHASTIC systems, COMPUTATIONAL mathematics, APPROXIMATION theory, STOCHASTIC approximation
- Abstract
The restless bandit problem is one of the most well-studied generalizations of the celebrated stochastic multi-armed bandit (MAB) problem in decision theory. In its ultimate generality, the restless bandit problem is known to be PSPACE-hard to approximate to any nontrivial factor, and little progress has been made on this problem despite its significance in modeling activity allocation under uncertainty. In this article, we consider the FEEDBACK MAB problem, where the reward obtained by playing each of n independent arms varies according to an underlying on/off Markov process whose exact state is only revealed when the arm is played. The goal is to design a policy for playing the arms in order to maximize the infinite horizon time average expected reward. This problem is also an instance of a Partially Observable Markov Decision Process (POMDP), and is widely studied in wireless scheduling and unmanned aerial vehicle (UAV) routing. Unlike the stochastic MAB problem, the FEEDBACK MAB problem does not admit greedy index-based optimal policies. We develop a novel duality-based algorithmic technique that yields a surprisingly simple and intuitive (2 + ϵ)-approximate greedy policy for this problem. We show that both in terms of approximation factor and computational efficiency, our policy is closely related to the Whittle index, which is widely used for its simplicity and efficiency of computation. Subsequently, we define a multi-state generalization, which we term MONOTONE bandits, that remains a subclass of the restless bandit problem. We show that our policy remains a 2-approximation in this setting, and further, our technique is robust enough to incorporate various side-constraints such as blocking plays, switching costs, and even models where determining the state of an arm is a separate operation from playing it. Our technique is also of independent interest for other restless bandit problems, and we provide an example in non-preemptive machine replenishment. Interestingly, in this case, our policy provides a constant factor guarantee, whereas the Whittle index is provably polynomially worse. By presenting the first O(1) approximations for nontrivial instances of restless bandits as well as of POMDPs, our work initiates the study of approximation algorithms in both these contexts. [ABSTRACT FROM AUTHOR]
- Published
2010
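The abstract above describes the FEEDBACK MAB model concretely: n independent arms, each governed by a hidden two-state (on/off) Markov chain whose state is revealed only when that arm is played, with the goal of maximizing infinite-horizon time-average reward. The sketch below simulates that model under a simple myopic play-the-highest-belief rule; the transition probabilities, belief updates, and the myopic rule itself are illustrative assumptions, not the paper's (2 + ϵ)-approximate policy or the Whittle index.

```python
# Minimal illustrative sketch of the FEEDBACK MAB setting (hidden on/off
# Markov chains, state revealed only on play, time-average reward objective).
# The parameters and the myopic greedy rule are illustrative assumptions;
# this is NOT the paper's duality-based policy or the Whittle index.
import random

def simulate(n_arms=5, horizon=10_000, seed=0):
    rng = random.Random(seed)
    # Per-arm on/off chain: p01 = P(off -> on), p11 = P(on -> on).
    p01 = [rng.uniform(0.05, 0.3) for _ in range(n_arms)]
    p11 = [rng.uniform(0.6, 0.95) for _ in range(n_arms)]
    state = [rng.random() < 0.5 for _ in range(n_arms)]   # hidden true states
    belief = [0.5] * n_arms                               # P(arm is "on")
    total_reward = 0
    for _ in range(horizon):
        # Myopic rule: play the arm currently most likely to be "on".
        arm = max(range(n_arms), key=lambda i: belief[i])
        # Playing reveals the arm's exact state; reward 1 if it is "on".
        observed = state[arm]
        total_reward += 1 if observed else 0
        belief[arm] = 1.0 if observed else 0.0
        # All chains evolve one step; beliefs are propagated forward.
        for i in range(n_arms):
            state[i] = rng.random() < (p11[i] if state[i] else p01[i])
            belief[i] = belief[i] * p11[i] + (1 - belief[i]) * p01[i]
    return total_reward / horizon   # infinite-horizon average approximated over a long run

if __name__ == "__main__":
    print(f"time-average reward: {simulate():.3f}")
```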