Experience-efficient learning in associative bandit problems
- Authors
- Haym Hirsh, Michael L. Littman, Alexander L. Strehl, and Chris Mesterharm
- Subjects
- Computer Science::Machine Learning, Concept class, VC dimension, Statistical classification, Scalability, Artificial intelligence, Mathematics
- Abstract
We formalize the associative bandit problem framework introduced by Kaelbling as a learning-theory problem. The learning environment is modeled as a k-armed bandit in which each arm's payoff is conditioned on an observable input presented on each trial. We show that, if the payoff functions are constrained to a known hypothesis class, learning can be performed efficiently with respect to the VC dimension of that class. We formally reduce the associative bandit problem to PAC classification, producing an efficient algorithm for any hypothesis class for which efficient classification algorithms are known. We demonstrate the approach empirically on a scalable concept class.
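To make the setting concrete, here is a minimal Python sketch of the interaction loop the abstract describes: on each trial the learner observes an input, chooses an arm, and receives a payoff whose distribution depends on that input through a hidden payoff function from a known class. The two-armed environment, the Bernoulli threshold payoffs, and the explore-then-exploit learner with a grid-search threshold fit (standing in for a classification subroutine) are illustrative assumptions, not the paper's algorithm or its sample-complexity guarantees.

```python
import random

# Illustrative sketch of the associative bandit setting. The environment,
# payoff functions, and learner below are assumptions for exposition only,
# not the algorithm analyzed in the paper.

class AssociativeBandit:
    """k-armed bandit whose Bernoulli payoff for each arm is conditioned on
    an observable input x, drawn uniformly from [0, 1] on each trial."""

    def __init__(self, thresholds):
        # thresholds[i] defines arm i's payoff function: arm i pays off with
        # probability 0.9 when x >= thresholds[i], and 0.1 otherwise.
        self.thresholds = thresholds
        self.x = None

    def new_trial(self):
        self.x = random.random()
        return self.x

    def pull(self, arm):
        p = 0.9 if self.x >= self.thresholds[arm] else 0.1
        return 1 if random.random() < p else 0


def fit_threshold(samples):
    """Grid-search the threshold that best separates rewarded from
    unrewarded inputs (a stand-in for a PAC classification subroutine)."""
    best, best_err = 0.5, float("inf")
    for i in range(101):
        c = i / 100
        err = sum((x >= c) != (r == 1) for x, r in samples)
        if err < best_err:
            best, best_err = c, err
    return best


def run(env, trials=10000, explore=2000):
    """Naive explore-then-exploit baseline: pull arms uniformly at random
    for `explore` trials, fit each arm's threshold from its observed
    (input, reward) pairs, then play greedily against the fitted model."""
    k = len(env.thresholds)
    history = [[] for _ in range(k)]   # (x, reward) pairs per arm
    fitted = [0.5] * k
    total = 0
    for t in range(trials):
        x = env.new_trial()
        if t < explore:
            arm = random.randrange(k)
        else:
            # Predict each arm's payoff probability from its fitted threshold.
            arm = max(range(k), key=lambda a: 0.9 if x >= fitted[a] else 0.1)
        r = env.pull(arm)
        total += r
        if t < explore:
            history[arm].append((x, r))
            if t == explore - 1:
                fitted = [fit_threshold(h) for h in history]
    return total / trials


if __name__ == "__main__":
    env = AssociativeBandit(thresholds=[0.3, 0.7])
    print(f"average reward: {run(env):.3f}")
```

The grid-search fit is where a real instantiation would invoke an efficient classification algorithm for the hypothesis class, which is the role the reduction in the paper formalizes.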
- Published
- 2006