Stochastic Bandits with Context Distributions
- Source :
- Advances in Neural Information Processing Systems 32
- Publication Year :
- 2019
Abstract
- We introduce a stochastic contextual bandit model where at each time step the environment chooses a distribution over a context set and samples the context from this distribution. The learner observes only the context distribution, while the exact context realization remains hidden. This allows for a broad range of applications where the context is stochastic or where the learner needs to predict the context. We extend the UCB algorithm to this setting and show that it achieves an order-optimal high-probability bound on the cumulative regret for linear and kernelized reward functions. Our results strictly generalize previous work in the sense that both our model and the algorithm reduce to the standard setting when the environment chooses only Dirac delta distributions and therefore provides the exact context to the learner. We further analyze a variant where the learner observes the realized context after choosing the action. Finally, we demonstrate the proposed method on synthetic and real-world datasets.
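- The setting described in the abstract can be sketched in code: the learner sees only a distribution over contexts, so a LinUCB-style learner can act on the expected feature vector under that distribution. Everything concrete below (dimensions, the random feature map `phi`, the Dirichlet-sampled context distributions, and the constant exploration parameter `beta`) is an illustrative assumption, not the paper's exact construction:

```python
import numpy as np

rng = np.random.default_rng(0)

d = 5            # feature dimension (assumed)
n_actions = 4
n_contexts = 6
T = 2000
beta = 2.0       # exploration parameter, held constant here for simplicity
lam = 1.0        # ridge regularization

# Hypothetical linear model: one fixed feature vector per (context, action) pair.
phi = rng.normal(size=(n_contexts, n_actions, d))
theta_star = rng.normal(size=d)   # unknown reward parameter

A = lam * np.eye(d)   # regularized Gram matrix
b = np.zeros(d)       # response vector

regret = 0.0
for t in range(T):
    # Environment picks a context distribution mu_t; the learner observes only mu_t.
    mu = rng.dirichlet(np.ones(n_contexts))
    # Expected feature per action under mu_t -- the learner's usable representation.
    psi = (mu @ phi.reshape(n_contexts, -1)).reshape(n_actions, d)

    A_inv = np.linalg.inv(A)
    theta_hat = A_inv @ b
    # Optimistic (UCB) score: estimated mean reward plus confidence width.
    width = np.sqrt(np.einsum('ad,dc,ac->a', psi, A_inv, psi))
    a_t = int(np.argmax(psi @ theta_hat + beta * width))

    # The realized context stays hidden; only the noisy reward is observed.
    c_t = rng.choice(n_contexts, p=mu)
    r_t = phi[c_t, a_t] @ theta_star + 0.1 * rng.normal()

    # Update the linear estimate using the expected feature, not the hidden context.
    A += np.outer(psi[a_t], psi[a_t])
    b += psi[a_t] * r_t

    # Regret against the best action in expectation over mu_t.
    regret += np.max(psi @ theta_star) - psi[a_t] @ theta_star

print(f"average per-round regret after {T} rounds: {regret / T:.3f}")
```

Acting on the expected features is what makes the reduction in the abstract visible: if every `mu` were a Dirac delta on one context, `psi` would equal that context's feature vectors and the loop above would be standard LinUCB.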
- Subjects :
- FOS: Computer and information sciences
Computer Science::Machine Learning
Statistics::Machine Learning
Computer Science - Machine Learning
Statistics - Machine Learning
Machine Learning (stat.ML)
Machine Learning (cs.LG)
Details
- Language :
- English
- ISBN :
- 978-1-7138-0793-3
- ISBNs :
- 9781713807933
- Database :
- OpenAIRE
- Journal :
- Advances in Neural Information Processing Systems 32
- Accession number :
- edsair.doi.dedup.....f9cb95524b8ac5dd16dbbccd13b166c7