
On the Complexity of Representation Learning in Contextual Linear Bandits

Authors:
Tirinzoni, Andrea
Pirotta, Matteo
Lazaric, Alessandro
Publication Year:
2022
Publisher:
arXiv, 2022.

Abstract

In contextual linear bandits, the reward function is assumed to be a linear combination of an unknown reward vector and a given embedding of context-arm pairs. In practice, the embedding is often learned at the same time as the reward vector, thus leading to an online representation learning problem. Existing approaches to representation learning in contextual bandits are either very generic (e.g., model-selection techniques or algorithms for learning with arbitrary function classes) or specialized to particular structures (e.g., nested features or representations with certain spectral properties). As a result, the understanding of the cost of representation learning in contextual linear bandits is still limited. In this paper, we take a systematic approach to the problem and provide a comprehensive study through an instance-dependent perspective. We show that representation learning is fundamentally more complex than linear bandits (i.e., learning with a given representation). In particular, learning with a given set of representations is never simpler than learning with the worst realizable representation in the set, while we show cases where it can be arbitrarily harder. We complement this result with an extensive discussion of how it relates to the existing literature, and we illustrate positive instances where representation learning is as complex as learning with a fixed representation and where sub-logarithmic regret is achievable.
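For concreteness, the reward model described in the abstract admits the following standard formulation (a sketch only; the symbols below are illustrative notation, not taken from the record):

\[
  r_t = \langle \phi(x_t, a_t), \theta_\star \rangle + \eta_t
\]

Here \(x_t\) is the context observed at round \(t\), \(a_t\) the arm pulled, \(\phi\) the embedding of context-arm pairs, \(\theta_\star\) the unknown reward vector, and \(\eta_t\) zero-mean noise. Under this reading, "learning with a given representation" fixes \(\phi\) and estimates only \(\theta_\star\), whereas representation learning must additionally identify a realizable embedding from a candidate set \(\Phi = \{\phi_1, \dots, \phi_M\}\) while interacting with the environment.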

Details

Database:
OpenAIRE
Accession number:
edsair.doi.dedup.....e5554cff363e10f21f976f0607499585
Full Text:
https://doi.org/10.48550/arxiv.2212.09429