
Frequentist Regret Bounds for Randomized Least-Squares Value Iteration

Authors :
Zanette, Andrea
Brandfonbrener, David
Brunskill, Emma
Pirotta, Matteo
Lazaric, Alessandro
Publication Year :
2019
Publisher :
arXiv, 2019.

Abstract

We consider the exploration-exploitation dilemma in finite-horizon reinforcement learning (RL). When the state space is large or continuous, traditional tabular approaches are infeasible and some form of function approximation is mandatory. In this paper, we introduce an optimistically-initialized variant of the popular randomized least-squares value iteration (RLSVI), a model-free algorithm where exploration is induced by perturbing the least-squares approximation of the action-value function. Under the assumption that the Markov decision process has low-rank transition dynamics, we prove that the frequentist regret of RLSVI is upper-bounded by $\widetilde O(d^2 H^2 \sqrt{T})$, where $d$ is the feature dimension, $H$ is the horizon, and $T$ is the total number of steps. To the best of our knowledge, this is the first frequentist regret analysis for randomized exploration with function approximation.

Comment: AISTATS 2020; minor bug fix
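To make the exploration mechanism described in the abstract concrete, the following is a minimal sketch of randomized least-squares value iteration with linear features: at each step of backward induction, the action-value weights are fit by regularized least squares and then perturbed with Gaussian noise scaled to the regression uncertainty. The environment interface, the feature map `phi`, and parameters such as `lam` and `sigma` are illustrative assumptions, not the paper's exact algorithm or constants.

```python
import numpy as np

def rlsvi_episode(buffers, phi, d, H, actions, lam=1.0, sigma=1.0, rng=None):
    """Compute one set of perturbed value-function weights (one per step h).

    buffers[h] is a list of transitions (s, a, r, s_next) collected at step h;
    phi(s, a) is a hypothetical feature map returning a length-d vector.
    """
    rng = rng or np.random.default_rng()
    w = [np.zeros(d) for _ in range(H + 1)]  # w[H] stays 0 (terminal value)
    for h in range(H - 1, -1, -1):           # backward induction over steps
        Lambda = lam * np.eye(d)             # regularized design matrix
        b = np.zeros(d)
        for (s, a, r, s_next) in buffers[h]:
            x = phi(s, a)
            # Regression target: reward plus greedy value at the next step.
            target = r + max(phi(s_next, ap) @ w[h + 1] for ap in actions)
            Lambda += np.outer(x, x)
            b += x * target
        w_hat = np.linalg.solve(Lambda, b)   # least-squares fit of Q_h
        # Randomized exploration: perturb the weights with Gaussian noise
        # whose covariance tracks the regression uncertainty (assumption).
        noise = rng.multivariate_normal(np.zeros(d),
                                        sigma**2 * np.linalg.inv(Lambda))
        w[h] = w_hat + noise
    return w[:H]
```

The agent would then act greedily with respect to the perturbed weights for one episode, append the new transitions to `buffers`, and repeat; the random perturbation plays the role that optimism bonuses play in deterministic exploration schemes.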

Details

Database :
OpenAIRE
Accession number :
edsair.doi.dedup.....198de697120171ce205dfbd283c4e0ed
Full Text :
https://doi.org/10.48550/arxiv.1911.00567