Sublinear Regret for Learning POMDPs

Authors:
Xiong, Yi
Chen, Ningyuan
Gao, Xuefeng
Zhou, Xiang
Publication Year:
2021

Abstract

We study model-based undiscounted reinforcement learning for partially observable Markov decision processes (POMDPs). The oracle we consider is the optimal policy of the POMDP with a known environment, measured by the average reward over an infinite horizon. We propose a learning algorithm for this problem that builds on spectral method-of-moments estimation for hidden Markov models, belief error control in POMDPs, and upper-confidence-bound methods for online learning. We establish a regret bound of $O(T^{2/3}\sqrt{\log T})$ for the proposed algorithm, where $T$ is the learning horizon. To the best of our knowledge, this is the first algorithm to achieve sublinear regret with respect to this oracle for learning general POMDPs.
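To illustrate the spectral method-of-moments estimation the abstract refers to, below is a minimal sketch (not the authors' code) in the spirit of the Hsu-Kakade-Zhang spectral algorithm for HMMs. For clarity it forms the population moments from a known toy HMM and checks that the recovered observable operators reproduce sequence probabilities; a learner would instead estimate the same moments from observed triples of outputs. All variable names here (`T`, `O`, `pi`, `B`, and so on) are illustrative assumptions, not notation from the paper.

```python
import numpy as np

m, n = 2, 3                      # hidden states, observation symbols (n >= m)
T = np.array([[0.7, 0.2],        # T[i, j] = P(next state i | current state j)
              [0.3, 0.8]])
O = np.array([[0.5, 0.1],        # O[x, j] = P(observe x | state j)
              [0.4, 0.3],
              [0.1, 0.6]])
pi = np.array([0.6, 0.4])        # initial state distribution

# Low-order moments of the first three observations (population versions;
# in learning these are replaced by empirical counts).
P1 = O @ pi                                   # P1[x]    = P(x1 = x)
P21 = O @ T @ np.diag(pi) @ O.T               # P21[i,j] = P(x2 = i, x1 = j)
P3x1 = [O @ T @ np.diag(O[x]) @ T @ np.diag(pi) @ O.T   # P(x3=i, x2=x, x1=j)
        for x in range(n)]

# Spectral step: project onto the top-m left singular vectors of P21.
U = np.linalg.svd(P21)[0][:, :m]
b1 = U.T @ P1
binf = np.linalg.pinv(P21.T @ U) @ P1
B = [U.T @ P3x1[x] @ np.linalg.pinv(U.T @ P21) for x in range(n)]

def prob(seq):
    """Sequence probability via the recovered observable operators."""
    b = b1
    for x in seq:
        b = B[x] @ b
    return float(binf @ b)

def prob_direct(seq):
    """Ground-truth probability by direct forward recursion."""
    b = pi
    for x in seq:
        b = T @ (O[x] * b)       # emit x in the current state, then transition
    return float(b.sum())

seq = [0, 2, 1, 1]
print(prob(seq), prob_direct(seq))   # the two values should agree
```

The point of this construction is that `P1`, `P21`, and the `P3x1` matrices involve only observable quantities, so they can be estimated from data without ever observing the hidden states; controlling that estimation error, together with belief error control and upper-confidence-bound exploration, is what the paper's regret analysis is built around.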

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2107.03635
Document Type:
Working Paper