
A maximum-entropy approach to off-policy evaluation in average-reward MDPs

Authors:
Lazic, Nevena
Yin, Dong
Farajtabar, Mehrdad
Levine, Nir
Gorur, Dilan
Harris, Chris
Schuurmans, Dale
Publication Year: 2020

Abstract

This work focuses on off-policy evaluation (OPE) with function approximation in infinite-horizon undiscounted Markov decision processes (MDPs). For MDPs that are ergodic and linear (i.e. where rewards and dynamics are linear in some known features), we provide the first finite-sample OPE error bound, extending existing results beyond the episodic and discounted cases. In a more general setting, when the feature dynamics are approximately linear and for arbitrary rewards, we propose a new approach for estimating stationary distributions with function approximation. We formulate this problem as finding the maximum-entropy distribution subject to matching feature expectations under empirical dynamics. We show that this results in an exponential-family distribution whose sufficient statistics are the features, paralleling maximum-entropy approaches in supervised learning. We demonstrate the effectiveness of the proposed OPE approaches in multiple environments.
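A rough sketch of the formulation described in the abstract, in our own notation (the paper's exact constraints and variables may differ): d is a candidate stationary state-action distribution, phi the known feature map, and \widehat{P}_\pi the empirical transition operator under the target policy.

% Hedged illustration only -- notation is ours, not taken verbatim from the paper.
\begin{align*}
  \max_{d \in \Delta(\mathcal{S}\times\mathcal{A})} \quad
    & H(d) = -\sum_{s,a} d(s,a)\,\log d(s,a) \\
  \text{s.t.} \quad
    & \mathbb{E}_{(s,a)\sim d}\!\left[\phi(s,a)\right]
      = \mathbb{E}_{(s,a)\sim d}\!\left[(\widehat{P}_\pi \phi)(s,a)\right]
\end{align*}
% The constraint asks that the feature expectations of d be stationary under the
% empirical dynamics. By standard maximum-entropy duality, the solution is an
% exponential-family distribution with the features as sufficient statistics,
% d_\theta(s,a) \propto \exp(\theta^\top \phi(s,a)), matching the abstract's claim.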

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2006.12620
Document Type: Working Paper