
Offline reinforcement learning with representations for actions.

Authors :
Lou, Xingzhou
Yin, Qiyue
Zhang, Junge
Yu, Chao
He, Zhaofeng
Cheng, Nengjie
Huang, Kaiqi
Source :
Information Sciences. Sep 2022, Vol. 609, p746-758. 13p.
Publication Year :
2022

Abstract

Prevailing offline reinforcement learning (RL) methods constrain the policy to the region supported by the offline dataset in order to avoid the distributional shift problem. However, potential high-reward actions that lie outside the dataset's distribution are neglected by these methods. To address this issue, we propose a new method that generalizes from the offline dataset to out-of-distribution (OOD) actions. Specifically, we design a novel action embedding model that helps infer the effect of actions. As a result, our value function achieves better generalization over the action space, which further alleviates the distributional shift caused by overestimation of OOD actions. Theoretically, we give an information-theoretic explanation of the improvement in the value function's generalization over the action space. Experiments on D4RL demonstrate that our model outperforms previous offline RL methods, especially when the experience in the offline dataset is of high quality. A further study validates that the value function's generalization to OOD actions is indeed improved, which confirms the effectiveness of the proposed action embedding model. [ABSTRACT FROM AUTHOR]
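The record contains no implementation details, but the core idea described in the abstract, namely evaluating actions through a learned embedding so that the value function can generalize beyond the actions seen in the dataset, can be sketched roughly as below. This is a minimal illustration in PyTorch; the module names, layer sizes, and the way the embedding feeds the Q-function are all assumptions for illustration, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class ActionEmbedding(nn.Module):
    """Hypothetical action embedding: maps raw actions into a latent space
    in which actions with similar effects should lie close together."""
    def __init__(self, action_dim: int, embed_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(action_dim, 64), nn.ReLU(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, action: torch.Tensor) -> torch.Tensor:
        return self.net(action)

class QFunction(nn.Module):
    """Q(s, a) evaluated on the action *embedding* rather than the raw action,
    so value estimates can transfer to OOD actions with nearby embeddings."""
    def __init__(self, state_dim: int, embed_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + embed_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, state: torch.Tensor, action_emb: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action_emb], dim=-1))

# Toy forward pass; the dimensions are arbitrary placeholders.
state_dim, action_dim = 17, 6
embed = ActionEmbedding(action_dim)
q = QFunction(state_dim)
s = torch.randn(8, state_dim)
a = torch.randn(8, action_dim)
q_values = q(s, embed(a))  # shape: (8, 1)
```

In a sketch like this, the embedding network would be trained with some auxiliary objective (the abstract's information-theoretic analysis suggests one tied to inferring action effects) so that the Q-function's smoothness in embedding space yields the improved OOD generalization the paper reports.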

Details

Language :
English
ISSN :
0020-0255
Volume :
609
Database :
Academic Search Index
Journal :
Information Sciences
Publication Type :
Periodical
Accession number :
158863440
Full Text :
https://doi.org/10.1016/j.ins.2022.08.019