
Offline Reinforcement Learning from Datasets with Structured Non-Stationarity

Authors :
Ackermann, Johannes
Osa, Takayuki
Sugiyama, Masashi
Publication Year :
2024

Abstract

Current Reinforcement Learning (RL) is often limited by the large amount of data needed to learn a successful policy. Offline RL aims to solve this issue by using transitions collected by a different behavior policy. We address a novel Offline RL problem setting in which, while collecting the dataset, the transition and reward functions gradually change between episodes but stay constant within each episode. We propose a method based on Contrastive Predictive Coding that identifies this non-stationarity in the offline dataset, accounts for it when training a policy, and predicts it during evaluation. We analyze our proposed method and show that it performs well in simple continuous control tasks and challenging, high-dimensional locomotion tasks. We show that our method often achieves the oracle performance and performs better than baselines.

Comment: Accepted for Reinforcement Learning Conference (RLC) 2024
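To make the idea of identifying episode-level non-stationarity with Contrastive Predictive Coding more concrete, below is a minimal, hypothetical PyTorch sketch of a generic CPC/InfoNCE objective over transition latents. It is an illustration of the general technique only, not the authors' implementation: the names `TransitionEncoder`, `info_nce_loss`, and `latent_dim` are assumptions, and the training loop, architecture, and data handling of the actual method are not reproduced here.

```python
# Hypothetical sketch: CPC/InfoNCE over episode-level latents (not the paper's code).
# Idea: transitions from the same episode share a hidden context, so their latents
# are pulled together; latents of transitions from other episodes are pushed apart.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransitionEncoder(nn.Module):
    """Encodes a (state, action, reward, next_state) tuple into a latent vector."""
    def __init__(self, obs_dim: int, act_dim: int, latent_dim: int = 8):
        super().__init__()
        in_dim = 2 * obs_dim + act_dim + 1
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, s, a, r, s_next):
        return self.net(torch.cat([s, a, r, s_next], dim=-1))

def info_nce_loss(anchor_z, positive_z, negative_z, temperature: float = 0.1):
    """InfoNCE loss. anchor_z, positive_z: (B, D); negative_z: (B, K, D).
    The positive pair comes from the same episode as the anchor."""
    pos_logits = (anchor_z * positive_z).sum(-1, keepdim=True) / temperature     # (B, 1)
    neg_logits = torch.einsum("bd,bkd->bk", anchor_z, negative_z) / temperature  # (B, K)
    logits = torch.cat([pos_logits, neg_logits], dim=1)                          # (B, 1+K)
    labels = torch.zeros(logits.size(0), dtype=torch.long)                       # positive is class 0
    return F.cross_entropy(logits, labels)

if __name__ == "__main__":
    # Toy usage with random tensors standing in for offline-dataset transitions.
    obs_dim, act_dim, B, K = 4, 2, 32, 16
    enc = TransitionEncoder(obs_dim, act_dim)
    anchor = enc(torch.randn(B, obs_dim), torch.randn(B, act_dim),
                 torch.randn(B, 1), torch.randn(B, obs_dim))
    positive = enc(torch.randn(B, obs_dim), torch.randn(B, act_dim),
                   torch.randn(B, 1), torch.randn(B, obs_dim))
    negatives = enc(torch.randn(B, K, obs_dim), torch.randn(B, K, act_dim),
                    torch.randn(B, K, 1), torch.randn(B, K, obs_dim))
    loss = info_nce_loss(anchor, positive, negatives)
    loss.backward()
    print(f"InfoNCE loss: {loss.item():.3f}")
```

In a setting like the one described in the abstract, the learned latent could then be fed to the policy as an estimate of the current (slowly changing) transition and reward functions; how that conditioning and the evaluation-time prediction are done is specific to the paper and not shown here.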

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2405.14114
Document Type :
Working Paper