Boosting Offline Reinforcement Learning via Data Rebalancing
- Publication Year :
- 2022
- Publisher :
- arXiv, 2022.
Abstract
- Offline reinforcement learning (RL) is challenged by the distributional shift between learning policies and datasets. To address this problem, existing works mainly focus on designing sophisticated algorithms that explicitly or implicitly constrain the learned policy to stay close to the behavior policy. The constraint applies not only to well-performing actions but also to inferior ones, which limits the performance upper bound of the learned policy. Instead of aligning the densities of two distributions, aligning their supports gives a relaxed constraint while still avoiding out-of-distribution actions. Therefore, we propose a simple yet effective method to boost offline RL algorithms based on the observation that resampling a dataset keeps the distribution support unchanged. More specifically, we construct a better behavior policy by resampling each transition in an old dataset according to its episodic return. We dub our method ReD (Return-based Data Rebalance), which can be implemented with fewer than 10 lines of code change and adds negligible running time. Extensive experiments demonstrate that ReD is effective at boosting offline RL performance and is orthogonal to decoupling strategies in long-tailed classification. New state-of-the-art results are achieved on the D4RL benchmark.
- Comment: 8 pages, 2 figures
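- The core idea described in the abstract is to resample each transition with a weight tied to the episodic return of the trajectory it belongs to, which changes the density of the dataset but not its support. Below is a minimal illustrative sketch of that idea only, assuming a D4RL-style dictionary dataset of flat arrays; the function name rebalance_dataset, the positive shift of returns, and the with-replacement sampling are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def rebalance_dataset(dataset, rng=None):
    """Resample transitions with probability proportional to episodic return.

    `dataset` is assumed to be a dict of equal-length arrays
    ('observations', 'actions', 'rewards', 'terminals', ...), as in D4RL.
    This is a sketch of return-based rebalancing, not the official ReD code.
    """
    rng = np.random.default_rng() if rng is None else rng
    rewards = np.asarray(dataset["rewards"], dtype=np.float64)
    terminals = np.asarray(dataset["terminals"], dtype=bool)
    n = len(rewards)

    # Assign each transition the return of the episode it belongs to.
    episode_returns = np.empty(n)
    start = 0
    for i in range(n):
        if terminals[i] or i == n - 1:
            episode_returns[start : i + 1] = rewards[start : i + 1].sum()
            start = i + 1

    # Shift returns to be positive (assumed choice), then normalize
    # into sampling probabilities.
    weights = episode_returns - episode_returns.min() + 1e-3
    probs = weights / weights.sum()

    # Sample a new dataset of the same size, with replacement; the support
    # of the original dataset is preserved, only the density changes.
    idx = rng.choice(n, size=n, replace=True, p=probs)
    return {k: np.asarray(v)[idx] for k, v in dataset.items()}
```

- The rebalanced dataset can then be fed to any off-the-shelf offline RL algorithm in place of the original one, which is what makes the approach a drop-in change of only a few lines.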
Details
- Database :
- OpenAIRE
- Accession number :
- edsair.doi.dedup.....cdf9dd96245cc17f938ee20735417911
- Full Text :
- https://doi.org/10.48550/arxiv.2210.09241