
Model-Based Reinforcement Learning with Multi-Task Offline Pretraining

Authors:
Pan, Minting
Zheng, Yitao
Wang, Yunbo
Yang, Xiaokang
Publication Year:
2023

Abstract

Pretraining reinforcement learning (RL) models on offline datasets is a promising way to improve their training efficiency in online tasks, but it is challenging due to the inherent mismatch in dynamics and behaviors across tasks. We present a model-based RL method that learns to transfer potentially useful dynamics and action demonstrations from offline data to a novel task. The main idea is to use world models not only as simulators for behavior learning but also as tools to measure task relevance for both dynamics representation transfer and policy transfer. We build a time-varying, domain-selective distillation loss to generate a set of offline-to-online similarity weights. These weights serve two purposes: (i) adaptively transferring task-agnostic knowledge of physical dynamics to facilitate world model training, and (ii) learning to replay relevant source actions to guide the target policy. We demonstrate the advantages of our approach over state-of-the-art methods on Meta-World and the DeepMind Control Suite.
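Below is a minimal PyTorch sketch of what such similarity-weighted transfer could look like. It is not the authors' implementation: the feature shapes, the softmax-over-batch weighting, the temperature, and every function name are illustrative assumptions reconstructed from the abstract alone.

```python
import torch
import torch.nn.functional as F

def similarity_weights(online_feats, offline_feats, temperature=1.0):
    """Hypothetical offline-to-online similarity weights.

    online_feats:  (B, D) latents from the online (target-task) world model
    offline_feats: (B, D) latents from the pretrained offline world model
    Returns per-sample weights that are larger for offline samples whose
    dynamics look more relevant to the current online task.
    """
    dist = F.mse_loss(online_feats, offline_feats, reduction="none").mean(dim=-1)
    return torch.softmax(-dist / temperature, dim=0)  # shape (B,)

def weighted_distillation_loss(online_feats, offline_feats, weights):
    """Purpose (i), sketched: domain-selective distillation that pulls the
    online representation toward the offline one only as strongly as the
    similarity weights allow."""
    per_sample = F.mse_loss(online_feats, offline_feats.detach(),
                            reduction="none").mean(dim=-1)
    return (weights.detach() * per_sample).sum()

def weighted_action_replay_loss(policy_actions, demo_actions, weights):
    """Purpose (ii), sketched: replay offline source actions to guide the
    target policy, in proportion to each sample's task relevance."""
    per_sample = F.mse_loss(policy_actions, demo_actions,
                            reduction="none").mean(dim=-1)
    return (weights.detach() * per_sample).sum()

# Toy usage with random tensors standing in for the two world models' latents
# and for policy/demonstration actions.
B, D, A = 32, 64, 6
online = torch.randn(B, D, requires_grad=True)
offline = torch.randn(B, D)
w = similarity_weights(online.detach(), offline)
policy_a = torch.randn(B, A, requires_grad=True)
demo_a = torch.randn(B, A)
(weighted_distillation_loss(online, offline, w)
 + weighted_action_replay_loss(policy_a, demo_a, w)).backward()
```

Under these assumptions, recomputing the weights from the current online world model at every update is what would make the distillation loss time-varying: offline samples gain or lose influence as the online representation evolves.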

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2306.03360
Document Type:
Working Paper