
Towards an Information Theoretic Framework of Context-Based Offline Meta-Reinforcement Learning

Authors :
Li, Lanqing
Zhang, Hai
Zhang, Xinyu
Zhu, Shatong
Zhao, Junqiao
Heng, Pheng-Ann
Publication Year :
2024

Abstract

As a marriage between offline RL and meta-RL, the advent of offline meta-reinforcement learning (OMRL) has shown great promise in enabling RL agents to multi-task and adapt quickly while acquiring knowledge safely. Among its paradigms, context-based OMRL (COMRL) aims to learn a universal policy conditioned on effective task representations. In this work, by examining several key milestones in the field of COMRL, we propose to integrate these seemingly independent methodologies into a unified information theoretic framework. Most importantly, we show that pre-existing COMRL algorithms are essentially optimizing the same mutual information objective between the task variable $\boldsymbol{M}$ and its latent representation $\boldsymbol{Z}$ by implementing various approximate bounds. Based on this theoretical insight and the information bottleneck principle, we arrive at a novel algorithm dubbed UNICORN, which exhibits remarkable generalization across a broad spectrum of RL benchmarks, context shift scenarios, data qualities and deep learning architectures, attaining the new state of the art. We believe that our framework could open up avenues for new optimality bounds and COMRL algorithms.

Comment: 20 pages, 8 figures, 5 tables. TLDR: We propose a novel information theoretic framework of the context-based offline meta-RL paradigm, which unifies several mainstream methods and leads to a general and state-of-the-art algorithm called UNICORN
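The shared objective the abstract alludes to can be sketched in math. The symbols $\boldsymbol{M}$ and $\boldsymbol{Z}$ come from the abstract; the context variable $\boldsymbol{X}$, the encoder parameters $\phi$, and the trade-off weight $\beta$ are assumptions of this sketch, not details taken from the record:

```latex
% COMRL methods, per the abstract, maximize the mutual information
% I(Z; M) between the task variable M and its latent representation Z.
% An information-bottleneck-style refinement (sketch only; X is the
% observed context and beta a trade-off weight -- both assumed here):
\max_{\phi} \; I(\boldsymbol{Z}; \boldsymbol{M}) \;-\; \beta \, I(\boldsymbol{Z}; \boldsymbol{X})
```

Intuitively, the first term encourages $\boldsymbol{Z}$ to identify the task, while the bottleneck term discourages it from memorizing context details irrelevant to the task.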

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2402.02429
Document Type :
Working Paper