
Robustifying Reinforcement Learning Policies with $\mathcal{L}_1$ Adaptive Control

Authors:
Cheng, Yikun
Zhao, Pan
Gandhi, Manan
Li, Bo
Theodorou, Evangelos
Hovakimyan, Naira
Publication Year:
2021

Abstract

A reinforcement learning (RL) policy trained in a nominal environment can fail in a new or perturbed environment due to dynamic variations. Existing robust methods try to obtain a single fixed policy that covers all envisioned dynamic-variation scenarios through robust or adversarial training. These methods can yield conservative performance because of their emphasis on the worst case, and they often involve tedious modifications to the training environment. We propose an approach to robustifying a pre-trained non-robust RL policy with $\mathcal{L}_1$ adaptive control. By leveraging the capability of an $\mathcal{L}_1$ control law to quickly estimate and actively compensate for dynamic variations, our approach can significantly improve the robustness of an RL policy trained in a standard (i.e., non-robust) way, either in a simulator or in the real world. Numerical experiments validate the efficacy of the proposed approach.

A significantly extended version of this paper is available at arXiv:2112.01953.
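The abstract describes the general pattern of an $\mathcal{L}_1$ augmentation: a state predictor tracks the measured state, an adaptation law turns the prediction error into an estimate of the dynamic variation, and a low-pass-filtered compensation term is added to the RL action. The full formulation is in the extended paper (arXiv:2112.01953); the sketch below is only a minimal, hypothetical scalar illustration of that pattern. Every concrete choice here is an assumption, not taken from the paper: the toy plant $\dot{x} = x + u + \sigma(t)$, the stand-in `rl_policy`, and the gains `a_s`, `omega`, and `Ts`.

```python
import numpy as np

def rl_policy(x):
    # Stand-in for a pre-trained (non-robust) RL policy network (assumption).
    return -2.0 * x

Ts = 0.01      # sample period (assumed)
a_s = -10.0    # Hurwitz predictor gain (assumed)
omega = 20.0   # low-pass filter bandwidth of the L1 control law (assumed)

x, x_hat, sigma_hat, u_ad = 0.5, 0.5, 0.0, 0.0

for k in range(2000):
    # 1) Prediction error between state predictor and measured state.
    x_tilde = x_hat - x

    # 2) Piecewise-constant adaptation law (scalar form):
    #    sigma_hat = -(exp(a_s*Ts) - 1)^{-1} * a_s * exp(a_s*Ts) * x_tilde
    phi = (np.exp(a_s * Ts) - 1.0) / a_s
    sigma_hat = -(np.exp(a_s * Ts) / phi) * x_tilde

    # 3) Compensation = low-pass-filtered negative of the estimate
    #    (first-order filter with bandwidth omega).
    u_ad += Ts * omega * (-sigma_hat - u_ad)

    # 4) Total input: pre-trained RL action plus L1 augmentation.
    u = rl_policy(x) + u_ad

    # 5) Propagate predictor and (simulated) toy plant one Euler step.
    x_hat += Ts * (x + u + sigma_hat + a_s * x_tilde)
    sigma = 0.8 * np.sin(k * Ts)   # unknown variation, unseen in training
    x += Ts * (x + u + sigma)
```

In this sketch the RL policy is never retrained: the augmentation only estimates the mismatch between the nominal and perturbed dynamics and cancels the low-frequency part of it, which is what lets a standardly trained policy tolerate the variation.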

Details

Language:
English
Database:
OpenAIRE
Accession number:
edsair.doi.dedup.....1a1354dab5b176d88facd58acb135c2f