
Distributionally Robust Constrained Reinforcement Learning under Strong Duality

Authors :
Zhang, Zhengfei
Panaganti, Kishan
Shi, Laixi
Sui, Yanan
Wierman, Adam
Yue, Yisong
Publication Year :
2024

Abstract

We study the problem of Distributionally Robust Constrained RL (DRC-RL), where the goal is to maximize the expected reward subject to environmental distribution shifts and constraints. This setting captures situations where training and testing environments differ, and policies must satisfy constraints motivated by safety or limited budgets. Despite significant progress toward algorithm design for the separate problems of distributionally robust RL and constrained RL, there do not yet exist algorithms with end-to-end convergence guarantees for DRC-RL. We develop an algorithmic framework based on strong duality that enables the first efficient and provable solution in a class of environmental uncertainties. Further, our framework exposes an inherent structure of DRC-RL that arises from the combination of distributional robustness and constraints, which prevents a popular class of iterative methods from tractably solving DRC-RL, despite such frameworks being applicable for each of distributionally robust RL and constrained RL individually. Finally, we conduct experiments on a car racing benchmark to evaluate the effectiveness of the proposed algorithm.

Comment: Accepted at the Reinforcement Learning Conference (RLC) 2024; 28 pages, 4 figures
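As a rough illustration of the problem setting the abstract describes, a DRC-RL objective can be sketched as below. This is an assumed formulation using standard robust-constrained-MDP notation (policy $\pi$, uncertainty set $\mathcal{P}$ of transition kernels, reward $r$, constraint costs $c_i$ with thresholds $b_i$, discount $\gamma$); the paper's exact formulation may differ.

```latex
% Sketch of a DRC-RL objective (assumed notation, not taken from the paper):
% maximize worst-case discounted reward over an uncertainty set of dynamics,
% subject to worst-case discounted constraint values meeting their thresholds.
\max_{\pi} \; \min_{P \in \mathcal{P}} \;
  \mathbb{E}_{\pi, P}\!\Big[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\Big]
\quad \text{s.t.} \quad
\min_{P \in \mathcal{P}} \;
  \mathbb{E}_{\pi, P}\!\Big[\sum_{t=0}^{\infty} \gamma^{t}\, c_i(s_t, a_t)\Big]
  \;\ge\; b_i, \qquad i = 1, \dots, m.
```

Strong duality, as referenced in the abstract, would then concern the Lagrangian relaxation of the constraints in this saddle-point problem.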

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2406.15788
Document Type :
Working Paper