
Natural Policy Gradient and Actor Critic Methods for Constrained Multi-Task Reinforcement Learning

Authors:
Zeng, Sihan
Doan, Thinh T.
Romberg, Justin
Publication Year:
2024

Abstract

Multi-task reinforcement learning (RL) aims to find a single policy that effectively solves multiple tasks at the same time. This paper presents a constrained formulation for multi-task RL in which the goal is to maximize the average performance of the policy across tasks subject to bounds on its performance in each individual task. We consider solving this problem both in the centralized setting, where information for all tasks is accessible to a single server, and in the decentralized setting, where a network of agents, each given one task and observing only local information, cooperate to find the solution of the globally constrained objective using local communication. We first propose a primal-dual algorithm that provably converges to the globally optimal solution of this constrained formulation under exact gradient evaluations. When the gradients are unknown, we further develop a sample-based actor-critic algorithm that finds the optimal policy using online samples of states, actions, and rewards. Finally, we study the extension of the algorithm to the linear function approximation setting.
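The constrained formulation in the abstract can be read as: maximize (1/N) Σ_i J_i(π) over policies π, subject to J_i(π) ≥ b_i for every task i, where J_i is the performance on task i and b_i a per-task bound. The sketch below illustrates a generic primal-dual scheme of this kind under exact gradients, with toy concave objectives standing in for actual policy performance; the objectives J_i, bounds b_i, and step sizes are illustrative assumptions, not the paper's construction.

```python
# Hypothetical toy sketch of a primal-dual method for a constrained
# multi-task objective: maximize the average of J_i(theta) subject to
# J_i(theta) >= b_i for each task i. Not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)
num_tasks, dim = 3, 2
targets = rng.normal(size=(num_tasks, dim))  # each task prefers its own theta
bounds = np.full(num_tasks, -2.0)            # per-task performance floors b_i

def J(theta):
    # Per-task performance: concave, maximized at each task's target.
    return -np.sum((theta - targets) ** 2, axis=1)

def grad_J(theta):
    # Gradient of each J_i w.r.t. theta, shape (num_tasks, dim).
    return -2.0 * (theta - targets)

theta = np.zeros(dim)
lam = np.zeros(num_tasks)  # dual variables, kept nonnegative
eta_theta, eta_lam = 0.05, 0.05

for _ in range(2000):
    g = grad_J(theta)
    # Primal ascent on the Lagrangian: gradient of the average objective
    # plus the lambda-weighted constraint gradients.
    theta += eta_theta * (g.mean(axis=0) + lam @ g)
    # Dual descent on lambda, projected onto the nonnegative orthant;
    # lambda_i grows while constraint i is violated (J_i < b_i).
    lam = np.maximum(0.0, lam - eta_lam * (J(theta) - bounds))

print("theta:", theta)
print("per-task performance J_i:", J(theta))
print("all constraints J_i >= b_i met:", np.all(J(theta) >= bounds - 1e-3))
```

On this toy problem the iterates settle near a point that trades off the average objective against the per-task floors, which mirrors the role the dual variables play in the constrained formulation: each λ_i re-weights its task exactly when that task's bound is at risk of being violated.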

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2405.02456
Document Type:
Working Paper