
On Distributed Model-Free Reinforcement Learning Control With Stability Guarantee

Authors :
Sayak Mukherjee
Thanh Long Vu
Source :
ACC
Publication Year :
2021
Publisher :
Institute of Electrical and Electronics Engineers (IEEE), 2021.

Abstract

Distributed learning can enable scalable and effective decision making in numerous complex cyber-physical systems such as smart transportation, robot swarms, and power systems. However, most existing learning paradigms do not guarantee stability of the controlled system, and this limitation can hinder the wide deployment of machine learning in the decision making of safety-critical systems. This letter presents a stability-guaranteed distributed reinforcement learning (SGDRL) framework for interconnected linear subsystems that does not require knowledge of the subsystem models. While the learning process requires data exchanged over a peer-to-peer (p2p) communication architecture, the control implementation of each subsystem relies only on its local states. Stability of the interconnected subsystems is ensured by a diagonally dominant eigenvalue condition, which is then embedded in a model-free RL algorithm that learns the stabilizing control gains. The RL algorithm follows an off-policy iterative framework with interleaved policy evaluation and policy update steps. We numerically validate the theoretical results through simulations on four interconnected subsystems.
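
The interleaved policy evaluation / policy update structure mentioned in the abstract mirrors classical policy iteration for linear-quadratic control. The sketch below is a minimal, model-based Kleinman-style policy iteration on a small two-state linear system, included only to illustrate that loop structure; the paper's actual contribution is a model-free, distributed, data-driven variant, so the system matrices, initial gain, and scipy-based Lyapunov solves here are illustrative assumptions, not the authors' code.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Illustrative two-state linear system (hypothetical, weakly coupled
# subsystems stacked into one model); open-loop stable so K = 0 is a
# valid stabilizing initial policy.
A = np.array([[-1.0, 0.2],
              [0.2, -1.5]])
B = np.eye(2)
Q = np.eye(2)   # state cost
R = np.eye(2)   # control cost

K = np.zeros((2, 2))  # initial stabilizing gain
for _ in range(20):
    Ak = A - B @ K
    # Policy evaluation: solve the Lyapunov equation
    #   Ak' P + P Ak + Q + K' R K = 0   for the current policy K
    P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
    # Policy update: K <- R^{-1} B' P
    K_new = np.linalg.solve(R, B.T @ P)
    if np.linalg.norm(K_new - K) < 1e-9:
        K = K_new
        break
    K = K_new

# Sanity check: the iteration converges to the LQR gain from the
# algebraic Riccati equation.
P_star = solve_continuous_are(A, B, Q, R)
K_star = np.linalg.solve(R, B.T @ P_star)
print(np.allclose(K, K_star, atol=1e-6))  # expected: True

In the model-free setting described by the letter, the Lyapunov solve above would be replaced by a least-squares policy-evaluation step driven by trajectory data collected over the p2p communication network, while each subsystem's implemented controller still uses only its local state.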

Details

ISSN :
2475-1456
Volume :
5
Database :
OpenAIRE
Journal :
IEEE Control Systems Letters
Accession number :
edsair.doi.dedup.....8ed1f7ee04d44dc32a028ad165a2a552
Full Text :
https://doi.org/10.1109/lcsys.2020.3041218