
Stabilizing Dynamical Systems via Policy Gradient Methods

Authors:
Perdomo, Juan C.
Umenberger, Jack
Simchowitz, Max
Publication Year:
2021

Abstract

Stabilizing an unknown control system is one of the most fundamental problems in control systems engineering. In this paper, we provide a simple, model-free algorithm for stabilizing fully observed dynamical systems. While model-free methods have become increasingly popular in practice due to their simplicity and flexibility, stabilization via direct policy search has received surprisingly little attention. Our algorithm proceeds by solving a series of discounted LQR problems, where the discount factor is gradually increased. We prove that this method efficiently recovers a stabilizing controller for linear systems, and for smooth, nonlinear systems within a neighborhood of their equilibria. Our approach overcomes a significant limitation of prior work, namely the need for a pre-given stabilizing control policy. We empirically evaluate the effectiveness of our approach on common control benchmarks.

Comment: Accepted for publication at NeurIPS 2021.
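The abstract describes the algorithm at a high level: repeatedly solve a discounted LQR problem by direct policy search, then increase the discount factor and warm-start from the previous gain. The following is a minimal Python sketch of that discount-annealing idea, not the authors' implementation: the zeroth-order gradient estimator, the `gamma0`/`growth` schedule, the rollout horizon, and all step sizes are illustrative assumptions that would need tuning in practice.

```python
import numpy as np

def rollout_cost(K, A, B, Q, R, gamma, x0, horizon=50):
    """Finite-horizon estimate of the discounted LQR cost under u = -K x."""
    x, cost = x0.copy(), 0.0
    for t in range(horizon):
        u = -K @ x
        cost += gamma**t * (x @ Q @ x + u @ R @ u)
        x = A @ x + B @ u
    return cost

def policy_gradient_discounted_lqr(K, A, B, Q, R, gamma,
                                   n_iters=200, n_samples=20,
                                   radius=0.05, lr=1e-3, rng=None):
    """Zeroth-order (random-search) policy gradient for one discounted LQR problem.
    Only rollouts are used; the dynamics matrices are treated as a black box."""
    rng = rng or np.random.default_rng(0)
    n = B.shape[0]
    for _ in range(n_iters):
        grad = np.zeros_like(K)
        for _ in range(n_samples):
            U = rng.standard_normal(K.shape)       # random perturbation direction
            x0 = rng.standard_normal(n)            # random initial state
            c_plus = rollout_cost(K + radius * U, A, B, Q, R, gamma, x0)
            c_minus = rollout_cost(K - radius * U, A, B, Q, R, gamma, x0)
            grad += (c_plus - c_minus) / (2 * radius) * U
        K = K - lr * grad / n_samples
    return K

def discount_annealed_stabilization(A, B, Q, R, gamma0=0.1, growth=1.2):
    """Outer loop: solve a sequence of discounted LQR problems with increasing
    discount factor, warm-starting each stage from the previous controller.
    gamma0 should be small enough that K = 0 yields finite discounted cost."""
    n, m = B.shape
    K = np.zeros((m, n))
    gamma = gamma0
    while True:
        K = policy_gradient_discounted_lqr(K, A, B, Q, R, gamma)
        if gamma >= 1.0:
            break
        gamma = min(1.0, growth * gamma)
    return K
```

A hypothetical usage on a marginally unstable double integrator, checking the closed-loop spectral radius at the end:

```python
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
K = discount_annealed_stabilization(A, B, Q, R)
print("spectral radius of A - BK:", max(abs(np.linalg.eigvals(A - B @ K))))
```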

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2110.06418
Document Type:
Working Paper