101. MPC-Inspired Reinforcement Learning for Verifiable Model-Free Control
- Authors
- Lu, Yiwen; Li, Zishuo; Zhou, Yihan; Li, Na; and Mo, Yilin
- Subjects
- Electrical Engineering and Systems Science - Systems and Control; Computer Science - Machine Learning; Computer Science - Robotics; Mathematics - Optimization and Control
- Abstract
In this paper, we introduce a new class of parameterized controllers, drawing inspiration from Model Predictive Control (MPC). The controller resembles a Quadratic Programming (QP) solver for a linear MPC problem, with the parameters of the controller trained via Deep Reinforcement Learning (DRL) rather than derived from system models. This approach addresses the limitations, in terms of verifiability and performance guarantees, of common DRL controllers built on Multi-Layer Perceptrons (MLPs) or other general neural network architectures: the learned controllers possess verifiable properties, such as persistent feasibility and asymptotic stability, akin to MPC. Moreover, numerical examples illustrate that the proposed controller empirically matches MPC and MLP controllers in control performance while exhibiting superior robustness against modeling uncertainty and noise. The proposed controller is also significantly more computationally efficient than MPC and requires fewer parameters to learn than MLP controllers. Real-world experiments on a vehicle drift maneuvering task demonstrate the potential of these controllers for robotics and other demanding control tasks.
- Published
- 2023
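The abstract describes a controller whose forward pass resembles a QP solver with learned, rather than model-derived, parameters. The following is a minimal illustrative sketch of that idea, not the paper's actual algorithm: a box-constrained QP policy solved by projected gradient descent, where the quadratic weight `H` and linear weight `F` are placeholders for quantities that would be trained by DRL.

```python
import numpy as np

def qp_policy(x, H, F, u_max=1.0, steps=200, lr=0.1):
    """Compute the control input by solving the QP
        min_u  0.5 * u' H u + (F x)' u   s.t.  |u_i| <= u_max
    via projected gradient descent (an illustrative stand-in for
    the structured QP-solver-like forward pass the paper describes).
    H and F are assumed to be learned parameters, not model-derived."""
    u = np.zeros(H.shape[0])
    for _ in range(steps):
        grad = H @ u + F @ x                        # gradient of the objective
        u = np.clip(u - lr * grad, -u_max, u_max)   # project onto box constraint
    return u

# Hypothetical example: 2-dim state, 1-dim input, hand-picked "learned" weights.
H = np.array([[2.0]])        # learned quadratic weight (must be positive definite)
F = np.array([[1.0, 0.5]])   # learned state-feedback weight
x = np.array([0.8, -0.4])
u = qp_policy(x, H, F)       # converges to the constrained QP minimizer
```

In this toy instance the unconstrained minimizer -(F x)/H = -0.3 already lies inside the box, so the projection is inactive; verifiability arguments of the kind the paper mentions (feasibility, stability) hinge on the constrained QP structure being preserved regardless of the learned parameter values.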