
Off-policy reinforcement learning-based novel model-free minmax fault-tolerant tracking control for industrial processes

Authors :
Li, Xueyu
Luo, Qiuwen
Wang, Limin
Zhang, Ridong
Gao, Furong
Publication Year :
2022

Abstract

For industrial processes subject to external disturbances and actuator failures, a novel off-policy reinforcement learning-based model-free minmax fault-tolerant control method is proposed in this paper to solve the H∞ fault-tolerant tracking control problem. An augmented model equivalent to the original system is constructed, whose state consists of the state increment and the tracking error of the original system. The original H∞ fault-tolerant tracking problem is transformed into a linear quadratic zero-sum game problem by establishing a performance index function, and the corresponding Game Algebraic Riccati Equation (GARE) is derived. A Q-function is then introduced and an off-policy reinforcement learning algorithm is designed. Unlike traditional model-based fault-tolerant control methods, the proposed algorithm does not require knowledge of the system dynamics; it learns from measured data along the system trajectory to solve the GARE. In addition, it is proved that the probing noise added to satisfy the persistent excitation condition does not introduce bias. A simulation example of an injection molding process is used to verify the effectiveness of the proposed algorithm. © 2022 Elsevier Ltd
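The abstract describes the generic pattern of off-policy Q-learning for linear quadratic zero-sum games: evaluate a quadratic Q-function from trajectory data collected under a behavior policy with probing noise, then improve both the control and worst-case disturbance policies from the saddle point of that Q-function. The sketch below is not the paper's algorithm; it is a minimal illustration of that generic technique on a hypothetical 2-state system, and every matrix, gain, and noise level in it is an assumption made for the example.

```python
import numpy as np

# All matrices below are illustrative assumptions, not taken from the paper.
rng = np.random.default_rng(0)
A = np.array([[0.95, 0.10], [0.00, 0.90]])  # stable open-loop dynamics
B = np.array([[0.0], [0.1]])                # control input channel
E = np.array([[0.05], [0.0]])               # disturbance channel
Qc = np.eye(2)                              # state weighting
Rc = np.eye(1)                              # control weighting
gamma2 = 5.0                                # squared H-infinity attenuation level
n, m, q = 2, 1, 1

def quad_basis(z):
    """Basis so that z' H z = theta . quad_basis(z) for symmetric H."""
    iu = np.triu_indices(len(z))
    outer = np.outer(z, z)[iu]
    return outer * np.where(iu[0] == iu[1], 1.0, 2.0)  # off-diagonals appear twice

K = np.zeros((m, n))  # control gain, target policy u = -K x
L = np.zeros((q, n))  # disturbance gain, target policy w =  L x

for it in range(30):
    rows, rhs = [], []
    x = np.array([1.0, -1.0])
    for k in range(300):
        # Behavior policy: target policy plus probing noise (persistent excitation).
        # The applied (u, w) enter the regression directly, so the probing noise
        # does not bias the Q-function estimate.
        u = -K @ x + 0.5 * rng.standard_normal(m)
        w = L @ x + 0.2 * rng.standard_normal(q)
        x1 = A @ x + B @ u + E @ w
        # Off-policy Bellman equation, with *target* actions at the next state:
        # Q(x,u,w) - Q(x1, -K x1, L x1) = x'Qc x + u'Rc u - gamma2 * w'w
        z = np.concatenate([x, u, w])
        z1 = np.concatenate([x1, -K @ x1, L @ x1])
        rows.append(quad_basis(z) - quad_basis(z1))
        rhs.append(x @ Qc @ x + u @ Rc @ u - gamma2 * (w @ w))
        x = x1
    # Least-squares estimate of the Q-function kernel H from measured data only;
    # no knowledge of A, B, E is used in this step.
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    H = np.zeros((n + m + q,) * 2)
    H[np.triu_indices(n + m + q)] = theta
    H = H + H.T - np.diag(np.diag(H))   # recover the symmetric kernel

    # Policy improvement: saddle point of the quadratic Q-function in (u, w).
    Hux, Hwx = H[n:n + m, :n], H[n + m:, :n]
    Huu, Huw, Hww = H[n:n + m, n:n + m], H[n:n + m, n + m:], H[n + m:, n + m:]
    K_new = np.linalg.solve(Huu - Huw @ np.linalg.solve(Hww, Huw.T),
                            Hux - Huw @ np.linalg.solve(Hww, Hwx))
    L_new = -np.linalg.solve(Hww - Huw.T @ np.linalg.solve(Huu, Huw),
                             Hwx - Huw.T @ np.linalg.solve(Huu, Hux))
    if np.allclose(K_new, K, atol=1e-6) and np.allclose(L_new, L, atol=1e-6):
        break
    K, L = K_new, L_new

print("control gain K:", K)
print("worst-case disturbance gain L:", L)
```

At convergence the kernel H corresponds to the GARE solution for this toy game, so the sketch mirrors, at a high level, the data-driven GARE solution the abstract claims; the paper's actual formulation additionally builds the augmented state from the state increment and tracking error and accounts for actuator faults, neither of which is modeled here.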

Details

Database :
OAIster
Notes :
English
Publication Type :
Electronic Resource
Accession number :
edsoai.on1363076272
Document Type :
Electronic Resource