
Learning‐based T‐sHDP(λ) for optimal control of a class of nonlinear discrete‐time systems.

Authors :
Yu, Luyang
Liu, Weibo
Liu, Yurong
Alsaadi, Fawaz E.
Source :
International Journal of Robust & Nonlinear Control. 3/25/2022, Vol. 32 Issue 5, p2624-2643. 20p.
Publication Year :
2022

Abstract

This article investigates the optimal control problem via reinforcement learning for a class of nonlinear discrete‐time systems. The nonlinear system under consideration is assumed to be partially unknown. A new learning‐based algorithm, T‐step heuristic dynamic programming with eligibility traces (T‐sHDP(λ)), is proposed to tackle the optimal control problem for such a partially unknown system. First, the optimal control problem is transformed into an equivalent problem: solving a Bellman equation. Then, T‐sHDP(λ) is used to obtain an approximate solution of the Bellman equation, and a rigorous convergence analysis is conducted. Instead of the commonly used single‐step update approach, T‐sHDP(λ) stores a finite number of past returns via a trace parameter and uses this knowledge to update the value function (VF) at multiple time steps synchronously, achieving a higher convergence speed. For the implementation of T‐sHDP(λ), a neural network‐based actor‐critic architecture is applied to approximate the VF and the optimal control scheme. Finally, the feasibility of the algorithm is demonstrated by two illustrative simulation examples. [ABSTRACT FROM AUTHOR]
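The abstract's key mechanism — eligibility traces letting one temporal-difference error update the value function at multiple past time steps at once — can be illustrated with a minimal tabular TD(λ) sketch. This is only a generic illustration of the trace idea, not the authors' T‐sHDP(λ) algorithm (which uses T-step returns and a neural actor–critic); the states, rewards, and parameters below are hypothetical.

```python
import numpy as np

def td_lambda_update(V, trajectory, alpha=0.1, gamma=0.9, lam=0.8):
    """Update the value table V in place over one episode.

    trajectory: list of (state, reward, next_state) index tuples.
    An eligibility trace lets a single TD error propagate credit to
    all recently visited states simultaneously, which is the general
    mechanism behind the faster convergence the abstract attributes
    to multi-step (lambda) returns.
    """
    e = np.zeros_like(V)                        # eligibility trace per state
    for s, r, s_next in trajectory:
        delta = r + gamma * V[s_next] - V[s]    # one-step TD error
        e[s] += 1.0                             # accumulating trace
        V += alpha * delta * e                  # update every traced state
        e *= gamma * lam                        # decay all traces
    return V

# Hypothetical 3-state chain: a reward arrives only on the second step,
# yet the trace also pushes credit back to the first state visited.
V = td_lambda_update(np.zeros(3), [(0, 0.0, 1), (1, 1.0, 2)])
```

With a one-step update, only the state immediately preceding the reward would change; with the trace, the earlier state's value rises in the same sweep — the "multiple moments synchronously" behavior described above.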

Details

Language :
English
ISSN :
1049-8923
Volume :
32
Issue :
5
Database :
Academic Search Index
Journal :
International Journal of Robust & Nonlinear Control
Publication Type :
Academic Journal
Accession number :
155325326
Full Text :
https://doi.org/10.1002/rnc.5847