
Reinforcement learning for inverse linear-quadratic dynamic non-cooperative games.

Authors :
Martirosyan, E.
Cao, M.
Source :
Systems & Control Letters, Vol. 191, Sep 2024.
Publication Year :
2024

Abstract

The paper addresses the inverse problem for linear-quadratic discrete-time dynamic non-cooperative games. We consider a game, referred to as the observed game, in which some cost function parameters are unknown but a set of feedback laws constituting a Nash equilibrium is known. The inverse problem is to find values of the cost function parameters that, together with the observed game dynamics, form a new game equivalent to the observed one in the sense that it has the same Nash equilibrium. We present a model-based algorithm to solve this problem, prove its convergence, and show that the given set of feedback laws is a Nash equilibrium for the designed game. We also demonstrate how to generate new games with the required properties without repeatedly running the complete algorithm. Moreover, the model-based algorithm is extended to a model-free version that operates without knowledge of the system matrices, relying instead on the ability to collect sufficient data. Simulation results validate the effectiveness of the proposed algorithms.
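The record itself contains no formulas; the following is a minimal sketch of the standard discrete-time linear-quadratic game setup the abstract refers to, with the notation (A, B_i, Q_i, R_i, K_i) chosen here for illustration rather than taken from the paper. The dynamics and the i-th player's cost are typically written as

\[
x_{k+1} = A x_k + \sum_{i=1}^{N} B_i u_k^i ,
\qquad
J_i = \sum_{k=0}^{\infty} \Bigl( x_k^\top Q_i x_k + (u_k^i)^\top R_i u_k^i \Bigr),
\]

with linear feedback laws \( u_k^i = -K_i x_k \). The gains \( (K_1,\dots,K_N) \) form a (feedback) Nash equilibrium when no player can lower its own cost by unilaterally changing its gain:

\[
J_i(K_1,\dots,K_i,\dots,K_N) \;\le\; J_i(K_1,\dots,K_i',\dots,K_N)
\quad \text{for all admissible } K_i' .
\]

In these terms, the inverse problem described in the abstract is: given the dynamics \( (A, B_1,\dots,B_N) \) (or, in the model-free setting, data generated by them) and the equilibrium gains \( (K_1,\dots,K_N) \), find cost parameters \( (\tilde Q_i, \tilde R_i) \) such that the same gains remain a Nash equilibrium of the resulting game.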

Details

Language :
English
ISSN :
0167-6911
Volume :
191
Database :
Academic Search Index
Journal :
Systems & Control Letters
Publication Type :
Academic Journal
Accession number :
178939415
Full Text :
https://doi.org/10.1016/j.sysconle.2024.105883