
No-Regret Distributed Learning in Subnetwork Zero-Sum Games

Authors :
Huang, Shijie
Lei, Jinlong
Hong, Yiguang
Shanbhag, Uday V.
Chen, Jie
Source :
IEEE Transactions on Automatic Control; October 2024, Vol. 69, Issue 10, pp. 6620-6635, 16 pp.
Publication Year :
2024

Abstract

In this article, we consider a distributed learning problem in a subnetwork zero-sum game, where agents compete across different subnetworks. The agents are connected through time-varying graphs, each agent has its own cost function, and each can receive information from its neighbors. We propose a distributed mirror descent algorithm for computing a Nash equilibrium and establish a sublinear regret bound on the sequence of iterates when the graphs are uniformly strongly connected and the cost functions are convex–concave. Moreover, we prove convergence of the algorithm with suitably selected diminishing step sizes when the cost functions are strictly convex–concave. We also consider a constant step-size variant of the algorithm and establish an asymptotic error bound between the cost-function values of the running average actions and those at a Nash equilibrium. In addition, we apply the algorithm to compute a mixed-strategy Nash equilibrium in subnetwork zero-sum finite-strategy games, whose cost functions are merely convex–concave (specifically, multilinear), and obtain a final-iteration convergence result and an ergodic convergence result under different assumptions.
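As a rough point of reference only, the sketch below illustrates a generic entropic mirror descent iteration with diminishing step sizes and ergodic (running-average) strategies for a centralized two-player zero-sum matrix game. It is not the authors' distributed subnetwork algorithm; the payoff matrix, the 1/sqrt(t) step-size schedule, and the helper name entropic_mirror_descent_zero_sum are illustrative assumptions made here.

```python
import numpy as np

def entropic_mirror_descent_zero_sum(A, steps=5000):
    """Illustrative (not the paper's) entropic mirror descent for a
    two-player zero-sum matrix game with payoff matrix A:
    the row player minimizes x^T A y, the column player maximizes it.
    Returns the time-averaged (ergodic) mixed strategies."""
    m, n = A.shape
    x = np.ones(m) / m          # row player's mixed strategy on the simplex
    y = np.ones(n) / n          # column player's mixed strategy on the simplex
    x_avg = np.zeros(m)
    y_avg = np.zeros(n)
    for t in range(1, steps + 1):
        eta = 1.0 / np.sqrt(t)  # diminishing step size (assumed schedule)
        gx = A @ y              # gradient of x^T A y with respect to x
        gy = A.T @ x            # gradient of x^T A y with respect to y
        # Entropic mirror (multiplicative-weights) updates on the simplex.
        x = x * np.exp(-eta * gx); x /= x.sum()   # minimizer: descent step
        y = y * np.exp(eta * gy);  y /= y.sum()   # maximizer: ascent step
        x_avg += x
        y_avg += y
    return x_avg / steps, y_avg / steps

if __name__ == "__main__":
    # Matching pennies: the unique mixed Nash equilibrium is (1/2, 1/2) for both players.
    A = np.array([[1.0, -1.0], [-1.0, 1.0]])
    x, y = entropic_mirror_descent_zero_sum(A)
    print("row strategy:", x, "column strategy:", y)
```

The running averages in this sketch correspond to the ergodic-convergence viewpoint mentioned in the abstract; the distributed, subnetwork, and time-varying-graph aspects of the paper are not represented here.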

Details

Language :
English
ISSN :
0018-9286 and 1558-2523
Volume :
69
Issue :
10
Database :
Supplemental Index
Journal :
IEEE Transactions on Automatic Control
Publication Type :
Periodical
Accession number :
ejs67506277
Full Text :
https://doi.org/10.1109/TAC.2024.3379253