
Noise-Regularized Advantage Value for Multi-Agent Reinforcement Learning.

Authors :
Wang, Siying
Chen, Wenyu
Hu, Jian
Hu, Siyue
Huang, Liwei
Source :
Mathematics (2227-7390). Aug2022, Vol. 10 Issue 15, p2728-2728. 15p.
Publication Year :
2022

Abstract

Leveraging global state information to enhance policy optimization is a common approach in multi-agent reinforcement learning (MARL). Even with this additional state information, agents still suffer from insufficient exploration during training. Moreover, training on batch-sampled examples from the replay buffer induces a policy overfitting problem: multi-agent proximal policy optimization (MAPPO) may not perform as well as independent PPO (IPPO), even with the additional information available to its centralized critic. In this paper, we propose a novel noise-injection method that regularizes the agents' policies and mitigates this overfitting. We analyze the cause of policy overfitting in actor–critic MARL and design two specific patterns of injecting random Gaussian noise into the advantage function to stabilize training and enhance performance. Experimental results on the Matrix Game and StarCraft II show the higher training efficiency and superior performance of our method, and ablation studies indicate that our method maintains higher entropy in the agents' policies during training, which leads to more exploration. [ABSTRACT FROM AUTHOR]
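The abstract does not give the exact update rule, so the following is only a minimal sketch of the general idea it describes: perturbing advantage estimates with zero-mean Gaussian noise before a PPO-style clipped policy update. The function names, the noise scale noise_std, and the clipping constant clip_eps are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def noise_regularized_advantages(advantages, noise_std=0.1, rng=None):
        """Add zero-mean Gaussian noise to advantage estimates.

        Sketch only: the paper describes two specific noise-injection
        patterns whose exact forms are not given in the abstract, so this
        simply perturbs every advantage value independently.
        """
        rng = np.random.default_rng() if rng is None else rng
        noise = rng.normal(loc=0.0, scale=noise_std, size=advantages.shape)
        return advantages + noise

    def ppo_clip_loss(ratio, advantages, clip_eps=0.2):
        """Standard PPO clipped surrogate objective (to be maximized)."""
        unclipped = ratio * advantages
        clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
        return np.minimum(unclipped, clipped).mean()

    if __name__ == "__main__":
        # Usage: perturb a batch of advantages before computing the surrogate.
        rng = np.random.default_rng(0)
        advantages = rng.normal(size=256)                   # advantage estimates
        ratio = np.exp(rng.normal(scale=0.05, size=256))    # pi_new / pi_old
        noisy_adv = noise_regularized_advantages(advantages, noise_std=0.1, rng=rng)
        print("surrogate objective:", ppo_clip_loss(ratio, noisy_adv))

Because the injected noise has zero mean, it leaves the expected advantage unchanged while flattening the policy update, which is consistent with the abstract's observation that the method keeps policy entropy higher during training.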

Details

Language :
English
ISSN :
22277390
Volume :
10
Issue :
15
Database :
Academic Search Index
Journal :
Mathematics (2227-7390)
Publication Type :
Academic Journal
Accession number :
158519443
Full Text :
https://doi.org/10.3390/math10152728