
A collaboration of multi-agent model using an interactive interface.

Authors :
Li, Jingchen
Wu, Fan
Shi, Haobin
Hwang, Kao-Shing
Source :
Information Sciences. Sep 2022, Vol. 611, p349-363. 15p.
Publication Year :
2022

Abstract

Highlights:
• Investigates the effect of noise in multi-agent reinforcement learning.
• Leverages an interactive interface to generate consensus.
• Trains the collaboration policy and behavior policy through a temporal abstraction mechanism.
• Improves sampling by computing sequence priorities.

Multi-agent reinforcement learning algorithms rarely address noisy environments, in which agents are prevented from training optimal policies and making correct decisions. This work investigates the effect of noise in multi-agent environments and proposes a multi-agent actor-critic with collaboration (MACC) model. The model uses lightweight communication to overcome interference from noise. Each agent in MACC has two policies: a collaboration policy and a behavior policy. An agent's behavior depends not only on its own state but is also influenced by every other agent through a scalar collaboration value. The collaboration value is generated by each agent's collaboration policy and ensures a succinct consensus about the environment. The paper elaborates on the training of the collaboration policy and specifies how it coordinates the behavior policy through a temporal abstraction mechanism, while the observation sequence is considered for more accurate perception. Experiments on multi-agent collaboration simulation platforms demonstrate that MACC outperforms baselines in noisy environments, especially partially observable ones. [ABSTRACT FROM AUTHOR]
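The two-policy structure described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the class name, the toy scalar computation, and the deterministic action mapping are all assumptions standing in for trained networks. It shows only the communication pattern — each agent's collaboration policy broadcasts one scalar, and each behavior policy conditions on the agent's own observation plus the peers' scalars.

```python
class Agent:
    """Hypothetical MACC-style agent (illustrative only): a collaboration
    policy compresses the local observation into one scalar consensus
    value, and a behavior policy conditions on the agent's own
    observation plus the collaboration values broadcast by peers."""

    def __init__(self, n_actions):
        self.n_actions = n_actions

    def collaboration_value(self, observation):
        # Collaboration policy: toy placeholder for a learned network
        # that compresses the observation into a single scalar.
        return sum(observation) / len(observation)

    def act(self, observation, peer_values):
        # Behavior policy: combine the agent's own observation with the
        # succinct consensus formed from the peers' scalar values.
        consensus = sum(peer_values) / len(peer_values) if peer_values else 0.0
        score = sum(observation) + consensus
        # Toy deterministic mapping of the score to a discrete action.
        return int(abs(score)) % self.n_actions


agents = [Agent(n_actions=4) for _ in range(3)]
observations = [[0.2, 0.4], [1.0, 0.5], [0.3, 0.3]]

# One lightweight communication round: each agent broadcasts its scalar.
values = [a.collaboration_value(o) for a, o in zip(agents, observations)]

# Each agent acts on its own observation plus the other agents' scalars.
actions = [
    a.act(o, [v for j, v in enumerate(values) if j != i])
    for i, (a, o) in enumerate(zip(agents, observations))
]
print(actions)
```

Note that only one scalar per agent crosses the communication channel, which is what keeps the consensus "succinct" in the abstract's terms; the paper's temporal abstraction and sequence-priority sampling are not modeled here.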

Details

Language :
English
ISSN :
0020-0255
Volume :
611
Database :
Academic Search Index
Journal :
Information Sciences
Publication Type :
Periodical
Accession number :
159431826
Full Text :
https://doi.org/10.1016/j.ins.2022.07.052