
Multi-Agent Coordination in Adversarial Environments through Signal Mediated Strategies

Authors :
Cacciamani, Federico
Celli, Andrea
Ciccone, Marco
Gatti, Nicola
Publication Year :
2021

Abstract

Many real-world scenarios involve teams of agents that must coordinate their actions to reach a shared goal. We focus on the setting in which a team of agents faces an opponent in a zero-sum, imperfect-information game. Team members can coordinate their strategies before the game begins, but cannot communicate during the playing phase. This is the case, for example, in Bridge, collusion in poker, and collusion in bidding. In this setting, model-free RL methods are often unable to capture coordination because agents' policies are executed in a decentralized fashion. Our first contribution is a game-theoretic centralized training regimen that performs trajectory sampling so as to foster team coordination. When team members can observe each other's actions, we show that this approach provably yields equilibrium strategies. Then, we introduce a signaling-based framework to represent team-coordinated strategies given a buffer of past experiences. Each team member's policy is parametrized as a neural network whose output is conditioned on a suitable exogenous signal, drawn from a learned probability distribution. By combining these two elements, we empirically show convergence to coordinated equilibria in cases where previous state-of-the-art multi-agent RL algorithms did not.

Comment: Accepted at AAMAS 2021 (full paper)
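The core idea of signal-mediated strategies, that team members who cannot talk during play can still correlate their behavior by each conditioning on a shared exogenous signal drawn before the game, can be illustrated with a minimal sketch. The toy game, the `play` helper, and the signal distribution below are hypothetical illustrations, not the paper's actual neural parametrization or training regimen; the point is only that signal-conditioned policies achieve a correlated outcome that independent mixed strategies cannot.

```python
import random

# Toy team coordination task: two team members each pick action 0 or 1
# without communicating, and the team is rewarded only when the actions
# match. Independent 50/50 mixed strategies match half the time; policies
# that both condition on the same exogenous signal can match always.
# (Hypothetical example; the paper uses neural policies and a learned
# signal distribution, not hand-written lambdas.)

def play(policy_a, policy_b, signal_dist, rng, episodes=10_000):
    """Average team payoff when both members observe the same signal."""
    wins = 0
    for _ in range(episodes):
        # The signal is drawn before play and shown to both members,
        # mirroring the pre-play coordination phase in the abstract.
        signal = rng.choices(range(len(signal_dist)), weights=signal_dist)[0]
        wins += policy_a(signal) == policy_b(signal)
    return wins / episodes

rng = random.Random(0)

# Decentralized policies that ignore the signal: expected payoff 0.5.
uncoordinated = play(lambda s: rng.randint(0, 1),
                     lambda s: rng.randint(0, 1),
                     signal_dist=[0.5, 0.5], rng=rng)

# Signal-mediated policies: each member plays the signal itself.
coordinated = play(lambda s: s, lambda s: s,
                   signal_dist=[0.5, 0.5], rng=rng)

print(f"uncoordinated ≈ {uncoordinated:.2f}, coordinated = {coordinated:.2f}")
```

In the paper's framework the hand-written policies above are replaced by neural networks whose outputs are conditioned on the signal, and the signal distribution itself is learned from a buffer of past experiences rather than fixed in advance.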

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2102.05026
Document Type :
Working Paper