
Agent-Temporal Attention for Reward Redistribution in Episodic Multi-Agent Reinforcement Learning

Authors:
Xiao, Baicen
Ramasubramanian, Bhaskar
Poovendran, Radha
Publication Year:
2022

Abstract

This paper considers multi-agent reinforcement learning (MARL) tasks where agents receive a shared global reward at the end of an episode. The delayed nature of this reward affects the ability of the agents to assess the quality of their actions at intermediate time-steps. This paper focuses on developing methods to learn a temporal redistribution of the episodic reward to obtain a dense reward signal. Solving such MARL problems requires addressing two challenges: identifying (1) the relative importance of states along the length of an episode (along time), and (2) the relative importance of individual agents' states at any single time-step (among agents). In this paper, we introduce Agent-Temporal Attention for Reward Redistribution in Episodic Multi-Agent Reinforcement Learning (AREL) to address these two challenges. AREL uses attention mechanisms to characterize the influence of actions on state transitions along trajectories (temporal attention), and how each agent is affected by other agents at each time-step (agent attention). The redistributed rewards predicted by AREL are dense, and can be integrated with any given MARL algorithm. We evaluate AREL on challenging tasks from the Particle World environment and the StarCraft Multi-Agent Challenge. AREL results in higher rewards in Particle World, and improved win rates in StarCraft compared to three state-of-the-art reward redistribution methods. Our code is available at https://github.com/baicenxiao/AREL.

Comment: Extended version of paper accepted for Oral Presentation at the International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), 2022
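The two-stage redistribution the abstract describes (attend over agents within a time-step, then over time-steps along the episode) can be illustrated with a minimal sketch. This is not AREL's implementation, which learns transformer-style attention over trajectory features; here the per-agent importance scores, the `redistribute_reward` function, and the softmax pooling are all illustrative stand-ins showing only how a single episodic reward becomes a dense per-time-step signal that sums back to the original reward.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def redistribute_reward(agent_scores, episodic_reward):
    """Redistribute one episodic reward into dense per-time-step rewards.

    agent_scores: list over time-steps; each entry is a list of per-agent
    importance scores (hypothetical stand-ins for learned attention logits).
    Returns one reward per time-step; the rewards sum to episodic_reward.
    """
    # Agent attention: weight agents within each time-step, then pool
    # into a single importance value for that step.
    pooled = []
    for scores in agent_scores:
        w = softmax(scores)  # attention weights over agents
        pooled.append(sum(wi * si for wi, si in zip(w, scores)))
    # Temporal attention: weight time-steps by their pooled importance.
    temporal_w = softmax(pooled)
    return [wi * episodic_reward for wi in temporal_w]

# Example: 3 time-steps, 2 agents; step 1 has the most influential actions.
dense = redistribute_reward([[0.1, 0.2], [2.0, 1.5], [0.3, 0.1]], 10.0)
```

Because the temporal weights form a probability distribution, the dense rewards conserve the episodic return, so they can be fed to any MARL algorithm in place of the sparse end-of-episode signal.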

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2201.04612
Document Type:
Working Paper