
Consolidation via Policy Information Regularization in Deep RL for Multi-Agent Games

Authors:
Malloy, Tyler
Klinger, Tim
Liu, Miao
Riemer, Matthew
Tesauro, Gerald
Sims, Chris R.
Publication Year:
2020

Abstract

This paper introduces an information-theoretic constraint on learned policy complexity in the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) reinforcement learning algorithm. Previous research applying a related approach to continuous control suggests that this constraint favors policies that are more robust to changing environment dynamics. The multi-agent game setting naturally requires this type of robustness, since other agents' policies change throughout learning, making the environment nonstationary. For this reason, recent continual learning methods are compared to our approach, termed Capacity-Limited MADDPG. Experimental results in cooperative and competitive multi-agent tasks demonstrate that the capacity-limited approach is a promising candidate for improving learning performance in these environments.
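The abstract describes the approach only at a high level. The sketch below illustrates one way a policy-information penalty could be attached to an actor update; it is a minimal single-agent Gaussian-policy illustration under stated assumptions, not the paper's algorithm. The names (`Actor`, `Critic`, `capacity_limited_actor_loss`), the fixed standard-normal default policy, and the trade-off coefficient `beta` are all assumptions for illustration, and the centralized multi-agent critic used by MADDPG is omitted for brevity.

```python
# Minimal sketch (PyTorch): a deterministic-policy-gradient actor loss plus an
# information-style penalty on the policy. Assumptions not taken from the paper:
# the actor output is treated as the mean of a Gaussian with fixed std, the
# "default" policy is a standard normal prior over actions, and beta controls
# the capacity limit.
import torch
import torch.nn as nn
from torch.distributions import Normal, kl_divergence


class Actor(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),
        )

    def forward(self, obs):
        return self.net(obs)  # action mean in [-1, 1]


class Critic(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))  # Q(obs, act)


def capacity_limited_actor_loss(actor, critic, obs, beta=0.1, action_std=0.1):
    """Standard DPG-style actor loss plus a KL penalty toward a fixed prior,
    serving as a simple upper bound on how much information the policy
    carries about the observation."""
    mean = actor(obs)
    q_loss = -critic(obs, mean).mean()           # maximize expected Q

    policy = Normal(mean, action_std)            # stochastic view of the actor
    prior = Normal(torch.zeros_like(mean), torch.ones_like(mean))
    info_penalty = kl_divergence(policy, prior).sum(-1).mean()

    return q_loss + beta * info_penalty


# Hypothetical usage on random data, just to show the shapes involved.
obs_dim, act_dim = 8, 2
actor, critic = Actor(obs_dim, act_dim), Critic(obs_dim, act_dim)
opt = torch.optim.Adam(actor.parameters(), lr=1e-3)

obs = torch.randn(32, obs_dim)                   # batch of observations
loss = capacity_limited_actor_loss(actor, critic, obs, beta=0.1)
opt.zero_grad()
loss.backward()
opt.step()
```

With `beta = 0` this reduces to the ordinary actor update; larger `beta` values push the policy toward the fixed default, trading reward for lower policy complexity.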

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2011.11517
Document Type:
Working Paper