
Reinforcement Learning In Two Player Zero Sum Simultaneous Action Games

Authors: Phillips, Patrick
Publication Year: 2021

Abstract

Two-player zero-sum simultaneous-action games are common in video games, financial markets, war, business competition, and many other settings. We first introduce the fundamental concepts of reinforcement learning in two-player zero-sum simultaneous-action games and discuss the unique challenges this type of game poses. We then introduce two novel agents that address these challenges using joint-action Deep Q-Networks (DQNs). The first agent, called the Best Response AgenT (BRAT), builds an explicit model of its opponent's policy using imitation learning, and then uses this model to compute a best response that exploits the opponent's strategy. The second agent, Meta-Nash DQN, builds an implicit model of its opponent's policy in order to produce a context variable that is used in the Q-value calculation. An explicit minimax over Q-values is used to find actions close to Nash equilibrium. We find empirically that both agents converge to Nash equilibrium in a self-play setting for simple matrix games, while also performing well in games with larger state and action spaces. Both algorithms are evaluated against vanilla RL baselines as well as recent state-of-the-art multi-agent and two-agent algorithms. This work combines ideas from traditional reinforcement learning, game theory, and meta-learning.
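To make the two decision rules in the abstract concrete, the sketch below contrasts a BRAT-style best response against an estimated opponent policy with a Meta-Nash-style explicit minimax over joint-action Q-values, for a small matrix game. This is a minimal illustration rather than the paper's implementation: the payoff matrix, the function names, and the fixed opponent policy (standing in for the imitation-learned and implicit models) are all assumptions.

    import numpy as np

    # Hypothetical joint-action Q-values Q[a1, a2] for the protagonist of a
    # two-player zero-sum game: rows index our actions, columns the
    # opponent's. In the paper these would come from a joint-action DQN;
    # the fixed matrix here is an illustrative assumption, not the paper's.
    Q = np.array([
        [3.0, -1.0],
        [0.0,  1.0],
    ])

    def best_response(q_joint: np.ndarray, opponent_policy: np.ndarray) -> int:
        """BRAT-style step: exploit an estimated opponent policy.

        opponent_policy stands in for the explicit model BRAT fits with
        imitation learning; we maximise the Q-value expected under it.
        """
        expected_q = q_joint @ opponent_policy  # E over a2 of Q(a1, a2)
        return int(expected_q.argmax())

    def minimax_action(q_joint: np.ndarray) -> int:
        """Meta-Nash-style step: explicit minimax over joint-action Q-values.

        Restricted to pure strategies for brevity; the Nash equilibrium of
        a matrix game may require mixed strategies.
        """
        worst_case = q_joint.min(axis=1)  # opponent minimises over its actions
        return int(worst_case.argmax())   # we maximise the worst case

    pi_hat = np.array([0.9, 0.1])    # assumed learned opponent model
    print(best_response(Q, pi_hat))  # exploits the modelled opponent -> 0
    print(minimax_action(Q))         # safe maximin choice -> 1

The contrast mirrors the abstract's framing: the best response exploits one specific opponent, while the minimax choice hedges against the worst-case opponent action.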

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2110.04835
Document Type: Working Paper