
Proxy Experience Replay: Federated Distillation for Distributed Reinforcement Learning.

Authors:
Cha, Han
Park, Jihong
Kim, Hyesung
Bennis, Mehdi
Kim, Seong-Lyun
Source:
IEEE Intelligent Systems; Jul-Aug 2020, Vol. 35, Issue 4, p94-101, 8p
Publication Year:
2020

Abstract

Traditional distributed deep reinforcement learning (RL) commonly relies on exchanging the experience replay memory (RM) of each agent. Since the RM contains all state observations and action policy history, it may incur huge communication overhead while violating the privacy of each agent. Alternatively, this article presents a communication-efficient and privacy-preserving distributed RL framework, coined federated reinforcement distillation (FRD). In FRD, each agent exchanges its proxy experience RM (ProxRM), in which policies are locally averaged with respect to proxy states clustering actual states. To provide FRD design insights, we present ablation studies on the impact of ProxRM structures, neural network architectures, and communication intervals. Furthermore, we propose an improved version of FRD, coined mixup augmented FRD (MixFRD), in which the ProxRM is interpolated using the mixup data augmentation algorithm. Simulations in a Cartpole environment validate the effectiveness of MixFRD in reducing the variance of mission completion time and communication cost, compared to the benchmark schemes: vanilla FRD, federated RL (FRL), and policy distillation. [ABSTRACT FROM AUTHOR]
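
The Python sketch below illustrates the two ideas the abstract describes: building a proxy experience RM by clustering actual states into proxy states and locally averaging the policies within each cluster, and interpolating the resulting ProxRM entries with mixup. It is a minimal illustration under stated assumptions, not the authors' implementation; the choice of k-means as the clustering method, the helper names build_proxrm and mixup_proxrm, and the hyperparameters are all hypothetical.

    # Minimal sketch of the ProxRM / MixFRD idea from the abstract.
    # Not the authors' reference implementation; clustering method,
    # names, and hyperparameters are assumptions for illustration.
    import numpy as np
    from sklearn.cluster import KMeans

    def build_proxrm(states, policies, n_proxy_states=10):
        """Cluster actual states into proxy states and average local policies.

        states:   (N, state_dim) array of observed states
        policies: (N, n_actions) array of action distributions for those states
        Returns (proxy_states, averaged_policies).
        """
        km = KMeans(n_clusters=n_proxy_states, n_init=10).fit(states)
        proxy_states = km.cluster_centers_
        avg_policies = np.zeros((n_proxy_states, policies.shape[1]))
        for k in range(n_proxy_states):
            members = policies[km.labels_ == k]
            if len(members) > 0:
                avg_policies[k] = members.mean(axis=0)
        return proxy_states, avg_policies

    def mixup_proxrm(proxy_states, avg_policies, n_samples=50, alpha=0.2):
        """Interpolate ProxRM entries with mixup-style convex combinations."""
        idx_a = np.random.randint(len(proxy_states), size=n_samples)
        idx_b = np.random.randint(len(proxy_states), size=n_samples)
        lam = np.random.beta(alpha, alpha, size=(n_samples, 1))
        mixed_states = lam * proxy_states[idx_a] + (1 - lam) * proxy_states[idx_b]
        mixed_policies = lam * avg_policies[idx_a] + (1 - lam) * avg_policies[idx_b]
        return mixed_states, mixed_policies

In an FRD round, each agent would then upload its (proxy state, averaged policy) pairs rather than its raw replay memory, which is what makes the exchange both smaller and less revealing than sharing full state-action histories.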

Details

Language:
English
ISSN:
1541-1672
Volume:
35
Issue:
4
Database:
Complementary Index
Journal:
IEEE Intelligent Systems
Publication Type:
Academic Journal
Accession Number:
145399577
Full Text:
https://doi.org/10.1109/MIS.2020.2994942