
Only Relevant Information Matters: Filtering Out Noisy Samples to Boost RL

Authors :
Yannis Flet-Berliac
Philippe Preux
Sequential Learning (SEQUEL) / Scool, Inria Lille - Nord Europe, Institut National de Recherche en Informatique et en Automatique (Inria)
Centre de Recherche en Informatique, Signal et Automatique de Lille (CRIStAL) - UMR 9189, Université de Lille, Centrale Lille, Centre National de la Recherche Scientifique (CNRS)
Source :
IJCAI 2020 - International Joint Conference on Artificial Intelligence, Jul 2020, Yokohama, Japan. ⟨10.24963/ijcai.2020/376⟩
Publication Year :
2020
Publisher :
HAL CCSD, 2020.

Abstract

In reinforcement learning, policy gradient algorithms optimize the policy directly and rely on efficiently sampling an environment. Nevertheless, while most sampling procedures are based solely on sampling the current policy, self-performance measures can be used to improve such sampling before each policy update. Following this line of reasoning, we introduce SAUNA, a method in which non-informative transitions are rejected from the gradient update. The level of information is estimated from the fraction of variance explained by the value function: a measure of the discrepancy between V and the empirical returns. We use this metric to select the samples that are useful to learn from, and we demonstrate that this selection can significantly improve the performance of policy gradient methods. In this paper: (a) we define SAUNA's metric and introduce its method for filtering transitions; (b) we conduct experiments on a set of benchmark continuous control problems, where SAUNA significantly improves performance; and (c) we investigate how SAUNA reliably selects the samples with the most positive impact on learning, and study its improvement in both performance and sample efficiency.
Accepted at IJCAI 2020.
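The record gives no further detail on the filtering rule, so the following Python sketch is an illustration only: it computes the batch-level fraction of variance explained, V_ex = 1 - Var[R - V] / Var[R], which the abstract names as SAUNA's information measure, and applies a hypothetical per-transition threshold. The per-sample score and the threshold value tau are assumptions for illustration, not the paper's exact procedure.

import numpy as np

def explained_variance(returns: np.ndarray, values: np.ndarray) -> float:
    """Fraction of variance in the empirical returns explained by V:
    1 - Var[R - V] / Var[R]. Close to 1 when V tracks the returns;
    <= 0 when V is no better than predicting the mean return."""
    var_r = np.var(returns)
    if var_r == 0.0:
        return float("nan")  # degenerate batch: the metric is undefined
    return 1.0 - np.var(returns - values) / var_r

def filter_batch(returns: np.ndarray, values: np.ndarray,
                 tau: float = 0.1) -> np.ndarray:
    """Hypothetical per-transition filter (illustrative, not the paper's
    rule): score each transition by a per-sample analogue of V_ex and
    keep those whose score exceeds the threshold tau."""
    var_r = np.var(returns)
    scores = 1.0 - (returns - values) ** 2 / var_r
    return scores >= tau  # boolean mask over the batch

# Toy usage with synthetic data.
rng = np.random.default_rng(0)
returns = rng.normal(size=128)
values = returns + 0.3 * rng.normal(size=128)  # imperfect value estimates
print(f"V_ex = {explained_variance(returns, values):.3f}")
print(f"kept {filter_batch(returns, values).sum()} / 128 transitions")

Only the transitions selected by the mask would then contribute to the policy gradient update, which is the behavior the abstract describes.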

Details

Language :
English
Database :
OpenAIRE
Journal :
IJCAI 2020 - International Joint Conference on Artificial Intelligence, Jul 2020, Yokohama, Japan. ⟨10.24963/ijcai.2020/376⟩
Accession number :
edsair.doi.dedup.....2696a4b127a6c278dbffe49fa0914774
Full Text :
https://doi.org/10.24963/ijcai.2020/376