
Reducing Estimation Bias via Triplet-Average Deep Deterministic Policy Gradient.

Authors :
Wu, Dongming
Dong, Xingping
Shen, Jianbing
Hoi, Steven C. H.
Source :
IEEE Transactions on Neural Networks & Learning Systems. Nov 2020, Vol. 31, Issue 11, p4933-4945. 13p.
Publication Year :
2020

Abstract

The overestimation caused by function approximation is a well-known property of Q-learning algorithms, especially in single-critic models, and it leads to poor performance in practical tasks. However, the opposite property, underestimation, which often occurs in Q-learning methods with double critics, has been largely left untouched. In this article, we investigate the underestimation phenomenon in the recent twin delayed deep deterministic (TD3) actor-critic algorithm and theoretically demonstrate its existence. We also observe that this underestimation bias does indeed hurt performance in various experiments. Considering the opposite properties of single-critic and double-critic methods, we propose a novel triplet-average deep deterministic policy gradient algorithm that takes the weighted action value of three target critics to reduce the estimation bias. Given the connection between estimation bias and approximation error, we suggest averaging previous target values to reduce the per-update error and further improve performance. Extensive empirical results on various continuous control tasks in OpenAI Gym show that our approach outperforms state-of-the-art methods.

Details

Language :
English
ISSN :
2162-237X
Volume :
31
Issue :
11
Database :
Academic Search Index
Journal :
IEEE Transactions on Neural Networks & Learning Systems
Publication Type :
Periodical
Accession number :
146914687
Full Text :
https://doi.org/10.1109/TNNLS.2019.2959129