
Network Architecture for Optimizing Deep Deterministic Policy Gradient Algorithms.

Authors :
Zhang, Haifei
Xu, Jian
Zhang, Jian
Liu, Quan
Source :
Computational Intelligence & Neuroscience. 11/18/2022, Vol. 2022, p1-10. 10p.
Publication Year :
2022

Abstract

The traditional Deep Deterministic Policy Gradient (DDPG) algorithm has been widely used in continuous action spaces, but it still suffers from two problems: it easily falls into local optima and exhibits large error fluctuations. To address these deficiencies, this paper proposes a dual-actor, dual-critic DDPG algorithm (DN-DDPG). First, a second critic network is added to the original actor-critic architecture to assist training, and the smaller of the two critics' Q values is taken as the estimated value of the action in each update, which reduces the probability of falling into a local optimum. Second, a dual-actor network is introduced to alleviate the value underestimation produced by the dual-critic network: the action judged more valuable of the two actor networks' proposals is selected for the update, which stabilizes training. Finally, the improved method is validated on four continuous-action tasks provided by MuJoCo, and the results show that it reduces the fluctuation range of the error and improves the cumulative return compared with the classical algorithm. [ABSTRACT FROM AUTHOR]
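
The target-value computation described in the abstract (take the smaller of two critics' estimates, evaluate the better of two actors' proposed actions) can be sketched roughly as below. This is a minimal PyTorch illustration of the idea only, not the authors' implementation: the network sizes, the choice of which critic ranks the two actors' proposals, and the tensor shapes for reward and done are all assumptions.

```python
# Minimal sketch of the DN-DDPG target described in the abstract.
# Assumptions (not from the paper): tiny 64-unit networks, critic 0 is used
# to rank the two actors' proposals, reward/done are (batch, 1) tensors.
import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                 nn.Linear(64, act_dim), nn.Tanh())
    def forward(self, obs):
        return self.net(obs)

class Critic(nn.Module):
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1))
    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

def dn_ddpg_target(obs_next, reward, done, actors, critics, gamma=0.99):
    """TD target: pick the higher-valued of the two actors' proposed next
    actions, then use the smaller of the two critics' estimates of it."""
    with torch.no_grad():
        # Each (target) actor proposes an action for the next state.
        a1, a2 = actors[0](obs_next), actors[1](obs_next)
        # Rank the proposals with one critic and keep the better action.
        q_a1 = critics[0](obs_next, a1)
        q_a2 = critics[0](obs_next, a2)
        a_next = torch.where(q_a1 >= q_a2, a1, a2)
        # Take the minimum of the two critics' estimates of that action,
        # which curbs overestimation as in the abstract.
        q_next = torch.min(critics[0](obs_next, a_next),
                           critics[1](obs_next, a_next))
        return reward + gamma * (1.0 - done) * q_next

if __name__ == "__main__":
    obs_dim, act_dim, batch = 3, 2, 5
    actors = [Actor(obs_dim, act_dim), Actor(obs_dim, act_dim)]
    critics = [Critic(obs_dim, act_dim), Critic(obs_dim, act_dim)]
    target = dn_ddpg_target(torch.randn(batch, obs_dim),
                            torch.zeros(batch, 1), torch.zeros(batch, 1),
                            actors, critics)
    print(target.shape)  # torch.Size([5, 1])
```

The min over critics counters overestimation bias, and choosing the higher-valued of the two actors' actions is the abstract's counterweight to the resulting underestimation.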

Subjects

Subjects :
*ALGORITHMS
*PROBABILITY theory

Details

Language :
English
ISSN :
1687-5265
Volume :
2022
Database :
Academic Search Index
Journal :
Computational Intelligence & Neuroscience
Publication Type :
Academic Journal
Accession number :
160374852
Full Text :
https://doi.org/10.1155/2022/1117781