
Value Improved Actor Critic Algorithms

Authors:
Oren, Yaniv
Zanger, Moritz A.
van der Vaart, Pascal R.
Spaan, Matthijs T. J.
Böhmer, Wendelin
Publication Year: 2024

Abstract

Many modern reinforcement learning algorithms build on the actor-critic (AC) framework: iterative improvement of a policy (the actor) using policy improvement operators and iterative approximation of the policy's value (the critic). In contrast, the popular family of value-based algorithms employs improvement operators in the value update itself, iteratively improving the value function directly. In this work, we propose a general extension to the AC framework that employs two separate improvement operators: one applied to the policy, in the spirit of policy-based algorithms, and one applied to the value, in the spirit of value-based algorithms, which we dub Value-Improved AC (VI-AC). We design two practical VI-AC algorithms based on the popular online off-policy AC algorithms TD3 and DDPG. We evaluate VI-TD3 and VI-DDPG on the MuJoCo benchmark and find that both improve upon or match the performance of their respective baselines in all environments tested.
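The distinction the abstract draws can be made concrete with a short sketch. Below is a minimal, hypothetical illustration of a value-improved critic target in a DDPG-style update: the critic regresses toward the value of the best of several candidate actions (an improvement operator applied in the value update) rather than toward the target policy's action alone, while the actor is still trained with the usual policy improvement step. All names (Actor, Critic, value_improved_target, n_samples, noise_std) and the specific choice of improvement operator (best-of-N action sampling) are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn as nn

class Actor(nn.Module):
    # Deterministic policy mapping observations to actions in [-1, 1].
    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, act_dim), nn.Tanh(),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

class Critic(nn.Module):
    # State-action value function Q(s, a).
    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)

def value_improved_target(
    critic_targ: Critic, actor_targ: Actor,
    next_obs: torch.Tensor, reward: torch.Tensor, done: torch.Tensor,
    gamma: float = 0.99, n_samples: int = 8, noise_std: float = 0.2,
) -> torch.Tensor:
    # One possible value-improvement operator (an assumption, not the
    # paper's): take the best of several noisy perturbations of the target
    # policy's action, so the critic regresses toward a greedier value
    # than the target policy's own.
    with torch.no_grad():
        mu = actor_targ(next_obs)                        # (B, act_dim)
        # Candidate actions sampled around the target policy's action.
        cand = mu.unsqueeze(0) + noise_std * torch.randn(
            n_samples, *mu.shape)                        # (S, B, act_dim)
        cand = cand.clamp(-1.0, 1.0)
        obs_rep = next_obs.unsqueeze(0).expand(n_samples, *next_obs.shape)
        q = critic_targ(obs_rep, cand)                   # (S, B)
        q_improved = q.max(dim=0).values                 # improvement step
        return reward + gamma * (1.0 - done) * q_improved

# Usage in the critic update (otherwise a standard DDPG step):
#   target = value_improved_target(critic_targ, actor_targ, next_obs, r, d)
#   critic_loss = torch.nn.functional.mse_loss(critic(obs, act), target)

In this sketch, setting n_samples = 1 and noise_std = 0 recovers the standard DDPG target, which isolates the effect of the value-improvement operator from the rest of the algorithm.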

Details

Database: arXiv
Publication Type: Report
Accession Number: edsarx.2406.01423
Document Type: Working Paper