
Which Experiences Are Influential for RL Agents? Efficiently Estimating The Influence of Experiences

Authors:
Hiraoka, Takuya
Wang, Guanquan
Onishi, Takashi
Tsuruoka, Yoshimasa
Publication Year:
2024

Abstract

In reinforcement learning (RL) with experience replay, experiences stored in a replay buffer influence the RL agent's performance. Information about the influence of these experiences is valuable for various purposes, such as identifying experiences that negatively influence poorly performing RL agents. One method for estimating the influence of experiences is the leave-one-out (LOO) method. However, this method is usually computationally prohibitive. In this paper, we present Policy Iteration with Turn-over Dropout (PIToD), which efficiently estimates the influence of experiences. We evaluate how accurately PIToD estimates the influence of experiences and how efficient it is compared to LOO. We then apply PIToD to amend poorly performing RL agents, i.e., we use PIToD to estimate negatively influential experiences for the RL agents and to delete the influence of these experiences. We show that the RL agents' performance is significantly improved via amendments with PIToD.

Comment: Source code: https://github.com/TakuyaHiraoka/Which-Experiences-Are-Influential-for-RL-Agents
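
As a rough illustration of why the LOO baseline is computationally prohibitive, below is a minimal sketch (not the paper's implementation) of LOO influence estimation over a replay buffer: the influence of an experience is the change in agent performance when that single experience is removed and the agent is retrained from scratch. The names train_agent and evaluate_return are hypothetical placeholders, not functions from the authors' code.

    # Minimal LOO influence sketch, assuming user-supplied training and
    # evaluation routines. One full retraining is required per experience,
    # which is what PIToD is designed to avoid.
    from typing import Callable, List, Sequence

    def loo_influences(
        buffer: Sequence,                              # replay buffer of experiences
        train_agent: Callable[[Sequence], object],     # trains an agent on a buffer (hypothetical)
        evaluate_return: Callable[[object], float],    # measures agent performance (hypothetical)
    ) -> List[float]:
        """Influence of experience i = performance with the full buffer
        minus performance when experience i is left out.

        Positive values mean the experience helps the agent;
        negative values mean it hurts.
        """
        baseline = evaluate_return(train_agent(buffer))
        influences = []
        for i in range(len(buffer)):
            reduced = [e for j, e in enumerate(buffer) if j != i]
            score = evaluate_return(train_agent(reduced))
            influences.append(baseline - score)
        return influences

Estimating influence for a buffer of N experiences this way costs N + 1 full training runs, which motivates the paper's more efficient estimator.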

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2405.14629
Document Type:
Working Paper