
CHIRPs: Change-Induced Regret Proxy metrics for Lifelong Reinforcement Learning

Authors:
Birkbeck, John
Sobey, Adam
Cerutti, Federico
Flynn, Katherine Heseltine Hurley
Norman, Timothy J.
Publication Year: 2024

Abstract

Reinforcement learning (RL) agents are costly to train and fragile to environmental changes. They often perform poorly when faced with many changing tasks, which prevents their widespread deployment in the real world. Many lifelong RL agent designs have been proposed to mitigate issues such as catastrophic forgetting or to demonstrate positive characteristics like forward transfer when change occurs. However, no prior work has established whether the impact on agent performance can be predicted from the change itself. Understanding this relationship will help agents proactively mitigate a change's impact for improved learning performance. We propose Change-Induced Regret Proxy (CHIRP) metrics to link change to drops in agent performance and use two environments to demonstrate a CHIRP's utility in lifelong learning. A simple CHIRP-based agent achieved $48\%$ higher performance than the next best method in one benchmark and attained the best success rates in 8 of 10 tasks in a second benchmark that proved difficult for existing lifelong RL agents.

Comment: 7 pages, 9 figures
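The abstract does not specify how a CHIRP is computed, so the sketch below is only an illustration of the general idea, not the authors' method: in a k-armed bandit, true change-induced regret requires re-evaluating the previously greedy policy after the change, whereas a proxy is computed cheaply from the change itself. The proxy here (L2 distance between old and new arm means) is a hypothetical stand-in for whatever metric the paper defines.

# Toy illustration only; the proxy below is an assumption, not the paper's CHIRP.
import random
import math

def true_regret(old_means, new_means):
    """Return lost by keeping the arm that was greedy before the change."""
    old_greedy = max(range(len(old_means)), key=lambda a: old_means[a])
    return max(new_means) - new_means[old_greedy]

def chirp_like_proxy(old_means, new_means):
    """Cheap estimate computed from the change alone (illustrative L2 distance)."""
    return math.sqrt(sum((o - n) ** 2 for o, n in zip(old_means, new_means)))

random.seed(0)
old = [random.uniform(0, 1) for _ in range(5)]
for scale in (0.05, 0.2, 0.5):
    # Larger perturbations of the arm means tend to produce both a larger proxy
    # value and a larger true regret, which is the relationship a CHIRP exploits.
    new = [m + random.gauss(0, scale) for m in old]
    print(f"change scale={scale:.2f}  proxy={chirp_like_proxy(old, new):.3f}  "
          f"regret={true_regret(old, new):.3f}")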

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2409.03577
Document Type: Working Paper