
PROGRESSOR: A Perceptually Guided Reward Estimator with Self-Supervised Online Refinement

Authors:
Ayalew, Tewodros
Zhang, Xiao
Wu, Kevin Yuanbo
Jiang, Tianchong
Maire, Michael
Walter, Matthew R.
Publication Year:
2024

Abstract

We present PROGRESSOR, a novel framework that learns a task-agnostic reward function from videos, enabling policy training through goal-conditioned reinforcement learning (RL) without manual supervision. Underlying this reward is an estimate of the distribution over task progress as a function of the current, initial, and goal observations that is learned in a self-supervised fashion. Crucially, PROGRESSOR refines rewards adversarially during online RL training by pushing back predictions for out-of-distribution observations, to mitigate distribution shift inherent in non-expert observations. Utilizing this progress prediction as a dense reward together with an adversarial push-back, we show that PROGRESSOR enables robots to learn complex behaviors without any external supervision. Pretrained on large-scale egocentric human video from EPIC-KITCHENS, PROGRESSOR requires no fine-tuning on in-domain task-specific data for generalization to real-robot offline RL under noisy demonstrations, outperforming contemporary methods that provide dense visual reward for robotic learning. Our findings highlight the potential of PROGRESSOR for scalable robotic applications where direct action labels and task-specific rewards are not readily available.

Comment: 15 pages, 13 figures
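As a rough illustration of the mechanism the abstract describes, the sketch below pairs a self-supervised progress estimate, conditioned on the initial, current, and goal observations (with frame position in expert video serving as a free progress label), with an adversarial push-back term that suppresses predictions on out-of-distribution observations. The encoder, linear head, and loss forms here are hypothetical stand-ins, not the paper's architecture, and a scalar point estimate is used in place of the paper's distributional progress prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(obs):
    # Stand-in for a visual encoder (hypothetical); the paper pretrains
    # on egocentric human video from EPIC-KITCHENS.
    return obs

def progress_estimate(o_init, o_cur, o_goal, w):
    """Toy progress head: maps (initial, current, goal) observations to a
    progress value in (0, 1) via a hypothetical linear head + sigmoid."""
    x = np.concatenate([embed(o_init), embed(o_cur), embed(o_goal)])
    return 1.0 / (1.0 + np.exp(-w @ x))

def self_supervised_targets(traj_len):
    # In demonstration video, normalized frame index acts as a free
    # self-supervised progress label: 0 at the start, 1 at the goal.
    return np.arange(traj_len) / (traj_len - 1)

def pushback_loss(pred_ood):
    # Schematic adversarial push-back: drive predicted progress on
    # out-of-distribution online rollouts toward zero, so the policy
    # cannot exploit overestimated dense rewards.
    return float(np.mean(pred_ood ** 2))
```

Training would then alternate between regressing `progress_estimate` toward `self_supervised_targets` on demonstration frames and minimizing `pushback_loss` on the agent's own online rollouts, with the progress prediction serving as the dense reward.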

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2411.17764
Document Type:
Working Paper