MetaRM: Shifted Distributions Alignment via Meta-Learning

Authors :
Dou, Shihan
Liu, Yan
Zhou, Enyu
Li, Tianlong
Jia, Haoxiang
Xiong, Limao
Zhao, Xin
Ye, Junjie
Zheng, Rui
Gui, Tao
Zhang, Qi
Huang, Xuanjing
Publication Year :
2024

Abstract

The success of Reinforcement Learning from Human Feedback (RLHF) in language model alignment depends critically on the capability of the reward model (RM). However, as training progresses, the output distribution of the policy model shifts, reducing the RM's ability to distinguish between responses. The issue is compounded when the RM, trained on a specific data distribution, fails to generalize to examples outside that distribution. These two problems can be framed as a single challenge: the shifted distribution of the environment. To address this challenge, we introduce MetaRM, a method that leverages meta-learning to align the RM with the shifted environment distribution. MetaRM trains the RM by minimizing data loss, with particular emphasis on data that improves its ability to differentiate examples drawn from the shifted target distribution. Extensive experiments demonstrate that MetaRM significantly improves the RM's distinguishing ability in iterative RLHF optimization and also enables it to identify subtle differences in out-of-distribution samples.

Comment: 11 pages, 6 figures. arXiv admin note: text overlap with arXiv:2401.06080
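The abstract describes MetaRM as a meta-learning procedure in which the reward model is trained on preference data while being steered toward updates that also sharpen its discrimination on samples from the shifted policy distribution. The code below is only a minimal, first-order sketch of that general idea, not the authors' actual algorithm: the model, the data batches, the reward-spread inner objective, and all function names are illustrative assumptions.

    # Hedged sketch of a MAML-style reward-model update (first-order
    # approximation). Not the paper's code; all names are hypothetical.
    import copy
    import torch
    import torch.nn.functional as F

    def pairwise_rm_loss(rm, chosen, rejected):
        # Standard Bradley-Terry preference loss: -log sigmoid(r_chosen - r_rejected).
        return -F.logsigmoid(rm(chosen) - rm(rejected)).mean()

    def reward_spread(rm, shifted_responses):
        # Proxy "differentiation" objective on unlabeled samples from the
        # shifted policy distribution: reward the RM for spreading its scores.
        # The paper's exact inner objective may differ.
        return rm(shifted_responses).var()

    def metarm_like_step(rm, optimizer, pref_batch, shifted_batch, inner_lr=1e-5):
        chosen, rejected = pref_batch  # original preference data

        # 1) Virtual inner step: ascend the spread objective on shifted samples.
        fast = copy.deepcopy(rm)
        inner_obj = reward_spread(fast, shifted_batch)
        grads = torch.autograd.grad(inner_obj, fast.parameters())
        with torch.no_grad():
            for p, g in zip(fast.parameters(), grads):
                p.add_(inner_lr * g)  # gradient ascent on the spread

        # 2) Measure the preference loss after the virtual update, so the real
        #    gradient favors directions that also help on the shifted distribution.
        outer_loss = pairwise_rm_loss(fast, chosen, rejected)
        outer_grads = torch.autograd.grad(outer_loss, fast.parameters())

        # 3) First-order approximation: apply the outer gradient to the
        #    original parameters.
        optimizer.zero_grad()
        for p, g in zip(rm.parameters(), outer_grads):
            p.grad = g.clone()
        optimizer.step()
        return outer_loss.item()

As a usage note, one would call metarm_like_step once per batch inside the usual RM training loop, feeding it the original preference pairs together with responses freshly sampled from the current policy; this keeps the sketch compatible with iterative RLHF, where the shifted distribution changes every round.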

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2405.00438
Document Type :
Working Paper