
LiPO: Listwise Preference Optimization through Learning-to-Rank

Authors:
Liu, Tianqi
Qin, Zhen
Wu, Junru
Shen, Jiaming
Khalman, Misha
Joshi, Rishabh
Zhao, Yao
Saleh, Mohammad
Baumgartner, Simon
Liu, Jialu
Liu, Peter J.
Wang, Xuanhui
Publication Year: 2024

Abstract

Aligning language models (LMs) with curated human feedback is critical to controlling their behavior in real-world applications. Several recent policy optimization methods, such as DPO and SLiC, serve as promising alternatives to the traditional Reinforcement Learning from Human Feedback (RLHF) approach. In practice, human feedback often comes in the form of a ranked list over multiple responses to amortize the cost of reading the prompt, and multiple responses can also be ranked by reward models or AI feedback. However, there has been no thorough study of directly fitting the policy to a list of responses. In this work, we formulate LM alignment as a \textit{listwise} ranking problem and describe the LiPO framework, in which the policy can potentially learn more effectively from a ranked list of plausible responses to the prompt. This view draws an explicit connection to Learning-to-Rank (LTR), where most existing preference optimization work can be mapped to existing ranking objectives. Following this connection, we examine ranking objectives that are not well studied for LM alignment, with DPO and SLiC as special cases when the list size is two. In particular, we highlight a specific method, LiPO-$\lambda$, which leverages a state-of-the-art \textit{listwise} ranking objective and weights each preference pair in a more advanced manner. We show that LiPO-$\lambda$ can outperform DPO variants and SLiC by a clear margin on several preference alignment tasks with both curated and real ranked preference data.
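
As a rough illustration of the listwise view described in the abstract, the sketch below computes a LambdaLoss-style weighted pairwise logistic loss over a single ranked list of responses, built on a DPO-style implicit reward (the scaled log-ratio of policy to reference log-probabilities). The function name lipo_lambda_loss, the NumPy formulation, the 2^label - 1 gain, the log2 rank discount, and the choice to take ranks from the current rewards are assumptions made for illustration, not the paper's reference implementation.

import numpy as np

def lipo_lambda_loss(policy_logps, ref_logps, labels, beta=0.1):
    """Illustrative sketch: lambda-weighted pairwise logistic loss over one
    ranked list of responses to a single prompt.

    policy_logps, ref_logps: length-K arrays of sequence log-probabilities of
        each response under the policy and the frozen reference model.
    labels: length-K array of graded preference scores (e.g. from humans or a
        reward model); higher means more preferred.
    beta: temperature on the implicit reward, as in DPO-style objectives
        (the default value here is arbitrary).
    """
    K = len(labels)
    labels = np.asarray(labels, dtype=float)
    # DPO-style implicit rewards: scaled log-ratio between policy and reference.
    rewards = beta * (np.asarray(policy_logps) - np.asarray(ref_logps))

    # DCG-style gains and rank discounts; ranks taken from the current rewards.
    gains = 2.0 ** labels - 1.0
    order = np.argsort(-rewards)              # highest-reward response first
    ranks = np.empty(K, dtype=int)
    ranks[order] = np.arange(1, K + 1)
    discounts = 1.0 / np.log2(1.0 + ranks)

    loss, n_pairs = 0.0, 0
    for i in range(K):
        for j in range(K):
            if labels[i] <= labels[j]:
                continue                      # keep only pairs where i beats j
            # Lambda weight: change in DCG if responses i and j swapped ranks.
            delta = abs(gains[i] - gains[j]) * abs(discounts[i] - discounts[j])
            margin = rewards[i] - rewards[j]
            # Weighted pairwise logistic loss: delta * (-log sigmoid(margin)).
            loss += delta * np.log1p(np.exp(-margin))
            n_pairs += 1
    return loss / max(n_pairs, 1)

With a list of size two and a constant pair weight, the inner term reduces to the familiar pairwise logistic loss on the reward margin, which is consistent with the abstract's statement that DPO and SLiC arise as special cases when the list size is two.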

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2402.01878
Document Type: Working Paper