
Preference Optimization as Probabilistic Inference

Authors:
Abdolmaleki, Abbas
Piot, Bilal
Shahriari, Bobak
Springenberg, Jost Tobias
Hertweck, Tim
Joshi, Rishabh
Oh, Junhyuk
Bloesch, Michael
Lampe, Thomas
Heess, Nicolas
Buchli, Jonas
Riedmiller, Martin
Publication Year:
2024

Abstract

Existing preference optimization methods are mainly designed for directly learning from human feedback under the assumption that paired examples (preferred vs. dis-preferred) are available. In contrast, we propose a method that can leverage unpaired preferred or dis-preferred examples, and works even when only one type of feedback (positive or negative) is available. This flexibility allows us to apply it in scenarios with varying forms of feedback and models, including training generative language models based on human feedback as well as training policies for sequential decision-making problems, where learned (value) functions are available. Our approach builds upon the probabilistic framework introduced by Dayan and Hinton (1997), which proposes to use expectation-maximization (EM) to directly optimize the probability of preferred outcomes (as opposed to classic expected reward maximization). To obtain a practical algorithm, we identify and address a key limitation in current EM-based methods: when applied to preference optimization, they solely maximize the likelihood of preferred examples, while neglecting dis-preferred samples. We show how one can extend EM algorithms to explicitly incorporate dis-preferred outcomes, leading to a novel, theoretically grounded preference optimization algorithm that offers an intuitive and versatile way to learn from both positive and negative feedback.
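
To make the general idea concrete, the following is a minimal, hypothetical sketch only: plain gradient ascent on a weighted log-likelihood for a toy categorical policy, where preferred samples raise action probabilities, dis-preferred samples lower them, and a KL term keeps the policy near a reference. It is not the algorithm proposed in the paper; the neg_weight and kl_weight regularization choices, and the unpaired pos_actions/neg_actions interface, are illustrative assumptions.

import numpy as np

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def preference_update(logits, ref_probs, pos_actions, neg_actions,
                      lr=0.5, neg_weight=1.0, kl_weight=0.1, steps=100):
    """Gradient ascent on a weighted log-likelihood objective:
        sum_{a in pos} log pi(a) - neg_weight * sum_{a in neg} log pi(a)
        - kl_weight * KL(pi || ref)
    i.e. raise the probability of preferred actions, lower that of
    dis-preferred actions, and stay close to a reference policy.
    (Illustrative stand-in, not the paper's EM algorithm.)"""
    K = logits.shape[0]
    for _ in range(steps):
        pi = softmax(logits)
        grad = np.zeros(K)
        for a in pos_actions:                      # preferred samples
            grad += np.eye(K)[a] - pi              # d log pi(a) / d logits
        for a in neg_actions:                      # dis-preferred samples
            grad -= neg_weight * (np.eye(K)[a] - pi)
        log_ratio = np.log(pi + 1e-12) - np.log(ref_probs + 1e-12)
        kl = np.sum(pi * log_ratio)
        grad -= kl_weight * pi * (log_ratio - kl)  # d KL(pi || ref) / d logits
        logits = logits + lr * grad
    return logits

# Toy usage: 4 actions, unpaired feedback (two "good" samples of action 2,
# one of action 3, one "bad" sample of action 0).
logits = np.zeros(4)
ref_probs = softmax(logits)
new_logits = preference_update(logits, ref_probs,
                               pos_actions=[2, 2, 3], neg_actions=[0])
print(np.round(softmax(new_logits), 3))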

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2410.04166
Document Type:
Working Paper