
Faster WIND: Accelerating Iterative Best-of-$N$ Distillation for LLM Alignment

Authors :
Yang, Tong
Mei, Jincheng
Dai, Hanjun
Wen, Zixin
Cen, Shicong
Schuurmans, Dale
Chi, Yuejie
Dai, Bo
Publication Year :
2024

Abstract

Recent advances in aligning large language models with human preferences have corroborated the growing importance of best-of-$N$ distillation (BOND). However, the iterative BOND algorithm is prohibitively expensive in practice due to its sample and computation inefficiency. This paper addresses the problem by revealing a unified game-theoretic connection between iterative BOND and self-play alignment, which unifies seemingly disparate algorithmic paradigms. Based on this connection, we establish a novel framework, WIN rate Dominance (WIND), with a series of efficient algorithms for regularized win rate dominance optimization that approximates iterative BOND in the parameter space. We provide a provable sample efficiency guarantee for one of the WIND variants with the square loss objective. The experimental results confirm that our algorithm not only accelerates the computation, but also achieves superior sample efficiency compared to existing methods.
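To make the best-of-$N$ primitive that BOND-style distillation imitates concrete, the following is a minimal sketch: draw $N$ candidate responses from a base policy and keep the one with the highest reward. The sampler and reward model below are toy stand-ins introduced for illustration, not the paper's actual models.

```python
import random

def sample_response(prompt, rng):
    # Toy "policy": a random integer standing in for a sampled response.
    return rng.randint(0, 100)

def reward(prompt, response):
    # Toy reward model: prefer responses close to 42.
    return -abs(response - 42)

def best_of_n(prompt, n, seed=0):
    """Sample n candidates from the base policy and return the best one."""
    rng = random.Random(seed)
    candidates = [sample_response(prompt, rng) for _ in range(n)]
    return max(candidates, key=lambda r: reward(prompt, r))
```

Increasing $N$ concentrates the induced distribution on high-reward outputs; iterative BOND distills this improved distribution back into the policy's parameters, which is the step whose cost WIND aims to reduce.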

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2410.20727
Document Type :
Working Paper