
Controlled Decoding from Language Models

Authors:
Mudgal, Sidharth
Lee, Jong
Ganapathy, Harish
Li, YaGuang
Wang, Tao
Huang, Yanping
Chen, Zhifeng
Cheng, Heng-Tze
Collins, Michael
Strohman, Trevor
Chen, Jilin
Beutel, Alex
Beirami, Ahmad
Publication Year:
2023

Abstract

KL-regularized reinforcement learning (RL) is a popular alignment framework for steering language model responses toward high-reward outcomes. We pose a tokenwise RL objective and propose a modular solver for it, called controlled decoding (CD). CD exerts control through a separate prefix scorer module, which is trained to learn a value function for the reward. The prefix scorer is used at inference time to control generation from a frozen base model, provably sampling from a solution to the RL objective. We empirically demonstrate that CD is effective as a control mechanism on popular benchmarks. We also show that prefix scorers for multiple rewards may be combined at inference time, effectively solving a multi-objective RL problem with no additional training. We further show that the benefits of CD transfer to an unseen base model with no additional tuning. Finally, we show that CD can be applied in a blockwise fashion at inference time, essentially bridging the gap between the popular best-of-K strategy and tokenwise control through reinforcement learning. This makes CD a promising approach for the alignment of language models.

Comment: ICML 2024
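To make the decoding rule in the abstract concrete, below is a minimal Python sketch of tokenwise and blockwise controlled decoding. All names here (cd_token_step, cd_block_step, strength) and the toy score vectors are illustrative assumptions, not the paper's API; the sketch assumes the prefix scorer returns per-token value estimates that are added to the frozen base model's log-probabilities and renormalized (the tokenwise rule), and that the blockwise variant reduces to best-of-K selection over sampled blocks, as the abstract describes.

import numpy as np

rng = np.random.default_rng(0)

def cd_token_step(base_logprobs, prefix_scores, strength=1.0):
    """One tokenwise controlled-decoding step.

    base_logprobs: log p(z | x, y_<t) over the vocabulary from the frozen base model.
    prefix_scores: prefix-scorer value estimates for each candidate next token z,
                   i.e. estimated reward of continuations starting with [y_<t, z].
    strength: trades off reward against staying close to the base model
              (plays the role of the inverse KL-regularization weight).
    """
    adjusted = base_logprobs + strength * prefix_scores
    adjusted -= adjusted.max()          # numerical stability before exp
    probs = np.exp(adjusted)
    return probs / probs.sum()

def cd_block_step(sample_block, prefix_scorer, k=8):
    """Blockwise variant: draw K candidate blocks from the frozen base model
    and keep the one the prefix scorer values most (best-of-K over blocks)."""
    blocks = [sample_block() for _ in range(k)]
    scores = [prefix_scorer(block) for block in blocks]
    return blocks[int(np.argmax(scores))]

# Toy usage over a 5-token vocabulary. Both scorers are stand-ins:
# in the paper the prefix scorer is a trained value-function model.
base_logprobs = np.log(np.array([0.4, 0.3, 0.15, 0.1, 0.05]))
helpfulness = np.array([0.0, 0.5, 1.0, 0.2, -0.5])
safety = np.array([1.0, 0.0, -1.0, 0.5, 0.5])

# Multi-objective control: combine prefix scorers with mixture weights
# at inference time, with no additional training.
combined = 0.7 * helpfulness + 0.3 * safety
probs = cd_token_step(base_logprobs, combined, strength=2.0)
next_token = rng.choice(len(probs), p=probs)

# Blockwise toy usage: blocks are random 3-token continuations.
best_block = cd_block_step(
    sample_block=lambda: rng.integers(0, 5, size=3),
    prefix_scorer=lambda b: float(helpfulness[b].sum()),
    k=4,
)

Note that nothing in the sketch touches the base model's weights: all control comes from reweighting its token distribution (or reranking its sampled blocks) with the prefix scorer, which is what lets scorers be swapped, combined, or transferred to a different base model at inference time.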

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2310.17022
Document Type:
Working Paper