Tuning-Free Personalized Alignment via Trial-Error-Explain In-Context Learning

Authors:
Cho, Hyundong
Sharma, Karishma
Jedema, Nicolaas
Ribeiro, Leonardo F. R.
Moschitti, Alessandro
Krishnan, Ravi
May, Jonathan
Publication Year: 2025

Abstract

Language models are aligned to the collective voice of many, resulting in generic outputs that do not match specific users' styles. In this work, we present Trial-Error-Explain In-Context Learning (TICL), a tuning-free method that personalizes language models for text generation tasks with fewer than 10 examples per user. TICL iteratively expands an in-context learning prompt via a trial-error-explain process, adding model-generated negative samples and explanations that provide fine-grained guidance towards a specific user's style. In pairwise LLM-as-a-judge comparisons, TICL achieves win rates of up to 91.5% against the previous state-of-the-art and outperforms competitive tuning-free baselines on personalized alignment tasks: writing emails, essays, and news articles. Both lexical and qualitative analyses show that the negative samples and explanations enable language models to learn stylistic context more effectively and to overcome the bias towards structural and formal phrases observed in their zero-shot outputs. By front-loading inference compute to create a user-specific in-context learning prompt that requires no extra generation steps at test time, TICL presents a novel yet simple approach to personalized alignment.

Comment: NAACL 2025 Findings
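The trial-error-explain loop described in the abstract can be pictured as follows. This is a minimal Python sketch under stated assumptions, not the authors' implementation: `llm` stands in for any chat-completion callable, and the helper name `ticl_prompt`, the prompt wording, and the round count are all hypothetical.

```python
from typing import Callable

def ticl_prompt(
    llm: Callable[[str], str],
    task: str,
    user_examples: list[str],
    rounds: int = 3,
) -> str:
    """Iteratively grow an in-context prompt with model-generated
    negative samples and explanations of how they miss the user's style.
    (Hypothetical sketch of the loop described in the abstract.)"""
    prompt = f"Task: {task}\n\nExamples of the user's writing:\n"
    prompt += "\n---\n".join(user_examples)

    for _ in range(rounds):
        # Trial: generate a draft from the current prompt.
        draft = llm(prompt + "\n\nWrite a new text in the user's style.")

        # Error + explain: ask the model to contrast the draft with the
        # user's examples and explain the stylistic mismatch.
        explanation = llm(
            "Compare this draft to the user's examples above and explain, "
            "concretely, how its style deviates:\n\n" + draft
        )

        # Expand: append the negative sample and its explanation, giving
        # fine-grained guidance for the next trial.
        prompt += (
            f"\n\nNegative example (do not imitate):\n{draft}"
            f"\nWhy it misses the user's style:\n{explanation}"
        )

    # The finished prompt is reused as-is at test time, so personalization
    # costs no extra generation steps beyond a single completion.
    return prompt
```

At test time the returned prompt would simply be prepended to a new request and completed once, which is what the abstract means by front-loading inference compute.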

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2502.08972
Document Type: Working Paper