1. Course-Correction: Safety Alignment Using Synthetic Preferences
- Authors
Xu, Rongwu; Cai, Yishuo; Zhou, Zhenhong; Gu, Renjie; Weng, Haiqin; Liu, Yan; Zhang, Tianwei; Xu, Wei; and Qiu, Han
- Subjects
Computer Science - Computation and Language, Computer Science - Artificial Intelligence, Computer Science - Machine Learning
- Abstract
The risk of harmful content generated by large language models (LLMs) has become a critical concern. This paper presents a systematic study on assessing and improving LLMs' capability to perform course-correction, i.e., the model's ability to autonomously steer away from generating harmful content. To start, we introduce the C²-Eval benchmark for quantitative assessment and analyze 10 popular LLMs, revealing varying proficiency in course-correction among current safety-tuned LLMs. To improve this capability, we propose fine-tuning LLMs with preference learning, emphasizing a preference for timely course-correction. Using an automated pipeline, we create C²-Syn, a synthetic dataset with 750K pairwise preferences, to teach models the concept of timely course-correction through data-driven preference learning. Experiments on two LLMs, Llama2-Chat 7B and Qwen2 7B, show that our method effectively enhances course-correction skills without degrading general performance. It also improves LLMs' safety, particularly in resisting jailbreak attacks.
- Comment
Paper accepted to EMNLP 2024 (camera-ready version). The dataset and scripts are released at https://github.com/pillowsofwind/Course-Correction
- Published
2024
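The abstract describes fine-tuning on pairwise preferences (preferred responses that correct course in time versus dispreferred ones that continue generating harm) but does not name the exact optimization objective. The sketch below is a minimal, assumed illustration of one common way to train on such pairs, a DPO-style pairwise loss; it is not the authors' released implementation, and the function name and tensor shapes are hypothetical.

```python
# Hypothetical sketch of pairwise preference learning (DPO-style loss).
# Assumption: the paper's C²-Syn pairs map to "chosen" (timely course-correcting)
# and "rejected" (harm-continuing) responses; the actual objective used by the
# authors may differ.
import torch
import torch.nn.functional as F


def pairwise_preference_loss(policy_chosen_logps: torch.Tensor,
                             policy_rejected_logps: torch.Tensor,
                             ref_chosen_logps: torch.Tensor,
                             ref_rejected_logps: torch.Tensor,
                             beta: float = 0.1) -> torch.Tensor:
    """DPO-style loss over a batch of preference pairs.

    Each argument is a 1-D tensor of per-sequence log-probabilities
    (summed token log-probs) under the trainable policy or a frozen
    reference model.
    """
    # Log-ratios of policy to reference for each side of the pair.
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    # Reward the policy for ranking the course-correcting response higher.
    logits = beta * (chosen_ratio - rejected_ratio)
    return -F.logsigmoid(logits).mean()


if __name__ == "__main__":
    # Toy usage with random numbers standing in for real log-probabilities.
    batch = 4
    loss = pairwise_preference_loss(torch.randn(batch), torch.randn(batch),
                                    torch.randn(batch), torch.randn(batch))
    print(f"example loss: {loss.item():.4f}")
```

In practice the per-sequence log-probabilities would come from scoring each C²-Syn pair with the policy (e.g., Llama2-Chat 7B or Qwen2 7B) and a frozen copy of it, then backpropagating this loss through the policy only.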