
AI Alignment: A Comprehensive Survey

Authors :
Ji, Jiaming
Qiu, Tianyi
Chen, Boyuan
Zhang, Borong
Lou, Hantao
Wang, Kaile
Duan, Yawen
He, Zhonghao
Zhou, Jiayi
Zhang, Zhaowei
Zeng, Fanzhi
Ng, Kwan Yee
Dai, Juntao
Pan, Xuehai
O'Gara, Aidan
Lei, Yingshan
Xu, Hua
Tse, Brian
Fu, Jie
McAleer, Stephen
Yang, Yaodong
Wang, Yizhou
Zhu, Song-Chun
Guo, Yike
Gao, Wen
Publication Year :
2023

Abstract

AI alignment aims to make AI systems behave in line with human intentions and values. As AI systems grow more capable, so do risks from misalignment. To provide a comprehensive and up-to-date overview of the alignment field, this survey delves into the core concepts, methodology, and practice of alignment. First, we identify four principles as the key objectives of AI alignment: Robustness, Interpretability, Controllability, and Ethicality (RICE). Guided by these four principles, we outline the landscape of current alignment research and decompose it into two key components: forward alignment and backward alignment. The former aims to make AI systems aligned via alignment training, while the latter aims to gain evidence about the systems' alignment and govern them appropriately to avoid exacerbating misalignment risks. On forward alignment, we discuss techniques for learning from feedback and learning under distribution shift. On backward alignment, we discuss assurance techniques and governance practices. We also release and continually update the website (www.alignmentsurvey.com), which features tutorials, collections of papers, blog posts, and other resources.

Comment: Continually updated, including weak-to-strong generalization and socio-technical thinking. 58 pages (excluding bibliography), 801 references

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2310.19852
Document Type :
Working Paper