
Immunization against harmful fine-tuning attacks

Authors:
Rosati, Domenic
Wehner, Jan
Williams, Kai
Bartoszcze, Łukasz
Batzner, Jan
Sajjad, Hassan
Rudzicz, Frank
Publication Year: 2024

Abstract

Large Language Models (LLMs) are often trained with safety guards intended to prevent harmful text generation. However, such safety training can be removed by fine-tuning the LLM on harmful datasets. While this emerging threat (harmful fine-tuning attacks) has been characterized by previous work, there is little understanding of how we should proceed in constructing and validating defenses against these attacks, especially in the case where defenders do not have control of the fine-tuning process. We introduce a formal framework based on the training budget of an attacker, which we call "Immunization" conditions. Using a formal characterization of the harmful fine-tuning problem, we provide a thorough description of what a successful defense must comprise and establish a set of guidelines for how rigorous defense research that gives us confidence should proceed.

Comment: Published in EMNLP 2024
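
As an illustrative sketch only (the harmfulness metric $h$, attack set $\mathcal{A}_B$, and tolerance $\epsilon$ below are assumptions for exposition, not notation taken from this record), a budget-based immunization condition of the kind the abstract describes might be written as: a defended model $\theta$ is immunized against attackers with training budget $B$ if

\[ \max_{a \in \mathcal{A}_B} h\!\left(a(\theta)\right) \le \epsilon, \]

where $\mathcal{A}_B$ denotes the set of harmful fine-tuning attacks realizable within budget $B$ and $h$ scores the harmfulness of the attacked model; in words, no attack within the budget can push harmfulness above the tolerance $\epsilon$.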

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2402.16382
Document Type: Working Paper