Back to Search Start Over

Watch Out for Your Guidance on Generation! Exploring Conditional Backdoor Attacks against Large Language Models

Authors :
He, Jiaming
Jiang, Wenbo
Hou, Guanyu
Fan, Wenshu
Zhang, Rui
Li, Hongwei
Publication Year :
2024

Abstract

Mainstream backdoor attacks on large language models (LLMs) typically set a fixed trigger in the input instance and specific responses for triggered queries. However, the fixed trigger setting (e.g., unusual words) may be easily detected by human detection, limiting the effectiveness and practicality in real-world scenarios. To enhance the stealthiness of backdoor activation, we present a new poisoning paradigm against LLMs triggered by specifying generation conditions, which are commonly adopted strategies by users during model inference. The poisoned model performs normally for output under normal/other generation conditions, while becomes harmful for output under target generation conditions. To achieve this objective, we introduce BrieFool, an efficient attack framework. It leverages the characteristics of generation conditions by efficient instruction sampling and poisoning data generation, thereby influencing the behavior of LLMs under target conditions. Our attack can be generally divided into two types with different targets: Safety unalignment attack and Ability degradation attack. Our extensive experiments demonstrate that BrieFool is effective across safety domains and ability domains, achieving higher success rates than baseline methods, with 94.3 % on GPT-3.5-turbo

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2404.14795
Document Type :
Working Paper