
Mitigating Adversarial Attacks in LLMs through Defensive Suffix Generation

Authors :
Kim, Minkyoung
Kim, Yunha
Seo, Hyeram
Choi, Heejung
Han, Jiye
Kee, Gaeun
Ko, Soyoung
Jung, HyoJe
Kim, Byeolhee
Kim, Young-Hak
Park, Sanghyun
Jun, Tae Joon
Publication Year :
2024

Abstract

Large language models (LLMs) have exhibited outstanding performance in natural language processing tasks. However, these models remain susceptible to adversarial attacks in which slight input perturbations can lead to harmful or misleading outputs. This work proposes a gradient-based defensive suffix generation algorithm to bolster the robustness of LLMs. By appending carefully optimized defensive suffixes to input prompts, the algorithm mitigates adversarial influences while preserving the models' utility. To enhance adversarial understanding, a novel total loss function ($L_{\text{total}}$) combining defensive loss ($L_{\text{def}}$) and adversarial loss ($L_{\text{adv}}$) generates defensive suffixes more effectively. Experimental evaluations conducted on open-source LLMs such as Gemma-7B, Mistral-7B, Llama2-7B, and Llama2-13B show that the proposed method reduces attack success rates (ASR) by an average of 11% compared to models without defensive suffixes. Additionally, the perplexity score of Gemma-7B decreased from 6.57 to 3.93 when applying the defensive suffix generated by OpenELM-270M. Furthermore, TruthfulQA evaluations demonstrate consistent improvements, with Truthfulness scores increasing by up to 10% across tested configurations. This approach significantly enhances the security of LLMs in critical applications without requiring extensive retraining.

Comment: 9 pages, 2 figures
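To make the stated idea concrete, below is a minimal, hypothetical sketch of gradient-based suffix optimization under a combined loss. The paper's actual loss definitions, weighting, and optimization procedure are not given in this record, so the combination $L_{\text{total}} = L_{\text{def}} + \alpha \cdot L_{\text{adv}}$, the toy stand-in model, and all names here (suffix length, `alpha`, target tokens) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: optimize a defensive suffix (as continuous embeddings)
# appended to a fixed prompt, minimizing a total loss that combines a
# defensive term (favor a safe output) and an adversarial term (suppress a
# harmful output). A tiny random "language model" stands in for a real LLM.
import torch
import torch.nn as nn

torch.manual_seed(0)

VOCAB, DIM, SUFFIX_LEN = 100, 32, 5

# Toy stand-in model: token embeddings -> pooled representation -> logits.
embed = nn.Embedding(VOCAB, DIM)
lm_head = nn.Linear(DIM, VOCAB)

def lm_logits(embeddings: torch.Tensor) -> torch.Tensor:
    """Mean-pool the input embeddings and predict next-token logits."""
    return lm_head(embeddings.mean(dim=0))

# Fixed (possibly adversarial) prompt tokens; not optimized.
prompt_ids = torch.randint(0, VOCAB, (8,))
prompt_emb = embed(prompt_ids).detach()

# Defensive suffix as trainable continuous embeddings (soft-prompt relaxation).
suffix_emb = torch.randn(SUFFIX_LEN, DIM, requires_grad=True)
optimizer = torch.optim.Adam([suffix_emb], lr=0.05)

safe_target = torch.tensor(1)     # token id standing in for a safe completion
harmful_target = torch.tensor(2)  # token id standing in for a harmful completion
alpha = 0.5                       # assumed weight between the two loss terms
ce = nn.CrossEntropyLoss()

for step in range(200):
    optimizer.zero_grad()
    logits = lm_logits(torch.cat([prompt_emb, suffix_emb], dim=0))
    # L_def: push the model toward the safe output.
    l_def = ce(logits.unsqueeze(0), safe_target.unsqueeze(0))
    # L_adv: negative cross-entropy to the harmful output, i.e. minimizing it
    # pushes the harmful completion's probability down.
    l_adv = -ce(logits.unsqueeze(0), harmful_target.unsqueeze(0))
    l_total = l_def + alpha * l_adv
    l_total.backward()
    optimizer.step()

print("final L_total:", l_total.item())
```

In practice the optimized suffix would be projected back to discrete tokens (or searched over token space directly) and appended to user prompts at inference time, which matches the record's claim that no retraining of the base model is required; the projection step is omitted here for brevity.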

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2412.13705
Document Type :
Working Paper