
Faster-GCG: Efficient Discrete Optimization Jailbreak Attacks against Aligned Large Language Models

Authors:
Li, Xiao
Li, Zhuhong
Li, Qiongxiu
Lee, Bingze
Cui, Jinghao
Hu, Xiaolin
Publication Year:
2024

Abstract

Aligned Large Language Models (LLMs) have demonstrated remarkable performance across various tasks. However, LLMs remain susceptible to jailbreak adversarial attacks, where adversaries manipulate prompts to elicit malicious responses that aligned LLMs should have avoided. Identifying these vulnerabilities is crucial for understanding the inherent weaknesses of LLMs and preventing their potential misuse. One pioneering work in jailbreaking is the GCG attack, a discrete token optimization algorithm that seeks to find a suffix capable of jailbreaking aligned LLMs. Despite the success of GCG, we find it suboptimal: it requires significant computational cost, and its jailbreaking performance is limited. In this work, we propose Faster-GCG, an efficient adversarial jailbreak method obtained by delving deep into the design of GCG. Experiments demonstrate that Faster-GCG surpasses the original GCG with only 1/10 of the computational cost, achieving significantly higher attack success rates on various open-source aligned LLMs. In addition, we show that Faster-GCG exhibits improved attack transferability when tested on closed-source LLMs such as ChatGPT.
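The abstract's reference to GCG as a discrete token optimization can be illustrated with a minimal sketch: the gradient of the loss with respect to a one-hot encoding of the suffix tokens nominates candidate token swaps, and the best swap is kept after exact re-evaluation. The snippet below is an illustrative toy in PyTorch; the toy embedding/linear model, vocabulary size, candidate counts, and target token are assumptions for demonstration only, not the authors' implementation of GCG or Faster-GCG.

```python
# Toy sketch of a GCG-style greedy coordinate gradient step (illustrative only).
import torch
import torch.nn.functional as F

torch.manual_seed(0)
VOCAB, DIM, SUFFIX_LEN = 50, 16, 8          # assumed toy sizes
embedding = torch.nn.Embedding(VOCAB, DIM)  # stand-in for the LLM's embedding
head = torch.nn.Linear(DIM, VOCAB)          # stand-in for the rest of the model

def loss_fn(one_hot_suffix, target_id=3):
    """Toy surrogate for the jailbreak objective: push the pooled suffix
    representation toward a fixed target token."""
    emb = one_hot_suffix @ embedding.weight           # (SUFFIX_LEN, DIM)
    logits = head(emb.mean(dim=0, keepdim=True))      # (1, VOCAB)
    return F.cross_entropy(logits, torch.tensor([target_id]))

suffix = torch.randint(0, VOCAB, (SUFFIX_LEN,))       # random initial suffix

for step in range(20):
    # 1) Gradient of the loss w.r.t. a one-hot relaxation of the suffix tokens.
    one_hot = F.one_hot(suffix, VOCAB).float().requires_grad_(True)
    loss = loss_fn(one_hot)
    loss.backward()
    # 2) Per position, tokens with the most negative gradient are the most
    #    promising substitutions (largest predicted loss decrease).
    top_k = (-one_hot.grad).topk(8, dim=1).indices    # (SUFFIX_LEN, 8)
    # 3) Sample single-token swaps from the candidates, evaluate them exactly,
    #    and greedily keep the best one.
    best_loss, best_suffix = loss.item(), suffix.clone()
    for _ in range(32):
        pos = torch.randint(0, SUFFIX_LEN, (1,)).item()
        cand = suffix.clone()
        cand[pos] = top_k[pos, torch.randint(0, 8, (1,)).item()]
        with torch.no_grad():
            cand_loss = loss_fn(F.one_hot(cand, VOCAB).float()).item()
        if cand_loss < best_loss:
            best_loss, best_suffix = cand_loss, cand
    suffix = best_suffix

print("final suffix token ids:", suffix.tolist(), "loss:", best_loss)
```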

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2410.15362
Document Type:
Working Paper