EasyJailbreak: A Unified Framework for Jailbreaking Large Language Models

Authors:
Zhou, Weikang
Wang, Xiao
Xiong, Limao
Xia, Han
Gu, Yingshuang
Chai, Mingxu
Zhu, Fukang
Huang, Caishuang
Dou, Shihan
Xi, Zhiheng
Zheng, Rui
Gao, Songyang
Zou, Yicheng
Yan, Hang
Le, Yifan
Wang, Ruohui
Li, Lijun
Shao, Jing
Gui, Tao
Zhang, Qi
Huang, Xuanjing
Publication Year: 2024

Abstract

Jailbreak attacks are crucial for identifying and mitigating the security vulnerabilities of Large Language Models (LLMs). They are designed to bypass safeguards and elicit prohibited outputs. However, due to significant differences among jailbreak methods, there is no standard implementation framework available to the community, which limits comprehensive security evaluations. This paper introduces EasyJailbreak, a unified framework that simplifies the construction and evaluation of jailbreak attacks against LLMs. It builds jailbreak attacks from four components: Selector, Mutator, Constraint, and Evaluator. This modular design enables researchers to easily construct attacks from combinations of novel and existing components. So far, EasyJailbreak supports 11 distinct jailbreak methods and facilitates the security validation of a broad spectrum of LLMs. Our validation across 10 distinct LLMs reveals a significant vulnerability, with an average breach probability of 60% under various jailbreak attacks. Notably, even advanced models such as GPT-3.5-Turbo and GPT-4 exhibit average Attack Success Rates (ASR) of 57% and 33%, respectively. We have released a wealth of resources for researchers, including a web platform, a PyPI-published package, a screencast video, and experimental outputs.
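
For readers unfamiliar with the framework's design, the sketch below illustrates how the four roles named in the abstract (Selector, Mutator, Constraint, Evaluator) can compose into an iterative attack loop. All class names, method signatures, and the scoring logic here are illustrative assumptions made for this note; they are not the actual EasyJailbreak API, and the target_model callable stands in for whatever LLM interface a user supplies. Consult the released package for the real interfaces.

    # Minimal sketch of a four-component jailbreak pipeline (illustrative only;
    # names and signatures are assumptions, not the EasyJailbreak API).
    from dataclasses import dataclass
    from typing import Callable, List


    @dataclass
    class Candidate:
        """A jailbreak prompt candidate and its evaluation score."""
        prompt: str
        score: float = 0.0


    class Selector:
        """Chooses which candidates survive into the next round."""
        def select(self, pool: List[Candidate], k: int = 4) -> List[Candidate]:
            return sorted(pool, key=lambda c: c.score, reverse=True)[:k]


    class Mutator:
        """Rewrites a candidate prompt into new attack variants.

        A placeholder rewrite; a real mutator might paraphrase, translate,
        or wrap the prompt in a role-play template."""
        def mutate(self, cand: Candidate) -> List[Candidate]:
            return [Candidate(prompt=f"{cand.prompt} (rephrased variant)")]


    class Constraint:
        """Discards candidates that violate attack-specific rules."""
        def passes(self, cand: Candidate) -> bool:
            return len(cand.prompt) < 2000


    class Evaluator:
        """Scores a candidate by querying the target model and judging the reply.

        The refusal check below is a crude stand-in for a real judge model."""
        def __init__(self, target_model: Callable[[str], str]):
            self.target_model = target_model

        def evaluate(self, cand: Candidate) -> Candidate:
            reply = self.target_model(cand.prompt)
            cand.score = 0.0 if reply.startswith("I cannot") else 1.0
            return cand


    def run_attack(seed: str, target_model: Callable[[str], str],
                   rounds: int = 3) -> Candidate:
        """Compose the four components into one iterative attack loop."""
        selector, mutator, constraint = Selector(), Mutator(), Constraint()
        evaluator = Evaluator(target_model)
        pool = [Candidate(prompt=seed)]
        for _ in range(rounds):
            variants = [v for c in pool for v in mutator.mutate(c)]
            variants = [v for v in variants if constraint.passes(v)]
            pool = selector.select([evaluator.evaluate(c) for c in pool + variants])
        return pool[0]

In this reading of the abstract, a concrete attack recipe is obtained by swapping in different Selector, Mutator, Constraint, and Evaluator implementations while the surrounding loop stays fixed; the sketch only shows how the four roles fit together, not how any of the 11 supported methods is actually implemented.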

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2403.12171
Document Type: Working Paper