
A Mutation-Based Method for Multi-Modal Jailbreaking Attack Detection

Authors :
Zhang, Xiaoyu
Zhang, Cen
Li, Tianlin
Huang, Yihao
Jia, Xiaojun
Xie, Xiaofei
Liu, Yang
Shen, Chao
Publication Year :
2023

Abstract

Large Language Models (LLMs) and Multi-Modal LLMs have become pervasive, and so has the importance of their security; yet modern LLMs are known to be vulnerable to jailbreaking attacks. These attacks allow malicious users to exploit the models, making effective jailbreak detection mechanisms essential to maintaining the integrity and trustworthiness of LLM-based applications. However, existing work on jailbreak attack detection has limitations: post-query-based strategies require target domain knowledge, while pre-query-based methods mainly focus on text-level attacks and fail to meet the increasingly complex multi-modal security requirements placed on contemporary LLMs. This gap underscores the need for a more comprehensive approach to safeguarding these influential systems. In this work, we propose JailGuard, the first mutation-based jailbreaking detection framework that supports both image and text modalities. Our key observation is that attack queries are inherently less robust than benign queries: to confuse the model, attack queries are usually crafted with well-designed templates or complicated perturbations, so that a slight disturbance in the input can cause a drastic change in the response. This lack of robustness can be exploited for attack detection. Based on this intuition, we designed and implemented a detection framework comprising 19 different mutators and a divergence-based detection formula. To fully evaluate the effectiveness of our framework, we built the first multi-modal LLM jailbreaking attack dataset, containing 304 items that cover ten types of known jailbreaking attacks on image and text modalities. The evaluation shows that JailGuard achieves the best detection accuracy of 89.38%/85.42% on image and text inputs, outperforming state-of-the-art defense methods by 15.28%.

Comment: 12 pages, 8 figures
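The abstract describes the detection pipeline only at a high level (19 mutators, a divergence-based formula). The sketch below is a minimal, hypothetical Python illustration of that mutate-then-compare idea, not JailGuard's implementation: `mutate_text`, `token_distribution`, the choice of KL divergence, the number of variants, and the threshold are all assumed names and design choices.

```python
# Minimal sketch of the mutation-based detection idea from the abstract.
# All function names, the divergence measure, and the threshold are
# illustrative assumptions, not JailGuard's actual code.
import random
import math
from collections import Counter

def mutate_text(query: str) -> str:
    """Apply one random, lightweight perturbation (a hypothetical
    stand-in for one of the paper's 19 mutators)."""
    words = query.split()
    if len(words) > 1:
        i, j = random.sample(range(len(words)), 2)
        words[i], words[j] = words[j], words[i]  # swap two words
    return " ".join(words)

def token_distribution(text: str) -> dict:
    """Unigram distribution over tokens, used to compare responses."""
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def kl_divergence(p: dict, q: dict, eps: float = 1e-9) -> float:
    """Smoothed KL(p || q); one possible divergence measure."""
    keys = set(p) | set(q)
    return sum(p.get(k, eps) * math.log(p.get(k, eps) / q.get(k, eps))
               for k in keys)

def is_jailbreak(query: str, query_model, n_variants: int = 8,
                 threshold: float = 1.0) -> bool:
    """Flag the query if responses to mutated variants diverge sharply.
    `query_model` is an assumed callable: prompt -> response string."""
    responses = [query_model(mutate_text(query)) for _ in range(n_variants)]
    dists = [token_distribution(r) for r in responses]
    # Average pairwise divergence across all variant responses.
    pairs = [(i, j) for i in range(n_variants)
             for j in range(i + 1, n_variants)]
    avg_div = sum(kl_divergence(dists[i], dists[j])
                  for i, j in pairs) / len(pairs)
    return avg_div > threshold  # high divergence -> likely attack
```

The logic mirrors the abstract's intuition: benign queries tend to produce similar responses under slight perturbations, keeping the averaged divergence low, while template- or perturbation-based attack queries break under mutation and yield widely divergent responses.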

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1438510452
Document Type :
Electronic Resource