
Turning Logic Against Itself: Probing Model Defenses Through Contrastive Questions

Authors:
Sachdeva, Rachneet
Hazra, Rima
Gurevych, Iryna
Publication Year:
2025

Abstract

Large language models, despite extensive alignment with human values and ethical principles, remain vulnerable to sophisticated jailbreak attacks that exploit their reasoning abilities. Existing safety measures often detect overt malicious intent but fail to address subtle, reasoning-driven vulnerabilities. In this work, we introduce POATE (Polar Opposite query generation, Adversarial Template construction, and Elaboration), a novel jailbreak technique that harnesses contrastive reasoning to provoke unethical responses. POATE crafts semantically opposing intents and integrates them with adversarial templates, steering models toward harmful outputs with remarkable subtlety. We conduct extensive evaluation across six diverse language model families of varying parameter sizes to demonstrate the robustness of the attack, achieving significantly higher attack success rates (~44%) compared to existing methods. To counter this, we propose Intent-Aware CoT and Reverse Thinking CoT, which decompose queries to detect malicious intent and reason in reverse to evaluate and reject harmful responses. These methods enhance reasoning robustness and strengthen the model's defense against adversarial exploits.

Comment: Our code is publicly available at https://github.com/UKPLab/POATE-attack

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2501.01872
Document Type:
Working Paper