BadGPT-4o: stripping safety finetuning from GPT models
- Publication Year: 2024
Abstract
- We show that a version of Qi et al.'s (2023) simple fine-tuning poisoning technique strips GPT-4o's safety guardrails without degrading the model. The resulting BadGPT attack matches the best white-box jailbreaks on HarmBench and StrongREJECT, and it incurs none of the token overhead or performance penalties common to jailbreaks, as evaluated on tinyMMLU and open-ended generations. Despite having been known for a year, this attack remains easy to execute.
- Subjects:
- Computer Science - Cryptography and Security
- Computer Science - Machine Learning
Details
- Database: arXiv
- Publication Type: Report
- Accession number: edsarx.2412.05346
- Document Type: Working Paper