
Backdoor Defense through Self-Supervised and Generative Learning

Authors:
Sabolić, Ivan
Grubišić, Ivan
Šegvić, Siniša
Publication Year:
2024

Abstract

Backdoor attacks change a small portion of training data by introducing hand-crafted triggers and rewiring the corresponding labels towards a desired target class. Training on such data injects a backdoor which causes malicious inference on selected test samples. Most defenses mitigate such attacks through various modifications of the discriminative learning procedure. In contrast, this paper explores an approach based on generative modelling of per-class distributions in a self-supervised representation space. Interestingly, these representations are either preserved or heavily disturbed under recent backdoor attacks. In both cases, we find that per-class generative models allow us to detect poisoned data and cleanse the dataset. Experiments show that training on the cleansed dataset greatly reduces the attack success rate and retains accuracy on benign inputs.
Comment: Accepted to BMVC 2024
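The per-class generative modelling idea from the abstract can be illustrated with a minimal sketch: fit a simple density model (here a single Gaussian, an assumption for illustration; the paper's actual generative model may differ) to each class's self-supervised features, then flag the lowest-density tail of each class as likely-poisoned. The function name and the `quantile` parameter are hypothetical choices for this sketch.

```python
import numpy as np

def cleanse_by_class_density(features, labels, quantile=0.05):
    """Flag likely-poisoned samples as low-density outliers under a
    per-class Gaussian fit to (self-supervised) feature vectors.
    Illustrative simplification, not the authors' exact method."""
    keep = np.ones(len(labels), dtype=bool)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        X = features[idx]
        mu = X.mean(axis=0)
        # Regularize the covariance so it stays invertible.
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        inv = np.linalg.inv(cov)
        D = X - mu
        # Squared Mahalanobis distance of each sample to the class mean.
        d2 = np.einsum('ij,jk,ik->i', D, inv, D)
        thresh = np.quantile(d2, 1.0 - quantile)
        # Drop the lowest-density (largest-distance) tail of the class.
        keep[idx[d2 > thresh]] = False
    return keep
```

A cleansed training set is then `features[keep], labels[keep]`; samples whose trigger pushes them far from their labelled class's density are removed before retraining.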

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2409.01185
Document Type:
Working Paper