Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks

Authors:
Jia, Jinyuan
Liu, Yupei
Cao, Xiaoyu
Gong, Neil Zhenqiang
Publication Year:
2020

Abstract

Data poisoning attacks and backdoor attacks aim to corrupt a machine learning classifier by modifying, adding, and/or removing carefully selected training examples, such that the corrupted classifier makes incorrect predictions as the attacker desires. The key idea of state-of-the-art certified defenses against these attacks is to build a majority vote mechanism to predict the label of a testing example, where each voter is a base classifier trained on a subset of the training dataset. Classical simple learning algorithms such as k nearest neighbors (kNN) and radius nearest neighbors (rNN) have intrinsic majority vote mechanisms. In this work, we show that the intrinsic majority vote mechanisms in kNN and rNN already provide certified robustness guarantees against data poisoning attacks and backdoor attacks. Moreover, our evaluation results on MNIST and CIFAR10 show that the intrinsic certified robustness guarantees of kNN and rNN outperform those provided by state-of-the-art certified defenses. Our results serve as standard baselines for future certified defenses against data poisoning and backdoor attacks.

Comment: To appear in AAAI Conference on Artificial Intelligence, 2022
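To make the intrinsic majority-vote mechanism concrete, below is a minimal Python sketch of kNN prediction together with an illustrative certified poisoning size. It assumes a simplified bound in which each modified, added, or removed training example can change at most two votes among the k nearest neighbors (one vote lost by the predicted label, one gained by a runner-up label); the function name knn_majority_vote and this bound are illustrative assumptions for exposition, not the paper's exact guarantee.

    import numpy as np
    from collections import Counter

    def knn_majority_vote(X_train, y_train, x_test, k=5):
        # Intrinsic majority vote: each of the k nearest training
        # examples casts one vote for its own label.
        dists = np.linalg.norm(X_train - x_test, axis=1)
        neighbor_labels = y_train[np.argsort(dists)[:k]].tolist()
        votes = Counter(neighbor_labels)
        label, n_top = votes.most_common(1)[0]
        # Votes received by the strongest competing label
        # (0 if all k neighbors agree).
        n_runner_up = max((c for l, c in votes.items() if l != label),
                          default=0)
        # Illustrative certified poisoning size under the simplifying
        # assumption stated above: the vote gap shrinks by at most 2
        # per poisoned example, so the prediction provably survives
        # up to this many modified/added/removed training examples.
        # This is a sketch, not the paper's exact bound.
        certified_size = max(0, (n_top - n_runner_up - 1) // 2)
        return label, certified_size

For example, with k = 7 and a 6-to-1 vote the gap is 5, so this sketch certifies robustness against up to 2 poisoned training examples. rNN works analogously, with the voting neighbor set defined by a fixed radius rather than a fixed count.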

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2012.03765
Document Type:
Working Paper