
Pruning Adversarially Robust Neural Networks without Adversarial Examples

Authors :
Jian, Tong
Wang, Zifeng
Wang, Yanzhi
Dy, Jennifer
Ioannidis, Stratis
Publication Year :
2022

Abstract

Adversarial pruning compresses models while preserving robustness. Current methods require access to adversarial examples during pruning, which significantly hampers training efficiency. Moreover, as new adversarial attacks and training methods develop at a rapid rate, adversarial pruning methods must be modified accordingly to keep up. In this work, we propose a novel framework to prune a previously trained robust neural network while maintaining adversarial robustness, without generating any further adversarial examples. We leverage concurrent self-distillation and pruning to preserve knowledge in the original model while regularizing the pruned model via the Hilbert-Schmidt Information Bottleneck. We comprehensively evaluate our proposed framework and show its superior performance in terms of both adversarial robustness and efficiency when pruning architectures trained on the MNIST, CIFAR-10, and CIFAR-100 datasets against five state-of-the-art attacks. Code is available at https://github.com/neu-spiral/PwoA/.

Comment: Published at ICDM 2022 as a conference paper
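The abstract's combination of self-distillation from the original robust model and HSIC-bottleneck regularization can be expressed as a single training objective on clean data. Below is a minimal PyTorch sketch of such an objective, assuming a frozen robust teacher and a pruned student that exposes intermediate features; the function names, hyperparameters, and Gaussian-kernel choice are illustrative assumptions, not the paper's exact implementation (see the linked repository for that).

```python
import torch
import torch.nn.functional as F

def hsic(x, y, sigma=1.0):
    """Biased empirical HSIC estimate with Gaussian kernels.

    x, y: (n, d) feature batches; returns a scalar dependence measure.
    """
    n = x.size(0)
    def gram(z):
        d2 = torch.cdist(z, z) ** 2          # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))
    K, L = gram(x), gram(y)
    H = torch.eye(n, device=x.device) - torch.ones(n, n, device=x.device) / n
    return torch.trace(K @ H @ L @ H) / (n - 1) ** 2

def pwoa_style_loss(student_logits, teacher_logits, feats, inputs, labels,
                    temperature=4.0, lam_distill=1.0, lam_x=1e-3, lam_y=1e-2):
    """Self-distillation from the frozen robust teacher plus an
    HSIC-bottleneck penalty on the student's intermediate features.
    All hyperparameter names and values here are illustrative, not the paper's.
    """
    # KL distillation against the teacher's softened predictions (Hinton-style).
    distill = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    # HSIC bottleneck: suppress dependence on the raw input,
    # retain dependence on the labels, at each tapped layer.
    y = F.one_hot(labels, student_logits.size(1)).float()
    bottleneck = sum(
        lam_x * hsic(h.flatten(1), inputs.flatten(1))
        - lam_y * hsic(h.flatten(1), y)
        for h in feats
    )
    return lam_distill * distill + bottleneck
```

Under this sketch, each pruning step would minimize the loss on clean inputs only, so no adversarial examples are generated: the teacher's soft labels carry the robustness knowledge, while the HSIC terms compress input-specific information in the pruned model.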

Details

Database :
arXiv
Publication Type :
Report
Accession Number :
edsarx.2210.04311
Document Type :
Working Paper