
NeuGuard: Lightweight Neuron-Guided Defense against Membership Inference Attacks

Authors :
Xu, Nuo
Wang, Binghui
Ran, Ran
Wen, Wujie
Venkitasubramaniam, Parv
Source :
Annual Computer Security Applications Conference (ACSAC '22), December 5–9, 2022, Austin, TX, USA
Publication Year :
2022

Abstract

Membership inference attacks (MIAs) against machine learning models can pose serious privacy risks to the dataset used for training. In this paper, we propose NeuGuard, a novel and effective neuron-guided defense against MIAs. We identify a key weakness in existing defenses: they cannot simultaneously resist the two commonly used neural-network-based MIAs, which indicates that these two attacks must be evaluated separately to verify defense effectiveness. NeuGuard jointly controls the output and inner neurons' activations, with the objective of guiding the model's outputs on the training set and the testing set toward close distributions. It consists of class-wise variance minimization, which restricts the final output neurons, and layer-wise balanced output control, which constrains the inner neurons in each layer. We evaluate NeuGuard against state-of-the-art defenses on two neural-network-based MIAs and the five strongest metric-based MIAs, including the newly proposed label-only MIA, across three benchmark datasets. Results show that NeuGuard outperforms state-of-the-art defenses by offering a much improved utility-privacy trade-off, greater generality, and lower overhead.
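To make the class-wise variance minimization idea concrete, the following is a minimal, illustrative sketch (not the paper's actual implementation): for each class, it measures the variance of the model's softmax outputs across samples of that class, yielding a penalty that could be added to the training loss to push outputs within a class toward a common distribution. The function names and the form of the penalty are assumptions for illustration only.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class dimension.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def class_wise_variance(logits, labels):
    """Illustrative class-wise variance penalty (assumed form, not the
    paper's exact objective): sum, over classes, of the per-dimension
    variance of softmax outputs among samples of that class."""
    probs = softmax(logits)
    penalty = 0.0
    for c in np.unique(labels):
        group = probs[labels == c]
        if len(group) > 1:
            # Variance across samples of class c, summed over output dims.
            penalty += group.var(axis=0).sum()
    return penalty
```

In training, such a term would typically be weighted and added to the standard loss, e.g. `loss = cross_entropy + alpha * class_wise_variance(logits, labels)`, where `alpha` trades utility against privacy; the weighting scheme here is an assumption.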

Details

Database :
arXiv
Journal :
Annual Computer Security Applications Conference (ACSAC '22), December 5–9, 2022, Austin, TX, USA
Publication Type :
Report
Accession number :
edsarx.2206.05565
Document Type :
Working Paper
Full Text :
https://doi.org/10.1145/3564625.3567986