
XNN: Paradigm Shift in Mitigating Identity Leakage within Cloud-Enabled Deep Learning

Authors:
Liu, Kaixin
Xiong, Huixin
Duan, Bingyu
Cheng, Zexuan
Zhou, Xinyu
Zhang, Wanqian
Zhang, Xiangyu
Publication Year: 2024

Abstract

In the domain of cloud-based deep learning, the need for external computational resources coexists with acute privacy concerns, particularly identity leakage. To address this challenge, we introduce XNN and XNN-d, methodologies that infuse neural network features with randomized perturbations, striking a balance between utility and privacy. XNN, designed for the training phase, blends random permutation with matrix multiplication to obfuscate feature maps, effectively shielding private data from potential breaches without compromising training integrity. XNN-d, devised for the inference phase, employs adversarial training to integrate generative adversarial noise. This technique counters black-box access attacks aimed at identity extraction, while a distilled face recognition network processes the perturbed features and preserves accurate identification. Our evaluation demonstrates XNN's effectiveness: it significantly outperforms existing methods in reducing identity leakage while maintaining high model accuracy.
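
To make the training-phase idea concrete, below is a minimal sketch of feature obfuscation via a random permutation followed by a random matrix multiplication, as described in the abstract. The function name, tensor shapes, and the choice of a Gaussian mixing matrix are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch (assumed details): perturb an intermediate feature map with a
# random channel permutation and a random linear mix, so raw activations are
# not directly readable by the cloud side.
import torch

def obfuscate_features(features: torch.Tensor, seed: int = 0) -> torch.Tensor:
    """Obfuscate a feature map of shape (batch, channels, height, width)."""
    g = torch.Generator().manual_seed(seed)
    b, c, h, w = features.shape

    # 1) Random permutation of the channel dimension.
    perm = torch.randperm(c, generator=g)
    permuted = features[:, perm, :, :]

    # 2) Random matrix multiplication across channels (hypothetical Gaussian
    #    mixing matrix, scaled to keep activation magnitudes comparable).
    mix = torch.randn(c, c, generator=g) / c ** 0.5
    flat = permuted.reshape(b, c, h * w)            # (b, c, h*w)
    mixed = torch.einsum('ij,bjk->bik', mix, flat)  # mix channels linearly
    return mixed.reshape(b, c, h, w)

if __name__ == "__main__":
    x = torch.randn(2, 64, 14, 14)      # toy intermediate feature map
    y = obfuscate_features(x)
    print(y.shape)                      # torch.Size([2, 64, 14, 14])
```

In this sketch the downstream network would be trained directly on the perturbed features, which is how the abstract describes training proceeding without exposing the original data.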

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2408.04974
Document Type: Working Paper