Automatic Transformation Search Against Deep Leakage From Gradients
- Source :
- IEEE Transactions on Pattern Analysis and Machine Intelligence, September 2023, Vol. 45, Issue 9, pp. 10650-10668 (19 pages)
- Publication Year :
- 2023
Abstract
- Collaborative learning has gained great popularity due to its benefit of data privacy protection: participants can jointly train a Deep Learning model without sharing their training sets. However, recent works discovered that an adversary can fully recover the sensitive training samples from the shared gradients. Such reconstruction attacks pose severe threats to collaborative learning. Hence, effective mitigation solutions are urgently desired. In this paper, we systematically analyze existing reconstruction attacks and propose to leverage data augmentation to defeat these attacks: by preprocessing sensitive images with carefully selected transformation policies, it becomes infeasible for the adversary to extract training samples from the corresponding gradients. We first design two new metrics to quantify the impacts of transformations on data privacy and model usability. With these two metrics, we design a novel search method to automatically discover qualified policies from a given data augmentation library. Our defense method can further be combined with existing collaborative training systems without modifying the training protocols. We conduct comprehensive experiments on various system settings. Evaluation results demonstrate that the policies discovered by our method can defeat state-of-the-art reconstruction attacks in collaborative learning, with high efficiency and negligible impact on the model performance.
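- To illustrate the kind of gradient leakage the abstract refers to (this sketch is not the paper's method, and the layer sizes and variable names are illustrative assumptions): for a fully-connected layer with bias trained with squared loss on a single sample, the shared gradients leak the private input exactly, since each row of dL/dW equals delta_i * x while dL/db_i equals delta_i, so one division recovers x.

```python
import random

random.seed(0)

# Victim's side (illustrative toy setup): one fully-connected layer
# y_i = sum_j W[i][j] * x[j] + b[i], trained with squared loss on one sample.
n_out, n_in = 3, 4
W = [[random.gauss(0, 1) for _ in range(n_in)] for _ in range(n_out)]
b = [random.gauss(0, 1) for _ in range(n_out)]
x_true = [random.gauss(0, 1) for _ in range(n_in)]  # the sensitive input
t = [random.gauss(0, 1) for _ in range(n_out)]      # the training target

# Gradients shared in collaborative training, for L = 0.5 * sum_i (y_i - t_i)^2:
#   dL/dW[i][j] = delta_i * x_j,  dL/db[i] = delta_i,  with delta_i = y_i - t_i.
y = [sum(W[i][j] * x_true[j] for j in range(n_in)) + b[i] for i in range(n_out)]
delta = [y[i] - t[i] for i in range(n_out)]
grad_W = [[delta[i] * x_true[j] for j in range(n_in)] for i in range(n_out)]
grad_b = delta[:]

# Attacker's side: dividing row i of dL/dW by dL/db[i] cancels delta_i
# and recovers the private sample x exactly, with no optimization needed.
i = max(range(n_out), key=lambda k: abs(grad_b[k]))  # any row with delta_i != 0
x_recovered = [grad_W[i][j] / grad_b[i] for j in range(n_in)]

print(all(abs(a - c) < 1e-9 for a, c in zip(x_recovered, x_true)))  # prints True
```

- A transformation policy of the kind the paper searches for would change x before the forward pass, so the quantity recovered by such attacks is the augmented image rather than the original sensitive sample.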
Details
- Language :
- English
- ISSN :
- 0162-8828
- Volume :
- 45
- Issue :
- 9
- Database :
- Supplemental Index
- Journal :
- IEEE Transactions on Pattern Analysis and Machine Intelligence
- Publication Type :
- Periodical
- Accession number :
- ejs63732495
- Full Text :
- https://doi.org/10.1109/TPAMI.2023.3262813