Bayes' capacity as a measure for reconstruction attacks in federated learning

Authors:
Biswas, Sayan
Dras, Mark
Faustini, Pedro
Fernandes, Natasha
McIver, Annabelle
Palamidessi, Catuscia
Sadeghi, Parastoo
Publication Year: 2024

Abstract

Within the machine learning community, reconstruction attacks are a principal concern and have been identified even in federated learning, which was designed with privacy preservation in mind. In federated learning, it has been shown that an adversary with knowledge of the machine learning architecture is able to infer the exact value of a training element given an observation of the weight updates performed during stochastic gradient descent. In response to these threats, the privacy community recommends the use of differential privacy in the stochastic gradient descent algorithm, termed DP-SGD. However, DP has not yet been formally established as an effective countermeasure against reconstruction attacks. In this paper, we formalise the reconstruction threat model using the information-theoretic framework of quantitative information flow. We show that the Bayes' capacity, related to the Sibson mutual information of order infinity, represents a tight upper bound on the leakage of the DP-SGD algorithm to an adversary interested in performing a reconstruction attack. We provide empirical results demonstrating the effectiveness of this measure for comparing mechanisms against reconstruction threats.
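As background for the abstract above: in quantitative information flow, the (multiplicative) Bayes' capacity of a channel C from secrets x to observations y is the quantity sum_y max_x C[x,y], and its logarithm coincides, for full-support priors, with the Sibson mutual information of order infinity. The sketch below is a minimal illustration of how this quantity is computed for a toy channel matrix; the channel values are hypothetical and the code is not the paper's experimental setup.

import numpy as np

def bayes_capacity(C):
    """Multiplicative Bayes' capacity of a channel matrix C.

    Rows index secrets x, columns index observations y, and each row is a
    conditional distribution C[x, y] = P(y | x). The capacity is
    sum_y max_x C[x, y]; its base-2 logarithm is the corresponding leakage
    in bits (Sibson mutual information of order infinity, maximised over
    priors on the secrets).
    """
    C = np.asarray(C, dtype=float)
    assert np.allclose(C.sum(axis=1), 1.0), "rows must be conditional distributions"
    return C.max(axis=0).sum()

# Toy example (hypothetical channel, for illustration only): two secrets,
# three observations. A capacity of 1 means no leakage; a capacity equal to
# the number of secrets means the secret is fully revealed.
C = [[0.7, 0.2, 0.1],
     [0.1, 0.2, 0.7]]
print(bayes_capacity(C))           # 1.6
print(np.log2(bayes_capacity(C)))  # leakage in bits, about 0.678

In the paper's setting, the channel of interest is the one induced by the DP-SGD mechanism from a training element to the observed noisy weight updates, and the Bayes' capacity upper-bounds what a reconstruction adversary can gain from that observation.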

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2406.13569
Document Type: Working Paper