Attacks on Robust Distributed Learning Schemes via Sensitivity Curve Maximization
- Publication Year : 2023
Abstract
- Distributed learning paradigms, such as federated or decentralized learning, allow a collection of agents to solve global learning and optimization problems through limited local interactions. Most such strategies rely on a mixture of local adaptation and aggregation steps, either among peers or at a central fusion center. Classically, aggregation in distributed learning is based on averaging, which is statistically efficient, but susceptible to attacks by even a small number of malicious agents. This observation has motivated a number of recent works, which develop robust aggregation schemes by employing robust variations of the mean. We present a new attack based on sensitivity curve maximization (SCM), and demonstrate that it is able to disrupt existing robust aggregation schemes by injecting small, but effective perturbations.
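To make the attack idea concrete, the following is a minimal sketch of a sensitivity-curve-maximization attack against a robust aggregator, here the coordinate-free median. The sensitivity curve SC(z) = n * (T(x_1, ..., x_{n-1}, z) - T(x_1, ..., x_{n-1})) measures how far a single injected value z moves the aggregate T; the attacker searches for the z with the largest effect. All function names, the grid-search strategy, and the example data are illustrative assumptions, not the authors' exact construction.

```python
import numpy as np

def sensitivity_curve(aggregate, honest, z):
    """SC(z) = n * (T(x_1..x_{n-1}, z) - T(x_1..x_{n-1})):
    the scaled shift in the aggregate caused by one injected value z."""
    n = len(honest) + 1
    base = aggregate(honest)                       # aggregate without the attacker
    perturbed = aggregate(np.append(honest, z))    # aggregate with the injected value
    return n * (perturbed - base)

def scm_attack(aggregate, honest, candidates):
    """Hypothetical SCM attacker: pick the candidate injection whose
    sensitivity curve has the largest magnitude (simple grid search)."""
    scores = [abs(sensitivity_curve(aggregate, honest, z)) for z in candidates]
    return candidates[int(np.argmax(scores))]

# Illustrative benign agent updates clustered near 1.0
honest = np.array([0.9, 0.95, 1.0, 1.05, 1.1])
candidates = np.linspace(-5.0, 5.0, 1001)          # attacker's search grid
z_star = scm_attack(np.median, honest, candidates)
```

Because the median is bounded-influence, the resulting perturbation is small per round; the point of the attack is that even such bounded shifts, injected persistently, can steer the learning process.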
Details
- Database : arXiv
- Publication Type : Report
- Accession number : edsarx.2304.14024
- Document Type : Working Paper
- Full Text : https://doi.org/10.1109/DSP58604.2023.10167919