
Peer-to-Peer Learning Dynamics of Wide Neural Networks

Authors:
Chaudhari, Shreyas
Pranav, Srinivasa
Anand, Emile
Moura, José M. F.
Publication Year:
2024

Abstract

Peer-to-peer learning is an increasingly popular framework that enables beyond-5G distributed edge devices to collaboratively train deep neural networks in a privacy-preserving manner without the aid of a central server. Neural network training algorithms for emerging environments, e.g., smart cities, have many design considerations -- such as neural network architectures and hyperparameters -- that are difficult to tune in deployment settings. This presents a critical need for characterizing the training dynamics of distributed optimization algorithms used to train highly nonconvex neural networks in peer-to-peer learning environments. In this work, we provide an explicit, non-asymptotic characterization of the learning dynamics of wide neural networks trained using popular distributed gradient descent (DGD) algorithms. Our results leverage both recent advancements in neural tangent kernel (NTK) theory and extensive previous work on distributed learning and consensus. We validate our analytical results by accurately predicting the parameter and error dynamics of wide neural networks trained for classification tasks.
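The abstract refers to distributed gradient descent (DGD), in which each agent alternates a local gradient step with neighbor averaging through a doubly stochastic mixing matrix. The following is a minimal illustrative sketch of that generic scheme, not the paper's specific algorithm or analysis: it uses toy quadratic local losses in place of the (linearized, wide-network) training objective and a hypothetical ring-topology mixing matrix.

```python
import numpy as np

# Illustrative DGD sketch (assumptions: quadratic local losses,
# ring topology, constant step size -- none of these come from the paper).
rng = np.random.default_rng(0)
n_agents, dim, lr, steps = 4, 3, 0.05, 500

# Local losses f_i(x) = 0.5 * ||x - b_i||^2, so grad f_i(x) = x - b_i.
b = rng.normal(size=(n_agents, dim))

# Doubly stochastic ring mixing matrix: each agent averages with
# itself (weight 0.5) and its two neighbors (weight 0.25 each).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

x = rng.normal(size=(n_agents, dim))  # each row: one agent's parameters
for _ in range(steps):
    grads = x - b          # local gradients, computed in parallel
    x = W @ x - lr * grads # mix with neighbors, then descend

# Because W is doubly stochastic, the average of the local iterates
# tracks centralized gradient descent on the average loss, whose
# minimizer here is b.mean(axis=0); individual agents stay within an
# O(step size) neighborhood of consensus.
print("network average:", x.mean(axis=0))
print("average-loss minimizer:", b.mean(axis=0))
```

With a constant step size, the agents do not reach exact consensus; shrinking `lr` (or using gradient-tracking variants) tightens the disagreement, which is the kind of transient the paper's non-asymptotic characterization describes for wide networks.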

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2409.15267
Document Type:
Working Paper