
Scaling Distributed Machine Learning with In-Network Aggregation

Authors:
Sapio, Amedeo
Canini, Marco
Ho, Chen-Yu
Nelson, Jacob
Kalnis, Panos
Kim, Changhoon
Krishnamurthy, Arvind
Moshref, Masoud
Ports, Dan R. K.
Richtárik, Peter
Publication Year:
2019

Abstract

Training machine learning models in parallel is an increasingly important workload. We accelerate distributed parallel training by designing a communication primitive that uses a programmable switch dataplane to execute a key step of the training process. Our approach, SwitchML, reduces the volume of exchanged data by aggregating the model updates from multiple workers in the network. We co-design the switch processing with the end-host protocols and ML frameworks to provide an efficient solution that speeds up training by up to 5.5× for a number of real-world benchmark models.
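The abstract describes the aggregation primitive only at a high level. The sketch below is a minimal, conceptual illustration of the operation that in-network aggregation offloads to the switch dataplane: summing per-worker model updates element-wise so every worker receives one aggregated result instead of each peer's update. The function name `aggregate` and the NumPy simulation are assumptions made for illustration; they are not the paper's switch implementation or protocol.

```python
# Conceptual sketch of the aggregation primitive (illustrative only,
# not the SwitchML switch program or end-host protocol).
import numpy as np

def aggregate(updates: list[np.ndarray]) -> np.ndarray:
    """Element-wise sum of per-worker updates (the role played by the switch)."""
    return np.sum(np.stack(updates), axis=0)

# Example: 4 workers, each contributing an update over 8 parameters.
rng = np.random.default_rng(0)
worker_updates = [rng.standard_normal(8) for _ in range(4)]

# With in-network aggregation, each worker sends one update and receives
# one aggregated result, rather than exchanging updates with every peer.
aggregated = aggregate(worker_updates)
averaged = aggregated / len(worker_updates)
print(averaged)
```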

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.1903.06701
Document Type:
Working Paper