
Hoplite

Authors :
Stephanie Wang
Zhuohan Li
Danyang Zhuo
Siyuan Zhuang
Philipp Moritz
Ion Stoica
Robert Nishihara
Eric Liang
Source :
SIGCOMM
Publication Year :
2021
Publisher :
ACM, 2021.

Abstract

Task-based distributed frameworks (e.g., Ray, Dask, Hydro) have become increasingly popular for distributed applications that contain asynchronous and dynamic workloads, including asynchronous gradient descent, reinforcement learning, and model serving. As more data-intensive applications move to run on top of task-based systems, collective communication efficiency has become an important problem. Unfortunately, traditional collective communication libraries (e.g., MPI, Horovod, NCCL) are an ill fit, because they require the communication schedule to be known before runtime and they do not provide fault tolerance. We design and implement Hoplite, an efficient and fault-tolerant collective communication layer for task-based distributed systems. Our key technique is to compute data transfer schedules on the fly and execute the schedules efficiently through fine-grained pipelining. At the same time, when a task fails, the data transfer schedule adapts quickly to allow other tasks to keep making progress. We apply Hoplite to a popular task-based distributed framework, Ray. We show that Hoplite speeds up asynchronous stochastic gradient descent, reinforcement learning, and serving an ensemble of machine learning models that are difficult to execute efficiently with traditional collective communication by up to 7.8x, 3.9x, and 3.3x, respectively.

Comment: SIGCOMM 2021
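The abstract's key idea, fine-grained pipelining, means a node forwards each chunk of an object downstream as soon as it arrives, rather than buffering the whole object first. Below is a minimal single-process sketch of that idea, not Hoplite's actual implementation: it models a sender, a relay, and a receiver as threads connected by in-process queues, and the chunk size, function names, and chain topology are all illustrative assumptions.

```python
import queue
import threading

CHUNK = 4  # bytes per chunk; kept tiny purely for illustration


def send(data: bytes, out_q: queue.Queue) -> None:
    """Split the object into chunks and push them downstream."""
    for i in range(0, len(data), CHUNK):
        out_q.put(data[i:i + CHUNK])
    out_q.put(None)  # end-of-object marker


def relay(in_q: queue.Queue, out_q: queue.Queue) -> None:
    """Fine-grained pipelining: forward each chunk the moment it
    arrives, instead of waiting for the full object."""
    while (chunk := in_q.get()) is not None:
        out_q.put(chunk)
    out_q.put(None)


def receive(in_q: queue.Queue) -> bytes:
    """Reassemble the object from the chunk stream."""
    parts = []
    while (chunk := in_q.get()) is not None:
        parts.append(chunk)
    return b"".join(parts)


obj = b"hello, task-based world!"
q1, q2 = queue.Queue(), queue.Queue()
threading.Thread(target=send, args=(obj, q1)).start()
threading.Thread(target=relay, args=(q1, q2)).start()
result = receive(q2)
assert result == obj
```

Because the relay never needs the whole object before forwarding, transfer latency along a chain of nodes overlaps rather than accumulates, which is what makes pipelined schedules efficient even when they are computed on the fly.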

Details

Database :
OpenAIRE
Journal :
Proceedings of the 2021 ACM SIGCOMM 2021 Conference
Accession number :
edsair.doi.dedup.....c3d03c69063b503b7c7fc5250dce68dc
Full Text :
https://doi.org/10.1145/3452296.3472897