Hoplite: Efficient and Fault-Tolerant Collective Communication for Task-Based Distributed Systems
- Source: SIGCOMM
- Publication Year: 2021
- Publisher: ACM, 2021.
Abstract
- Task-based distributed frameworks (e.g., Ray, Dask, Hydro) have become increasingly popular for distributed applications that contain asynchronous and dynamic workloads, including asynchronous gradient descent, reinforcement learning, and model serving. As more data-intensive applications move to run on top of task-based systems, collective communication efficiency has become an important problem. Unfortunately, traditional collective communication libraries (e.g., MPI, Horovod, NCCL) are an ill fit, because they require the communication schedule to be known before runtime and they do not provide fault tolerance. We design and implement Hoplite, an efficient and fault-tolerant collective communication layer for task-based distributed systems. Our key technique is to compute data transfer schedules on the fly and execute the schedules efficiently through fine-grained pipelining. At the same time, when a task fails, the data transfer schedule adapts quickly to allow other tasks to keep making progress. We apply Hoplite to a popular task-based distributed framework, Ray. We show that Hoplite speeds up asynchronous stochastic gradient descent, reinforcement learning, and serving an ensemble of machine learning models that are difficult to execute efficiently with traditional collective communication by up to 7.8x, 3.9x, and 3.3x, respectively.
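The abstract's key technique, computing data transfer schedules on the fly and executing them with fine-grained pipelining, can be illustrated with a toy chain broadcast. The Python sketch below is a simplified in-memory simulation under assumed semantics, not Hoplite's actual code or API: each relay node forwards every chunk downstream as soon as it arrives, so receivers overlap their receive and send work instead of waiting for the whole object.

```python
# Toy simulation of fine-grained pipelining in a broadcast chain
# (source -> node1 -> node2). Illustrative sketch only, not Hoplite's
# implementation; queues stand in for network transfers.
import queue
import threading

NUM_CHUNKS = 4    # chunks per object in this toy example
SENTINEL = None   # marks the end of the object stream

def relay(inbox: queue.Queue, outbox: queue.Queue, local_copy: list):
    """Receive chunks and forward each one downstream immediately,
    before the full object has arrived."""
    while True:
        chunk = inbox.get()
        if chunk is SENTINEL:
            outbox.put(SENTINEL)  # propagate end-of-stream
            return
        local_copy.append(chunk)  # keep a local copy of the object
        outbox.put(chunk)         # pipeline: forward right away

# Wire up the chain: source -> node1 -> node2 -> sink.
q01, q12, sink = queue.Queue(), queue.Queue(), queue.Queue()
copies = [[], []]
threading.Thread(target=relay, args=(q01, q12, copies[0])).start()
threading.Thread(target=relay, args=(q12, sink, copies[1])).start()

# The source streams the object chunk by chunk.
for i in range(NUM_CHUNKS):
    q01.put(f"chunk-{i}")
q01.put(SENTINEL)

# Drain the end of the chain; both relay nodes now hold the full object.
while sink.get() is not SENTINEL:
    pass
print(copies)
```

Because each chunk is forwarded as soon as it lands, the last receiver in the chain finishes shortly after the first, rather than after one full store-and-forward delay per hop; in the same spirit, a scheduler that tracks per-chunk progress can reroute remaining chunks around a failed node, which is the adaptability the abstract describes.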
- Subjects:
- Networking and Internet Architecture (cs.NI)
- FOS: Computer and information sciences
- Computer Science - Machine Learning
- Schedule
- Computer science
- Distributed computing
- 020206 networking & telecommunications
- Fault tolerance
- 02 engineering and technology
- Machine Learning (cs.LG)
- Task (project management)
- Computer Science - Networking and Internet Architecture
- Stochastic gradient descent
- Computer Science - Distributed, Parallel, and Cluster Computing
- Asynchronous communication
- 020204 information systems
- 0202 electrical engineering, electronic engineering, information engineering
- Reinforcement learning
- Distributed, Parallel, and Cluster Computing (cs.DC)
- Gradient descent
- Data transmission
Details
- Database: OpenAIRE
- Journal: Proceedings of the 2021 ACM SIGCOMM 2021 Conference
- Accession number: edsair.doi.dedup.....c3d03c69063b503b7c7fc5250dce68dc
- Full Text: https://doi.org/10.1145/3452296.3472897