
A Distributed Multi-GPU System for Large-Scale Node Embedding at Tencent

Authors:
Wei, Wanjing
Wang, Yangzihao
Gao, Pin
Sun, Shijie
Yu, Donghai
Publication Year:
2020

Abstract

Real-world node embedding applications often contain hundreds of billions of edges with high-dimensional node features. Scaling node embedding systems to efficiently support these applications remains a challenging problem. In this paper, we present a high-performance multi-GPU node embedding system. It uses model parallelism to split node embeddings onto each GPU's local parameter server, and data parallelism to train these embeddings on different edge samples in parallel. We propose a hierarchical data partitioning strategy and an embedding training pipeline to optimize both communication and memory usage on a GPU cluster. With the decoupled design of CPU tasks (random walk) and GPU tasks (embedding training), our system is highly flexible and can fully utilize all computing resources on a GPU cluster. Compared with the current state-of-the-art multi-GPU, single-node embedding system, our system achieves a 5.9x-14.4x average speedup with competitive or better accuracy on open datasets. Using 40 NVIDIA V100 GPUs on a network with almost three hundred billion edges and more than one billion nodes, our implementation requires only 3 minutes to finish one training epoch.

Comment: Accepted by Scalable Deep Learning over Parallel And Distributed Infrastructures, an IPDPS 2021 Workshop
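To make the two parallelism modes in the abstract concrete, below is a minimal, CPU-only Python sketch of the general idea: embeddings are hash-partitioned across per-GPU "local parameter servers" (model parallelism), while each training step consumes a batch of sampled edges (data parallelism). All names (`Shard`, `lookup`, `apply_grad`), the partitioning function, and the toy sigmoid loss are illustrative assumptions, not the paper's implementation, which would use CUDA kernels and GPU-to-GPU communication.

```python
# Sketch of model-parallel embedding storage + data-parallel edge training.
# Assumed/invented names throughout; not the authors' code.
import numpy as np

NUM_GPUS = 4   # number of embedding shards (one per GPU)
DIM = 8        # embedding dimension
LR = 0.05      # SGD learning rate

class Shard:
    """One GPU's local parameter server: owns the embeddings of all
    node ids that hash to this shard."""
    def __init__(self, seed):
        self.rng = np.random.default_rng(seed)
        self.table = {}  # node id -> embedding vector

    def lookup(self, node):
        if node not in self.table:
            self.table[node] = self.rng.normal(0.0, 0.1, DIM)
        return self.table[node]

    def apply_grad(self, node, grad):
        self.table[node] -= LR * grad

shards = [Shard(seed=s) for s in range(NUM_GPUS)]

def owner(node):
    """Model parallelism: hash-partition node ids across shards."""
    return shards[hash(node) % NUM_GPUS]

def train_step(edge_batch):
    """Data-parallel step on one batch of (src, dst) edge samples, e.g.
    produced by a CPU-side random walk; a toy sigmoid dot-product loss
    pulls the two endpoint embeddings together."""
    for src, dst in edge_batch:
        u, v = owner(src).lookup(src), owner(dst).lookup(dst)
        score = 1.0 / (1.0 + np.exp(-(u @ v)))  # sigmoid(u . v)
        g = score - 1.0                         # grad of -log(sigmoid)
        owner(src).apply_grad(src, g * v)
        owner(dst).apply_grad(dst, g * u)

train_step([(0, 1), (1, 2), (0, 2)])  # one toy batch of positive edges
```

In the sketch, the lookup/update path is the only place that touches the owning shard, which mirrors why partitioning matters: a good partition keeps most lookups local and turns the rest into batched cross-device traffic.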

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2005.13789
Document Type:
Working Paper