DxPU: Large Scale Disaggregated GPU Pools in the Datacenter

Authors:
He, Bowen
Zheng, Xiao
Chen, Yuan
Li, Weinan
Zhou, Yajin
Long, Xin
Zhang, Pengcheng
Lu, Xiaowei
Jiang, Linquan
Liu, Qiang
Cai, Dennis
Zhang, Xiantao
Publication Year:
2023

Abstract

The rapid adoption of AI and the convenience offered by cloud services have resulted in growing demand for GPUs in the cloud. Generally, GPUs are physically attached to host servers as PCIe devices. However, this fixed pairing of host servers and GPUs is extremely inefficient for resource utilization, upgrades, and maintenance. To address these issues, the GPU disaggregation technique has been proposed to decouple GPUs from host servers: it aggregates GPUs into a pool and allocates GPU node(s) according to user demands. However, existing GPU disaggregation systems have flaws in software-hardware compatibility, disaggregation scope, and capacity. In this paper, we present a new implementation of datacenter-scale GPU disaggregation, named DxPU. DxPU efficiently solves the above problems and can flexibly allocate as many GPU node(s) as users demand. To understand the performance overhead incurred by DxPU, we build a performance model for AI-specific workloads. Guided by the modeling results, we develop a prototype system, which has been deployed into the datacenter of a leading cloud provider for a test run. We also conduct detailed experiments to evaluate the performance overhead caused by our system. The results show that the overhead of DxPU is less than 10%, compared with native GPU servers, in most user scenarios.

Comment: 23 pages, 6 figures, published in ACM Transactions on Architecture and Code Optimization

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2310.04648
Document Type:
Working Paper
Full Text:
https://doi.org/10.1145/3617995