
NDRec: A Near-Data Processing System for Training Large-Scale Recommendation Models

Authors :
Li, Shiyu
Wang, Yitu
Hanson, Edward
Chang, Andrew
Ki, Yang Seok
Li, Hai
Chen, Yiran
Source :
IEEE Transactions on Computers; 2024, Vol. 73, Issue 5, pp. 1248-1261, 14 pages
Publication Year :
2024

Abstract

Recent advances in deep neural networks (DNNs) have enabled highly effective recommendation models for diverse web services. In such DNN-based recommendation models, the embedding layer comprises the majority of model parameters. As these models scale rapidly, the embedding layer's memory capacity and bandwidth requirements threaten to exceed the limits of current computing architectures. We observe that the embedding layer's computational demands grow much more slowly than its storage needs, suggesting an opportunity to offload embeddings to storage hardware. In this work, we present NDRec, a near-data processing system for training large-scale recommendation models. NDRec offloads both the parameters and the computation of the embedding layer to computational storage devices (CSDs), using the Compute Express Link (CXL) coherent interconnect for communication between GPUs and CSDs. By leveraging the statistical properties of embedding access patterns, we develop an optimized CSD memory hierarchy and caching strategy. A lookahead embedding scheme enables concurrent execution of embedding lookups and other operations, hiding latency and reducing memory bandwidth requirements. We evaluate NDRec using real-world and synthetic benchmarks. Results demonstrate that NDRec achieves up to 4.33× and 3.97× speedups over heterogeneous CPU-GPU platforms and GPU caching, respectively. NDRec also reduces per-iteration energy consumption by up to 54.9%.
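The abstract's caching argument rests on the skew of embedding access patterns: a small fraction of embedding rows receives most lookups, so pinning those hot rows in the CSD's fast memory tier captures the bulk of traffic. The sketch below is a hypothetical illustration of that statistical property, not code from the paper; the table size, cache size, and Zipf-like popularity weights are all assumptions chosen for demonstration.

```python
import random

# Hypothetical illustration (not from the paper): embedding lookups in
# recommendation workloads are highly skewed, so a small static cache of
# "hot" rows in a fast memory tier can absorb most accesses.
random.seed(0)
NUM_ROWS = 10_000          # assumed embedding-table size
CACHE_ROWS = 500           # hot rows kept in the fast tier (5% of table)
NUM_LOOKUPS = 100_000

# Zipf-like popularity: row r is accessed with weight 1 / (r + 1).
weights = [1.0 / (r + 1) for r in range(NUM_ROWS)]
lookups = random.choices(range(NUM_ROWS), weights=weights, k=NUM_LOOKUPS)

# Statically cache the most popular rows and measure the hit rate.
hot_set = set(range(CACHE_ROWS))
hits = sum(1 for row in lookups if row in hot_set)
hit_rate = hits / NUM_LOOKUPS
print(f"cache holds {CACHE_ROWS / NUM_ROWS:.0%} of rows, "
      f"hit rate = {hit_rate:.1%}")
```

Under these assumed parameters, caching only 5% of rows serves well over half of all lookups, which is the kind of asymmetry a CSD memory hierarchy can exploit.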

Details

Language :
English
ISSN :
0018-9340 (print); 1557-9956 (electronic)
Volume :
73
Issue :
5
Database :
Supplemental Index
Journal :
IEEE Transactions on Computers
Publication Type :
Periodical
Accession number :
ejs66118835
Full Text :
https://doi.org/10.1109/TC.2024.3365939