Semi-External Memory Sparse Matrix Multiplication for Billion-Node Graphs

Authors :
Da Zheng
Disa Mhembere
Vince Lyzinski
Joshua T. Vogelstein
Carey E. Priebe
Randal Burns
Publication Year :
2016

Abstract

Sparse matrix multiplication is traditionally performed in memory and scales to large matrices using the distributed memory of multiple nodes. In contrast, we scale sparse matrix multiplication beyond memory capacity by implementing sparse-matrix dense-matrix multiplication (SpMM) in a semi-external memory (SEM) fashion; i.e., we keep the sparse matrix on commodity SSDs and the dense matrices in memory. Our SEM-SpMM incorporates many in-memory optimizations for large power-law graphs. It outperforms the in-memory implementations of Trilinos and Intel MKL and scales to billion-node graphs, far beyond the limits of memory. Furthermore, on a single large parallel machine, our SEM-SpMM runs as fast as the distributed implementation of Trilinos using five times as much processing power. We also run our implementation in memory (IM-SpMM) to quantify the overhead of keeping data on SSDs. SEM-SpMM achieves almost 100% of the performance of IM-SpMM on graphs when the dense matrix has more than four columns, and at least 65% of the performance of IM-SpMM on all inputs. We apply our SpMM to three important data analysis tasks (PageRank, eigensolving, and non-negative matrix factorization) and show that our SEM implementations significantly advance the state of the art.

Published in: IEEE Transactions on Parallel and Distributed Systems
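To make the semi-external memory scheme concrete, below is a minimal Python sketch, not the authors' implementation: the sparse matrix's CSR arrays live on disk and are memory-mapped (standing in for SSD-resident data), while the dense matrix and the result stay in RAM, and rows are streamed through memory in chunks. The helper names write_csr and sem_spmm and the fixed-size row chunking are illustrative assumptions.

import numpy as np
import scipy.sparse as sp

def write_csr(A, prefix):
    # Persist the three CSR arrays as .npy files: our stand-in for the
    # SSD-resident sparse matrix.
    np.save(prefix + "_indptr.npy", A.indptr)
    np.save(prefix + "_indices.npy", A.indices)
    np.save(prefix + "_data.npy", A.data)

def sem_spmm(prefix, n_rows, n_cols, X, rows_per_chunk=4096):
    # Compute A @ X with A on disk: memory-map the CSR arrays and stream
    # row chunks, so only one chunk's nonzeros are resident at a time.
    indptr = np.load(prefix + "_indptr.npy", mmap_mode="r")
    indices = np.load(prefix + "_indices.npy", mmap_mode="r")
    data = np.load(prefix + "_data.npy", mmap_mode="r")
    Y = np.zeros((n_rows, X.shape[1]))  # dense result stays in memory
    for start in range(0, n_rows, rows_per_chunk):
        stop = min(start + rows_per_chunk, n_rows)
        lo, hi = int(indptr[start]), int(indptr[stop])
        chunk = sp.csr_matrix(
            (data[lo:hi], indices[lo:hi], np.asarray(indptr[start:stop + 1]) - lo),
            shape=(stop - start, n_cols),
        )
        Y[start:stop] = chunk @ X
    return Y

# Example: SpMM with a skinny dense matrix, as in the paper's evaluation.
A = sp.random(10000, 10000, density=1e-3, format="csr", random_state=0)
write_csr(A, "demo")
X = np.random.default_rng(0).random((10000, 8))
assert np.allclose(sem_spmm("demo", *A.shape, X), A @ X)

The chunk size trades sequential I/O granularity against memory footprint; the abstract's observation is that with a skinny dense matrix the streaming cost is largely hidden, which is why SEM-SpMM approaches in-memory performance once the dense matrix has more than four columns.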

Details

Language :
English
Database :
OpenAIRE
Accession number :
edsair.doi.dedup.....76ff70c39a912d12432561661f3e185b