
Triangle Counting Accelerations: From Algorithm to In-Memory Computing Architecture.

Authors :
Wang, Xueyan
Yang, Jianlei
Zhao, Yinglin
Jia, Xiaotao
Yin, Rong
Chen, Xuhang
Qu, Gang
Zhao, Weisheng
Source :
IEEE Transactions on Computers. Oct2022, Vol. 71 Issue 10, p2462-2472. 11p.
Publication Year :
2022

Abstract

Triangles are the basic substructure of networks, and triangle counting (TC) is a fundamental graph computing problem in numerous fields such as social network analysis. Nevertheless, like other graph computing problems, TC involves a large amount of data transfer due to its high memory-to-computation ratio and random memory access patterns, and thus suffers from the bandwidth bottleneck of the traditional von Neumann architecture. To overcome this challenge, this paper proposes to accelerate TC with the emerging processing-in-memory (PIM) architecture through algorithm-architecture co-optimization. To enable efficient in-memory implementation, we reformulate TC with bitwise logic operations (such as AND) and develop customized graph compression and mapping techniques for efficient data-flow management. With an emerging computational Spin-Transfer Torque Magnetic RAM (STT-MRAM) array, one of the most promising PIM enabling technologies, device-to-architecture co-simulation results demonstrate that the proposed TC in-memory accelerator outperforms state-of-the-art GPU and FPGA accelerators by $12.2\times$ and $31.8\times$, respectively, and achieves a $34\times$ energy-efficiency improvement over the FPGA accelerator. [ABSTRACT FROM AUTHOR]

Details

Language :
English
ISSN :
00189340
Volume :
71
Issue :
10
Database :
Academic Search Index
Journal :
IEEE Transactions on Computers
Publication Type :
Academic Journal
Accession number :
159041213
Full Text :
https://doi.org/10.1109/TC.2021.3131049