
SF-MMCN: A Low Power Re-configurable Server Flow Convolution Neural Network Accelerator

Authors :
Hsu, Huan-Ke
Wey, I-Chyn
Teo, T. Hui
Publication Year :
2024

Abstract

Convolutional Neural Network (CNN) accelerators have developed rapidly in recent studies. Many CNN accelerators are equipped with a variety of functions and algorithms that yield low-power and high-speed performance. However, the processing element (PE) array in traditional CNN accelerators is too large, and it dominates energy consumption during multiply-and-accumulate (MAC) computations. Another issue is that, as CNN models advance, many of them contain parallel structures such as the residual block in the Residual Network (ResNet). The presence of parallel structures in CNN models challenges the design of CNN accelerators because it affects both operation efficiency and area efficiency. This study proposes the SF-MMCN structure. The scale of the PE array in the proposed design is reduced by a pipeline technique within each PE. The proposed SF structure enables SF-MMCN to operate with high efficiency when it encounters parallel structures in CNN models. The proposed design is implemented with TSMC 90 nm technology and evaluated on VGG-16 and ResNet-18. It achieves 76% energy saving and 55% area saving, and improves operation efficiency and area efficiency by 9.25 times and 4.92 times, respectively.

Comment: 16 pages, 16 figures
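The residual block mentioned in the abstract is the kind of parallel structure that complicates accelerator scheduling: a main branch of convolutions runs alongside a shortcut path, and the two are merged by an element-wise addition. The sketch below is not from the paper; it is a minimal NumPy illustration, with hypothetical names such as `conv2d` and `residual_block` and an arbitrary cropping simplification in place of padding, showing the MAC loops a PE array would execute and the point where the two parallel paths join.

```python
import numpy as np

def conv2d(x, w):
    """Naive valid 2D convolution: each output pixel is a MAC reduction
    over the kernel window and input channels (the work a PE array performs)."""
    c_out, c_in, k, _ = w.shape
    _, h, wdt = x.shape
    h_out, w_out = h - k + 1, wdt - k + 1
    y = np.zeros((c_out, h_out, w_out))
    for co in range(c_out):
        for i in range(h_out):
            for j in range(w_out):
                # multiply-and-accumulate (MAC) over the receptive field
                y[co, i, j] = np.sum(x[:, i:i + k, j:j + k] * w[co])
    return y

def residual_block(x, w1, w2):
    """Two convolutions on the main branch; the shortcut branch carries x.
    The parallel paths merge with an element-wise add (cropping stands in
    for the padding a real ResNet block would use)."""
    main = np.maximum(conv2d(x, w1), 0)              # conv + ReLU
    main = conv2d(main, w2)
    return np.maximum(main + x[:, 2:-2, 2:-2], 0)    # crop shortcut to the valid-conv output size

# Toy example: 8 channels, 16x16 feature map, 3x3 kernels
x  = np.random.rand(8, 16, 16)
w1 = np.random.rand(8, 8, 3, 3)
w2 = np.random.rand(8, 8, 3, 3)
print(residual_block(x, w1, w2).shape)  # (8, 12, 12)
```

An accelerator must either stall one path while the other finishes or keep both resident, which is why the abstract points to parallel structures as a source of lost operation and area efficiency.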

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1438537199
Document Type :
Electronic Resource