Energy-efficient DNN Inference on Approximate Accelerators Through Formal Property Exploration
- Publication Year :
- 2022
Abstract
- Deep Neural Networks (DNNs) are heavily utilized in modern applications and are putting energy-constrained devices to the test. To bypass high energy consumption issues, approximate computing has been employed in DNN accelerators to balance the accuracy-energy trade-off. However, the approximation-induced accuracy loss can be very high and drastically degrade the DNN's performance. Therefore, there is a need for a fine-grained mechanism that assigns specific DNN operations to approximation in order to maintain acceptable DNN accuracy while also achieving low energy consumption. In this paper, we present an automated framework for weight-to-approximation mapping that enables formal property exploration for approximate DNN accelerators. At the MAC unit level, our experimental evaluation surpassed already energy-efficient mappings by more than $2\times$ in terms of energy gains, while also supporting significantly more fine-grained control over the introduced approximation.
- Comment: Accepted for publication at the International Conference on Compilers, Architectures, and Synthesis for Embedded Systems (CASES) 2022. Will appear as part of the ESWEEK-TCAD special issue.
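The weight-to-approximation mapping described in the abstract can be illustrated with a minimal sketch. This is not the paper's framework: the sensitivity proxy (weight magnitude), the per-MAC energy costs, and the threshold are all hypothetical placeholders, used only to show the general idea of routing individual weights to exact or approximate MAC units.

```python
import numpy as np

def map_weights_to_approximation(weights, sensitivity,
                                 energy_exact=1.0, energy_approx=0.4,
                                 threshold=0.1):
    """Toy weight-to-approximation mapping.

    Weights whose (hypothetical) sensitivity score exceeds `threshold`
    stay on an exact MAC unit; the rest are assigned to an approximate
    one. Returns the per-weight mapping and an illustrative energy cost.
    """
    mapping = np.where(sensitivity > threshold, "exact", "approx")
    n_approx = int(np.count_nonzero(mapping == "approx"))
    n_exact = mapping.size - n_approx
    energy = n_approx * energy_approx + n_exact * energy_exact
    return mapping, energy

# Toy example: 16 random weights, magnitude as a stand-in sensitivity.
rng = np.random.default_rng(0)
w = rng.normal(size=16)
s = np.abs(w)  # placeholder proxy; the paper derives this via formal property exploration
mapping, energy = map_weights_to_approximation(w, s, threshold=0.5)
```

In the actual framework, the decision of which operations to approximate is driven by formal property exploration rather than a fixed magnitude threshold; the sketch only captures the fine-grained per-operation assignment that the abstract motivates.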
- Subjects :
- Computer Science - Machine Learning
- I.2.6
- I.3.1
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2207.12350
- Document Type :
- Working Paper