1. TMA: Tera‐MACs/W neural hardware inference accelerator with a multiplier‐less massive parallel processor
- Author
Shiho Kim, Dohyun Kim, and Hyunbin Park
- Subjects
Artificial neural network, Computer science, Applied Mathematics, CMOS, Scalability, Benchmark (computing), Electrical and Electronic Engineering, Throughput, Computer hardware, Energy efficiency, Integer (computer science)
- Abstract
Computationally intensive inference tasks of deep neural networks have driven the development of new accelerator architectures that reduce power consumption as well as latency. The key figure of merit for hardware inference accelerators is the number of multiply-and-accumulate operations per watt (MACs/W), where the state of the art remains at several hundred Giga-MACs/W. We propose a Tera-MACs/W neural hardware inference Accelerator (TMA) with 8-bit activations and scalable integer weights of less than 1 byte. The architecture's main feature is a configurable neural processing element for matrix-vector operations. The proposed neural processing element uses a multiplier-less massive parallel processor that operates without any multiplications, which makes it attractive for energy-efficient, high-performance neural network applications. We benchmark our system's latency, power, and performance using AlexNet trained on ImageNet. Finally, we compare our accelerator's throughput and power consumption to prior works. The proposed accelerator outperforms the state of the art in terms of energy and area, achieving 2.3 Tera-MACs/W at 1.0 V in 65 nm CMOS technology.
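The abstract does not detail how the multiplier-less processing element works, but a common way to eliminate hardware multipliers for small integer weights is to decompose each weight into its set bits and replace every multiplication with shifts and adds. The sketch below illustrates that general idea in Python; the function name and structure are illustrative assumptions, not the paper's actual design.

```python
def shift_add_mac(activations, weights, acc=0):
    """Accumulate dot(activations, weights) using only shifts and adds.

    Each integer weight w is decomposed into its set bits, so a * w
    becomes a sum of left-shifted copies of a. This is the generic
    principle behind multiplier-less MAC units; the TMA's exact
    scheme is not specified in the abstract.
    """
    for a, w in zip(activations, weights):
        sign = -1 if w < 0 else 1
        w = abs(w)
        bit = 0
        while w:
            if w & 1:
                acc += sign * (a << bit)  # shift-and-add replaces a * w
            w >>= 1
            bit += 1
    return acc
```

In hardware, each set bit of the weight maps to a fixed wiring shift feeding an adder tree, so the datapath contains no multiplier arrays at all, which is where the area and energy savings come from.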
- Published
- 2021