Harnessing Numerical Flexibility for Deep Learning on FPGAs
- Source : HEART
- Publication Year : 2018
- Publisher : ACM, 2018.
Abstract
- Deep learning has become a key workload in the data centre and at the edge, leading to an arms race for compute dominance in this space. FPGAs have shown they can compete by combining deterministic low latency with high throughput and flexibility. In particular, thanks to their bit-level programmability, FPGAs can efficiently implement arbitrary precisions and numeric data types, which is critical in fast-evolving fields like deep learning. In this work, we explore minifloat (floating point representations with non-standard exponent and mantissa sizes) implementations on the FPGA, and show how we use a block floating point implementation that shares a single exponent across many numbers to reduce the logic required to perform floating point operations. We show that this technique significantly improves FPGA performance with no impact on accuracy. Using this approach, we reduce logic utilization by 3x, and reduce the memory bandwidth and capacity required by more than 40%.
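The abstract's core idea, sharing one exponent across a block of numbers so each value only needs an integer mantissa, can be illustrated with a short sketch. This is a generic block-floating-point quantizer written for illustration; the function names, the block size, and the 8-bit signed mantissa width are assumptions, not details from the paper.

```python
import math

def to_bfp(values, mantissa_bits=8):
    """Quantize a block of floats to block floating point:
    one shared exponent for the whole block, plus a signed
    integer mantissa per value (illustrative sketch)."""
    # Shared exponent: taken from the largest magnitude in the block,
    # so that value fills the mantissa range without overflow.
    max_mag = max(abs(v) for v in values)
    shared_exp = math.frexp(max_mag)[1] if max_mag != 0 else 0
    scale = 2 ** (mantissa_bits - 1)  # e.g. 128 for 8-bit signed
    mantissas = [round(v / 2 ** shared_exp * scale) for v in values]
    # Clamp to the representable signed range [-scale, scale - 1].
    mantissas = [min(max(m, -scale), scale - 1) for m in mantissas]
    return mantissas, shared_exp

def from_bfp(mantissas, shared_exp, mantissa_bits=8):
    """Reconstruct approximate floats from a BFP block."""
    scale = 2 ** (mantissa_bits - 1)
    return [m / scale * 2 ** shared_exp for m in mantissas]
```

The payoff described in the abstract follows from this layout: multiply-accumulate over a block reduces to integer mantissa arithmetic plus one exponent addition per block, rather than per-element exponent alignment and normalization, which is where the FPGA logic savings come from.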
- Subjects :
- Floating point
- Minifloat
- Significand
- Block floating-point
- Memory bandwidth
- Throughput
- Field-programmable gate array
- Computer engineering
Details
- Database : OpenAIRE
- Journal : Proceedings of the 9th International Symposium on Highly-Efficient Accelerators and Reconfigurable Technologies
- Accession number : edsair.doi...........632ae584f5d1f87026f9bb083b0ccbc9