Efficient Computation Reduction in Bayesian Neural Networks Through Feature Decomposition and Memorization
- Authors
Jianlei Yang, Xiaotao Jia, Xueyan Wang, Weisheng Zhao, Sorin Cotofana, and Runze Liu
- Subjects
Computer Science - Machine Learning (cs.LG); Statistics - Machine Learning (stat.ML); Bayesian neural networks; deep learning; inference; computation reduction; speedup; overfitting; memory overhead; computer engineering
- Abstract
The Bayesian method is capable of capturing real-world uncertainties and incompleteness and of properly addressing the over-fitting issue faced by deep neural networks. In recent years, Bayesian Neural Networks (BNNs) have drawn tremendous attention from AI researchers and have proved successful in many applications. However, their high computational complexity makes BNNs difficult to deploy in computing systems with a limited power budget. In this paper, an efficient BNN inference flow is proposed to reduce the computation cost and is evaluated by means of both software and hardware implementations. A feature decomposition and memorization (\texttt{DM}) strategy is utilized to reformulate the BNN inference flow in a reduced form. About half of the computations can be eliminated compared with the traditional approach, as proved by theoretical analysis and software validation. Subsequently, to cope with hardware resource limitations, a memory-friendly computing framework is deployed to reduce the memory overhead introduced by the \texttt{DM} strategy. Finally, we implement our approach in Verilog and synthesize it with the 45 nm FreePDK technology. Hardware simulation results on multi-layer BNNs demonstrate that, compared with the traditional BNN inference method, it provides an energy consumption reduction of 73\% and a 4$\times$ speedup at the expense of a 14\% area overhead.
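The abstract does not spell out the \texttt{DM} computation, but for Gaussian weight posteriors the standard reparameterization $w = \mu + \sigma \odot \varepsilon$ suggests the decomposition $x \cdot w = x \cdot \mu + x \cdot (\sigma \odot \varepsilon)$: the deterministic term $x \cdot \mu$ and the products $x_i \sigma_{ij}$ are identical across Monte Carlo samples and can be memorized, leaving roughly one multiply per weight per sample instead of two. The NumPy sketch below is an illustrative reading of that idea, not the paper's implementation; all names (`bnn_layer_dm`, `mu`, `sigma`, `n_samples`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def bnn_layer_dm(x, mu, sigma, n_samples=10):
    """Monte Carlo inference for one Bayesian linear layer with a
    decompose-and-memorize split (sketch, assuming w = mu + sigma * eps)."""
    det = x @ mu                            # deterministic part: computed once, reused
    xs = x[:, :, None] * sigma[None, :, :]  # memorized x_i * sigma_ij products, shape (B, in, out)
    outs = []
    for _ in range(n_samples):
        eps = rng.standard_normal(sigma.shape)            # fresh noise per sample
        outs.append(det + np.einsum('bio,io->bo', xs, eps))  # one multiply per weight per sample
    return np.stack(outs)                   # (n_samples, B, out)

# Toy usage: a 4-feature input through a 4x3 Bayesian layer.
x = rng.standard_normal((1, 4))
mu = rng.standard_normal((4, 3))
sigma = 0.1 * np.ones((4, 3))
print(bnn_layer_dm(x, mu, sigma).shape)  # (10, 1, 3)
```

By contrast, the traditional flow would recompute `x @ (mu + sigma * eps)` for every sample, spending an elementwise multiply plus a full matrix product per sample, which is consistent with the abstract's claim that about half of the computations can be eliminated. Memorizing `xs` is what introduces the extra storage that the paper's memory-friendly framework is said to address.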
- Comment
Accepted by IEEE Transactions on Neural Networks and Learning Systems (TNNLS).
- Published
2021