
K-TanH: Efficient TanH For Deep Learning

Authors:
Kundu, Abhisek
Heinecke, Alex
Kalamkar, Dhiraj
Srinivasan, Sudarshan
Qin, Eric C.
Mellempudi, Naveen K.
Das, Dipankar
Banerjee, Kunal
Kaul, Bharat
Dubey, Pradeep
Publication Year:
2019

Abstract

We propose K-TanH, a novel, highly accurate, hardware-efficient approximation of the popular activation function TanH for Deep Learning. K-TanH consists of parameterized low-precision integer operations, such as shift and add/subtract (no floating point operations are needed), where the parameters are stored in very small look-up tables that can fit in CPU registers. K-TanH works on various numerical formats, such as Float32 and BFloat16. High-quality approximations to other activation functions, e.g., Sigmoid, Swish and GELU, can be derived from K-TanH. Our AVX512 implementation of K-TanH demonstrates a $>5\times$ speedup over Intel SVML, and it is consistently more efficient than other approximations that use floating point arithmetic. Finally, we achieve state-of-the-art BLEU scores and convergence results when training the language translation model GNMT on WMT16 data sets with the approximate TanH obtained via K-TanH on BFloat16 inputs.

Comment: 6 pages, 1 figure
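The abstract only sketches the mechanism, so below is a minimal, self-contained C sketch of the general idea it describes: index a tiny look-up table by the exponent and high mantissa bits of a BFloat16 input, then produce the output with integer shift and add only. The bucket layout, the brute-force parameter fitting, and all names (ktanh_sketch, fit_table, etc.) are illustrative assumptions, not the paper's published algorithm or trained parameters.

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative sketch only: the bucket layout, fitting loop, and all
 * names are assumptions, not the parameters published in the paper. */

static uint16_t f32_to_bf16(float f) {
    uint32_t u; memcpy(&u, &f, sizeof u);
    return (uint16_t)(u >> 16);            /* truncate to the top 16 bits */
}

static float bf16_to_f32(uint16_t h) {
    uint32_t u = (uint32_t)h << 16; float f; memcpy(&f, &u, sizeof f);
    return f;
}

/* One table entry: an output exponent plus a shift and an add that are
 * applied to the input mantissa. Evaluation is integer-only. */
typedef struct { uint8_t exp_out, shift, add; } Entry;

/* Buckets cover bf16 exponents 125..129 (|x| in [0.25, 8)), refined by
 * the top 3 mantissa bits: 5 * 8 = 40 entries, register-file sized. */
enum { E_LO = 125, E_HI = 129, N_BUCKETS = (E_HI - E_LO + 1) * 8 };
static Entry T[N_BUCKETS];

/* Offline, brute-force fit: pick the (exp_out, shift, add) triple that
 * minimizes the max error over the 16 bf16 values in each bucket. */
static void fit_table(void) {
    for (int e = E_LO; e <= E_HI; ++e)
        for (int mh = 0; mh < 8; ++mh) {
            Entry best = {0, 0, 0}; float best_err = 1e9f;
            for (int eo = E_LO - 5; eo <= 127; ++eo)  /* tanh(x) <= 1 */
                for (int s = 0; s <= 7; ++s)
                    for (int a = 0; a < 128; ++a) {
                        float err = 0.f;
                        for (int ml = 0; ml < 16; ++ml) {
                            uint16_t in  = (uint16_t)((e << 7) | (mh << 4) | ml);
                            uint16_t m   = (uint16_t)((((in & 0x7F) >> s) + a) & 0x7F);
                            uint16_t out = (uint16_t)((eo << 7) | m);
                            float d = fabsf(bf16_to_f32(out) - tanhf(bf16_to_f32(in)));
                            if (d > err) err = d;
                        }
                        if (err < best_err) {
                            best_err = err;
                            best = (Entry){ (uint8_t)eo, (uint8_t)s, (uint8_t)a };
                        }
                    }
            T[(e - E_LO) * 8 + mh] = best;
        }
}

/* Evaluation: one table lookup, one shift, one add on the bit pattern. */
static float ktanh_sketch(float x) {
    uint16_t bits = f32_to_bf16(x);
    uint16_t sign = bits & 0x8000u, mag = bits & 0x7FFFu;
    int e = mag >> 7;
    if (e < E_LO) return x;                    /* small |x|: tanh(x) ~ x */
    if (e > E_HI) return sign ? -1.0f : 1.0f;  /* large |x|: saturate    */
    Entry t = T[(e - E_LO) * 8 + ((mag >> 4) & 0x7)];
    uint16_t m = (uint16_t)((((mag & 0x7F) >> t.shift) + t.add) & 0x7F);
    return bf16_to_f32((uint16_t)(sign | (t.exp_out << 7) | m));
}

int main(void) {
    fit_table();
    for (float x = -4.f; x <= 4.f; x += 0.5f)
        printf("x=% .2f  ktanh=% .4f  tanh=% .4f\n", x, ktanh_sketch(x), tanhf(x));
    return 0;
}
```

Compile with `cc ktanh.c -lm`. The sketch exploits the odd symmetry of TanH (the sign bit is simply carried through); the paper's actual tables and its AVX512 kernel are, of course, derived and vectorized far more carefully.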

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.1909.07729
Document Type:
Working Paper