
ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs

Authors:
Zhang, Zhengyan
Song, Yixin
Yu, Guanghui
Han, Xu
Lin, Yankai
Xiao, Chaojun
Song, Chenyang
Liu, Zhiyuan
Mi, Zeyu
Sun, Maosong
Publication Year: 2024

Abstract

Sparse computation offers a compelling solution for the inference of Large Language Models (LLMs) in low-resource scenarios by dynamically skipping the computation of inactive neurons. While traditional approaches focus on ReLU-based LLMs, leveraging zeros in activation values, we broaden the scope of sparse LLMs beyond zero activation values. We introduce a general method that defines neuron activation through neuron output magnitudes and a tailored magnitude threshold, demonstrating that non-ReLU LLMs also exhibit sparse activation. To find the most efficient activation function for sparse computation, we propose a systematic framework to examine the sparsity of LLMs from three aspects: the trade-off between sparsity and performance, the predictivity of sparsity, and the hardware affinity. We conduct thorough experiments on LLMs utilizing different activation functions, including ReLU, SwiGLU, ReGLU, and ReLU$^2$. The results indicate that models employing ReLU$^2$ excel across all three evaluation aspects, highlighting its potential as an efficient activation function for sparse LLMs. We will release the code to facilitate future research.
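To make the abstract's core idea concrete, here is a minimal sketch of magnitude-threshold activation sparsity in a feed-forward block with a ReLU$^2$ activation. This is an illustration only, not the authors' released code: the module name, layer names, and threshold value are assumptions, and a real sparse kernel would skip the masked neurons' computation rather than multiplying by a mask.

```python
import torch
import torch.nn as nn

class ThresholdSparseFFN(nn.Module):
    """Hypothetical FFN illustrating magnitude-based neuron sparsity."""

    def __init__(self, d_model: int, d_ff: int, threshold: float = 1e-2):
        super().__init__()
        self.up = nn.Linear(d_model, d_ff)
        self.down = nn.Linear(d_ff, d_model)
        self.threshold = threshold  # assumed value, tuned per model in practice

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # ReLU^2 activation: the square of ReLU pushes many neuron
        # outputs toward zero, so they fall below the threshold.
        h = torch.relu(self.up(x)) ** 2
        # A neuron counts as "active" only if its output magnitude
        # exceeds the threshold; inactive neurons could be skipped
        # entirely by a dedicated sparse kernel.
        mask = h.abs() > self.threshold
        return self.down(h * mask)
```

Under this definition, activation sparsity is simply the fraction of neurons whose output magnitude falls below the threshold, which is how the paper extends sparsity analysis to non-ReLU activations such as SwiGLU.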

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2402.03804
Document Type: Working Paper