
Effective Model Sparsification by Scheduled Grow-and-Prune Methods

Authors :
Ma, Xiaolong
Qin, Minghai
Sun, Fei
Hou, Zejiang
Yuan, Kun
Xu, Yi
Wang, Yanzhi
Chen, Yen-Kuang
Jin, Rong
Xie, Yuan
Publication Year :
2021

Abstract

Deep neural networks (DNNs) are effective in solving many real-world problems. Larger DNN models usually exhibit better quality (e.g., accuracy), but their excessive computation results in long inference time. Model sparsification can reduce the computation and memory cost while maintaining model quality. Most existing sparsification algorithms unidirectionally remove weights, while others randomly or greedily explore a small subset of weights in each layer for pruning. The limitations of these algorithms reduce the level of achievable sparsity. In addition, many algorithms still require pre-trained dense models and thus suffer from a large memory footprint. In this paper, we propose a novel scheduled grow-and-prune (GaP) methodology without having to pre-train a dense model. It addresses the shortcomings of the previous works by repeatedly growing a subset of layers to dense and then pruning them back to sparse after some training. Experiments show that the models pruned using the proposed methods match or beat the quality of the highly optimized dense models at 80% sparsity on a variety of tasks, such as image classification, object detection, 3D object part segmentation, and translation. They also outperform other state-of-the-art (SOTA) methods for model sparsification. As an example, a 90% non-uniform sparse ResNet-50 model obtained via GaP achieves 77.9% top-1 accuracy on ImageNet, improving the previous SOTA results by 1.5%. Code available at: https://github.com/boone891214/GaP
Comment: ICLR 2022 camera ready
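The core GaP cycle described in the abstract, growing one partition of layers back to dense, training for a while, and then pruning that partition back to the target sparsity, can be sketched in a few lines of PyTorch. The sketch below is illustrative only and assumes simple magnitude pruning, a toy two-partition model, and random training data; it is not the authors' implementation, whose schedule, partitioning, and pruning criterion are defined in the linked repository.

```python
# Minimal sketch of one scheduled grow-and-prune (GaP) round.
# Assumptions (not from the paper): magnitude pruning, a toy 2-layer model,
# random data as a stand-in for a real training loop.
import torch
import torch.nn as nn

def magnitude_prune(layer: nn.Linear, sparsity: float) -> torch.Tensor:
    """Return a binary mask that keeps the largest-magnitude weights."""
    flat = layer.weight.detach().abs().flatten()
    k = int(flat.numel() * sparsity)              # number of weights to zero out
    if k == 0:
        return torch.ones_like(layer.weight)
    threshold = torch.kthvalue(flat, k).values
    return (layer.weight.detach().abs() > threshold).float()

def apply_mask(layer: nn.Linear, mask: torch.Tensor) -> None:
    with torch.no_grad():
        layer.weight.mul_(mask)

# Toy model split into two layer partitions that the schedule visits in turn.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
partitions = [[model[0]], [model[2]]]
opt = torch.optim.SGD(model.parameters(), lr=0.1)
sparsity = 0.8

# Start from a sparse model (no dense pre-training): prune every partition once.
masks = {l: magnitude_prune(l, sparsity) for p in partitions for l in p}
for l, m in masks.items():
    apply_mask(l, m)

def train_steps(n: int, frozen: dict) -> None:
    """Placeholder training on random data; a real data loader would go here."""
    for _ in range(n):
        x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
        loss = nn.functional.cross_entropy(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
        for layer, mask in frozen.items():        # keep the sparse partitions sparse
            apply_mask(layer, mask)

# One GaP round: each partition in turn is grown to dense, trained, and pruned
# back, while the remaining partitions stay sparse under their masks.
for grown in partitions:
    frozen = {l: masks[l] for p in partitions for l in p if l not in grown}
    train_steps(100, frozen)                      # "grow" phase: grown layers train dense
    for l in grown:                               # "prune" phase: back to target sparsity
        masks[l] = magnitude_prune(l, sparsity)
        apply_mask(l, masks[l])
```

The key point the sketch tries to capture is that pruned weights in the grown partition become free parameters again during its dense training phase, so previously removed connections can re-enter the model before the next pruning step.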

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2106.09857
Document Type :
Working Paper