
Structural Pruning of Pre-trained Language Models via Neural Architecture Search

Authors:
Klein, Aaron
Golebiowski, Jacek
Ma, Xingchen
Perrone, Valerio
Archambeau, Cedric
Publication Year:
2024

Abstract

Pre-trained language models (PLMs), for example BERT or RoBERTa, mark the state of the art for natural language understanding tasks when fine-tuned on labeled data. However, their large size poses challenges in deploying them for inference in real-world applications, due to significant GPU memory requirements and high inference latency. This paper explores neural architecture search (NAS) for structural pruning to find sub-parts of the fine-tuned network that optimally trade off efficiency, for example in terms of model size or latency, against generalization performance. We also show how recently developed two-stage weight-sharing NAS approaches can be used in this setting to accelerate the search process. Unlike traditional pruning methods with fixed thresholds, we propose a multi-objective approach that identifies the Pareto optimal set of sub-networks, allowing for a more flexible and automated compression process.
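To make the multi-objective selection step concrete, below is a minimal, illustrative Python sketch of how a Pareto optimal set of sub-networks could be identified from candidates scored on validation accuracy (to maximize) and parameter count (to minimize). The SubNetwork fields, the pruning knobs (attention heads, feed-forward width), and the synthetic scores are assumptions for illustration only; they are not the paper's actual search space or evaluation protocol.

```python
# Illustrative sketch: keep the non-dominated (Pareto optimal) sub-networks
# instead of pruning with a fixed threshold. All numbers are synthetic.
from dataclasses import dataclass
from typing import List
import random

@dataclass
class SubNetwork:
    kept_heads: int        # attention heads retained per layer (assumed knob)
    kept_ffn_dim: int      # feed-forward width retained (assumed knob)
    accuracy: float        # validation accuracy of this sub-network
    num_params: float      # model size in millions of parameters

def dominates(a: SubNetwork, b: SubNetwork) -> bool:
    """True if `a` is at least as good as `b` on both objectives and strictly better on one."""
    return (a.accuracy >= b.accuracy and a.num_params <= b.num_params
            and (a.accuracy > b.accuracy or a.num_params < b.num_params))

def pareto_front(candidates: List[SubNetwork]) -> List[SubNetwork]:
    """Return the non-dominated sub-networks (the accuracy/size trade-off curve)."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other is not c)]

if __name__ == "__main__":
    random.seed(0)
    # Stand-in for evaluating sub-networks sampled from a weight-sharing super-network.
    candidates = [
        SubNetwork(kept_heads=h, kept_ffn_dim=f,
                   accuracy=0.80 + 0.01 * h + 0.00001 * f + random.gauss(0, 0.005),
                   num_params=10.0 * h + 0.02 * f)
        for h in range(4, 13, 2) for f in range(512, 3073, 512)
    ]
    front = sorted(pareto_front(candidates), key=lambda c: c.num_params)
    for c in front:
        print(f"heads={c.kept_heads:2d} ffn={c.kept_ffn_dim:4d} "
              f"acc={c.accuracy:.3f} params={c.num_params:6.1f}M")
```

The returned front spans the compression spectrum, so a practitioner can pick the smallest sub-network that still meets a target accuracy, rather than committing to a single pruning threshold up front.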

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2405.02267
Document Type:
Working Paper