
Masking as an Efficient Alternative to Finetuning for Pretrained Language Models

Authors: Zhao, Mengjie; Lin, Tao; Mi, Fei; Jaggi, Martin; Schütze, Hinrich
Publication Year: 2020

Abstract

We present an efficient method of utilizing pretrained language models, where we learn selective binary masks for pretrained weights in lieu of modifying them through finetuning. Extensive evaluations of masking BERT and RoBERTa on a series of NLP tasks show that our masking scheme yields performance comparable to finetuning, yet has a much smaller memory footprint when several tasks need to be inferred simultaneously. Through intrinsic evaluations, we show that representations computed by masked language models encode information necessary for solving downstream tasks. Analyzing the loss landscape, we show that masking and finetuning produce models that reside in minima that can be connected by a line segment with nearly constant test accuracy. This confirms that masking can be utilized as an efficient alternative to finetuning.

Comment: EMNLP 2020; MZ and TL contribute equally
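As an illustration of the idea described in the abstract, the sketch below shows one way to learn a binary mask over a frozen pretrained weight matrix using a straight-through estimator. It is a minimal reconstruction under stated assumptions, not the paper's implementation: the `MaskedLinear` class, the 0.5 threshold, and the score initialization are illustrative choices.

```python
# Minimal sketch (not the authors' code) of masking a frozen pretrained
# linear layer: the weights stay fixed and only per-weight mask scores train.
import torch
import torch.nn as nn


class MaskedLinear(nn.Module):
    """Computes x (W * M)^T + b with frozen W and a learned binary mask M."""

    def __init__(self, pretrained_linear: nn.Linear, threshold: float = 0.5):
        super().__init__()
        # Freeze the pretrained weights; only mask_scores receive gradients.
        self.weight = nn.Parameter(pretrained_linear.weight.detach(), requires_grad=False)
        self.bias = (
            nn.Parameter(pretrained_linear.bias.detach(), requires_grad=False)
            if pretrained_linear.bias is not None
            else None
        )
        # Real-valued scores from which the binary mask is derived
        # (initial value 0.6 is an illustrative assumption).
        self.mask_scores = nn.Parameter(torch.full_like(self.weight, 0.6))
        self.threshold = threshold

    def forward(self, x):
        # Hard binarization in the forward pass ...
        hard_mask = (self.mask_scores > self.threshold).float()
        # ... with a straight-through estimator so gradients reach mask_scores.
        mask = hard_mask + self.mask_scores - self.mask_scores.detach()
        return nn.functional.linear(x, self.weight * mask, self.bias)


# Usage sketch: wrap a (here randomly initialized, stand-in) pretrained layer
# and train only the mask scores for a downstream task.
layer = MaskedLinear(nn.Linear(768, 768))
out = layer(torch.randn(2, 768))
```

Because only one binary mask per task is stored on top of a single shared set of pretrained weights, serving several tasks at once requires far less memory than keeping a fully finetuned copy of the model per task, which is the memory-footprint argument made in the abstract.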

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2004.12406
Document Type: Working Paper