
Meta-learning for compressed language model: A multiple choice question answering study.

Authors :
Yan, Ming
Pan, Yi
Source :
Neurocomputing. May 2022, Vol. 487, p181-189. 9p.
Publication Year :
2022

Abstract

Model compression is a promising approach for reducing the size of pretrained language models (PLMs) for low-resource edge devices and applications. Unfortunately, compression is typically accompanied by performance degradation, especially on low-resource downstream tasks such as multiple-choice question answering. To address this degradation, we propose an end-to-end Reptile (ETER) meta-learning approach that improves the performance of compressed PLMs on the low-resource multiple-choice question answering task. Specifically, ETER extends traditional two-stage meta-learning to an end-to-end scheme, integrating the target fine-tuning stage into the meta-training stage. To strengthen generalization, ETER constructs meta-tasks at two levels, the instance level and the domain level, enriching task diversity. Moreover, ETER regularizes meta-learning with parameter constraints that reduce the parameter search space. Experiments demonstrate that ETER significantly improves the performance of compressed PLMs and substantially outperforms the baselines on different datasets. [ABSTRACT FROM AUTHOR]
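Reptile, the meta-learner ETER builds on, adapts a copy of the model on a sampled meta-task for a few gradient steps and then nudges the shared initialization toward the adapted weights. The sketch below is a minimal, generic Reptile meta-step in PyTorch, not the authors' implementation: the function name and the hyperparameters (inner_steps, inner_lr, meta_lr, constraint_lambda) are hypothetical, and the L2 penalty toward the initialization is only an illustrative stand-in for the parameter constraints the abstract mentions without specifying.

```python
import copy
import torch

def reptile_meta_step(model, task_loader, loss_fn, inner_steps=5,
                      inner_lr=1e-3, meta_lr=0.1, constraint_lambda=0.0):
    """One Reptile meta-update: fine-tune a copy of the model on a
    sampled meta-task, then move the original weights toward it."""
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    # Snapshot of the initialization, used by the illustrative constraint.
    theta0 = {n: p.detach().clone() for n, p in model.named_parameters()}

    for step, (inputs, labels) in enumerate(task_loader):
        loss = loss_fn(adapted(inputs), labels)
        # Illustrative parameter constraint (assumption, not the paper's
        # exact form): an L2 penalty keeping the adapted weights close to
        # the initialization, shrinking the parameter search space.
        if constraint_lambda > 0:
            reg = sum(((p - theta0[n]) ** 2).sum()
                      for n, p in adapted.named_parameters())
            loss = loss + constraint_lambda * reg
        opt.zero_grad()
        loss.backward()
        opt.step()
        if step + 1 >= inner_steps:
            break

    # Reptile outer update: theta <- theta + meta_lr * (theta' - theta).
    with torch.no_grad():
        for p, p_adapted in zip(model.parameters(), adapted.parameters()):
            p.add_(meta_lr * (p_adapted - p))
```

In ETER's end-to-end scheme, as described in the abstract, the target task's fine-tuning data would itself be drawn into this meta-training loop as meta-tasks (constructed at both the instance and domain level), rather than being reserved for a separate second fine-tuning stage.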

Details

Language :
English
ISSN :
0925-2312
Volume :
487
Database :
Academic Search Index
Journal :
Neurocomputing
Publication Type :
Academic Journal
Accession number :
156026500
Full Text :
https://doi.org/10.1016/j.neucom.2021.01.148