
Improving and Simplifying Pattern Exploiting Training

Authors :
Tam, Derek
Menon, Rakesh R
Bansal, Mohit
Srivastava, Shashank
Raffel, Colin
Publication Year :
2021

Abstract

Recently, pre-trained language models (LMs) have achieved strong performance when fine-tuned on difficult benchmarks like SuperGLUE. However, performance can suffer when there are very few labeled examples available for fine-tuning. Pattern Exploiting Training (PET) is a recent approach that leverages patterns for few-shot learning. However, PET uses task-specific unlabeled data. In this paper, we focus on few-shot learning without any unlabeled data and introduce ADAPET, which modifies PET's objective to provide denser supervision during fine-tuning. As a result, ADAPET outperforms PET on SuperGLUE without any task-specific unlabeled data. Our code can be found at https://github.com/rrmenon10/ADAPET.

Comment: EMNLP 2021 (12 pages, 2 figures)
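
The abstract does not spell out how the objective is densified. As a rough illustration only, the sketch below shows one way a cloze-style pattern objective can supply denser supervision than scoring verbalizer tokens against each other: the label-word probabilities are normalized over the full output vocabulary, so every vocabulary logit at the mask position receives a gradient. The pattern, the "yes"/"no" verbalizers, the checkpoint name, and the loss form are illustrative assumptions, not the paper's exact formulation; see the linked repository for the authors' implementation.

    # Illustrative sketch (assumed pattern, verbalizers, and checkpoint).
    import torch
    from transformers import AutoTokenizer, AutoModelForMaskedLM

    tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
    model = AutoModelForMaskedLM.from_pretrained("albert-base-v2")

    # Toy entailment example rewritten as a cloze pattern with a mask slot.
    premise = "A man is playing a guitar."
    hypothesis = "A man is playing an instrument."
    text = f"{premise} Question: {hypothesis} Answer: {tokenizer.mask_token}."

    # Single-token label words (verbalizers) for the two classes.
    verbalizers = {"entailment": "yes", "not_entailment": "no"}
    gold, wrong = "entailment", "not_entailment"

    inputs = tokenizer(text, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]

    logits = model(**inputs).logits[0, mask_pos].squeeze(0)  # (vocab_size,)
    probs = torch.softmax(logits, dim=-1)                    # normalized over the full vocabulary

    gold_id = tokenizer(verbalizers[gold], add_special_tokens=False)["input_ids"][0]
    wrong_id = tokenizer(verbalizers[wrong], add_special_tokens=False)["input_ids"][0]

    # Raise the probability of the correct label word and lower the incorrect one.
    # Because probs is a softmax over the whole vocabulary, every logit at the
    # mask position receives a gradient, giving denser supervision than a
    # softmax restricted to the two verbalizer tokens.
    loss = -torch.log(probs[gold_id]) - torch.log(1.0 - probs[wrong_id])
    loss.backward()

In this toy setup the gradient touches all vocabulary logits at the mask position rather than only the two label words; this is meant only to convey the general idea of denser supervision referenced in the abstract.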

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1269536692
Document Type :
Electronic Resource