
Revisiting Sample Size Determination in Natural Language Understanding

Authors:
Chang, Ernie
Rashid, Muhammad Hassan
Lin, Pin-Jie
Zhao, Changsheng
Demberg, Vera
Shi, Yangyang
Chandra, Vikas
Publication Year:
2023

Abstract

Knowing exactly how many data points need to be labeled to achieve a certain model performance is a hugely beneficial step towards reducing the overall budget for annotation. It pertains to both active learning and traditional data annotation, and is particularly beneficial for low-resource scenarios. Nevertheless, it remains a largely under-explored area of research in NLP. We therefore explored various techniques for estimating the training sample size necessary to achieve a targeted performance value. We derived a simple yet effective approach to predict the maximum achievable model performance based on a small number of training samples, which serves as an early indicator during data annotation for data quality and sample size determination. We performed ablation studies on four language understanding tasks, and showed that the proposed approach allows us to forecast model performance within a small margin of mean absolute error (~0.9%) with only 10% of the data.

Comment: Accepted to ACL 2023
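The abstract describes extrapolating a performance ceiling from a small labeled subset. A minimal sketch of one common way to do this is shown below, assuming an inverse power-law learning curve fitted with SciPy; the functional form, the sample sizes, and the accuracy values are illustrative assumptions, not the paper's exact method or data.

```python
# Hypothetical sketch: predict the maximum achievable performance and the sample
# size needed for a target score by fitting an inverse power-law learning curve
# to scores measured on a small labeled subset (e.g., the first ~10% of the data).
import numpy as np
from scipy.optimize import curve_fit

def inverse_power_law(n, a, b, c):
    """Learning curve: performance approaches the ceiling `a` as sample size n grows."""
    return a - b * np.power(n, -c)

# Assumed (sample size, dev accuracy) observations from a small annotated subset.
sizes = np.array([200, 400, 600, 800, 1000], dtype=float)
accs = np.array([0.62, 0.68, 0.71, 0.73, 0.74])

# Fit the curve; the fitted `a` is the predicted maximum achievable performance.
params, _ = curve_fit(inverse_power_law, sizes, accs, p0=[0.9, 1.0, 0.5], maxfev=10000)
a, b, c = params
print(f"Predicted performance ceiling: {a:.3f}")

# Invert the curve to estimate the smallest sample size reaching a target score.
target = 0.80
if a > target:
    n_needed = (b / (a - target)) ** (1.0 / c)
    print(f"Estimated samples needed for {target:.0%} accuracy: {int(np.ceil(n_needed))}")
else:
    print("Target exceeds the predicted ceiling for this model and data.")
```

In practice one would compare several candidate curve families and pick the best fit on held-out points before extrapolating; the inverse power law here is only one plausible choice.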

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2307.00374
Document Type:
Working Paper