
Cheap Learning: Maximising Performance of Language Models for Social Data Science Using Minimal Data

Authors :
Castro-Gonzalez, Leonardo
Chung, Yi-Ling
Kirk, Hannah Rose
Francis, John
Williams, Angus R.
Johansson, Pica
Bright, Jonathan
Publication Year :
2024

Abstract

The field of machine learning has recently made significant progress in reducing the requirements for labelled training data when building new models. These 'cheaper' learning techniques hold significant potential for the social sciences, where the development of large labelled training datasets is often a significant practical impediment to the use of machine learning for analytical tasks. In this article we review three 'cheap' techniques that have developed in recent years: weak supervision, transfer learning and prompt engineering. For the latter, we also review the particular case of zero-shot prompting of large language models. For each technique we provide a guide to how it works and demonstrate its application across six different realistic social science applications (two different tasks paired with three different dataset makeups). We show good performance for all techniques, and in particular we demonstrate how prompting of large language models can achieve high accuracy at very low cost. Our results are accompanied by a code repository to make it easy for others to duplicate our work and use it in their own research. Overall, our article is intended to stimulate further uptake of these techniques in the social sciences.

Comment: 39 pages, 10 figures, 6 tables

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2401.12295
Document Type :
Working Paper