
Getting it right: the limits of fine-tuning large language models.

Authors :
Browning, Jacob
Source :
Ethics & Information Technology; Jun 2024, Vol. 26 Issue 2, p1-9, 9p
Publication Year :
2024

Abstract

The surge in interest in natural language processing in artificial intelligence has led to an explosion of new language models capable of engaging in plausible language use. But ensuring these language models produce honest, helpful, and inoffensive outputs has proved difficult. In this paper, I argue that problems of inappropriate content in current, autoregressive language models—such as ChatGPT and Gemini—are inescapable; merely predicting the next word is incompatible with reliably providing appropriate outputs. The various fine-tuning methods, while helpful, cannot transform the model from mere next-word prediction to the kind of planning and forethought necessary for saying the right thing. The upshot is that these models will increasingly churn out bland, generic responses that will still fail to be accurate or appropriate. [ABSTRACT FROM AUTHOR]

Details

Language :
English
ISSN :
1388-1957
Volume :
26
Issue :
2
Database :
Complementary Index
Journal :
Ethics & Information Technology
Publication Type :
Academic Journal
Accession number :
177585516
Full Text :
https://doi.org/10.1007/s10676-024-09779-1