
An Evaluation on Large Language Model Outputs: Discourse and Memorization

Authors:
de Wynter, Adrian
Wang, Xun
Sokolov, Alex
Gu, Qilong
Chen, Si-Qing
Publication Year:
2023

Abstract

We present an empirical evaluation of various outputs generated by nine of the most widely available large language models (LLMs). Our analysis is done with off-the-shelf, readily available tools. We find a correlation between the percentage of memorized text, the percentage of unique text, and overall output quality, when measured with respect to output pathologies such as counterfactual and logically flawed statements, and general failures like not staying on topic. Overall, 80.0% of the outputs evaluated contained memorized data, but outputs containing the most memorized content were also more likely to be considered of high quality. We discuss and evaluate mitigation strategies, showing that, in the models evaluated, they reduce the rate at which memorized text is output. We conclude with a discussion on potential implications around what it means to learn, to memorize, and to evaluate text quality.

Comment: Preprint. Under review
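The abstract describes correlating the percentage of memorized text in each output with an overall quality judgment. As a minimal, hypothetical sketch of that kind of measurement (not the authors' actual pipeline), the snippet below estimates the share of an output's word n-grams that appear verbatim in a reference corpus and then computes a Pearson correlation against per-output quality scores; the n-gram length, the toy corpus, and the quality ratings are all illustrative assumptions.

```python
# Hypothetical sketch: estimate a "memorized fraction" per output as the
# share of its word n-grams found verbatim in a reference corpus, then
# correlate those fractions with quality scores. Illustrative only.
from statistics import correlation  # Pearson's r; requires Python 3.10+


def ngrams(tokens, n):
    """Return the set of word n-grams in a token list."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}


def memorized_fraction(output, corpus, n=8):
    """Fraction of the output's n-grams that occur verbatim in the corpus."""
    out_grams = ngrams(output.split(), n)
    if not out_grams:
        return 0.0
    corpus_grams = ngrams(corpus.split(), n)
    return len(out_grams & corpus_grams) / len(out_grams)


# Toy usage with a made-up corpus, outputs, and quality ratings.
corpus = "the quick brown fox jumps over the lazy dog " * 3
outputs = [
    "the quick brown fox jumps over the lazy dog again and again",
    "a completely novel sentence with no overlap at all here now",
    "the quick brown fox jumps over the lazy dog jumps over the lazy",
]
quality = [0.9, 0.4, 0.8]  # hypothetical per-output quality scores

fractions = [memorized_fraction(o, corpus) for o in outputs]
print("memorized fractions:", fractions)
print("Pearson r:", correlation(fractions, quality))
```

In practice one would match outputs against the model's actual training data rather than a toy corpus, and likely use longer spans; the sketch only illustrates the shape of the computation the abstract refers to.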

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2304.08637
Document Type: Working Paper
Full Text: https://doi.org/10.1016/j.nlp.2023.100024