
RepEval: Effective Text Evaluation with LLM Representation

Authors:
Sheng, Shuqian
Xu, Yi
Zhang, Tianhang
Shen, Zanwei
Fu, Luoyi
Ding, Jiaxin
Zhou, Lei
Wang, Xinbing
Zhou, Chenghu
Publication Year:
2024

Abstract

Automatic evaluation metrics for generated text play an important role in NLG, especially with the rapid growth of LLMs. However, existing metrics are often limited to specific scenarios, making it hard to meet the evaluation needs of expanding LLM applications; hence the demand for new, flexible, and effective metrics. In this study, we introduce RepEval, the first metric that leverages the projection of LLM representations for evaluation. RepEval requires only minimal sample pairs for training and, through simple prompt modifications, can easily transition to various tasks. Results on ten datasets spanning three tasks demonstrate the effectiveness of our method, which correlates more strongly with human judgments than previous metrics, even outperforming GPT-4. Our work underscores the richness of information about text quality embedded within LLM representations, offering insights for the development of new metrics.
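The abstract does not spell out how the representation projection is constructed, but the general idea can be illustrated with a minimal sketch: extract hidden states from an LLM for paired better/worse texts, derive a direction separating them, and score new texts by projecting onto that direction. The choice of GPT-2, last-token pooling, and the mean-difference direction below are assumptions for illustration, not the authors' actual setup.

    import numpy as np
    import torch
    from transformers import AutoModel, AutoTokenizer

    # Hypothetical setup: GPT-2 as the representation source (an assumption;
    # the paper's backbone model is not named in this abstract).
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModel.from_pretrained("gpt2")
    model.eval()

    def embed(text: str) -> np.ndarray:
        """Return the final-layer hidden state of the last token."""
        inputs = tokenizer(text, return_tensors="pt", truncation=True)
        with torch.no_grad():
            hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
        return hidden[0, -1].numpy()

    # A few (better, worse) text pairs stand in for the "minimal sample
    # pairs" mentioned in the abstract.
    pairs = [
        ("The cat sat quietly on the mat.", "Cat mat the on sat."),
        ("She carefully reviewed the report.", "Report she the review careful."),
    ]

    # One simple way to obtain a projection: average the difference vectors
    # between better and worse representations, then normalize.
    diffs = np.stack([embed(good) - embed(bad) for good, bad in pairs])
    direction = diffs.mean(axis=0)
    direction /= np.linalg.norm(direction)

    def rep_score(text: str) -> float:
        """Score a text by projecting its representation onto the direction."""
        return float(embed(text) @ direction)

    print(rep_score("A clear, well-formed sentence."))
    print(rep_score("sentence well a formed clear."))

In this sketch, switching tasks would amount to changing the prompt wrapped around each text before embedding, which mirrors the abstract's claim that simple prompt modifications let the metric transition between tasks.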

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2404.19563
Document Type:
Working Paper