
Leveraging Large Language Models for NLG Evaluation: Advances and Challenges

Authors:
Li, Zhen
Xu, Xiaohan
Shen, Tao
Xu, Can
Gu, Jia-Chen
Lai, Yuxuan
Tao, Chongyang
Ma, Shuai
Publication Year:
2024

Abstract

In the rapidly evolving domain of Natural Language Generation (NLG) evaluation, the introduction of Large Language Models (LLMs) has opened new avenues for assessing the quality of generated content, e.g., coherence, creativity, and context relevance. This paper aims to provide a thorough overview of leveraging LLMs for NLG evaluation, a burgeoning area that lacks a systematic analysis. We propose a coherent taxonomy for organizing existing LLM-based evaluation metrics, offering a structured framework to understand and compare these methods. Our detailed exploration includes critically assessing various LLM-based methodologies and comparing their strengths and limitations in evaluating NLG outputs. By discussing unresolved challenges, including bias, robustness, domain-specificity, and unified evaluation, this paper seeks to offer insights to researchers and advocate for fairer and more advanced NLG evaluation techniques.

Comment: 21 pages, 5 figures

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2401.07103
Document Type:
Working Paper