
ComperDial: Commonsense Persona-grounded Dialogue Dataset and Benchmark

Authors :
Wakaki, Hiromi
Mitsufuji, Yuki
Maeda, Yoshinori
Nishimura, Yukiko
Gao, Silin
Zhao, Mengjie
Yamada, Keiichi
Bosselut, Antoine
Publication Year :
2024

Abstract

We propose a new benchmark, ComperDial, which facilitates the training and evaluation of evaluation metrics for open-domain dialogue systems. ComperDial consists of human-scored responses for 10,395 dialogue turns in 1,485 conversations collected from 99 dialogue agents submitted to the Commonsense Persona-grounded Dialogue (CPD) challenge. As a result, for any dialogue, our benchmark includes multiple diverse responses with a variety of characteristics to ensure more robust evaluation of learned dialogue metrics. In addition to single-turn response scores, ComperDial also contains dialogue-level human-annotated scores, enabling joint assessment of multi-turn model responses throughout a dialogue. Finally, building on ComperDial, we devise a new automatic evaluation metric to measure the general similarity of model-generated dialogues to human conversations. Our experimental results demonstrate that our novel metric, CPDScore, correlates more strongly with human judgments than existing metrics. We release both ComperDial and CPDScore to the community to accelerate the development of automatic evaluation metrics for open-domain dialogue systems.
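Claims of this kind are typically verified by measuring correlation between an automatic metric's scores and the benchmark's human annotations. The snippet below is a minimal illustrative sketch of that procedure, not the authors' implementation of CPDScore; the score values and variable names are hypothetical placeholders.

    from scipy.stats import spearmanr, pearsonr

    # Hypothetical per-turn scores: metric_scores[i] is an automatic metric's
    # score for dialogue turn i, and human_scores[i] is the mean human
    # annotation collected for the same turn.
    metric_scores = [0.72, 0.41, 0.88, 0.65, 0.30]
    human_scores = [4.0, 2.5, 4.5, 3.5, 2.0]

    # Rank and linear correlation with human judgments.
    rho, rho_p = spearmanr(metric_scores, human_scores)
    r, r_p = pearsonr(metric_scores, human_scores)
    print(f"Spearman rho = {rho:.3f} (p = {rho_p:.3g})")
    print(f"Pearson  r   = {r:.3f} (p = {r_p:.3g})")

A higher correlation indicates that the metric's scores track human judgments more closely, which is the criterion the abstract uses to compare CPDScore against existing metrics.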

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2406.11228
Document Type :
Working Paper