
Chinese SimpleQA: A Chinese Factuality Evaluation for Large Language Models

Authors:
He, Yancheng
Li, Shilong
Liu, Jiaheng
Tan, Yingshui
Wang, Weixun
Huang, Hui
Bu, Xingyuan
Guo, Hangyu
Hu, Chengwei
Zheng, Boren
Lin, Zhuoran
Liu, Xuepeng
Sun, Dekai
Lin, Shirong
Zheng, Zhicheng
Zhu, Xiaoyong
Su, Wenbo
Zheng, Bo
Publication Year:
2024

Abstract

New evaluation benchmarks are needed to keep pace with the rapid development of Large Language Models (LLMs). In this work, we present Chinese SimpleQA, the first comprehensive Chinese benchmark for evaluating the factuality of language models when answering short questions. Chinese SimpleQA has five main properties (i.e., Chinese, Diverse, High-quality, Static, Easy-to-evaluate). Specifically, first, we focus on the Chinese language, covering 6 major topics with 99 diverse subtopics. Second, we conduct a comprehensive quality-control process to obtain high-quality questions and answers, where the reference answers are static and do not change over time. Third, following SimpleQA, the questions and answers are very short, and the grading process is easy to run via the OpenAI API. Based on Chinese SimpleQA, we perform a comprehensive evaluation of the factuality abilities of existing LLMs. Finally, we hope that Chinese SimpleQA can guide developers to better understand the Chinese factuality abilities of their models and facilitate the growth of foundation models.
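Since the benchmark follows SimpleQA, grading plausibly works as an LLM-as-judge step: a judge model compares each short answer against the static reference answer and emits one of three labels (CORRECT, INCORRECT, NOT_ATTEMPTED), as in SimpleQA's scheme. The following is a minimal sketch of such a grader; the prompt wording, the gpt-4o model choice, and the grade() helper are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of a SimpleQA-style LLM-judge grader via the OpenAI API.
# The prompt, model name, and label set are illustrative assumptions,
# not the authors' exact setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

GRADER_PROMPT = """\
You are grading a short factual answer.
Question: {question}
Reference answer: {reference}
Model answer: {prediction}
Reply with exactly one label: CORRECT, INCORRECT, or NOT_ATTEMPTED."""

def grade(question: str, reference: str, prediction: str) -> str:
    """Ask a judge model to compare a model answer against the static reference."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable judge model
        messages=[{
            "role": "user",
            "content": GRADER_PROMPT.format(
                question=question, reference=reference, prediction=prediction
            ),
        }],
        temperature=0,  # deterministic grading
    )
    return response.choices[0].message.content.strip()

# Example usage with a hypothetical benchmark item:
label = grade("中国的首都是哪座城市？", "北京", "中国的首都是北京。")
print(label)  # expected: CORRECT
```

Because both answers are short, the judge's task reduces to a near string-level comparison, which is what makes the benchmark easy to evaluate; aggregate accuracy is then just the fraction of CORRECT labels over all items.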

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2411.07140
Document Type:
Working Paper