
FactCHD: Benchmarking Fact-Conflicting Hallucination Detection

Authors :
Chen, Xiang
Song, Duanzheng
Gui, Honghao
Wang, Chenxi
Zhang, Ningyu
Jiang, Yong
Huang, Fei
Lv, Chengfei
Zhang, Dan
Chen, Huajun
Publication Year :
2023

Abstract

Despite their impressive generative capabilities, LLMs are hindered by fact-conflicting hallucinations in real-world applications. Accurately identifying hallucinations in LLM-generated text, especially in complex inferential scenarios, remains a relatively unexplored area. To address this gap, we present FactCHD, a benchmark dedicated to detecting fact-conflicting hallucinations from LLMs. FactCHD features a diverse dataset spanning various factuality patterns, including vanilla, multi-hop, comparison, and set operation. A distinctive element of FactCHD is its integration of fact-based evidence chains, which significantly deepens the evaluation of detectors' explanations. Experiments on different LLMs expose the shortcomings of current approaches in accurately detecting factual errors. Furthermore, we introduce Truth-Triangulator, which synthesizes reflective considerations from a tool-enhanced ChatGPT and a LoRA-tuned Llama2, aiming to yield more credible detection by combining predictive results and evidence. The benchmark dataset is available at https://github.com/zjunlp/FactCHD.

Comment: IJCAI 2024
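To make the idea of fact-based evidence chains concrete, here is a minimal sketch of how a comparison-pattern instance might be checked against its evidence. The field names and helper below are illustrative assumptions for this write-up, not the actual FactCHD schema or pipeline.

```python
# Hypothetical sketch of a fact-conflicting hallucination instance.
# Field names are illustrative assumptions, NOT the real FactCHD schema.
from dataclasses import dataclass, field

@dataclass
class Example:
    query: str                      # question posed to the LLM
    response: str                   # LLM answer to be judged
    pattern: str                    # e.g. vanilla, multi_hop, comparison, set_operation
    evidence_chain: list = field(default_factory=list)  # (subject, relation, object) triples

example = Example(
    query="Which is taller, the Eiffel Tower or the Empire State Building?",
    response="The Eiffel Tower is taller than the Empire State Building.",
    pattern="comparison",
    evidence_chain=[
        ("Eiffel Tower", "height", "330 m"),
        ("Empire State Building", "height", "443 m"),
    ],
)

def verify_comparison(evidence, subject_a, subject_b, claims_a_taller):
    """Check a 'taller than' claim against height triples in the evidence chain.

    Returns True if the claim is consistent with the evidence, False if it
    conflicts (i.e. the response is a fact-conflicting hallucination).
    """
    heights = {s: float(o.split()[0]) for s, r, o in evidence if r == "height"}
    return (heights[subject_a] > heights[subject_b]) == claims_a_taller

# The response claims the Eiffel Tower is taller; the evidence says otherwise,
# so the claim conflicts with the facts.
consistent = verify_comparison(
    example.evidence_chain, "Eiffel Tower", "Empire State Building", True
)
# consistent == False → the response is a fact-conflicting hallucination
```

This toy check only covers the comparison pattern; multi-hop and set-operation instances would need their own verification logic over longer evidence chains.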

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2310.12086
Document Type :
Working Paper