Enhancing Uncertainty-Based Hallucination Detection with Stronger Focus

Authors:
Zhang, Tianhang
Qiu, Lin
Guo, Qipeng
Deng, Cheng
Zhang, Yue
Zhang, Zheng
Zhou, Chenghu
Wang, Xinbing
Fu, Luoyi
Publication Year:
2023

Abstract

Large Language Models (LLMs) have gained significant popularity for their impressive performance across diverse fields. However, LLMs are prone to hallucinating untruthful or nonsensical outputs that fail to meet user expectations in many real-world applications. Existing works for detecting hallucinations in LLMs either rely on external knowledge for reference retrieval or require sampling multiple responses from the LLM for consistency verification, making these methods costly and inefficient. In this paper, we propose a novel reference-free, uncertainty-based method for detecting hallucinations in LLMs. Our approach imitates human focus in factuality checking from three aspects: 1) focus on the most informative and important keywords in the given text; 2) focus on the unreliable tokens in historical context which may lead to a cascade of hallucinations; and 3) focus on token properties such as token type and token frequency. Experimental results on relevant datasets demonstrate the effectiveness of our proposed method, which achieves state-of-the-art performance across all the evaluation metrics and eliminates the need for additional information.

Comment: Accepted by EMNLP 2023 (main conference)
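The abstract only outlines the idea of weighting token-level uncertainty toward informative keywords. The following is a minimal sketch of that general idea, not the authors' released implementation: it assumes a Hugging Face causal LM ("gpt2" is a stand-in model name), uses a crude stop-word filter in place of the paper's keyword identification, and averages negative log-likelihood over the remaining tokens as a rough uncertainty score.

```python
# Minimal sketch (assumption: not the paper's actual method) of
# keyword-focused, uncertainty-based hallucination scoring.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Crude proxy for "informative keywords": drop common function words.
STOPWORDS = {"the", "a", "an", "of", "in", "on", "is", "was", "and", "to", "by"}

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()


def hallucination_score(text: str) -> float:
    """Average negative log-likelihood over non-stop-word tokens.

    Higher values mean the model was more uncertain about the content
    words of the text, which this sketch treats as a hallucination signal.
    """
    enc = tokenizer(text, return_tensors="pt")
    input_ids = enc["input_ids"]
    with torch.no_grad():
        logits = model(input_ids).logits  # (1, seq_len, vocab)

    # Log-probability of each token given its preceding context.
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    targets = input_ids[:, 1:]
    token_nll = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)[0]

    # Keep only alphabetic, non-stop-word tokens when aggregating.
    scores = []
    for nll, tok_id in zip(token_nll.tolist(), targets[0].tolist()):
        word = tokenizer.decode([tok_id]).strip().lower()
        if word.isalpha() and word not in STOPWORDS:
            scores.append(nll)
    return sum(scores) / len(scores) if scores else 0.0


if __name__ == "__main__":
    print(hallucination_score("Paris is the capital of France."))
```

In this sketch, the stop-word filter and the plain average stand in for the paper's keyword weighting, hallucination propagation through historical context, and token-property adjustments, which would replace those two steps in a faithful implementation.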

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2311.13230
Document Type:
Working Paper