
Direct Evaluation of Chain-of-Thought in Multi-hop Reasoning with Knowledge Graphs

Authors :
Nguyen, Minh-Vuong
Luo, Linhao
Shiri, Fatemeh
Phung, Dinh
Li, Yuan-Fang
Vu, Thuy-Trang
Haffari, Gholamreza
Publication Year :
2024

Abstract

Large language models (LLMs) demonstrate strong reasoning abilities when prompted to generate chain-of-thought (CoT) explanations alongside answers. However, previous research on evaluating LLMs has solely focused on answer accuracy, neglecting the correctness of the generated CoT. In this paper, we delve deeper into the CoT reasoning capabilities of LLMs in multi-hop question answering by utilizing knowledge graphs (KGs). We propose a novel discriminative and generative CoT evaluation paradigm to assess LLMs' knowledge of reasoning and the accuracy of the generated CoT. Through experiments conducted on 5 different families of LLMs across 2 multi-hop question-answering datasets, we find that LLMs possess sufficient knowledge to perform reasoning. However, there exists a significant disparity between answer accuracy and faithfulness of the CoT reasoning generated by LLMs, indicating that they often arrive at correct answers through incorrect reasoning.

Comment: Minh-Vuong Nguyen and Linhao Luo are co-first authors and contributed equally to the preparation of this manuscript. Accepted to ACL24-Findings
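The core idea of grounding CoT evaluation in a knowledge graph can be illustrated with a minimal sketch: treat each reasoning hop as a candidate (head, relation, tail) triple and check it against the KG. This is not the authors' code; the toy KG, entity names, and `cot_faithful` helper are hypothetical, shown only to make the answer-accuracy-versus-faithfulness distinction concrete.

```python
# Toy KG as a set of (head, relation, tail) triples -- hypothetical data.
kg = {
    ("Barack Obama", "born_in", "Honolulu"),
    ("Honolulu", "located_in", "Hawaii"),
}

def cot_faithful(cot_hops, kg):
    """Return True iff every hop of the CoT appears as an edge in the KG."""
    return all(hop in kg for hop in cot_hops)

# A correct answer reached through a valid 2-hop reasoning path:
good_cot = [
    ("Barack Obama", "born_in", "Honolulu"),
    ("Honolulu", "located_in", "Hawaii"),
]

# The same final answer reached through an unsupported hop
# (the model "shortcuts" to the right entity with a wrong relation):
bad_cot = [
    ("Barack Obama", "born_in", "Hawaii"),  # not an edge in the KG
]

print(cot_faithful(good_cot, kg))  # True
print(cot_faithful(bad_cot, kg))   # False
```

Both paths end at the correct answer ("Hawaii"), but only the first is faithful; answer-only evaluation would score them identically, which is the gap the abstract highlights.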

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2402.11199
Document Type :
Working Paper