
Counter-intuitive: Large Language Models Can Better Understand Knowledge Graphs Than We Thought

Authors:
Dai, Xinbang
Hua, Yuncheng
Wu, Tongtong
Sheng, Yang
Ji, Qiu
Qi, Guilin
Publication Year:
2024

Abstract

Although enhancing large language models' (LLMs') reasoning ability and reducing their hallucinations through knowledge graphs (KGs) has received widespread attention, how to enable LLMs to integrate the structured knowledge in KGs on the fly remains underexplored. Researchers often co-train KG embeddings and LLM parameters to equip LLMs with the ability to comprehend KG knowledge. However, this resource-hungry training paradigm significantly increases the model learning cost and is unsuitable for non-open-source, black-box LLMs. In this paper, we employ complex question answering (CQA) as a task to assess an LLM's ability to comprehend KG knowledge. We conduct a comprehensive comparison of KG knowledge injection methods (from triples to natural-language text), aiming to identify the optimal prompting method for supplying KG knowledge to LLMs and thereby enhancing their comprehension of KGs. Contrary to our initial expectations, our analysis reveals that LLMs effectively handle messy, noisy, and linearized KG knowledge, outperforming methods that employ well-designed natural-language (NL) textual prompts. This counter-intuitive finding provides substantial insights for future research on LLMs' comprehension of structured knowledge.

Comment: 13 pages
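The two ends of the injection spectrum the abstract compares can be sketched as follows. This is an illustrative example, not the paper's code: the triples, the `linearize`/`verbalize` helpers, and the templates are all hypothetical, showing raw linearized triples versus a hand-written NL verbalization placed in a prompt.

```python
# Hypothetical KG triples (subject, predicate, object).
triples = [
    ("Marie_Curie", "award_received", "Nobel_Prize_in_Physics"),
    ("Marie_Curie", "field_of_work", "radioactivity"),
]

def linearize(triples):
    """Linearized injection: dump the triples as-is, one per line."""
    return "\n".join(f"({s}, {p}, {o})" for s, p, o in triples)

def verbalize(triples):
    """NL injection: per-predicate templates turn each triple into a sentence."""
    templates = {
        "award_received": "{s} received the {o}.",
        "field_of_work": "{s} worked in the field of {o}.",
    }
    return " ".join(
        templates[p].format(s=s.replace("_", " "), o=o.replace("_", " "))
        for s, p, o in triples
    )

question = "Which prize did Marie Curie receive?"
prompt_linear = f"Knowledge:\n{linearize(triples)}\n\nQuestion: {question}"
prompt_nl = f"Knowledge: {verbalize(triples)}\n\nQuestion: {question}"
```

The paper's counter-intuitive finding is that prompts like `prompt_linear`, despite their unpolished form, can serve the LLM at least as well as carefully verbalized prompts like `prompt_nl`.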

Details

Database:
OAIster
Publication Type:
Electronic Resource
Accession Number:
edsoai.on1438526558
Document Type:
Electronic Resource