
Rule or Story, Which is a Better Commonsense Expression for Talking with Large Language Models?

Authors :
Bian, Ning
Han, Xianpei
Lin, Hongyu
Lu, Yaojie
He, Ben
Sun, Le
Publication Year :
2024

Abstract

Building machines with commonsense has been a longstanding challenge in NLP due to the reporting bias of commonsense rules and the exposure bias of rule-based commonsense reasoning. In contrast, humans convey and pass down commonsense implicitly through stories. This paper investigates the inherent commonsense ability of large language models (LLMs) expressed through storytelling. We systematically investigate and compare stories and rules for retrieving and leveraging commonsense in LLMs. Experimental results on 28 commonsense QA datasets show that stories outperform rules as the expression for retrieving commonsense from LLMs, exhibiting higher generation confidence and commonsense accuracy. Moreover, stories are the more effective commonsense expression for answering questions about daily events, while rules are more effective for scientific questions. This aligns with the reporting bias of commonsense in text corpora. We further show that the correctness and relevance of commonsense stories can be improved via iterative self-supervised fine-tuning. These findings emphasize the importance of using appropriate language to express, retrieve, and leverage commonsense for LLMs, highlighting a promising direction for better exploiting their commonsense abilities.

Comment: Accepted to ACL 2024

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2402.14355
Document Type :
Working Paper