Evaluating the Factual Consistency of Large Language Models Through News Summarization

Authors :
Tam, Derek
Mascarenhas, Anisha
Zhang, Shiyue
Kwan, Sarah
Bansal, Mohit
Raffel, Colin
Publication Year :
2022

Abstract

While large language models (LLMs) have proven to be effective on a large variety of tasks, they are also known to hallucinate information. To measure whether an LLM prefers factually consistent continuations of its input, we propose a new benchmark called FIB (Factual Inconsistency Benchmark) that focuses on the task of summarization. Specifically, our benchmark involves comparing the scores an LLM assigns to a factually consistent versus a factually inconsistent summary for an input news article. For factually consistent summaries, we use human-written reference summaries that we manually verify as factually consistent. For factually inconsistent summaries, we use summaries generated by a suite of summarization models that we manually annotate as factually inconsistent. A model's factual consistency is then measured by its accuracy, i.e., the proportion of documents for which it assigns a higher score to the factually consistent summary. To validate the usefulness of FIB, we evaluate 23 large language models ranging from 1B to 176B parameters from six different model families, including BLOOM and OPT. We find that existing LLMs generally assign a higher score to factually consistent summaries than to factually inconsistent ones. However, if the factually inconsistent summaries occur verbatim in the document, then LLMs assign a higher score to these factually inconsistent summaries than to the factually consistent ones. We validate design choices in our benchmark, including the scoring method and the source of distractor summaries. Our code and benchmark data can be found at https://github.com/r-three/fib.
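To make the evaluation protocol concrete, the following is a minimal sketch of the benchmark's accuracy metric, assuming the LLM is scored with a length-normalized log-likelihood of the summary conditioned on the article; the actual scoring method, prompt format, and model choices in the paper and repository may differ (the "gpt2" model name and the prompt template here are stand-ins for illustration only).

```python
# Hypothetical sketch of a FIB-style accuracy computation.
# Assumes length-normalized log-likelihood scoring; the paper compares
# several scoring methods, so treat this as one possible instantiation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # stand-in; the benchmark evaluates models such as BLOOM and OPT
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()


def summary_score(article: str, summary: str) -> float:
    """Average per-token log-likelihood of `summary` given `article`."""
    prompt_ids = tokenizer(article + "\nSummary: ", return_tensors="pt").input_ids
    summary_ids = tokenizer(summary, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, summary_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Log-probabilities over next tokens; token at position i is predicted
    # by the logits at position i - 1, so drop the final logit position.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    start = prompt_ids.shape[1] - 1
    targets = input_ids[0, prompt_ids.shape[1]:]
    token_lls = log_probs[start:start + targets.shape[0]].gather(1, targets.unsqueeze(1))
    return token_lls.mean().item()


def fib_accuracy(examples) -> float:
    """Fraction of examples where the consistent summary outscores the inconsistent one."""
    correct = sum(
        summary_score(ex["article"], ex["consistent"])
        > summary_score(ex["article"], ex["inconsistent"])
        for ex in examples
    )
    return correct / len(examples)
```

A model that systematically prefers factually consistent summaries would approach an accuracy of 1.0 under this metric, while a model with no preference would sit near 0.5.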

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1381582729
Document Type :
Electronic Resource