
Improved Natural Language Generation via Loss Truncation

Authors :
Daniel Kang
Tatsunori Hashimoto
Source :
ACL
Publication Year :
2020

Abstract

Neural language models are usually trained to match the distributional properties of a large-scale corpus by minimizing the log loss. While straightforward to optimize, this approach forces the model to reproduce all variations in the dataset, including noisy and invalid references (e.g., misannotation and hallucinated facts). Worse, the commonly used log loss is overly sensitive to such phenomena and even a small fraction of noisy data can degrade performance. In this work, we show that the distinguishability of the models and reference serves as a principled and robust alternative for handling invalid references. To optimize distinguishability, we propose loss truncation, which adaptively removes high loss examples during training. We show this is as easy to optimize as log loss and tightly bounds distinguishability under noise. Empirically, we demonstrate that loss truncation outperforms existing baselines on distinguishability on a summarization task, and show that samples generated by the loss truncation model have factual accuracy ratings that exceed those of baselines and match human references.

ACL 2020 Camera Ready Submission
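The abstract describes loss truncation only at a high level: high-loss training examples are adaptively dropped so that noisy or hallucinated references do not dominate the gradient. The sketch below is a minimal, hedged illustration of that idea and is not the authors' released implementation; the drop fraction `drop_frac`, the running loss history, and the quantile-based cutoff are illustrative assumptions.

```python
import torch


class LossTruncation:
    """Minimal sketch of loss truncation: drop the highest-loss examples
    in each batch, using a running quantile estimate as the cutoff.
    (Illustrative assumptions: drop_frac, history_size, running history.)"""

    def __init__(self, drop_frac=0.1, history_size=10_000):
        self.drop_frac = drop_frac        # fraction of highest-loss examples to drop
        self.history = []                 # recent per-example losses
        self.history_size = history_size

    def __call__(self, per_example_loss):
        # Update the running history used to estimate the loss cutoff.
        self.history.extend(per_example_loss.detach().tolist())
        self.history = self.history[-self.history_size:]

        # Cutoff at the (1 - drop_frac) quantile of recent losses.
        cutoff = torch.quantile(torch.tensor(self.history), 1.0 - self.drop_frac)

        # Zero out (truncate) examples whose loss exceeds the cutoff,
        # then average over the examples that were kept.
        mask = (per_example_loss <= cutoff).float()
        kept = mask.sum().clamp(min=1.0)
        return (per_example_loss * mask).sum() / kept
```

In a training loop, one would compute per-example (e.g., per-sequence) negative log-likelihoods with `reduction="none"` and pass them through this object in place of the usual mean log loss; gradients then flow only through the retained examples.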

Details

Language :
English
Database :
OpenAIRE
Journal :
ACL
Accession number :
edsair.doi.dedup.....aa00e7dee8acc225edcee80c8d88cbaa