Improved Natural Language Generation via Loss Truncation
- Source :
- ACL
- Publication Year :
- 2020
Abstract
- Neural language models are usually trained to match the distributional properties of a large-scale corpus by minimizing the log loss. While straightforward to optimize, this approach forces the model to reproduce all variations in the dataset, including noisy and invalid references (e.g., misannotations and hallucinated facts). Worse, the commonly used log loss is overly sensitive to such phenomena and even a small fraction of noisy data can degrade performance. In this work, we show that the distinguishability of the models and reference serves as a principled and robust alternative for handling invalid references. To optimize distinguishability, we propose loss truncation, which adaptively removes high loss examples during training. We show this is as easy to optimize as log loss and tightly bounds distinguishability under noise. Empirically, we demonstrate that loss truncation outperforms existing baselines on distinguishability on a summarization task, and show that samples generated by the loss truncation model have factual accuracy ratings that exceed those of baselines and match human references.
- Comments :
- ACL 2020 Camera Ready Submission
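The abstract describes loss truncation as adaptively dropping the highest-loss examples during training. Below is a minimal, hedged sketch of what such a step might look like in PyTorch; the batch-quantile cutoff, the `drop_frac` parameter, and the function name are illustrative assumptions, not the authors' exact procedure.

```python
import torch

def loss_truncation(per_example_losses: torch.Tensor, drop_frac: float = 0.1) -> torch.Tensor:
    """Keep only the (1 - drop_frac) fraction of examples with the lowest loss.

    per_example_losses: shape (batch_size,), e.g. per-example log losses.
    drop_frac: assumed fraction of high-loss (likely noisy) examples to drop.
    """
    # Cutoff at the (1 - drop_frac) quantile of the current batch's losses (assumption:
    # the paper estimates the cutoff adaptively; a per-batch quantile is one simple choice).
    cutoff = torch.quantile(per_example_losses, 1.0 - drop_frac)
    # Mask out examples whose loss exceeds the cutoff so they contribute no gradient.
    mask = (per_example_losses <= cutoff).float()
    # Average the loss over the retained examples only.
    return (per_example_losses * mask).sum() / mask.sum().clamp(min=1.0)

# Usage inside a training loop (sketch):
# losses = token_nll.sum(dim=1)          # per-example log loss, shape (B,)
# loss = loss_truncation(losses, 0.1)    # drop the worst 10% of the batch
# loss.backward()
```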
- Subjects :
- FOS: Computer and information sciences
Computer Science - Machine Learning
Computer Science - Computation and Language
Truncation
Computer science
Natural language generation
Automatic summarization
Machine Learning (cs.LG)
Task (computing)
Hallucinating
Fraction (mathematics)
Language model
Algorithm
Computation and Language (cs.CL)
Details
- Language :
- English
- Database :
- OpenAIRE
- Journal :
- ACL
- Accession number :
- edsair.doi.dedup.....aa00e7dee8acc225edcee80c8d88cbaa