
Overprotective Training Environments Fall Short at Testing Time: Let Models Contribute to Their Own Training

Authors :
Testoni, Alberto
Bernardi, Raffaella
Publication Year :
2021

Abstract

Despite important progress, conversational systems often generate dialogues that sound unnatural to humans. We conjecture that the reason lies in their different training and testing conditions: agents are trained in a controlled "lab" setting but tested in the "wild". During training, they learn to generate an utterance given the human dialogue history. On the other hand, during testing, they must interact with each other, and hence deal with noisy data. We propose to fill this gap by training the model with mixed batches containing both samples of human and machine-generated dialogues. We assess the validity of the proposed method on GuessWhat?!, a visual referential game.

Comment: This paper has been published in the Proceedings of the Seventh Italian Conference on Computational Linguistics, CLiC-it 2020
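The mixed-batch idea from the abstract can be sketched as a simple data loader that fills each batch partly from human dialogues and partly from dialogues the model generates itself. This is a minimal illustration, not the authors' implementation: `generate_dialogue` is a hypothetical callable standing in for model self-play (e.g. on a GuessWhat?! image), and `machine_ratio` is an assumed hyperparameter controlling the mix.

```python
import random

def mixed_batches(human_dialogues, generate_dialogue,
                  batch_size=32, machine_ratio=0.5):
    """Yield batches mixing human dialogues with machine-generated ones.

    human_dialogues: iterable of human dialogue samples.
    generate_dialogue: hypothetical callable producing one
        machine-generated dialogue (e.g. via agent self-play).
    machine_ratio: fraction of each batch drawn from the model;
        an assumed hyperparameter, not a value from the paper.
    """
    n_machine = int(batch_size * machine_ratio)
    n_human = batch_size - n_machine
    pool = list(human_dialogues)
    random.shuffle(pool)
    # Walk through the human data in chunks, topping each chunk up
    # with freshly generated machine dialogues.
    for start in range(0, len(pool) - n_human + 1, n_human):
        batch = pool[start:start + n_human]
        batch += [generate_dialogue() for _ in range(n_machine)]
        random.shuffle(batch)  # interleave human and machine samples
        yield batch
```

In training, each yielded batch would be fed to the usual supervised objective, so the model also sees the noisier machine-generated histories it will face when agents interact at test time.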

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2103.11145
Document Type :
Working Paper