
StyleDGPT: Stylized Response Generation with Pre-trained Language Models

Authors:
Yang, Ze
Wu, Wei
Xu, Can
Liang, Xinnian
Bai, Jiaqi
Wang, Liran
Wang, Wei
Li, Zhoujun

Publication Year: 2020

Abstract

Generating responses that follow a desired style has great potential to extend the applications of open-domain dialogue systems, yet progress is hindered by the lack of parallel data for training. In this work, we explore this challenging task with pre-trained language models, which have brought breakthroughs to various natural language tasks. To this end, we introduce a KL loss and a style classifier into the fine-tuning step in order to steer response generation towards the target style at both the word level and the sentence level. Comprehensive empirical studies on two public datasets indicate that our model significantly outperforms state-of-the-art methods in terms of both style consistency and contextual coherence.

Comment: Findings of EMNLP 2020
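The abstract describes two steering signals added during fine-tuning: a word-level KL term pulling the dialogue model's next-token distribution towards a language model trained on stylized text, and a sentence-level loss from a style classifier. The sketch below is a minimal illustration of that idea, not the paper's exact formulation; the function names, the KL direction, and the loss weights `alpha` and `beta` are all assumptions.

```python
# Minimal sketch of word- and sentence-level style steering (assumed
# formulation; StyleDGPT's exact losses and weights may differ).
import torch
import torch.nn.functional as F

def word_level_kl(dialogue_logits: torch.Tensor,
                  style_lm_logits: torch.Tensor) -> torch.Tensor:
    """Word-level term: KL divergence nudging the dialogue model's
    next-token distribution towards that of a style language model."""
    log_p = F.log_softmax(dialogue_logits, dim=-1)  # dialogue model
    q = F.softmax(style_lm_logits, dim=-1)          # style LM (reference)
    # KL(q || p): penalized where the style LM puts mass the model does not
    return F.kl_div(log_p, q, reduction="batchmean")

def sentence_level_style(classifier_logits: torch.Tensor,
                         target_style: torch.Tensor) -> torch.Tensor:
    """Sentence-level term: cross-entropy from a style classifier
    applied to the generated response."""
    return F.cross_entropy(classifier_logits, target_style)

def fine_tune_loss(lm_loss, kl_loss, cls_loss, alpha=0.1, beta=0.1):
    # alpha/beta are illustrative weights, not values from the paper
    return lm_loss + alpha * kl_loss + beta * cls_loss
```

In practice the sentence-level term requires differentiating through sampled tokens (e.g., via a Gumbel-softmax relaxation or a policy-gradient estimator), a detail omitted from this sketch.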

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2010.02569
Document Type: Working Paper