
A Bichannel Transformer with Context Encoding for Document-Driven Conversation Generation in Social Media

Authors:
Qingchuan Zhang
Yuanyuan Cai
Ke Li
Haitao Xiong
Min Zuo
Source:
Complexity, Vol. 2020 (2020)
Publication Year:
2020
Publisher:
Hindawi-Wiley

Abstract

Along with the development of social media on the internet, dialogue systems are becoming more and more intelligent to meet users' needs for communication, emotion, and social intercourse. Previous studies usually use sequence-to-sequence learning with recurrent neural networks for response generation. However, recurrent learning models struggle to capture long-distance dependencies in sequences. Moreover, some models neglect crucial information in the dialogue context, which leads to uninformative and inflexible responses. To address these issues, we present a bichannel transformer with context encoding (BCTCE) for document-driven conversation. This conversational generator consists of a context encoder, an utterance encoder, and a decoder with an attention mechanism. The encoders learn distributed representations of the input texts, and a multihop attention mechanism captures the interaction between documents and dialogues. We evaluate the proposed BCTCE by both automatic evaluation and human judgment. The experimental results on the CMU_DoG dataset indicate that the proposed model yields significant improvements over state-of-the-art baselines on most evaluation metrics, and that the responses generated by BCTCE are more informative and more relevant to the dialogues than those of the baselines.
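The abstract does not give the model's equations, but the multihop attention it describes, in which a dialogue-side query is repeatedly refined against a document-side memory, can be sketched roughly as follows. All names, dimensions, and the residual update between hops are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(query, keys, values):
    """Scaled dot-product attention over one channel."""
    d = query.shape[-1]
    scores = query @ keys.T / np.sqrt(d)       # (n_query, n_keys)
    return softmax(scores) @ values            # (n_query, d)

def multihop_attention(dialogue, document, hops=2):
    """Refine the dialogue representation against the document over several hops.

    Each hop attends from the current query to the document and folds the
    retrieved context back in (a residual update, assumed here for illustration).
    """
    query = dialogue
    for _ in range(hops):
        context = attention(query, document, document)
        query = query + context
    return query

rng = np.random.default_rng(0)
dialogue = rng.standard_normal((4, 8))   # 4 utterance tokens, dim 8
document = rng.standard_normal((10, 8))  # 10 document tokens, dim 8
out = multihop_attention(dialogue, document)
print(out.shape)  # (4, 8)
```

In a trained model the queries, keys, and values would each pass through learned projections and the hops would sit inside transformer layers; this sketch only shows the attention flow between the two channels.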

Details

Language:
English
ISSN:
1099-0526 and 1076-2787
Volume:
2020
Database:
OpenAIRE
Journal:
Complexity
Accession number:
edsair.doi.dedup.....84250a10aa2588ee4836cbe431debbaf