
An Efficient Approach to Encoding Context for Spoken Language Understanding

Authors :
Raghav Gupta
Abhinav Rastogi
Dilek Hakkani-Tür
Source :
INTERSPEECH
Publication Year :
2018

Abstract

In task-oriented dialogue systems, spoken language understanding, or SLU, refers to the task of parsing natural language user utterances into semantic frames. Making use of context from prior dialogue history holds the key to more effective SLU. State-of-the-art approaches to SLU use memory networks to encode context by processing multiple utterances from the dialogue at each turn, resulting in significant trade-offs between accuracy and computational efficiency. On the other hand, downstream components like the dialogue state tracker (DST) already keep track of the dialogue state, which can serve as a summary of the dialogue history. In this work, we propose an efficient approach to encoding context from prior utterances for SLU. More specifically, our architecture includes a separate recurrent neural network (RNN) based encoding module that accumulates dialogue context to guide the frame parsing sub-tasks and can be shared between SLU and DST. In our experiments, we demonstrate the effectiveness of our approach on dialogues from two domains.

Submitted to INTERSPEECH 2018
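The abstract's core idea is that a single recurrent encoder can fold each new utterance into a running context vector, so per-turn work stays constant instead of re-reading the whole history. The sketch below illustrates that accumulation pattern with a GRU-style gated update in plain numpy; all names, dimensions, and parameters here are illustrative assumptions, not the paper's actual architecture (the abstract does not specify the encoder's internals or how the context conditions the frame-parsing sub-tasks).

```python
import numpy as np

rng = np.random.default_rng(0)

EMB, CTX = 8, 6  # utterance-embedding and context-vector sizes (illustrative)

# Hypothetical parameters for a single-gate recurrent context encoder.
W_z = rng.standard_normal((CTX, EMB + CTX)) * 0.1
W_h = rng.standard_normal((CTX, EMB + CTX)) * 0.1

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def update_context(context, utterance_emb):
    """One recurrent step: fold a new utterance embedding into the context."""
    x = np.concatenate([utterance_emb, context])
    z = sigmoid(W_z @ x)               # update gate
    h = np.tanh(W_h @ x)               # candidate context
    return (1 - z) * context + z * h   # gated interpolation (GRU-style)

def encode_dialogue(utterance_embs):
    """Accumulate context over all prior turns, one O(1) update per turn."""
    context = np.zeros(CTX)
    for emb in utterance_embs:
        context = update_context(context, emb)
    return context

# Three turns of (pre-computed) utterance embeddings.
turns = [rng.standard_normal(EMB) for _ in range(3)]
context = encode_dialogue(turns)
print(context.shape)  # (6,)
```

Because the context is updated incrementally, the same vector computed for SLU at one turn can be carried forward and reused by the DST, which is the sharing the abstract describes; a memory-network approach would instead re-encode multiple past utterances at every turn.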

Details

Language :
English
Database :
OpenAIRE
Journal :
INTERSPEECH
Accession number :
edsair.doi.dedup.....9c4e4bddc991476834dd8b9aadd7a238