1. Multi-Classification Model for Spoken Language Understanding
- Author
- Chaohong Tan and Zhen-Hua Ling
- Subjects
- Computer science, Artificial intelligence, Natural language processing, Spoken language, Multi-task learning, Edit distance, Encoder, Utterance, Tuple, Set (abstract data type), Focus (linguistics)
- Abstract
Spoken language understanding (SLU) is an important part of a spoken dialogue system (SDS). In this paper, we focus on extracting a set of act-slot-value tuples from users’ utterances in the 1st Chinese Audio-Textual Spoken Language Understanding Challenge (CATSLU). We adopt a pretrained BERT model to encode users’ utterances and build multiple classifiers to produce the required tuples. In our framework, predicting acts and finding slot values are each treated as classification tasks. This multi-task training is expected to help the encoder gain a better understanding of the utterance. Since the system is built on transcriptions produced by automatic speech recognition (ASR), some heuristics are applied to correct errors in the tuples. We also found in our experiments that rebuilding the tuples using the minimum edit distance (MED) between predicted results and candidates was beneficial.
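The MED-based rebuilding step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the function names and the idea of snapping a predicted value to the closest entry in a candidate list are assumptions about how such a correction is typically implemented.

```python
def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein (minimum edit) distance.
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,        # deletion
                          curr[j - 1] + 1,    # insertion
                          prev[j - 1] + cost) # substitution
        prev = curr
    return prev[n]

def rebuild_value(predicted: str, candidates: list[str]) -> str:
    # Replace a (possibly ASR-corrupted) predicted slot value with the
    # candidate that has the minimum edit distance to it.
    return min(candidates, key=lambda c: edit_distance(predicted, c))
```

For example, `rebuild_value("pekin", ["beijing", "peking", "shanghai"])` would return `"peking"`, correcting a one-character ASR error.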
- Published
- 2019