1. Multi-level information and automatic dialog act detection in human–human spoken dialogs
- Authors
- Delphine Tribout, Lori Lamel, and Sophie Rosset
- Subjects
- Linguistics and Language, Computer science, Semantic interpretation, Speech recognition, Semantic analysis (machine learning), Language and Linguistics, Dialog act, Conversation, Dialog system, Dialog box, Communication, Computer Science Applications, Modeling and Simulation, Computer Vision and Pattern Recognition, Artificial intelligence, Software, Word (computer architecture), Natural language processing, Utterance
- Abstract
- This paper reports on studies of annotating and automatically detecting dialog acts in human–human spoken dialogs. The work rests on three hypotheses: first, the succession of dialog acts is strongly constrained; second, the initial word and the semantic classes of words are more important for identifying dialog acts than the complete, exact word sequence of an utterance; third, most of the important information is encoded in specific entities. A memory-based learning approach is used to detect dialog acts. Each utterance unit is systematically annotated with eight dialog acts. Experiments were conducted using different levels of information, with and without dialog history information. To assess the generality of the method, the specific entity tag based model trained on a French corpus was tested on an English corpus for a similar task and on a French corpus from a different domain. A correct dialog act detection rate of about 86% is obtained for the same-domain, same-language condition and 77% for the cross-language and cross-domain conditions.
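- To make the abstract's approach concrete, the sketch below shows a minimal memory-based (k-nearest-neighbour) dialog act classifier over symbolic features such as the initial word, a specific entity tag, and the previous dialog act. The feature names, the toy memory, and the overlap metric are illustrative assumptions, not the authors' actual system or data.

```python
# Illustrative sketch only: a minimal memory-based (k-NN) dialog act classifier
# in the spirit of the approach described in the abstract. Features, examples,
# and the overlap similarity are assumptions for illustration.

from collections import Counter

def features(first_word, entity_tag, previous_act):
    """Symbolic feature vector: initial word, specific entity tag,
    and the previous dialog act (dialog history)."""
    return (first_word.lower(), entity_tag, previous_act)

def overlap(a, b):
    """Count matching feature positions (a common metric in memory-based learning)."""
    return sum(1 for x, y in zip(a, b) if x == y)

def classify(memory, instance, k=3):
    """Return the majority dialog act among the k most similar stored instances."""
    neighbours = sorted(memory, key=lambda ex: overlap(ex[0], instance), reverse=True)[:k]
    return Counter(act for _, act in neighbours).most_common(1)[0][0]

# Toy memory of (feature vector, dialog act) pairs -- invented for illustration.
memory = [
    (features("hello", "NONE", "START"), "greeting"),
    (features("I", "DESTINATION", "greeting"), "request"),
    (features("when", "TIME", "answer"), "question"),
    (features("thanks", "NONE", "answer"), "closing"),
]

print(classify(memory, features("when", "TIME", "answer")))  # -> "question"
```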
- Published
- 2008