On the Use of Semantically-Aligned Speech Representations for Spoken Language Understanding
- Authors
Gaëlle Laperrière, Valentin Pelloin, Mickaël Rouvier, Themos Stafylakis, and Yannick Estève
- Subjects
Computation and Language (cs.CL); Sound (cs.SD); Audio and Speech Processing (eess.AS)
- Abstract
In this paper we examine the use of semantically-aligned speech representations for end-to-end spoken language understanding (SLU). We employ the recently introduced SAMU-XLSR model, which is designed to generate a single embedding that captures utterance-level semantics, aligned across different languages. This model combines the acoustic frame-level speech representation learning model XLS-R with the Language Agnostic BERT Sentence Embedding (LaBSE) model. We show that using the SAMU-XLSR model instead of the initial XLS-R model significantly improves performance in the framework of end-to-end SLU. Finally, we present the benefits of this model for language portability in SLU.
- Comment
Accepted at IEEE SLT 2022. This work was performed using HPC resources from GENCI/IDRIS (grant 2022 AD011012565) and received funding from the EU H2020 research and innovation programme under the Marie Skłodowska-Curie ESPERANTO project (grant agreement No 101007666), through the SELMA project (grant No 957017), and from the French ANR through the AISSPER project (ANR-19-CE23-0004).
- Published
- 2022