
WHISMA: A Speech-LLM to Perform Zero-shot Spoken Language Understanding

Authors:
Li, Mohan
Do, Cong-Thanh
Keizer, Simon
Farag, Youmna
Stoyanchev, Svetlana
Doddipatla, Rama
Publication Year:
2024

Abstract

Speech large language models (speech-LLMs) integrate speech and text-based foundation models to provide a unified framework for handling a wide range of downstream tasks. In this paper, we introduce WHISMA, a speech-LLM tailored for spoken language understanding (SLU) that demonstrates robust performance in various zero-shot settings. WHISMA combines the speech encoder from Whisper with the Llama-3 LLM, and is fine-tuned in a parameter-efficient manner on a comprehensive collection of SLU-related datasets. Our experiments show that WHISMA significantly improves the zero-shot slot filling performance on the SLURP benchmark, achieving a relative gain of 26.6% compared to the current state-of-the-art model. Furthermore, to evaluate WHISMA's generalisation capabilities to unseen domains, we develop a new task-agnostic benchmark named SLU-GLUE. The evaluation results indicate that WHISMA outperforms an existing speech-LLM (Qwen-Audio) with a relative gain of 33.0%.

Comment: Accepted to SLT 2024
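The abstract names the two backbones (the Whisper speech encoder and the Llama-3 LLM) and a parameter-efficient fine-tuning regime, but does not specify the bridging module or the PEFT method. The following is a minimal sketch of that general architecture, assuming a trainable linear projector between the two models and LoRA adapters for the fine-tuning; both are illustrative choices, not the authors' confirmed setup, and the checkpoint names are placeholders.

import torch
import torch.nn as nn
from transformers import WhisperModel, AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

class WhisperLlamaSLU(nn.Module):
    def __init__(self,
                 whisper_name="openai/whisper-small",      # placeholder checkpoint
                 llm_name="meta-llama/Meta-Llama-3-8B"):   # placeholder checkpoint
        super().__init__()
        # Frozen Whisper encoder extracts acoustic representations.
        self.encoder = WhisperModel.from_pretrained(whisper_name).encoder
        self.encoder.requires_grad_(False)

        llm = AutoModelForCausalLM.from_pretrained(llm_name)
        llm_hidden = llm.config.hidden_size
        # LoRA adapters on the attention projections (an assumed PEFT configuration).
        lora = LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32,
                          target_modules=["q_proj", "v_proj"])
        self.llm = get_peft_model(llm, lora)

        # Trainable projector maps speech features into the LLM embedding space.
        self.projector = nn.Linear(self.encoder.config.d_model, llm_hidden)

    def forward(self, input_features, prompt_ids):
        # input_features: log-mel spectrogram batch; prompt_ids: tokenised task prompt.
        speech = self.encoder(input_features).last_hidden_state
        speech_embeds = self.projector(speech)
        text_embeds = self.llm.get_input_embeddings()(prompt_ids)
        # Prepend the projected speech sequence to the text prompt and decode.
        inputs = torch.cat([speech_embeds, text_embeds], dim=1)
        return self.llm(inputs_embeds=inputs)

In this kind of setup, only the projector and the LoRA adapters are updated during fine-tuning, which keeps the trainable parameter count small relative to the full model.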

Details

Database: arXiv
Publication Type: Report
Accession Number: edsarx.2408.16423
Document Type: Working Paper