
Language-Queried Target Sound Extraction Without Parallel Training Data

Authors:
Ma, Hao
Peng, Zhiyuan
Li, Xu
Li, Yukai
Shao, Mingjie
Kong, Qiuqiang
Liu, Ju
Publication Year:
2024

Abstract

Language-queried target sound extraction (TSE) aims to extract specific sounds from mixtures based on language queries. Traditional fully supervised training schemes require extensively annotated parallel audio-text data, which are labor-intensive to produce. We introduce a language-free training scheme that requires only unlabelled audio clips for TSE model training, exploiting the aligned multi-modal representation space of the contrastive language-audio pre-trained model (CLAP). In the vanilla language-free training stage, target audio is encoded by the pre-trained CLAP audio encoder to form a condition embedding for the TSE model, while during inference, user language queries are encoded by the CLAP text encoder. This straightforward approach faces two challenges: a modality gap between training and inference queries, and information leakage from direct exposure to the target audio during training. To address these, we propose a retrieval-augmented strategy. Specifically, we create an embedding cache from audio captions generated by a large language model (LLM). During training, each target audio embedding retrieves a text embedding from this cache to serve as the condition embedding, ensuring consistent modalities between training and inference and eliminating information leakage. Extensive experimental results show that our retrieval-augmented approach achieves consistent and notable performance improvements over the existing state of the art, with better generalizability.

Comment: Submitted to ICASSP 2025
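The retrieval step the abstract describes can be pictured as a nearest-neighbour lookup over the caption-embedding cache. The sketch below is a minimal, hypothetical illustration only: the function names, embedding dimension, and the cosine-similarity retrieval rule are assumptions, not details confirmed by the paper.

```python
# Minimal sketch of the retrieval-augmented conditioning step (assumed
# details; not the paper's confirmed implementation).
import torch
import torch.nn.functional as F

def build_condition_embeddings(
    target_audio_emb: torch.Tensor,  # (batch, dim) CLAP audio embeddings of target clips
    caption_cache: torch.Tensor,     # (cache_size, dim) CLAP text embeddings of LLM-generated captions
) -> torch.Tensor:
    """For each target audio embedding, retrieve the nearest cached text
    embedding (here: cosine similarity) and use it as the TSE condition
    embedding. Conditioning on retrieved text embeddings keeps the modality
    consistent between training and inference and avoids exposing the raw
    target-audio embedding to the model."""
    audio = F.normalize(target_audio_emb, dim=-1)
    cache = F.normalize(caption_cache, dim=-1)
    sims = audio @ cache.T             # (batch, cache_size) cosine similarities
    nearest = sims.argmax(dim=-1)      # best-matching caption index per clip
    return caption_cache[nearest]      # (batch, dim) text-modality condition embeddings

# Toy usage with random tensors standing in for real CLAP outputs.
if __name__ == "__main__":
    torch.manual_seed(0)
    cache = torch.randn(1000, 512)     # pre-computed caption embedding cache
    audio_emb = torch.randn(8, 512)    # CLAP audio embeddings of a training batch
    cond = build_condition_embeddings(audio_emb, cache)
    print(cond.shape)                  # torch.Size([8, 512])
```

At inference, this lookup is simply bypassed: the user's language query is encoded by the CLAP text encoder and used directly as the condition embedding, so the model sees text-modality conditions in both phases.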

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2409.09398
Document Type:
Working Paper