Continual Self-Supervised Domain Adaptation for End-to-End Speaker Diarization
- Source :
- IEEE Spoken Language Technology Workshop (SLT 2022), IEEE Speech and Language Processing Technical Committee, Jan 2023, Doha, Qatar. To appear.
- Publication Year :
- 2023
- Publisher :
- IEEE, 2023.
Abstract
- In conventional domain adaptation for speaker diarization, a large collection of annotated conversations from the target domain is required. In this work, we propose a novel continual training scheme for domain adaptation of an end-to-end speaker diarization system, which processes one conversation at a time and benefits from full self-supervision thanks to pseudo-labels. The qualities of our method allow for autonomous adaptation (e.g. of a voice assistant to a new household), while also avoiding permanent storage of possibly sensitive user conversations. We experiment extensively on the 11 domains of the DIHARD III corpus and show the effectiveness of our approach with respect to a pre-trained baseline, achieving a relative 17% performance improvement. We also find that data augmentation and a well-defined target domain are key factors to avoid divergence and to benefit from transfer.
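- The abstract describes a per-conversation loop: the pre-trained model labels each incoming conversation with its own predictions (pseudo-labels), is fine-tuned on augmented copies of that conversation, and the audio is then discarded. The sketch below illustrates that idea only; `DiarizationModel`, `augment`, and `adapt_on_conversation` are hypothetical stand-ins, not the authors' implementation, and the toy data and loss are assumptions for the sake of a runnable example.

```python
# Minimal sketch of a continual self-supervised adaptation loop with pseudo-labels.
# All names and shapes below are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn


class DiarizationModel(nn.Module):
    """Toy stand-in for an end-to-end diarization network (frame-wise speaker activity)."""

    def __init__(self, n_features: int = 40, n_speakers: int = 4):
        super().__init__()
        self.rnn = nn.LSTM(n_features, 64, batch_first=True)
        self.head = nn.Linear(64, n_speakers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(x)
        return self.head(h)  # (batch, frames, speakers) activity logits


def augment(features: torch.Tensor) -> torch.Tensor:
    # Hypothetical augmentation (additive noise); the paper reports augmentation
    # as a key factor against divergence, but its exact recipe is not reproduced here.
    return features + 0.05 * torch.randn_like(features)


def adapt_on_conversation(model: nn.Module, features: torch.Tensor,
                          steps: int = 10, lr: float = 1e-4) -> None:
    """Adapt on a single conversation using its own pseudo-labels; the raw audio
    can then be discarded, so no user data needs permanent storage."""
    with torch.no_grad():
        pseudo_labels = (torch.sigmoid(model(features)) > 0.5).float()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        optimizer.zero_grad()
        loss = criterion(model(augment(features)), pseudo_labels)
        loss.backward()
        optimizer.step()


# Continual loop: conversations arrive one at a time (synthetic features here).
model = DiarizationModel()
for _ in range(3):
    conversation = torch.randn(1, 500, 40)  # (batch, frames, mel features)
    adapt_on_conversation(model, conversation)
```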
- Subjects :
- Self-supervised learning
Domain adaptation
[INFO.INFO-TS]Computer Science [cs]/Signal and Image Processing
End-to-end speaker diarization
Continual learning
[INFO.INFO-NE]Computer Science [cs]/Neural and Evolutionary Computing [cs.NE]
[INFO.INFO-CL]Computer Science [cs]/Computation and Language [cs.CL]
[INFO.INFO-AI]Computer Science [cs]/Artificial Intelligence [cs.AI]
Details
- Database :
- OpenAIRE
- Journal :
- 2022 IEEE Spoken Language Technology Workshop (SLT)
- Accession number :
- edsair.doi.dedup.....56d2b4ceebd2d3a3e1dc45cce747aa9a
- Full Text :
- https://doi.org/10.1109/slt54892.2023.10023195