MT-SLVR: Multi-Task Self-Supervised Learning for Transformation In(Variant) Representations
- Publication Year :
- 2023
Abstract
- Contrastive self-supervised learning has gained attention for its ability to create high-quality representations from large unlabelled data sets. A key reason these powerful features enable data-efficient learning of downstream tasks is that they provide augmentation invariance, which is often a useful inductive bias. However, the amount and type of invariance preferred are not known a priori and vary across downstream tasks. We therefore propose a multi-task self-supervised framework (MT-SLVR) that learns both variant and invariant features in a parameter-efficient manner. Our multi-task representation provides a strong and flexible feature that benefits diverse downstream tasks. We evaluate our approach on few-shot classification tasks drawn from a variety of audio domains and demonstrate improved classification performance on all of them.
- Comment: Last author version accepted to InterSpeech23. 5 pages.
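- To make the abstract's idea concrete, below is a minimal PyTorch sketch of one way a multi-task objective of this kind could be wired up: a shared encoder feeds a contrastive head (pushed toward augmentation-invariant features) and a transformation-prediction head (pushed toward augmentation-variant features). The SimCLR-style contrastive loss, the augmentation-classification branch, and all class names, dimensions, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a multi-task self-supervised objective in the spirit
# of MT-SLVR: one shared encoder with (a) a contrastive head, trained to be
# augmentation-INVARIANT, and (b) a transformation-prediction head, trained to
# be augmentation-VARIANT. Illustrative only; not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskSSL(nn.Module):
    def __init__(self, in_dim=1024, feat_dim=128, proj_dim=64, n_transforms=4):
        super().__init__()
        # Stand-in MLP encoder; the paper targets audio backbones.
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, feat_dim))
        self.contrastive_head = nn.Linear(feat_dim, proj_dim)    # invariant branch
        self.transform_head = nn.Linear(feat_dim, n_transforms)  # variant branch

    def forward(self, x):
        h = self.encoder(x)
        return self.contrastive_head(h), self.transform_head(h)

def nt_xent(z1, z2, temperature=0.5):
    """Simplified SimCLR-style contrastive loss over a batch of positive pairs."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature    # (B, B) cosine-similarity matrix
    targets = torch.arange(z1.size(0))    # positive pairs sit on the diagonal
    return F.cross_entropy(logits, targets)

# One hypothetical training step: two augmented views of each clip, plus a
# label recording which transformation produced the first view.
model = MultiTaskSSL()
view1, view2 = torch.randn(8, 1024), torch.randn(8, 1024)
t_labels = torch.randint(0, 4, (8,))      # index of the applied augmentation
z1, t_logits = model(view1)
z2, _ = model(view2)
# Invariant term pulls the two views together; variant term forces the shared
# features to retain enough augmentation information to classify the transform.
loss = nt_xent(z1, z2) + F.cross_entropy(t_logits, t_labels)
loss.backward()
```

- The tension between the two terms is the point: the contrastive loss alone would discard transformation information, while the prediction loss keeps it available, so downstream tasks can draw on whichever suits them.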
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2305.17191
- Document Type :
- Working Paper