Split to Merge: Unifying Separated Modalities for Unsupervised Domain Adaptation

Authors:
Li, Xinyao
Li, Yuke
Du, Zhekai
Li, Fengling
Lu, Ke
Li, Jingjing
Publication Year: 2024

Abstract

Large vision-language models (VLMs) such as CLIP have demonstrated strong zero-shot performance on unsupervised domain adaptation tasks. Yet most transfer approaches for VLMs focus on either the language or the visual branch, overlooking the nuanced interplay between the two modalities. In this work, we introduce a Unified Modality Separation (UniMoS) framework for unsupervised domain adaptation. Leveraging insights from modality gap studies, we craft a nimble modality separation network that distinctly disentangles CLIP's features into language-associated and vision-associated components. Our proposed Modality-Ensemble Training (MET) method fosters the exchange of modality-agnostic information while maintaining modality-specific nuances. We align features across domains using a modality discriminator. Comprehensive evaluations on three benchmarks show that our approach sets a new state of the art at minimal computational cost. Code: https://github.com/TL-UESTC/UniMoS

Comment: CVPR 2024 camera ready
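The abstract outlines two lightweight components: a modality separation network that disentangles CLIP features into language-associated and vision-associated parts, and a modality discriminator used to align features across domains. The PyTorch sketch below is only an illustration of that high-level idea, not the authors' implementation; the feature dimension (512), the linear heads, and the names ModalitySeparationNet and ModalityDiscriminator are our assumptions (see the repository linked above for the actual code).

import torch
import torch.nn as nn

class ModalitySeparationNet(nn.Module):
    """Hypothetical sketch: split a CLIP feature into two components."""
    def __init__(self, feat_dim: int = 512):
        super().__init__()
        # Two lightweight heads producing the language-associated and
        # vision-associated components described in the abstract.
        self.lang_head = nn.Linear(feat_dim, feat_dim)
        self.vis_head = nn.Linear(feat_dim, feat_dim)

    def forward(self, clip_feat: torch.Tensor):
        f_lang = self.lang_head(clip_feat)  # language-associated component
        f_vis = self.vis_head(clip_feat)    # vision-associated component
        return f_lang, f_vis

class ModalityDiscriminator(nn.Module):
    """Hypothetical stand-in for the modality discriminator."""
    def __init__(self, feat_dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 1),  # logit for which component a feature came from
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return self.net(feat)

if __name__ == "__main__":
    sep = ModalitySeparationNet()
    disc = ModalityDiscriminator()
    feats = torch.randn(8, 512)  # stand-in for a batch of CLIP image features
    f_lang, f_vis = sep(feats)
    print(disc(f_lang).shape, disc(f_vis).shape)  # torch.Size([8, 1]) twice

In a UniMoS-style setup, one would presumably train such a discriminator adversarially against the separation network so that features from the source and target domains become indistinguishable; the exact losses and the Modality-Ensemble Training schedule are specified in the paper, not here.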

Details

Database: arXiv
Publication Type: Report
Accession Number: edsarx.2403.06946
Document Type: Working Paper