
Deep Multi-Modality Adversarial Networks for Unsupervised Domain Adaptation.

Authors :
Ma, Xinhong
Zhang, Tianzhu
Xu, Changsheng
Source :
IEEE Transactions on Multimedia; Sep 2019, Vol. 21 Issue 9, p2419-2431, 13p
Publication Year :
2019

Abstract

Unsupervised domain adaptation aims to transfer domain knowledge from existing well-defined tasks to new ones where labels are unavailable. In real-world applications, domain discrepancy is usually uncontrollable, especially for multi-modality data, so there is strong motivation to address the multi-modality domain adaptation task. Because labels are unavailable in the target domain, learning semantic multi-modality representations and successfully adapting the classifier from the source to the target domain remain open challenges in multi-modality domain adaptation. To deal with these issues, we propose a multi-modality adversarial network (MMAN), which applies stacked attention to learn semantic multi-modality representations and reduces domain discrepancy via adversarial training. Unlike previous domain adaptation methods, which cannot make full use of source-domain category information, a multi-channel constraint is employed to capture fine-grained category knowledge that can enhance the discrimination of target samples and boost target performance on single-modality and multi-modality domain adaptation problems. We apply the proposed MMAN to two applications: cross-domain object recognition and cross-domain social event recognition. Extensive experimental evaluations demonstrate the effectiveness of the proposed model for unsupervised domain adaptation. [ABSTRACT FROM AUTHOR]
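The abstract names three ingredients: attention-based fusion of multi-modality features, adversarial training against a domain discriminator, and a multi-channel (class-aware) constraint. As a rough illustration only, the sketch below shows how the first two pieces are commonly wired up in PyTorch using a gradient reversal layer; all module names, feature dimensions, and the two-modality setup are assumptions for illustration, not the authors' actual MMAN architecture, and the multi-channel constraint is omitted.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity forward, negated (scaled)
    gradient backward, a standard trick for adversarial adaptation."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg() * ctx.lambd, None

class MultiModalAdversarialNet(nn.Module):
    """Hypothetical sketch: attention-weighted fusion of two modality
    features, a label classifier, and a domain discriminator trained
    adversarially through gradient reversal."""
    def __init__(self, dim_img=2048, dim_txt=300, dim_hid=256, n_classes=10):
        super().__init__()
        self.proj_img = nn.Linear(dim_img, dim_hid)
        self.proj_txt = nn.Linear(dim_txt, dim_hid)
        self.attn = nn.Linear(dim_hid, 1)          # scores each modality
        self.classifier = nn.Linear(dim_hid, n_classes)
        self.domain_disc = nn.Sequential(          # source vs. target
            nn.Linear(dim_hid, dim_hid), nn.ReLU(),
            nn.Linear(dim_hid, 2))

    def forward(self, img_feat, txt_feat, lambd=1.0):
        # Project both modalities into a shared space: (B, 2, dim_hid).
        mods = torch.stack([torch.tanh(self.proj_img(img_feat)),
                            torch.tanh(self.proj_txt(txt_feat))], dim=1)
        # Attention weights over the two modalities: (B, 2, 1).
        weights = torch.softmax(self.attn(mods), dim=1)
        fused = (weights * mods).sum(dim=1)        # attended fusion
        class_logits = self.classifier(fused)
        # Reversed gradients push the features toward domain confusion.
        domain_logits = self.domain_disc(GradReverse.apply(fused, lambd))
        return class_logits, domain_logits
```

In training, one would minimize a classification loss on labeled source samples plus a domain-classification loss on both domains; because of the gradient reversal, the feature extractor is simultaneously driven to make source and target features indistinguishable, which is the adversarial alignment the abstract describes.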

Details

Language :
English
ISSN :
1520-9210
Volume :
21
Issue :
9
Database :
Complementary Index
Journal :
IEEE Transactions on Multimedia
Publication Type :
Academic Journal
Accession number :
138275600
Full Text :
https://doi.org/10.1109/TMM.2019.2902100