
Adversarial Entropy Optimization for Unsupervised Domain Adaptation.

Authors :
Ma, Ao
Li, Jingjing
Lu, Ke
Zhu, Lei
Shen, Heng Tao
Source :
IEEE Transactions on Neural Networks & Learning Systems; Nov2022, Vol. 33 Issue 11, p6263-6274, 12p
Publication Year :
2022

Abstract

Domain adaptation addresses the challenging setting in which the probability distribution of the training (source) domain differs from that of the testing (target) domain. Recently, adversarial learning has become the dominant technique for domain adaptation. Adversarial domain adaptation methods typically train a feature learner and a domain discriminator simultaneously in order to learn domain-invariant features. Accordingly, how to effectively train the domain-adversarial model to learn domain-invariant features has become a key challenge in the community. To this end, we propose in this article a novel domain adaptation scheme named adversarial entropy optimization (AEO) to address this challenge. Specifically, we minimize the entropy when samples are drawn from the independent distributions of the source domain or the target domain, improving the discriminability of the model. At the same time, we maximize the entropy when features are drawn from the combined distribution of the source and target domains, so that the domain discriminator is confused and the transferability of the representations is promoted. This minimax regime aligns with the core idea of adversarial learning, endowing our model with both transferability and discriminability for domain adaptation tasks. Moreover, AEO is flexible and compatible with different deep networks and domain adaptation frameworks. Experiments on five data sets show that our method achieves state-of-the-art performance across diverse domain adaptation tasks. [ABSTRACT FROM AUTHOR]
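The minimax entropy objective described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the function names (`softmax`, `entropy`, `aeo_losses`) and the sign convention for the feature learner's loss are illustrative assumptions based only on the abstract's description: entropy is minimized on batches from the independent source or target distributions, and maximized (here via a negated loss term, as in gradient-reversal-style training) on batches from the combined distribution.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs, eps=1e-12):
    """Shannon entropy of a probability vector (natural log)."""
    return -sum(p * math.log(p + eps) for p in probs)

def aeo_losses(logits_independent, logits_combined):
    """Hypothetical AEO-style loss pair (illustrative only):
    - h_ind: entropy on an independent-domain batch, to be MINIMIZED
      by the discriminator/classifier (discriminability);
    - the combined-batch entropy is to be MAXIMIZED by the feature
      learner (transferability), so its loss carries a negative sign."""
    h_ind = entropy(softmax(logits_independent))
    h_comb = entropy(softmax(logits_combined))
    return h_ind, -h_comb
```

As a sanity check on the entropy term: uniform logits give maximal entropy (log of the number of classes), while confident logits give lower entropy, which is the direction each player of the minimax game pushes.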

Details

Language :
English
ISSN :
2162-237X
Volume :
33
Issue :
11
Database :
Complementary Index
Journal :
IEEE Transactions on Neural Networks & Learning Systems
Publication Type :
Periodical
Accession number :
160690157
Full Text :
https://doi.org/10.1109/TNNLS.2021.3073119