
Domain Conditioned Adaptation Network

Authors:
Li, Shuang
Liu, Chi Harold
Lin, Qiuxia
Xie, Binhui
Ding, Zhengming
Huang, Gao
Tang, Jian
Publication Year: 2020

Abstract

Tremendous research efforts have been made to advance deep domain adaptation (DA) by seeking domain-invariant features. Most existing deep DA models focus only on aligning the feature representations of task-specific layers across domains while adopting a fully shared convolutional architecture for the source and target. However, we argue that such strongly shared convolutional layers may be harmful to domain-specific feature learning when the source and target data distributions differ to a large extent. In this paper, we relax the shared-convnets assumption made by previous DA methods and propose a Domain Conditioned Adaptation Network (DCAN), which aims to excite distinct convolutional channels with a domain conditioned channel attention mechanism. As a result, critical low-level domain-dependent knowledge can be explored appropriately. To the best of our knowledge, this is the first work to explore domain-wise convolutional channel activation for deep DA networks. Moreover, to effectively align high-level feature distributions across the two domains, we further deploy domain conditioned feature correction blocks after the task-specific layers, which explicitly correct the domain discrepancy. Extensive experiments on three cross-domain benchmarks demonstrate that the proposed approach outperforms existing methods by a large margin, especially on very tough cross-domain learning tasks.

Comment: Accepted by AAAI 2020
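To make the two ingredients described in the abstract more concrete, below is a minimal PyTorch sketch of what a domain conditioned channel attention module and a feature correction block could look like. The class names, layer sizes, the SE-style squeeze-and-excitation form, and the residual correction for target features are illustrative assumptions, not the authors' released implementation; the actual DCAN design may differ in its details.

```python
import torch
import torch.nn as nn


class DomainConditionedChannelAttention(nn.Module):
    """Channel attention with separate excitation branches for source and
    target inputs, so each domain can activate different convolutional
    channels. A sketch of the idea; dimensions are assumptions."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average pooling (squeeze)

        def branch() -> nn.Sequential:
            return nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        self.source_branch = branch()  # excitation weights for source data
        self.target_branch = branch()  # excitation weights for target data

    def forward(self, x: torch.Tensor, is_source: bool) -> torch.Tensor:
        b, c, _, _ = x.shape
        squeezed = self.pool(x).view(b, c)
        branch = self.source_branch if is_source else self.target_branch
        weights = branch(squeezed).view(b, c, 1, 1)
        return x * weights  # domain-dependent channel re-weighting


class FeatureCorrectionBlock(nn.Module):
    """Residual correction applied to target features after a task-specific
    layer, intended to explicitly reduce the domain discrepancy."""

    def __init__(self, dim: int):
        super().__init__()
        self.correction = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(inplace=True), nn.Linear(dim, dim)
        )

    def forward(self, feat: torch.Tensor, is_source: bool) -> torch.Tensor:
        if is_source:
            return feat  # source features pass through unchanged
        return feat + self.correction(feat)  # corrected target features


if __name__ == "__main__":
    attn = DomainConditionedChannelAttention(channels=64)
    x = torch.randn(4, 64, 28, 28)
    print(attn(x, is_source=False).shape)  # torch.Size([4, 64, 28, 28])
```

In this sketch the two excitation branches share the backbone's convolutional features but produce domain-specific channel weights, which mirrors the abstract's point that fully shared convolutional layers can suppress domain-specific low-level cues.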

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2005.06717
Document Type: Working Paper