
Faster Domain Adaptation Networks.

Authors :
Li, Jingjing
Jing, Mengmeng
Su, Hongzu
Lu, Ke
Zhu, Lei
Shen, Heng Tao
Source :
IEEE Transactions on Knowledge & Data Engineering; Dec 2022, Vol. 34, Issue 12, p5770-5783, 14p
Publication Year :
2022

Abstract

It is widely acknowledged that the success of deep learning is built upon large-scale training data and tremendous computing power. However, data and computing power are not always available in many real-world applications. In this paper, we address the machine learning problem where training data is scarce and computing power is limited. Specifically, we investigate domain adaptation, which transfers knowledge from a labeled source domain to an unlabeled target domain, so that little training data is needed from the target domain. At the same time, we consider settings where the running environment is constrained, e.g., in edge computing the end device has very limited resources. Technically, we present the Faster Domain Adaptation (FDA) protocol and further report two paradigms of FDA: early stopping and amid skipping. The former accelerates domain adaptation through multiple early exit points. The latter speeds up adaptation by judiciously skipping several intermediate neural network blocks. Extensive experiments on standard benchmarks verify that our method achieves comparable and even better accuracy while consuming far fewer computing resources. To the best of our knowledge, very few works in the community have investigated accelerating knowledge adaptation. This work is expected to inspire more discussion on the topic. [ABSTRACT FROM AUTHOR]
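To make the two paradigms concrete, below is a minimal sketch of early exits and intermediate-block skipping, assuming a PyTorch-style model. It illustrates the general idea only and is not the authors' implementation; the names EarlyExitNet, exit_threshold, and skip_mask are hypothetical, and the batch-level confidence test is a simplification (per-sample exiting is the more common design).

import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyExitNet(nn.Module):
    """A backbone split into blocks, with a lightweight classifier
    ("exit head") attached after every block."""
    def __init__(self, dims, num_classes):
        super().__init__()
        # dims = [input_dim, width_1, width_2, ...]; each block is one MLP stage.
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(a, b), nn.ReLU())
            for a, b in zip(dims[:-1], dims[1:])
        )
        self.exits = nn.ModuleList(nn.Linear(b, num_classes) for b in dims[1:])

    def forward(self, x, exit_threshold=0.9, skip_mask=None):
        # Early stopping: return at the first exit whose softmax confidence
        # clears exit_threshold. Amid skipping: skip_mask[i] == True bypasses
        # block i (its input and output widths must match for this to work).
        pred = None
        for i, (block, head) in enumerate(zip(self.blocks, self.exits)):
            if skip_mask is not None and skip_mask[i]:
                continue  # bypass this intermediate block and its exit
            x = block(x)
            conf, pred = F.softmax(head(x), dim=1).max(dim=1)
            if conf.min().item() >= exit_threshold:  # whole batch is confident
                return pred  # exit early; deeper blocks are never run
        return pred  # fell through to the deepest evaluated exit

# Example: a 3-block network on 64-d inputs; bypass the middle block.
net = EarlyExitNet(dims=[64, 128, 128, 128], num_classes=10)
x = torch.randn(8, 64)
with torch.no_grad():
    pred = net(x, exit_threshold=0.8, skip_mask=[False, True, False])

Both mechanisms trade a small amount of accuracy headroom for compute: early exits stop inference once confidence is sufficient, while skipping removes whole blocks from the forward pass.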

Details

Language :
English
ISSN :
1041-4347
Volume :
34
Issue :
12
Database :
Complementary Index
Journal :
IEEE Transactions on Knowledge & Data Engineering
Publication Type :
Academic Journal
Accession number :
160692076
Full Text :
https://doi.org/10.1109/TKDE.2021.3060473