
Selected confidence sample labeling for domain adaptation.

Authors:
Zheng, Zefeng
Teng, Shaohua
Wu, Naiqi
Teng, Luyao
Zhang, Wei
Fei, Lunke
Source:
Neurocomputing, Oct 2023, Vol. 555, Article 126624.
Publication Year:
2023

Abstract

Unsupervised domain adaptation (UDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain. Recently, progressive labeling (PL) has been proposed to select reliable target samples for training and thereby achieve reliable knowledge learning. Although PL achieves fruitful results, two problems limit its performance: (a) PL may fail to filter out uncertain samples that lie near the classification boundaries, so low-quality target samples can be selected for training and errors can accumulate; and (b) PL may overlook consistency in the sample selection stage across iterations, which can make the selection unstable. To cope with these problems, we propose a novel method called Selected Confidence Sample Labeling (SCSL). SCSL consists of three parts: Discriminative Progressive Labeling (DPL), a Consistency Strategy (CS), and Differential Learning (DL). First, DPL selects high-confidence target samples according to the margin between the highest and second-highest predicted class probabilities. In this way, uncertain samples are filtered out and the quality of the selected target samples is ensured. Second, CS comprises a group of consistency strategies that keep the selected high-confidence target samples close to their class centroids. This further improves the confidence of the selected target samples and ensures that the model does not remove or replace them in later iterations. Finally, DL is a bi-strategic training approach that applies CS and top-k fuzzy probability clustering to train the high-confidence and the remaining target samples, respectively. In doing so, all target samples are trained simultaneously and the generalization of the model is improved. Extensive experiments on four benchmark datasets, in comparison with several advanced algorithms, demonstrate the superiority of SCSL.
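Note: the margin-based selection in DPL (the gap between the highest and second-highest class probabilities) and the centroid-proximity idea in CS can be sketched in a few lines. The following NumPy snippet is a minimal illustration under stated assumptions, not the paper's implementation: the threshold tau, the nearest-source-centroid rule, and all function and variable names are hypothetical.

    import numpy as np

    def select_by_margin(probs, tau=0.5):
        """Select target samples whose gap between the highest and
        second-highest predicted class probabilities exceeds tau.

        probs: (n_samples, n_classes) array of softmax outputs.
        tau is a hypothetical threshold, not given in the abstract.
        Returns selected indices and their argmax pseudo-labels.
        """
        ordered = np.sort(probs, axis=1)            # ascending per row
        margins = ordered[:, -1] - ordered[:, -2]   # top-1 minus top-2
        idx = np.flatnonzero(margins > tau)
        return idx, probs[idx].argmax(axis=1)

    def centroid_consistent(src_feats, src_labels, tgt_feats, idx, pseudo):
        """Keep only the selected target samples whose nearest
        source-class centroid agrees with their pseudo-label -- one
        plausible reading of the centroid-proximity consistency check,
        not the paper's exact rule."""
        classes = np.unique(src_labels)
        centroids = np.stack([src_feats[src_labels == c].mean(axis=0)
                              for c in classes])
        # Euclidean distance from each selected target feature to
        # every class centroid.
        dists = np.linalg.norm(
            tgt_feats[idx, None, :] - centroids[None, :, :], axis=2)
        nearest = classes[dists.argmin(axis=1)]
        return idx[nearest == pseudo]

    # Toy usage with random data (illustrative only).
    rng = np.random.default_rng(0)
    probs = rng.dirichlet(np.ones(5), size=200)   # fake target predictions
    tgt_feats = rng.normal(size=(200, 16))        # fake target features
    src_feats = rng.normal(size=(300, 16))
    src_labels = rng.integers(0, 5, size=300)
    idx, pseudo = select_by_margin(probs, tau=0.3)
    kept = centroid_consistent(src_feats, src_labels, tgt_feats, idx, pseudo)

In this reading, samples that pass both the margin test and the centroid test would form the high-confidence set trained with CS, while the remainder would be handled by the top-k fuzzy probability clustering branch of DL.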

Details

Language:
English
ISSN:
0925-2312
Volume:
555
Database:
Academic Search Index
Journal:
Neurocomputing
Publication Type:
Academic Journal
Accession Number:
170721238
Full Text:
https://doi.org/10.1016/j.neucom.2023.126624