Self-Paced Co-Training of Graph Neural Networks for Semi-Supervised Node Classification.

Authors :
Gong M
Zhou H
Qin AK
Liu W
Zhao Z
Source :
IEEE transactions on neural networks and learning systems [IEEE Trans Neural Netw Learn Syst] 2023 Nov; Vol. 34 (11), pp. 9234-9247. Date of Electronic Publication: 2023 Oct 27.
Publication Year :
2023

Abstract

Graph neural networks (GNNs) have demonstrated great success in many graph data-based applications. The impressive behavior of GNNs typically relies on the availability of a sufficient amount of labeled data for model training. However, in practice, obtaining a large number of annotations is prohibitively labor-intensive or even impossible. Co-training is a popular semi-supervised learning (SSL) paradigm that trains multiple models on a common training set while augmenting the limited labeled data used for training each model with pseudolabeled data generated from the predictions of the other models. Most existing co-training works do not control the quality of the pseudolabeled data they use. Therefore, the inaccurate pseudolabels generated by immature models in the early stage of training are likely to cause noticeable errors when they are used to augment the training data for other models. To address this issue, we propose a self-paced co-training for GNNs (SPC-GNN) framework for semi-supervised node classification. This framework trains multiple GNNs with the same or different structures on different representations of the same training data. Each GNN carries out SSL using both the originally available labeled data and the augmented pseudolabeled data generated by the other GNNs. To control the quality of pseudolabels, a self-paced label augmentation strategy is designed so that pseudolabels generated at a higher confidence level are utilized earlier during training, mitigating the negative impact of inaccurate pseudolabels on training data augmentation and, accordingly, on the subsequent training process. Finally, each of the trained GNNs is evaluated on a validation set, and the best-performing one is chosen as the output. To improve the training effectiveness of the framework, we devise a pretraining stage followed by a two-step optimization scheme to train the GNNs. Experimental results on the node classification task demonstrate that the proposed framework achieves significant improvement over state-of-the-art SSL methods.
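To make the self-paced label augmentation idea concrete, below is a minimal Python sketch of one way such a confidence schedule could work: a peer model's softmax outputs are filtered by a threshold that starts strict and is gradually relaxed, so only high-confidence pseudolabels are admitted early in training. The function name, the linear annealing schedule, and the threshold values are illustrative assumptions, not the authors' implementation.

    # Illustrative sketch (assumed names and schedule), not the SPC-GNN code.
    import numpy as np

    def select_pseudolabels(peer_probs, epoch, max_epochs,
                            start_threshold=0.95, end_threshold=0.7):
        """Select pseudolabels from a peer model's softmax outputs.

        The confidence threshold starts high and is gradually relaxed, so
        high-confidence pseudolabels enter the training set earlier and
        lower-confidence ones only later (a "self-paced" schedule).
        """
        # Linearly anneal the threshold from start_threshold to end_threshold.
        t = epoch / max(1, max_epochs - 1)
        threshold = start_threshold + t * (end_threshold - start_threshold)

        confidence = peer_probs.max(axis=1)   # per-node max class probability
        labels = peer_probs.argmax(axis=1)    # predicted class per node
        selected = np.where(confidence >= threshold)[0]
        return selected, labels[selected]

    # Toy usage: 5 unlabeled nodes, 3 classes, early epoch -> strict threshold.
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(5, 3))
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    idx, pseudo = select_pseudolabels(probs, epoch=0, max_epochs=100)
    print(idx, pseudo)

In a co-training loop, each GNN would receive the pseudolabels selected from its peers' predictions in addition to the original labeled nodes, with the relaxing threshold keeping early-stage, low-confidence predictions out of the augmented training set.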

Details

Language :
English
ISSN :
2162-2388
Volume :
34
Issue :
11
Database :
MEDLINE
Journal :
IEEE transactions on neural networks and learning systems
Publication Type :
Academic Journal
Accession number :
35312623
Full Text :
https://doi.org/10.1109/TNNLS.2022.3157688