Self-Paced Co-Training of Graph Neural Networks for Semi-Supervised Node Classification
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2023-11, Vol. PP (11), pp. 1-14
Main Authors:
Format: Article
Language: English
Subjects:
Summary: Graph neural networks (GNNs) have demonstrated great success in many graph data-based applications. The impressive behavior of GNNs typically relies on the availability of a sufficient amount of labeled data for model training. In practice, however, obtaining a large number of annotations is prohibitively labor-intensive or even impossible. Co-training is a popular semi-supervised learning (SSL) paradigm that trains multiple models on a common training set while augmenting the limited labeled data available to each model with pseudolabeled data generated from the predictions of the other models. Most existing co-training works do not control the quality of the pseudolabeled data they use, so the inaccurate pseudolabels generated by immature models in the early stage of training are likely to cause noticeable errors when used to augment the training data of other models. To address this issue, we propose a self-paced co-training for GNNs (SPC-GNN) framework for semi-supervised node classification. The framework trains multiple GNNs, with the same or different structures, on different representations of the same training data. Each GNN carries out SSL using both the originally available labeled data and the augmented pseudolabeled data generated by the other GNNs. To control the quality of pseudolabels, a self-paced label augmentation strategy is designed so that pseudolabels generated at a higher confidence level are utilized earlier during training, mitigating the negative impact of inaccurate pseudolabels on training data augmentation and, accordingly, on the subsequent training process. Finally, each trained GNN is evaluated on a validation set, and the best-performing one is chosen as the output. To improve the training effectiveness of the framework, we devise a pretraining stage followed by a two-step optimization scheme to train the GNNs. Experimental results on the node classification task demonstrate that the proposed framework achieves significant improvement over state-of-the-art SSL methods.
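
To make the self-paced label augmentation idea in the summary concrete, the following is a minimal sketch of a confidence-thresholded co-training loop, not the authors' SPC-GNN implementation. The linearly relaxed threshold schedule, the tiny dense two-layer GCN, and all names (co_train, DenseGCN, views) are assumptions made for illustration; the only elements taken from the summary are that each model's confident pseudolabels augment the other model's training pool and that high-confidence pseudolabels are used earlier.

```python
# Illustrative sketch of self-paced co-training of GNNs; assumptions noted above.
import torch
import torch.nn.functional as F

class DenseGCN(torch.nn.Module):
    """Minimal two-layer GCN on a dense normalized adjacency (toy stand-in)."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.lin1 = torch.nn.Linear(in_dim, hid_dim)
        self.lin2 = torch.nn.Linear(hid_dim, n_classes)

    def forward(self, a_hat, x):
        h = F.relu(a_hat @ self.lin1(x))
        return a_hat @ self.lin2(h)  # per-node class logits

def normalize_adj(adj):
    """Symmetric normalization D^-1/2 (A + I) D^-1/2."""
    a = adj + torch.eye(adj.size(0))
    d = a.sum(1).rsqrt()
    return d.unsqueeze(1) * a * d.unsqueeze(0)

def co_train(models, views, a_hat, y, labeled, unlabeled,
             rounds=10, epochs=50, tau0=0.95, tau_min=0.6):
    """models: list of GNNs; views: one feature matrix per model;
    labeled/unlabeled: index tensors of nodes; y: full label vector."""
    opts = [torch.optim.Adam(m.parameters(), lr=0.01) for m in models]
    # Each model starts from the same small labeled set.
    pools = [(labeled.clone(), y[labeled].clone()) for _ in models]
    for r in range(rounds):
        # Self-paced schedule (assumed linear): start with a strict confidence
        # threshold and relax it, so only high-confidence pseudolabels are
        # used in the early rounds.
        tau = max(tau_min, tau0 - r * (tau0 - tau_min) / max(rounds - 1, 1))
        for m, opt, x, (idx, lab) in zip(models, opts, views, pools):
            for _ in range(epochs):
                opt.zero_grad()
                loss = F.cross_entropy(m(a_hat, x)[idx], lab)
                loss.backward()
                opt.step()
        # Each model augments the OTHER model's pool with its confident
        # predictions on unlabeled nodes (duplicates across rounds are not
        # removed here, for brevity).
        for i, (m, x) in enumerate(zip(models, views)):
            with torch.no_grad():
                prob = F.softmax(m(a_hat, x)[unlabeled], dim=1)
            conf, pseudo = prob.max(1)
            keep = conf >= tau
            j = (i + 1) % len(models)
            pools[j] = (torch.cat([pools[j][0], unlabeled[keep]]),
                        torch.cat([pools[j][1], pseudo[keep]]))
    return models
```

Per the summary, after such a loop each trained GNN would be evaluated on a validation set and the best-performing one returned as the output; that model-selection step, and the pretraining and two-step optimization scheme, are omitted from this sketch.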
ISSN: 2162-237X, 2162-2388
DOI: 10.1109/TNNLS.2022.3157688