The generalization error of graph convolutional networks may enlarge with more layers.
- Source :
- Neurocomputing, Feb 2021, Vol. 424, p. 97-106. 10p.
- Publication Year :
- 2021
Abstract
- Graph Neural Networks (GNNs) are powerful methods for analyzing non-Euclidean data. As a dominant type of GNN, Graph Convolutional Networks (GCNs) have wide applications. However, analysis of the generalization error of multilayer GCNs is limited. Building on a review of single-layer GCNs, this paper analyzes the generalization error of two-layer GCNs and extends the conclusion to general GCN models. First, this paper examines two-layer GCNs and establishes the stability of the GCN algorithm. Then, based on this algorithmic stability, the generalization stability of multilayer GCNs is obtained. This paper shows that the algorithmic stability of GCNs depends on the graph filters, their product with the node features, and the training procedure. Furthermore, the generalization error gap of GCNs tends to enlarge with more layers, which explains why deeper GCNs perform relatively poorly on test datasets. [ABSTRACT FROM AUTHOR]
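To make the objects in the abstract concrete, here is a minimal numpy sketch of the two-layer GCN forward pass the paper analyzes, assuming the standard propagation rule Z = g(L) ReLU(g(L) X W1) W2 with a symmetrically normalized adjacency as the graph filter g(L); the specific filter and weight shapes are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def normalized_graph_filter(A):
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2} with self-loops.
    One common choice of graph filter g(L); the paper's stability bounds
    are stated in terms of such filters and their product with features."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def two_layer_gcn(A, X, W1, W2):
    """Forward pass of a two-layer GCN: Z = g(L) ReLU(g(L) X W1) W2."""
    g = normalized_graph_filter(A)
    H = np.maximum(g @ X @ W1, 0.0)  # first layer + ReLU
    return g @ H @ W2                # second (output) layer

# Tiny illustration on a 3-node path graph with random weights.
rng = np.random.default_rng(0)
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = rng.standard_normal((3, 4))    # node features
W1 = rng.standard_normal((4, 8))
W2 = rng.standard_normal((8, 2))
Z = two_layer_gcn(A, X, W1, W2)
print(Z.shape)  # one 2-dimensional output per node
```

Each extra layer multiplies in another application of g(L) and a weight matrix, which is the mechanism behind the paper's observation that the generalization error gap grows with depth.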
- Subjects :
- *GENERALIZATION
*ALGORITHMS
*FILTERS & filtration
Details
- Language :
- English
- ISSN :
- 0925-2312
- Volume :
- 424
- Database :
- Academic Search Index
- Journal :
- Neurocomputing
- Publication Type :
- Academic Journal
- Accession number :
- 148202677
- Full Text :
- https://doi.org/10.1016/j.neucom.2020.10.109