
Relational Graph Convolutional Networks Do Not Learn Sound Rules

Authors:
Morris, Matthew
Tena Cucala, David J.
Cuenca Grau, Bernardo
Horrocks, Ian
Publication Year:
2024

Abstract

Graph neural networks (GNNs) are frequently used to predict missing facts in knowledge graphs (KGs). Motivated by the lack of explainability of these models' outputs, recent work has aimed to explain their predictions using Datalog, a widely used logic-based formalism. However, such work has been restricted to certain subclasses of GNNs. In this paper, we consider one of the most popular GNN architectures for KGs, R-GCN, and we provide two methods to extract rules that explain its predictions and are sound, in the sense that every fact derived by the rules is also predicted by the GNN, for any input dataset. Furthermore, we provide a method that can verify that certain classes of Datalog rules are not sound for the R-GCN. In our experiments, we train R-GCNs on KG completion benchmarks, and we are able to verify that no Datalog rule is sound for these models, even though the models often achieve high to near-perfect accuracy. This raises concerns about the ability of R-GCN models to generalise and about the explainability of their predictions. We further provide two variations of the R-GCN training paradigm that encourage it to learn sound rules, and we find a trade-off between model accuracy and the number of learned sound rules.

Comment: Full version (with appendices) of the paper accepted to KR 2024 (21st International Conference on Principles of Knowledge Representation and Reasoning)
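To make the soundness notion concrete, the following is a minimal sketch of how one might search for a counterexample to the soundness of a single Datalog rule against a link predictor. All names here (`apply_rule`, `predict_fact`, the toy rule and facts) are illustrative assumptions, not from the paper, and checking one dataset is only a counterexample search: the paper's soundness property quantifies over all input datasets.

```python
# Hedged sketch: a rule is sound for a model (on a dataset) only if every
# fact the rule derives is also predicted by the model.

def apply_rule(facts):
    """Toy Datalog rule: worksAt(x, y), locatedIn(y, z) -> basedIn(x, z)."""
    derived = set()
    for (p1, x, y) in facts:
        if p1 != "worksAt":
            continue
        for (p2, y2, z) in facts:
            if p2 == "locatedIn" and y2 == y:
                derived.add(("basedIn", x, z))
    return derived

def predict_fact(fact):
    """Stand-in for a trained R-GCN's thresholded score on a candidate fact."""
    predicted = {("basedIn", "alice", "uk")}  # toy model predictions
    return fact in predicted

def soundness_counterexample(facts):
    """Return a rule-derived fact the model does NOT predict, if one exists."""
    for fact in apply_rule(facts):
        if not predict_fact(fact):
            return fact  # witness that the rule is not sound for this model
    return None

facts = {("worksAt", "alice", "oxford"), ("locatedIn", "oxford", "uk"),
         ("worksAt", "bob", "oxford")}
print(soundness_counterexample(facts))  # -> ("basedIn", "bob", "uk")
```

Here the rule derives `basedIn` facts for both alice and bob, but the toy model predicts only alice's, so the rule is not sound for this model. The paper's verification methods decide this property symbolically over all datasets rather than by enumeration.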

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2408.10261
Document Type:
Working Paper