
How Does Knowledge Graph Embedding Extrapolate to Unseen Data: A Semantic Evidence View

Authors :
Li, Ren
Cao, Yanan
Zhu, Qiannan
Bi, Guanqun
Fang, Fang
Liu, Yi
Li, Qian
Source :
Proceedings of the AAAI Conference on Artificial Intelligence. 36:5781-5791
Publication Year :
2022
Publisher :
Association for the Advancement of Artificial Intelligence (AAAI), 2022.

Abstract

Knowledge Graph Embedding (KGE) aims to learn representations for entities and relations. Most KGE models have achieved great success, especially in extrapolation scenarios. Specifically, given an unseen triple (h, r, t), a trained model can still correctly predict t from (h, r, ?), or h from (?, r, t); such extrapolation ability is impressive. However, most existing KGE works focus on the design of delicate triple modeling functions, which mainly tell us how to measure the plausibility of observed triples, but offer limited explanation of why the methods can extrapolate to unseen data and what the important factors are that help KGE extrapolate. Therefore, in this work, we attempt to study KGE extrapolation through two problems: 1. How does KGE extrapolate to unseen data? 2. How do we design a KGE model with better extrapolation ability? For problem 1, we first discuss the impact factors for extrapolation and, from the relation, entity, and triple levels respectively, propose three Semantic Evidences (SEs), which can be observed from the train set and provide important semantic information for extrapolation. We then verify the effectiveness of the SEs through extensive experiments on several typical KGE methods. For problem 2, to make better use of the three levels of SE, we propose a novel GNN-based KGE model, called Semantic Evidence aware Graph Neural Network (SE-GNN). In SE-GNN, each level of SE is modeled explicitly by the corresponding neighbor pattern and merged sufficiently by multi-layer aggregation, which contributes to obtaining more extrapolative knowledge representations. Finally, through extensive experiments on the FB15k-237 and WN18RR datasets, we show that SE-GNN achieves state-of-the-art performance on the Knowledge Graph Completion task and exhibits better extrapolation ability. Our code is available at https://github.com/renli1024/SE-GNN.
Comment: Accepted by AAAI'22
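To make the extrapolation setting described in the abstract concrete, the following is a minimal, self-contained sketch of tail prediction for a query (h, r, ?) with a generic translational (TransE-style) scoring function. It is illustrative only and is not the SE-GNN implementation; the embeddings, the entity/relation counts, and the helper names `score` and `predict_tail` are placeholder assumptions, whereas SE-GNN would supply learned, SE-aware representations in place of the random vectors.

```python
import numpy as np

# Illustrative sketch: answering the "extrapolation" query (h, r, ?) by ranking
# all candidate tail entities with a TransE-style score. Embeddings below are
# random stand-ins, not the representations learned by SE-GNN.

rng = np.random.default_rng(0)
num_entities, num_relations, dim = 100, 10, 32

entity_emb = rng.normal(size=(num_entities, dim))     # placeholder entity embeddings
relation_emb = rng.normal(size=(num_relations, dim))  # placeholder relation embeddings

def score(h, r, t):
    """Plausibility of triple (h, r, t): higher is more plausible (negative L2 distance)."""
    return -np.linalg.norm(entity_emb[h] + relation_emb[r] - entity_emb[t])

def predict_tail(h, r, k=5):
    """Rank all entities as candidate tails for the query (h, r, ?) and return the top k."""
    scores = np.array([score(h, r, t) for t in range(num_entities)])
    return np.argsort(-scores)[:k]

print(predict_tail(h=3, r=1))  # top-5 predicted tail entity indices
```

In the Knowledge Graph Completion experiments mentioned above, this ranking is computed for every test query and summarized with metrics such as MRR and Hits@k; extrapolation ability corresponds to ranking the true tail highly even though the test triple was never observed during training.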

Details

ISSN :
2374-3468 and 2159-5399
Volume :
36
Database :
OpenAIRE
Journal :
Proceedings of the AAAI Conference on Artificial Intelligence
Accession number :
edsair.doi.dedup.....28f7bee4b5f6b23f8cd719bef55650f4