
Faithful Embeddings for EL++ Knowledge Bases

Authors :
Xiong, Bo
Potyka, Nico
Tran, Trung-Kien
Nayyeri, Mojtaba
Staab, Steffen
Publication Year :
2022

Abstract

Recently, increasing effort has been put into learning continuous representations for symbolic knowledge bases (KBs). However, these approaches either embed only the data-level knowledge (ABox) or suffer from inherent limitations when dealing with concept-level knowledge (TBox), i.e., they cannot faithfully model the logical structure present in the KBs. We present BoxEL, a geometric KB embedding approach that better captures the logical structure (i.e., ABox and TBox axioms) of the description logic EL++. BoxEL models concepts in a KB as axis-parallel boxes, which are well suited for modeling concept intersection, entities as points inside boxes, and relations between concepts/entities as affine transformations. We show theoretical guarantees (soundness) of BoxEL for preserving logical structure: a learned BoxEL embedding with loss 0 is a (logical) model of the KB. Experimental results on (plausible) subsumption reasoning and a real-world application, protein-protein interaction prediction, show that BoxEL outperforms traditional knowledge graph embedding methods as well as state-of-the-art EL++ embedding approaches.
Comment: Published in ISWC'22
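The geometric intuition behind the abstract can be illustrated in a few lines of code. The following is a minimal sketch, not the authors' BoxEL implementation: it only shows how axis-parallel boxes can represent concepts, points can represent entities, affine maps can represent relations, and how containment mirrors subsumption and membership. All class and function names here are hypothetical.

```python
# Illustrative sketch only (not the authors' BoxEL code): axis-parallel boxes for
# concepts, points for entities, affine transformations for relations.
import numpy as np


class Box:
    """An axis-parallel box given by its lower and upper corners."""

    def __init__(self, lower, upper):
        self.lower = np.asarray(lower, dtype=float)
        self.upper = np.asarray(upper, dtype=float)

    def contains_point(self, p):
        """Models concept membership C(a): the entity point lies inside the box."""
        p = np.asarray(p, dtype=float)
        return bool(np.all(self.lower <= p) and np.all(p <= self.upper))

    def intersect(self, other):
        """Intersection of axis-parallel boxes is again an axis-parallel box
        (possibly empty), which is why boxes suit concept conjunction."""
        return Box(np.maximum(self.lower, other.lower),
                   np.minimum(self.upper, other.upper))

    def is_subbox_of(self, other):
        """Box containment mirrors concept subsumption C ⊑ D."""
        return bool(np.all(other.lower <= self.lower)
                    and np.all(self.upper <= other.upper))


def affine(point, scale, translation):
    """A relation applied to an entity point as an affine transformation."""
    return scale * np.asarray(point, dtype=float) + translation


if __name__ == "__main__":
    person = Box([0.0, 0.0], [4.0, 4.0])
    parent = Box([1.0, 1.0], [3.0, 3.0])
    alice = np.array([2.0, 2.0])

    print(parent.is_subbox_of(person))   # True: Parent ⊑ Person
    print(person.contains_point(alice))  # True: Person(alice)
    # A toy relation as an affine map on the entity point:
    print(affine(alice, scale=np.array([1.0, 1.0]),
                 translation=np.array([0.5, -0.5])))
```

In the actual approach these geometric parameters are learned by minimizing a loss over the KB's axioms; the soundness result cited in the abstract says that if this loss reaches 0, the resulting geometric configuration is a logical model of the KB.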

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1333745811
Document Type :
Electronic Resource