MEAformer: Multi-modal Entity Alignment Transformer for Meta Modality Hybrid
- Source :
- ACM MM 2023
- Publication Year :
- 2022
Abstract
- Multi-modal entity alignment (MMEA) aims to discover identical entities across different knowledge graphs (KGs) whose entities are associated with relevant images. However, current MMEA algorithms rely on KG-level modality fusion strategies for multi-modal entity representation, which ignore the variations in modality preferences across entities, thus compromising robustness against noise in modalities such as blurry images and relations. This paper introduces MEAformer, a multi-modal entity alignment transformer approach for meta modality hybrid, which dynamically predicts the mutual correlation coefficients among modalities for more fine-grained entity-level modality fusion and alignment. Experimental results demonstrate that our model not only achieves SOTA performance in multiple training scenarios, including supervised, unsupervised, iterative, and low-resource settings, but also has a limited number of parameters, efficient runtime, and interpretability. Our code is available at https://github.com/zjukg/MEAformer.
- Comment: ACM Multimedia 2023 Accepted. Repo: https://github.com/zjukg/MEAformer
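- The abstract's key idea is entity-level (rather than KG-level) modality fusion: each entity gets its own weights over its graph, relation, and image representations. Below is a minimal, hedged sketch of that idea in PyTorch, using self-attention across the modality axis to produce per-entity modality weights. The class and parameter names (EntityLevelModalityFusion, d_model, n_modalities) are hypothetical illustrations, not the authors' implementation; see the official repository for the actual MEAformer code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EntityLevelModalityFusion(nn.Module):
    """Illustrative sketch: fuse per-modality entity embeddings with
    entity-specific weights, so each entity decides how much to trust
    each modality (graph / relation / image / etc.)."""

    def __init__(self, d_model: int, n_heads: int = 1):
        super().__init__()
        # Self-attention over the modality axis lets each modality's
        # representation attend to the others, approximating the
        # "mutual correlation coefficients among modalities".
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.score = nn.Linear(d_model, 1)  # scalar weight per modality

    def forward(self, modality_embs: torch.Tensor):
        # modality_embs: (batch, n_modalities, d_model)
        attended, _ = self.attn(modality_embs, modality_embs, modality_embs)
        # Per-entity, per-modality weights (softmax over modalities).
        weights = F.softmax(self.score(attended).squeeze(-1), dim=-1)
        # Weighted sum yields one fused embedding per entity.
        fused = torch.einsum("bm,bmd->bd", weights, modality_embs)
        return fused, weights


# Toy usage: 4 entities, 3 modalities, embedding dimension 64.
if __name__ == "__main__":
    fusion = EntityLevelModalityFusion(d_model=64)
    embs = torch.randn(4, 3, 64)
    fused, w = fusion(embs)
    print(fused.shape, w.shape)  # torch.Size([4, 64]) torch.Size([4, 3])
```

- The fused embeddings would then be compared across KGs (e.g., by cosine similarity) for alignment; the per-entity weights are also what give the approach its interpretability, since they show which modality each entity relies on.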
Details
- Database :
- arXiv
- Journal :
- ACM MM 2023
- Publication Type :
- Report
- Accession number :
- edsarx.2212.14454
- Document Type :
- Working Paper