
Leveraging large language models for efficient representation learning for entity resolution

Authors :
Xu, Xiaowei
Foua, Bi T.
Wang, Xingqiao
Gunasekaran, Vivek
Talburt, John R.
Publication Year :
2024

Abstract

In this paper, the authors propose TriBERTa, a supervised entity resolution (ER) system that uses a pre-trained large language model and a triplet loss function to learn representations for entity matching. The system consists of two steps: first, entity name records are fed into a Sentence Bidirectional Encoder Representations from Transformers (SBERT) model to generate vector representations; these representations are then fine-tuned through contrastive learning based on a triplet loss function. The fine-tuned representations are used as input for entity matching tasks, and the results show that the proposed approach outperforms state-of-the-art representations, including SBERT without fine-tuning and conventional Term Frequency-Inverse Document Frequency (TF-IDF), by a margin of 3–19%. The representations generated by TriBERTa are also more robust, maintaining consistently higher performance across a range of datasets. The authors further discuss the importance of entity resolution in today's data-driven landscape and the challenges that arise when identifying and reconciling duplicate data across different sources, and they describe the ER process, which involves several crucial steps: blocking, entity matching, and clustering.
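
The two-step pipeline summarized in the abstract can be sketched with the sentence-transformers library. This is a minimal illustration under stated assumptions, not the authors' implementation: the base checkpoint (all-MiniLM-L6-v2), the toy triplet, the training settings, and the 0.8 match threshold are all placeholders chosen for the example.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.util import cos_sim

# Step 1: encode entity name records with a pre-trained SBERT model
# (this checkpoint is an assumption, not necessarily the one used in the paper).
model = SentenceTransformer("all-MiniLM-L6-v2")

# Step 2: fine-tune the representations with a triplet loss. Each training
# example is (anchor, positive, negative): the positive refers to the same
# real-world entity as the anchor, the negative to a different one.
train_examples = [
    InputExample(texts=[
        "Talburt, John R., Little Rock, AR",  # anchor
        "John R. Talburt, Little Rock (AR)",  # positive: same entity
        "John T. Roberts, Little Rock, AR",   # negative: different entity
    ]),
    # ... more triplets mined from labeled match/non-match pairs ...
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.TripletLoss(model=model)
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1)

# Entity matching: compare fine-tuned embeddings by cosine similarity and
# call a pair a match above a threshold (0.8 is an arbitrary placeholder).
emb = model.encode(["Jon R. Talburt, Little Rock", "Talburt, John, AR"])
score = cos_sim(emb[0], emb[1]).item()
print("match" if score > 0.8 else "non-match", round(score, 3))
```

The triplet loss pulls embeddings of records that refer to the same entity together while pushing non-matching records apart, which is what the abstract credits for the fine-tuned representations' robustness on downstream matching and clustering.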

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2411.10629
Document Type :
Working Paper
Comment :
22 pages and 12 figures