1. Evaluating LLMs on Entity Disambiguation in Tables
- Authors
- Belotti, Federico; Dadda, Fabio; Cremaschi, Marco; Avogadro, Roberto; Palmonari, Matteo
- Subjects
Computer Science - Computation and Language; Computer Science - Artificial Intelligence
- Abstract
Tables are crucial containers of information, but understanding their meaning may be challenging. Over the years, there has been a surge of interest in data-driven approaches based on deep learning, which have increasingly been combined with heuristic-based ones. More recently, the advent of Large Language Models (LLMs) has led to a new category of approaches for table annotation. However, these approaches have not been consistently evaluated on a common ground, making evaluation and comparison difficult. This work proposes an extensive evaluation of four state-of-the-art Semantic Table Interpretation (STI) approaches: Alligator (formerly s-elbat), Dagobah, TURL, and TableLlama; the first two belong to the family of heuristic-based algorithms, while the others are, respectively, encoder-only and decoder-only LLMs. We also include GPT-4o and GPT-4o-mini in the evaluation, since they excel in various public benchmarks. The primary objective is to measure the ability of these approaches to solve the entity disambiguation task with respect to both the performance achieved in a common evaluation setting and the computational and cost requirements involved, with the ultimate aim of charting new research paths in the field.
- Comment
- 13 pages, 6 figures; fixed avg. accuracy-over-price plot for GPT families, fixed typos in table referencing, added evaluation and inference subsubsection
- Published
- 2024