
Bridging the Evaluation Gap: Leveraging Large Language Models for Topic Model Evaluation

Authors:
Tan, Zhiyin
D'Souza, Jennifer
Publication Year:
2025

Abstract

This study presents a framework for the automated evaluation of dynamically evolving topic taxonomies in scientific literature using Large Language Models (LLMs). In digital library systems, topic modeling plays a crucial role in efficiently organizing and retrieving scholarly content, guiding researchers through complex knowledge landscapes. As research domains proliferate and shift, traditional human-centric and static evaluation methods struggle to maintain relevance. The proposed approach harnesses LLMs to measure key quality dimensions, such as coherence, repetitiveness, diversity, and topic-document alignment, without heavy reliance on expert annotators or narrow statistical metrics. Tailored prompts guide LLM assessments, ensuring consistent and interpretable evaluations across various datasets and modeling techniques. Experiments on benchmark corpora demonstrate the method's robustness, scalability, and adaptability, underscoring its value as a more holistic and dynamic alternative to conventional evaluation strategies.

Comment: accepted by IRCDL 2025
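
The abstract names coherence, repetitiveness, diversity, and topic-document alignment as the LLM-scored quality dimensions, guided by tailored prompts. The sketch below is a minimal illustration of how one such dimension (coherence) could be prompted and scored; the prompt wording, the 1-5 rating scale, and the `call_llm` placeholder are assumptions for illustration, not the authors' released implementation.

```python
# Illustrative sketch only -- not the paper's code. It shows the general
# pattern of prompting an LLM to rate one quality dimension (coherence)
# for a single topic's top words. `call_llm` is a hypothetical placeholder
# for whatever chat-completion client is actually used.

from typing import List

COHERENCE_PROMPT = (
    "You are evaluating the output of a topic model.\n"
    "Rate how semantically coherent the following topic words are on a "
    "scale from 1 (unrelated) to 5 (clearly one theme). "
    "Answer with a single integer.\n\n"
    "Topic words: {words}"
)


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM chat-completion call."""
    raise NotImplementedError("Plug in your LLM client here.")


def score_topic_coherence(top_words: List[str]) -> int:
    """Ask the LLM for a 1-5 coherence rating of one topic."""
    prompt = COHERENCE_PROMPT.format(words=", ".join(top_words))
    reply = call_llm(prompt)
    # Keep only the first digit in the reply; fail loudly if none is found.
    digits = [ch for ch in reply if ch.isdigit()]
    if not digits:
        raise ValueError(f"Unparseable LLM reply: {reply!r}")
    return int(digits[0])


if __name__ == "__main__":
    # Example topic; replace call_llm with a real client before running.
    print(score_topic_coherence(["neuron", "synapse", "cortex", "axon", "dendrite"]))
```

Analogous prompts could be written for repetitiveness, diversity, and topic-document alignment, with the numeric replies aggregated across topics to give corpus-level scores.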

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2502.07352
Document Type:
Working Paper