
FaMTEB: Massive Text Embedding Benchmark in Persian Language

Authors:
Zinvandi, Erfan
Alikhani, Morteza
Sarmadi, Mehran
Pourbahman, Zahra
Arvin, Sepehr
Kazemi, Reza
Amini, Arash
Publication Year:
2025

Abstract

In this paper, we introduce a comprehensive benchmark for Persian (Farsi) text embeddings, built upon the Massive Text Embedding Benchmark (MTEB). Our benchmark includes 63 datasets spanning seven different tasks: classification, clustering, pair classification, reranking, retrieval, summary retrieval, and semantic textual similarity. The datasets combine existing, translated, and newly generated data, offering a diverse evaluation framework for Persian language models. Given the increasing use of text embedding models in chatbots, evaluation datasets are becoming inseparable ingredients of chatbot challenges and Retrieval-Augmented Generation systems. As a contribution, we include chatbot evaluation datasets in the MTEB benchmark for the first time. In addition, we introduce the new task of summary retrieval, which is not among the tasks in standard MTEB. Another contribution of this paper is the introduction of a substantial number of new Persian NLP datasets suitable for training and evaluation, some of which have no previous counterparts in Persian. We evaluate the performance of several Persian and multilingual embedding models across a range of tasks. This work introduces an open-source benchmark with datasets, code, and a public leaderboard.

Comment: to appear in ACL 2025
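Because the benchmark extends MTEB, a model can in principle be scored with the standard mteb Python package. The snippet below is a minimal sketch, assuming the FaMTEB tasks are registered in the installed mteb version and that the "fa" language selector covers them; the model name is only an illustrative placeholder, not one evaluated in the paper.

    # Minimal sketch: evaluating an embedding model on Persian MTEB-style tasks.
    # Assumptions: the installed `mteb` package includes the FaMTEB tasks and
    # accepts "fa" as a language selector; the model below is just an example.
    from mteb import MTEB
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("intfloat/multilingual-e5-base")  # any multilingual embedder
    evaluation = MTEB(task_langs=["fa"])  # select tasks tagged as Persian
    evaluation.run(model, output_folder="results/famteb")

The results folder then holds one JSON file per task, which is the format the public leaderboard consumes.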

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2502.11571
Document Type:
Working Paper