
MathHay: An Automated Benchmark for Long-Context Mathematical Reasoning in LLMs

Authors:
Wang, Lei
Dong, Shan
Xu, Yuhui
Dong, Hanze
Wang, Yalu
Saha, Amrita
Lim, Ee-Peng
Xiong, Caiming
Sahoo, Doyen
Publication Year:
2024

Abstract

Recent large language models (LLMs) have demonstrated versatile capabilities in long-context scenarios. Although several recent benchmarks evaluate the long-context capabilities of LLMs, there is a lack of benchmarks assessing their mathematical reasoning abilities over long contexts, which is crucial for LLMs' application in real-world scenarios. In this paper, we introduce MathHay, an automated benchmark designed to assess the long-context mathematical reasoning capabilities of LLMs. Unlike previous benchmarks such as Needle in a Haystack, which focus primarily on information retrieval within long texts, MathHay requires models to combine information-seeking with complex mathematical reasoning. We conduct extensive experiments on MathHay to assess the long-context mathematical reasoning abilities of eight top-performing LLMs. Even the best-performing model, Gemini-1.5-Pro-002, still struggles with mathematical reasoning over long contexts, achieving only 51.26% accuracy at 128K tokens. This highlights the significant room for improvement on the MathHay benchmark.

Comment: Work-in-Progress
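The record does not include the paper's construction pipeline, but the haystack-style setup the abstract describes can be illustrated with a minimal sketch: hide the few sentences ("needles") needed to answer a numeric question inside a long run of distractor text, then score a model's answer by numeric match. All function names, the example facts, and the scoring rule below are illustrative assumptions, not the authors' actual method.

```python
import random

def build_haystack(needles, distractors, target_words=300, seed=0):
    """Fill a document with distractor sentences up to roughly
    target_words words, then insert each needle sentence at a
    random position. (Illustrative stand-in for an automated
    haystack construction; not MathHay's real pipeline.)"""
    rng = random.Random(seed)
    doc = []
    while sum(len(s.split()) for s in doc) < target_words:
        doc.append(rng.choice(distractors))
    for needle in needles:
        doc.insert(rng.randrange(len(doc) + 1), needle)
    return " ".join(doc)

def numeric_match(predicted, expected, tol=1e-6):
    """Score an answer by numeric equality within a tolerance."""
    try:
        return abs(float(predicted) - float(expected)) <= tol
    except (TypeError, ValueError):
        return False

# Hypothetical example: two facts must be found and combined arithmetically.
needles = [
    "Company A reported revenue of 120 million dollars in Q1.",
    "Company A reported revenue of 150 million dollars in Q2.",
]
distractors = [
    "The weather in the region was unusually mild this spring.",
    "Analysts discussed a range of unrelated market trends.",
    "Several industry conferences took place during the year.",
]
question = "What was Company A's combined Q1 and Q2 revenue, in millions?"
context = build_haystack(needles, distractors)

# answer = call_llm(context + "\n\n" + question)  # plug in any model here
answer = "270"  # placeholder for a model response
print(numeric_match(answer, 270))  # True only if both needles were found and summed
```

This captures why the task is harder than plain retrieval: the model must locate every relevant fact in the haystack and then perform the arithmetic, and per the abstract, accuracy drops sharply as contexts grow toward 128K tokens.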

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2410.04698
Document Type:
Working Paper