
TACT: Advancing Complex Aggregative Reasoning with Information Extraction Tools

Authors :
Caciularu, Avi
Jacovi, Alon
Ben-David, Eyal
Goldshtein, Sasha
Schuster, Tal
Herzig, Jonathan
Elidan, Gal
Globerson, Amir
Publication Year :
2024

Abstract

Large Language Models (LLMs) often do not perform well on queries that require the aggregation of information across texts. To better evaluate this setting and facilitate modeling efforts, we introduce TACT - Text And Calculations through Tables, a dataset crafted to evaluate LLMs' reasoning and computational abilities using complex instructions. TACT contains challenging instructions that demand stitching together information scattered across one or more texts and performing complex integration over this information to generate the answer. We construct this dataset by leveraging an existing dataset of texts and their associated tables. For each such table, we formulate new queries and gather their respective answers. We demonstrate that all contemporary LLMs perform poorly on this dataset, achieving an accuracy below 38%. To pinpoint the difficulties and thoroughly dissect the problem, we analyze model performance across three components: table-generation, Pandas command-generation, and execution. Unexpectedly, we discover that each component presents substantial challenges for current LLMs. These insights lead us to propose a focused modeling framework, which we refer to as IE as a tool. Specifically, we propose to add "tools" for each of the above steps, and implement each such tool with few-shot prompting. This approach shows an improvement over existing prompting techniques, offering a promising direction for enhancing model capabilities in these tasks.

Comment: Website (https://tact-benchmark.github.io), Huggingface (https://huggingface.co/datasets/google/TACT)
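
To make the three-step decomposition concrete, the following is a minimal sketch of how such a pipeline might be wired together. It is not code from the paper: the call_llm helper and the prompt wording are hypothetical placeholders for the few-shot-prompted "tools" the abstract describes, and only the use of Pandas for table manipulation mirrors the paper's setup.

# Sketch of an "IE as a tool" pipeline: table-generation, Pandas
# command-generation, and execution, each handled by a separate tool.
from io import StringIO
import pandas as pd

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; in the paper each tool is a few-shot prompt."""
    raise NotImplementedError

def answer_query(texts: list[str], instruction: str):
    # Step 1: table-generation -- extract the relevant facts from the
    # input texts as a CSV-formatted table.
    table_csv = call_llm(
        "Extract the facts relevant to the instruction as a CSV table.\n\n"
        + "\n\n".join(texts)
    )
    df = pd.read_csv(StringIO(table_csv))

    # Step 2: Pandas command-generation -- produce a single expression
    # over `df` that answers the instruction.
    command = call_llm(
        f"Table columns: {list(df.columns)}\n"
        f"Write one Pandas expression over `df` that answers: {instruction}"
    )

    # Step 3: execution -- evaluate the generated expression against the
    # extracted table (sketch only; sandbox generated code in practice).
    return eval(command, {"df": df, "pd": pd})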

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2406.03618
Document Type :
Working Paper