
MAIN-RAG: Multi-Agent Filtering Retrieval-Augmented Generation

Authors :
Chang, Chia-Yuan
Jiang, Zhimeng
Rakesh, Vineeth
Pan, Menghai
Yeh, Chin-Chia Michael
Wang, Guanchu
Hu, Mingzhi
Xu, Zhichao
Zheng, Yan
Das, Mahashweta
Zou, Na
Publication Year :
2024

Abstract

Large Language Models (LLMs) are becoming essential tools for various natural language processing tasks but often suffer from generating outdated or incorrect information. Retrieval-Augmented Generation (RAG) addresses this issue by incorporating external, real-time information retrieval to ground LLM responses. However, existing RAG systems frequently struggle with the quality of retrieved documents, as irrelevant or noisy documents degrade performance, increase computational overhead, and undermine response reliability. To tackle this problem, we propose Multi-Agent Filtering Retrieval-Augmented Generation (MAIN-RAG), a training-free RAG framework that leverages multiple LLM agents to collaboratively filter and score retrieved documents. Specifically, MAIN-RAG introduces an adaptive filtering mechanism that dynamically adjusts the relevance filtering threshold based on score distributions, effectively minimizing noise while maintaining high recall of relevant documents. The proposed approach leverages inter-agent consensus to ensure robust document selection without requiring additional training data or fine-tuning. Experimental results across four QA benchmarks demonstrate that MAIN-RAG consistently outperforms traditional RAG approaches, achieving a 2-11% improvement in answer accuracy while reducing the number of irrelevant retrieved documents. Quantitative analysis further reveals that our approach achieves superior response consistency and answer accuracy over baseline methods, offering a competitive and practical alternative to training-based solutions.
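The abstract describes an adaptive threshold that shifts with the distribution of agent-assigned relevance scores. The paper's exact scoring and threshold formulas are not given here, so the sketch below is a hypothetical illustration of the general idea: consensus scores from multiple judge agents are aggregated per document, and a cutoff derived from the score distribution (here, mean minus a multiple of the standard deviation, an assumed formula) prunes low-scoring documents while retaining a batch of strong candidates.

```python
import statistics

def adaptive_filter(doc_scores, alpha=1.0):
    """Keep documents whose consensus score clears an adaptive threshold.

    doc_scores: mapping of document id -> aggregated relevance score
                (e.g. averaged votes from several LLM judge agents).
    alpha:      assumed sensitivity parameter; the threshold is
                mean - alpha * stdev of the score distribution, so a
                uniformly strong batch keeps most documents while
                outlier low scores are pruned.
    """
    scores = list(doc_scores.values())
    mean = statistics.mean(scores)
    stdev = statistics.pstdev(scores)
    threshold = mean - alpha * stdev
    return {doc: s for doc, s in doc_scores.items() if s >= threshold}

# Simulated consensus scores (hypothetical values, not from the paper)
scores = {"doc_a": 0.9, "doc_b": 0.85, "doc_c": 0.2, "doc_d": 0.8}
kept = adaptive_filter(scores, alpha=1.0)
# doc_c falls well below the distribution-derived cutoff and is dropped
```

Because the cutoff is relative to the observed distribution rather than a fixed value, the filter adapts per query, which matches the abstract's claim of minimizing noise while preserving recall when most retrieved documents are genuinely relevant.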

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2501.00332
Document Type :
Working Paper