
QRMeM: Unleash the Length Limitation through Question then Reflection Memory Mechanism

Authors:
Wang, Bo
Huang, Heyan
Cao, Yixin
Ying, Jiahao
Tang, Wei
Feng, Chong
Publication Year: 2024

Abstract

While large language models (LLMs) have made notable advances in natural language processing, they continue to struggle with processing extensive texts. Memory mechanisms offer a flexible solution for managing long contexts, using techniques such as compression, summarization, and structuring to enable nuanced and efficient handling of large volumes of text. However, existing techniques face challenges with static knowledge integration, leading to insufficient adaptation to task-specific needs and missing relationships across multiple segments; this hinders the dynamic reorganization and logical combination of relevant segments during the response process. To address these issues, we introduce a novel strategy, the Question then Reflection Memory Mechanism (QRMeM), which incorporates a dual-structured memory pool. This pool synergizes static textual content with structured graph guidance, fostering a reflective trial-and-error approach to navigating and identifying relevant segments. Our evaluation on multiple-choice question (MCQ) and multi-document question answering (Multi-doc QA) benchmarks shows that QRMeM achieves improved performance compared to existing approaches.
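The abstract describes the mechanism only at a high level. The following is a minimal Python sketch of the idea under stated assumptions: a memory pool pairing static text segments with a graph of cross-segment links, and a question-then-reflection loop that answers from an initial set of segments and, on a failed self-check, expands along the graph and retries. All names here (DualMemoryPool, reflective_retrieve, the answer_ok callback) and the lexical-overlap retriever are illustrative stand-ins, not the paper's implementation.

# Hypothetical sketch of a QRMeM-style dual-structured memory pool.
# Everything below is an illustrative assumption based only on the
# abstract, not the authors' implementation.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class DualMemoryPool:
    """Static text segments plus a graph linking related segments."""
    segments: list[str]
    graph: dict[int, set[int]] = field(default_factory=dict)

    def link(self, i: int, j: int) -> None:
        """Record a structural relationship (e.g. a shared entity)."""
        self.graph.setdefault(i, set()).add(j)
        self.graph.setdefault(j, set()).add(i)

    def initial_candidates(self, question: str, k: int = 2) -> list[int]:
        """Naive lexical-overlap retrieval, standing in for a real retriever."""
        q = set(question.lower().split())
        ranked = sorted(
            range(len(self.segments)),
            key=lambda i: -len(q & set(self.segments[i].lower().split())),
        )
        return ranked[:k]

    def expand(self, candidates: list[int]) -> list[int]:
        """Graph-guided expansion: pull in neighbors of current candidates."""
        grown = set(candidates)
        for i in candidates:
            grown |= self.graph.get(i, set())
        return sorted(grown)

def reflective_retrieve(
    pool: DualMemoryPool,
    question: str,
    answer_ok: Callable[[str, list[str]], bool],
    max_rounds: int = 3,
) -> list[str]:
    """Question-then-reflection loop: attempt an answer from the current
    segments; if the self-check fails, expand along the graph and retry."""
    candidates = pool.initial_candidates(question)
    for _ in range(max_rounds):
        context = [pool.segments[i] for i in candidates]
        if answer_ok(question, context):  # e.g. an LLM judging sufficiency
            return context
        candidates = pool.expand(candidates)
    return [pool.segments[i] for i in candidates]

if __name__ == "__main__":
    pool = DualMemoryPool(segments=[
        "Alice founded Acme in 2001.",
        "Acme acquired Beta Corp in 2010.",
        "Beta Corp manufactures widgets.",
    ])
    pool.link(0, 1)  # shared entity: Acme
    pool.link(1, 2)  # shared entity: Beta Corp
    # Toy self-check: the context must mention widgets to answer the question.
    ok = lambda q, ctx: "widgets" in " ".join(ctx).lower()
    print(reflective_retrieve(pool, "How is Acme related to widgets?", ok))

The design point the abstract emphasizes is that retrieval is not one-shot: the graph carries the multi-segment relationships that a flat store would miss, and the reflection step uses them to reorganize the evidence set on demand.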

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2406.13167
Document Type: Working Paper