
Enhancing Large Language Model with Self-Controlled Memory Framework

Authors :
Wang, Bing
Liang, Xinnian
Yang, Jian
Huang, Hui
Wu, Shuangzhi
Wu, Peihao
Lu, Lu
Ma, Zejun
Li, Zhoujun
Publication Year :
2023

Abstract

Large Language Models (LLMs) are constrained by their inability to process lengthy inputs, resulting in the loss of critical historical information. To address this limitation, we propose the Self-Controlled Memory (SCM) framework to enhance the ability of LLMs to maintain long-term memory and recall relevant information. Our SCM framework comprises three key components: an LLM-based agent serving as the backbone of the framework, a memory stream storing the agent's memories, and a memory controller that updates memories and determines when and how to utilize memories from the memory stream. Additionally, the proposed SCM can process ultra-long texts without any modification or fine-tuning and can be integrated with any instruction-following LLM in a plug-and-play paradigm. Furthermore, we annotate a dataset to evaluate the effectiveness of SCM in handling lengthy inputs. The annotated dataset covers three tasks: long-term dialogues, book summarization, and meeting summarization. Experimental results demonstrate that our method achieves better retrieval recall and generates more informative responses than competitive baselines in long-term dialogues. (https://github.com/wbbeyourself/SCM4LLMs)
Comment: under review
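The three components described in the abstract can be illustrated with a minimal sketch. Note this is a hypothetical illustration, not the authors' implementation: the class name, the keyword-based controller heuristic, and the bag-of-words cosine retrieval are all stand-ins (the paper's controller and retrieval are LLM-based; the actual code is at the linked repository).

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    # Bag-of-words cosine similarity: a cheap stand-in for learned embeddings.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SelfControlledMemory:
    """Hypothetical sketch of SCM: a memory stream plus a controller
    that decides when and what to recall for the LLM agent's prompt."""

    def __init__(self, top_k: int = 2):
        self.stream: list[str] = []   # memory stream of past turns
        self.top_k = top_k

    def add(self, turn: str) -> None:
        # Controller's update step: append each finished turn to the stream.
        self.stream.append(turn)

    def needs_memory(self, query: str) -> bool:
        # Assumed heuristic: recall only when the query refers back in time.
        cues = ("earlier", "before", "again", "remember", "previous")
        return any(w in query.lower() for w in cues)

    def recall(self, query: str) -> list[str]:
        # Rank stored memories by similarity to the query; keep the top k.
        q = Counter(query.lower().split())
        ranked = sorted(self.stream,
                        key=lambda m: cosine(Counter(m.lower().split()), q),
                        reverse=True)
        return ranked[: self.top_k]

    def build_prompt(self, query: str) -> str:
        # Inject recalled memories ahead of the user query for the agent.
        memories = self.recall(query) if self.needs_memory(query) else []
        context = "\n".join(f"[memory] {m}" for m in memories)
        return f"{context}\n[user] {query}".strip()
```

For example, after `add("My favorite color is blue")`, a later query containing "earlier" triggers recall and the matching memory is prepended to the prompt, which is how the framework feeds long-range history back to an otherwise context-limited model.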

Details

Database :
arXiv
Publication Type :
Report
Accession Number :
edsarx.2304.13343
Document Type :
Working Paper