
Robust Implementation of Retrieval-Augmented Generation on Edge-based Computing-in-Memory Architectures

Authors:
Qin, Ruiyang
Yan, Zheyu
Zeng, Dewen
Jia, Zhenge
Liu, Dancheng
Liu, Jianbo
Zheng, Zhi
Cao, Ningyuan
Ni, Kai
Xiong, Jinjun
Shi, Yiyu
Publication Year:
2024

Abstract

Large Language Models (LLMs) deployed on edge devices typically learn by fine-tuning, updating a portion of their parameters. Although such methods can be optimized to reduce resource utilization, the resources they require remain a heavy burden for edge devices. Retrieval-Augmented Generation (RAG), a resource-efficient alternative, can instead improve the quality of LLM-generated content without updating any model parameters. However, a RAG-based LLM must repeatedly search the stored profile data in every user-LLM interaction, and this search incurs latency that grows as user data accumulates. Conventional efforts to reduce this latency restrict the size of the saved user data, which limits the scalability of RAG as user data continuously grows. It remains an open question how to free RAG from these latency and scalability constraints on edge devices. In this paper, we propose a novel framework that accelerates RAG via Computing-in-Memory (CiM) architectures, which speed up matrix multiplications by performing computation in situ within the memory array, avoiding the expensive data transfer between the computing unit and memory. Our framework, Robust CiM-backed RAG (RoCR), uses a novel contrastive learning-based training method together with noise-aware training to enable RAG to search profile data efficiently on CiM hardware. To the best of our knowledge, this is the first work to use CiM to accelerate RAG.
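
The central computation the abstract refers to is easy to make concrete: in RAG, retrieving relevant profile data reduces to a similarity search between a query embedding and the stored profile embeddings, i.e., a matrix-vector multiplication, which is exactly the operation CiM performs in situ. The sketch below (Python/NumPy) is a minimal illustration, not the paper's implementation; the names (`profile_embeddings`, `retrieve`) and the additive-Gaussian model of CiM device noise are assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical profile store: N cached user-profile embeddings of dimension D.
# On a CiM accelerator these values would live in the memory array itself.
N, D = 1024, 128
profile_embeddings = rng.standard_normal((N, D)).astype(np.float32)
profile_embeddings /= np.linalg.norm(profile_embeddings, axis=1, keepdims=True)

def retrieve(query_emb: np.ndarray, k: int = 4) -> np.ndarray:
    """Top-k similarity search: a single matrix-vector product.
    This is the workload CiM computes inside the memory, avoiding the
    compute-unit/memory data transfer of a conventional pipeline."""
    query_emb = query_emb / np.linalg.norm(query_emb)
    scores = profile_embeddings @ query_emb        # cosine similarities, shape (N,)
    return np.argsort(scores)[-k:][::-1]           # indices of the best matches

def retrieve_noisy(query_emb: np.ndarray, sigma: float = 0.05, k: int = 4) -> np.ndarray:
    """Same search under analog CiM non-idealities, modeled here (an
    assumption, not the paper's exact device model) as additive Gaussian
    noise on the stored embeddings."""
    noisy = profile_embeddings + rng.normal(0.0, sigma, profile_embeddings.shape)
    query_emb = query_emb / np.linalg.norm(query_emb)
    return np.argsort(noisy @ query_emb)[-k:][::-1]

query = rng.standard_normal(D).astype(np.float32)
print("clean top-4:", retrieve(query))
print("noisy top-4:", retrieve_noisy(query))
```

Because analog CiM computation perturbs the stored values, a naively trained embedding model can return different neighbors under noise; RoCR's contrastive, noise-aware training targets exactly this failure mode, so that retrieval results stay stable when the similarity search runs on CiM hardware.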

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2405.04700
Document Type: Working Paper