
GPT Semantic Cache: Reducing LLM Costs and Latency via Semantic Embedding Caching

Authors :
Regmi, Sajal
Pun, Chetan Phakami
Publication Year :
2024

Abstract

Large Language Models (LLMs), such as GPT (Radford et al., 2019), have significantly advanced artificial intelligence by enabling sophisticated natural language understanding and generation. However, the high computational and financial costs associated with frequent API calls to these models present a substantial bottleneck, especially for applications like customer service chatbots that handle repetitive queries. In this paper, we introduce GPT Semantic Cache, a method that leverages semantic caching of query embeddings in in-memory storage (Redis). By storing embeddings of user queries, our approach efficiently identifies semantically similar questions, allowing for the retrieval of pre-generated responses without redundant API calls to the LLM. This technique reduces operational costs and improves response times, enhancing the efficiency of LLM-powered applications.
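The following is a minimal, hypothetical sketch of the semantic-caching idea the abstract describes: embeddings of previously answered queries are stored alongside their generated responses, and a new query is served from the cache when its embedding is sufficiently similar to a stored one, skipping the LLM API call. The embed() placeholder, the 0.9 cosine-similarity threshold, and the in-memory list standing in for Redis are illustrative assumptions, not the authors' implementation.

```python
import hashlib
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding; a real deployment would call an embedding model."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    v = np.random.default_rng(seed).standard_normal(128)
    return v / np.linalg.norm(v)          # unit-normalize so dot product = cosine similarity

class SemanticCache:
    """Toy semantic cache: keeps (query embedding, response) pairs and returns a
    cached response when a new query's embedding is close enough to a stored one."""

    def __init__(self, threshold: float = 0.9):
        self.threshold = threshold          # similarity cutoff (assumed value)
        self.entries = []                   # (embedding, response) pairs; stands in for Redis

    def lookup(self, query: str):
        q = embed(query)
        for emb, response in self.entries:
            if float(np.dot(q, emb)) >= self.threshold:
                return response             # cache hit: reuse the pre-generated response
        return None                         # cache miss

    def store(self, query: str, response: str):
        self.entries.append((embed(query), response))

def answer(query: str, cache: SemanticCache, call_llm) -> str:
    """Serve from the cache when possible; otherwise call the LLM and cache the result."""
    cached = cache.lookup(query)
    if cached is not None:
        return cached
    response = call_llm(query)              # only reached on a cache miss
    cache.store(query, response)
    return response

# Example usage with a stand-in for the LLM call:
cache = SemanticCache()
print(answer("How do I reset my password?", cache, lambda q: "LLM response for: " + q))
```

In the setting described by the paper, the store would be Redis rather than a Python list and embed() would be a real embedding model; the similarity threshold trades cache hit rate against the risk of returning a response generated for a semantically different question.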

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2411.05276
Document Type :
Working Paper