
LLM-dCache: Improving Tool-Augmented LLMs with GPT-Driven Localized Data Caching

Authors:
Singh, Simranjit
Fore, Michael
Karatzas, Andreas
Lee, Chaehong
Jian, Yanan
Shangguan, Longfei
Yu, Fuxun
Anagnostopoulos, Iraklis
Stamoulis, Dimitrios
Publication Year:
2024

Abstract

As Large Language Models (LLMs) broaden their capabilities to manage thousands of API calls, they are confronted with complex data operations across vast datasets that impose significant overhead on the underlying system. In this work, we introduce LLM-dCache to optimize data accesses by treating cache operations as callable API functions exposed to the tool-augmented agent. We grant LLMs the autonomy to manage cache decisions via prompting, integrating seamlessly with existing function-calling mechanisms. Tested on an industry-scale, massively parallel platform spanning hundreds of GPT endpoints and terabytes of imagery, our method improves Copilot times by an average of 1.24x across various LLMs and prompting techniques.
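The record does not include implementation details, but a minimal sketch of the mechanism the abstract describes might look like the following: cache reads and writes exposed as tools in the OpenAI-style function-calling format, so the model itself decides when to hit the cache. The tool names (`cache_read`, `cache_write`), their schemas, and the in-memory store are illustrative assumptions, not the paper's actual API.

```python
# Sketch: cache operations as callable tools for a function-calling LLM,
# in the spirit of LLM-dCache. Names and schemas are assumptions.
import json

# Simple in-memory store standing in for localized data caching.
_CACHE: dict[str, str] = {}

def cache_read(key: str) -> str:
    """Return the cached value for `key`, or a miss marker."""
    return _CACHE.get(key, "CACHE_MISS")

def cache_write(key: str, value: str) -> str:
    """Store `value` under `key` and acknowledge the write."""
    _CACHE[key] = value
    return "OK"

# Tool schemas the LLM sees; prompting grants it autonomy over when
# to call these alongside its other data-loading tools.
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "cache_read",
            "description": "Check the local cache before loading data.",
            "parameters": {
                "type": "object",
                "properties": {"key": {"type": "string"}},
                "required": ["key"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "cache_write",
            "description": "Store a loaded result in the local cache.",
            "parameters": {
                "type": "object",
                "properties": {
                    "key": {"type": "string"},
                    "value": {"type": "string"},
                },
                "required": ["key", "value"],
            },
        },
    },
]

def dispatch(tool_call: dict) -> str:
    """Route an LLM-emitted tool call to the matching cache function."""
    args = json.loads(tool_call["arguments"])
    fn = {"cache_read": cache_read, "cache_write": cache_write}[tool_call["name"]]
    return fn(**args)

if __name__ == "__main__":
    # Simulate the model choosing to write, then read, a cached result.
    print(dispatch({"name": "cache_write",
                    "arguments": json.dumps({"key": "tile_42", "value": "features"})}))
    print(dispatch({"name": "cache_read",
                    "arguments": json.dumps({"key": "tile_42"})}))
```

Because the tools reuse the agent's existing function-calling path, no separate cache controller is needed; the LLM's own tool-selection step doubles as the cache policy.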

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2406.06799
Document Type:
Working Paper