1. DroidSpeak: KV Cache Sharing for Cross-LLM Communication and Multi-LLM Serving
- Authors
Liu, Yuhan; Huang, Yuyang; Yao, Jiayi; Gu, Zhuohan; Du, Kuntai; Li, Hanchen; Cheng, Yihua; Jiang, Junchen; Lu, Shan; Musuvathi, Madan; Choukse, Esha
- Subjects
Computer Science - Multiagent Systems, Computer Science - Artificial Intelligence, Computer Science - Computation and Language, Computer Science - Machine Learning
- Abstract
Large Language Models (LLMs) are increasingly employed in complex workflows, where different LLMs and fine-tuned variants collaborate on a shared task. However, these systems face significant inefficiencies because each model redundantly processes the shared context. We propose DroidSpeak, a framework that optimizes context sharing between fine-tuned LLMs derived from the same foundational model. DroidSpeak identifies accuracy-critical layers in the KV cache and selectively recomputes only those, enabling effective reuse of the remaining intermediate data while maintaining high accuracy. Our approach balances computational efficiency and task fidelity, significantly reducing inference latency and throughput bottlenecks. Experiments on diverse datasets and model pairs demonstrate that DroidSpeak achieves up to 3x higher throughput and 2.6x faster prefill times with negligible accuracy loss compared to full recomputation. A minimal sketch of the selective-recomputation idea appears after this entry.
- Published
2024
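
The core mechanism the abstract describes is reusing a sender model's per-layer KV cache at a receiver fine-tune, while recomputing only the layers profiled as sensitive to cross-model drift. The sketch below illustrates that idea under stated assumptions: `layer_kv` is a toy stand-in for a transformer layer's prefill step, the set of critical layers is assumed to come from offline profiling, and none of these names reflect DroidSpeak's actual API.

```python
# Minimal sketch of selective KV-cache recomputation (not DroidSpeak's code).
# Assumptions: the sender shares both its per-layer KV cache and its per-layer
# inputs (intermediate activations), and `critical_layers` was profiled offline.
import numpy as np

NUM_LAYERS, SEQ, D = 4, 8, 16
rng = np.random.default_rng(0)
# Toy receiver-model weights; a real fine-tune would differ slightly per layer.
WEIGHTS = [rng.standard_normal((D, D)) * 0.01 for _ in range(NUM_LAYERS)]

def layer_kv(layer_idx, layer_input):
    """Stand-in for one layer's prefill step: returns the (K, V) pair the
    layer would write to its cache, plus the layer's output activations."""
    out = layer_input + layer_input @ WEIGHTS[layer_idx]   # toy residual block
    return (out.copy(), out.copy()), out                   # placeholder K == V

def selective_prefill(sender_kv, sender_layer_inputs, critical_layers):
    """Reuse the sender's KV cache wholesale, then recompute only the
    accuracy-critical layers from the sender's cached layer inputs."""
    kv_cache = list(sender_kv)                 # reuse everything by default
    for i in critical_layers:                  # recompute just the few that matter
        kv_cache[i], _ = layer_kv(i, sender_layer_inputs[i])
    return kv_cache

# Usage: simulate the sender's full prefill over a shared context, then build
# the receiver's cache by recomputing only layers 1 and 3 (hypothetical choice).
x = rng.standard_normal((SEQ, D))
layer_inputs, sender_cache, h = [], [], x
for i in range(NUM_LAYERS):
    layer_inputs.append(h)
    kv, h = layer_kv(i, h)
    sender_cache.append(kv)

receiver_cache = selective_prefill(sender_cache, layer_inputs, critical_layers={1, 3})
```

The design point this captures is the trade-off the abstract claims: recomputing a small subset of layers costs far less than a full prefill, while reusing the rest of the cache keeps accuracy close to full recomputation.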