SpecInfer: Accelerating Generative LLM Serving with Speculative Inference and Token Tree Verification

Authors:
Miao, Xupeng
Oliaro, Gabriele
Zhang, Zhihao
Cheng, Xinhao
Wang, Zeyu
Wong, Rae Ying Yee
Chen, Zhuoming
Arfeen, Daiyaan
Abhyankar, Reyna
Jia, Zhihao
Publication Year:
2023
Publisher:
arXiv, 2023.

Abstract

The high computational and memory requirements of generative large language models (LLMs) make it challenging to serve them quickly and cheaply. This paper introduces SpecInfer, an LLM serving system that accelerates generative LLM inference with speculative inference and token tree verification. A key insight behind SpecInfer is to combine various collectively boost-tuned small language models to jointly predict the LLM's outputs; the predictions are organized as a token tree, whose nodes each represent a candidate token sequence. The correctness of all candidate token sequences represented by a token tree is verified by the LLM in parallel using a novel tree-based parallel decoding mechanism. SpecInfer uses an LLM as a token tree verifier instead of an incremental decoder, which significantly reduces the end-to-end latency and computational requirement for serving generative LLMs while provably preserving model quality.
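To make the mechanism concrete, below is a minimal sketch of token-tree verification against a greedy decoder. This is an illustration of the idea only, not SpecInfer's implementation: the token IDs, the tree, and the toy_llm predictor are all invented for the example, and the LLM queries run sequentially here, whereas SpecInfer batches them into a single forward pass using a tree attention mask (also sketched below).

```python
"""Sketch of token-tree verification under greedy decoding (illustrative only)."""

from collections import defaultdict


def build_tree_attention_mask(parents):
    """Tree attention mask: position i may attend to position j iff j lies
    on the path from the tree root to i. A mask of this shape is what lets
    one batched forward pass score every branch of the token tree at once."""
    n = len(parents)
    mask = [[False] * n for _ in range(n)]
    for i in range(n):
        j = i
        while j != -1:  # walk up to the virtual root
            mask[i][j] = True
            j = parents[j]
    return mask


def verify_token_tree(prompt, tokens, parents, llm_next_token):
    """Greedy verification of a speculated token tree.

    Walk from the root, asking the LLM for its next token after the
    currently accepted prefix; descend into a matching child if one
    exists, otherwise stop. The LLM's own prediction is appended as a
    final "bonus" token, so at least one new token is always produced
    per verification step, exactly as in incremental decoding.
    """
    children = defaultdict(list)
    for node, parent in enumerate(parents):
        children[parent].append(node)

    accepted, prefix, cur = [], list(prompt), -1  # -1 = virtual root
    while True:
        target = llm_next_token(prefix)
        match = next((c for c in children[cur] if tokens[c] == target), None)
        if match is None:
            accepted.append(target)  # the LLM's correction/bonus token
            return accepted
        accepted.append(tokens[match])
        prefix.append(tokens[match])
        cur = match


if __name__ == "__main__":
    # Toy "LLM" standing in for the real model: deterministically
    # predicts (last token + 1) % 100 as the next token.
    toy_llm = lambda prefix: (prefix[-1] + 1) % 100

    # Token tree merged from two hypothetical draft sequences:
    # node:    0  1  2  3
    # token:   8  9  7 10
    # parent: -1  0  0  1   (-1 = speculated directly after the prompt)
    tokens, parents = [8, 9, 7, 10], [-1, 0, 0, 1]

    print(build_tree_attention_mask(parents))
    print(verify_token_tree([7], tokens, parents, toy_llm))
    # -> accepts 8, 9, 10 and appends the bonus token 11: [8, 9, 10, 11]
```

Because every accepted token is exactly what the LLM itself would have emitted under greedy decoding, this verification step preserves the model's output; the speedup comes from checking all speculated branches in parallel rather than generating one token per forward pass.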

Details

Database:
OpenAIRE
Accession number:
edsair.doi.dedup.....22400c0508feb1a5f4f5b9ff3a6dad8a
Full Text:
https://doi.org/10.48550/arxiv.2305.09781