
OTAS: An Elastic Transformer Serving System via Token Adaptation

Authors :
Chen, Jinyu
Xu, Wenchao
Hong, Zicong
Guo, Song
Wang, Haozhao
Zhang, Jie
Zeng, Deze
Publication Year :
2024

Abstract

Transformer-based architectures have become a pillar of the cloud services that keep reshaping our society. However, dynamic query loads and heterogeneous user requirements severely challenge current transformer serving systems, which rely on pre-training multiple variants of a foundation model, i.e., versions with different sizes, to accommodate varying service demands. Unfortunately, such a mechanism is unsuitable for large transformer models due to the additional training costs and excessive I/O delay. In this paper, we introduce OTAS, the first elastic serving system specially tailored for transformer models by exploring lightweight token management. We develop a novel idea called token adaptation that adds prompting tokens to improve accuracy and removes redundant tokens to accelerate inference. To cope with fluctuating query loads and diverse user requests, we enhance OTAS with application-aware selective batching and online token adaptation. OTAS first batches incoming queries with similar service-level objectives to improve the ingress throughput. Then, to strike a tradeoff between the overhead of token increment and the potential for accuracy improvement, OTAS adaptively adjusts the token execution strategy by solving an optimization problem. We implement and evaluate a prototype of OTAS with multiple datasets; the results show that OTAS improves the system utility by at least 18.2%.

Comment: Accepted by INFOCOM '24
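The core token-adaptation idea described in the abstract — prepending prompt tokens to raise accuracy and pruning low-importance tokens to cut inference cost — can be illustrated with a minimal sketch. This is a hypothetical helper written for illustration, not OTAS's actual algorithm; the importance scores, the `keep_ratio` knob, and the `adapt_tokens` function are all assumptions.

```python
def adapt_tokens(tokens, scores, keep_ratio=0.5, prompt_tokens=None):
    """Illustrative token adaptation (hypothetical, not OTAS's code).

    tokens:        the input token sequence
    scores:        a per-token importance score (e.g., from attention)
    keep_ratio:    fraction of original tokens to keep (speed knob)
    prompt_tokens: optional learned prompt tokens to prepend (accuracy knob)
    """
    # Keep at least one token; select the highest-scoring ones.
    keep = max(1, int(len(tokens) * keep_ratio))
    top = sorted(range(len(tokens)), key=lambda i: scores[i])[-keep:]
    # Preserve the original ordering of the surviving tokens.
    kept = [tokens[i] for i in sorted(top)]
    prefix = list(prompt_tokens) if prompt_tokens else []
    return prefix + kept


# Example: keep half of six tokens and prepend one prompt token.
out = adapt_tokens(list("abcdef"),
                   scores=[0.1, 0.9, 0.2, 0.8, 0.3, 0.7],
                   keep_ratio=0.5,
                   prompt_tokens=["P"])
# out == ["P", "b", "d", "f"]
```

In a real serving system the pruning decision would be made online per batch, trading the extra compute of added prompt tokens against the savings from dropped ones — which is the optimization problem the abstract refers to.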

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2401.05031
Document Type :
Working Paper