
MERGE: Fast Private Text Generation

Authors :
Liang, Zi
Wang, Pinghui
Zhang, Ruofei
Xu, Nuo
Xing, Lifeng
Zhang, Shuo
Publication Year :
2023

Abstract

The drastic increase in language models' parameters has led to a new trend of deploying models on cloud servers, raising growing concerns about private inference for Transformer-based models. Existing two-party privacy-preserving techniques, however, only consider natural language understanding (NLU) scenarios. Private inference in natural language generation (NLG), crucial for applications like translation and code completion, remains underexplored. In addition, previous privacy-preserving techniques suffer from convergence issues during model training and exhibit poor inference speed when applied to NLG models, because they neglect the time-consuming operations of auto-regressive generation. To address these issues, we propose MERGE, a fast private text generation framework for Transformer-based language models. MERGE reuses the output hidden state as the word embedding to bypass the embedding computation, and reorganizes the linear operations in the Transformer module to accelerate the forward procedure. Extensive experiments show that MERGE achieves a 26.5x speedup over the vanilla encrypted model at sequence length 512 and reduces communication cost by 80%, with up to a 10x speedup over state-of-the-art approximated models.

Comment: Accepted by AAAI 2024
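
A minimal plaintext PyTorch sketch of the embedding-reuse idea described in the abstract: during auto-regressive decoding, the previous step's output hidden state is fed back as the next input embedding, bypassing the argmax-plus-embedding-lookup step that is costly under secure two-party computation. The toy decoder, its dimensions, and the function names below are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class ToyDecoder(nn.Module):
    # Tiny decoder-only Transformer, used only to illustrate hidden-state reuse.
    def __init__(self, vocab_size=1000, d_model=64, n_layers=2, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward_hidden(self, inputs_embeds):
        # Causal mask so every position attends only to earlier positions.
        seq_len = inputs_embeds.size(1)
        mask = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
        return self.blocks(inputs_embeds, mask=mask)

@torch.no_grad()
def generate_with_hidden_reuse(model, prompt_ids, new_tokens=8):
    embeds = model.embed(prompt_ids)              # (1, T, d_model): real prompt embeddings
    generated = []
    for _ in range(new_tokens):
        hidden = model.forward_hidden(embeds)     # (1, T, d_model)
        last_hidden = hidden[:, -1:, :]           # output state of the newest position
        # Standard decoding would compute next_id = argmax(lm_head(last_hidden))
        # and then look that id up in the embedding table. Here the hidden state
        # itself is appended as the next input embedding, skipping that lookup.
        embeds = torch.cat([embeds, last_hidden], dim=1)
        generated.append(model.lm_head(last_hidden).argmax(-1).item())
    return generated

model = ToyDecoder()
prompt_ids = torch.tensor([[1, 2, 3]])
print(generate_with_hidden_reuse(model, prompt_ids))

In a two-party setting, the expensive parts this sidesteps (softmax over the vocabulary and the embedding-table access) would otherwise be executed under encryption at every decoding step.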

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2305.15769
Document Type :
Working Paper