
Fine-Tuning and Deploying Large Language Models Over Edges: Issues and Approaches

Authors :
Dong, Yanjie
Fan, Xiaoyi
Wang, Fangxin
Li, Chengming
Leung, Victor C. M.
Hu, Xiping
Publication Year :
2024

Abstract

Since the release of GPT-2 (1.5B) in 2019, large language models (LLMs) have transitioned from specialized models to versatile foundation models. LLMs exhibit impressive zero-shot ability; however, they require fine-tuning on local datasets and significant resources for deployment. Traditional fine-tuning with first-order optimizers demands GPU memory that exceeds the capability of mainstream hardware, which motivates the investigation of memory-efficient methods. Model compression techniques can reduce energy consumption, operational costs, and environmental impact, thereby supporting sustainable advancements in artificial intelligence. Additionally, large-scale foundation models have expanded to generate images, audio, videos, and multi-modal content, further emphasizing the need for efficient deployment. We are therefore motivated to present a comprehensive overview of the prevalent memory-efficient fine-tuning methods over the network edge. We also review the state-of-the-art literature on model compression to provide a vision for deploying LLMs over the network edge.
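
To illustrate the memory-efficient fine-tuning direction the abstract refers to, below is a minimal sketch of a low-rank-adapter (LoRA-style) layer in PyTorch. It is not taken from the paper; the layer sizes, rank, scaling factor, and optimizer choice are illustrative assumptions. The memory saving comes from training (and keeping optimizer states for) only the small low-rank factors while the pretrained weights stay frozen.

# Minimal LoRA-style adapter sketch (illustrative assumptions, not the paper's method).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a trainable low-rank update x @ A @ B."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        # Low-rank factors: A initialized small, B initialized to zero so the
        # adapted layer starts identical to the pretrained one.
        self.lora_a = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(rank, base.out_features))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_a @ self.lora_b) * self.scaling

# Usage: only the low-rank factors receive gradients and optimizer states,
# which is the source of the memory savings relative to full fine-tuning.
layer = LoRALinear(nn.Linear(1024, 1024))
trainable = [p for p in layer.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)
loss = layer(torch.randn(4, 1024)).pow(2).mean()
loss.backward()
optimizer.step()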

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2408.10691
Document Type :
Working Paper