
Knowledge Fusion of Chat LLMs: A Preliminary Technical Report

Authors:
Wan, Fanqi
Yang, Ziyi
Zhong, Longguang
Quan, Xiaojun
Huang, Xinting
Bi, Wei
Publication Year:
2024

Abstract

Recently, FuseLLM introduced the concept of knowledge fusion to transfer the collective knowledge of multiple structurally varied LLMs into a target LLM through lightweight continual training. In this report, we extend the scalability and flexibility of the FuseLLM framework to realize the fusion of chat LLMs, resulting in FusionChat. FusionChat comprises two main stages. First, we conduct knowledge fusion over source LLMs of varying structures and scales to derive multiple target LLMs of identical structure and size via lightweight fine-tuning. Second, these target LLMs are merged within the parameter space, where we propose a novel method for determining the merging weights based on the variation ratio of parameter matrices before and after fine-tuning. We validate our approach using three prominent chat LLMs with diverse architectures and scales, namely NH2-Mixtral-8x7B, NH2-Solar-10.7B, and OpenChat-3.5-7B. Experimental results spanning various chat domains demonstrate the superiority of FusionChat-7B over a broad spectrum of chat LLMs at the 7B and 34B scales, even surpassing GPT-3.5 (March) and approaching Mixtral-8x7B-Instruct.

Comment: Technical Report, work in progress
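
The parameter-space merging step described in the abstract can be illustrated with a minimal sketch. The function below, merge_by_variation_ratio, is a hypothetical illustration assuming PyTorch state dicts for a shared base model and several fine-tuned target models of identical architecture, and it assumes a squared-difference measure of per-matrix variation; it is not the authors' released implementation.

import torch

def merge_by_variation_ratio(base_state, finetuned_states):
    """Merge fine-tuned models of identical architecture in parameter space.

    The merging weight for each model, computed per parameter matrix, is
    proportional to how much that matrix changed during fine-tuning relative
    to the shared base model (assumption: mean squared element-wise change).
    """
    merged = {}
    for name, base_param in base_state.items():
        # Variation of each fine-tuned matrix with respect to the base matrix.
        variations = torch.stack([
            ((ft[name] - base_param) ** 2).mean() for ft in finetuned_states
        ])
        # Normalize variations into merging weights that sum to 1.
        weights = variations / variations.sum().clamp_min(1e-12)
        # Weighted average of the fine-tuned matrices in parameter space.
        merged[name] = sum(w * ft[name] for w, ft in zip(weights, finetuned_states))
    return merged

# Hypothetical usage: merge two fine-tuned targets into one set of weights.
# merged_state = merge_by_variation_ratio(base.state_dict(),
#                                         [m1.state_dict(), m2.state_dict()])
# target_model.load_state_dict(merged_state)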

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2402.16107
Document Type:
Working Paper