
Qwen Technical Report

Authors:
Bai, Jinze
Bai, Shuai
Chu, Yunfei
Cui, Zeyu
Dang, Kai
Deng, Xiaodong
Fan, Yang
Ge, Wenbin
Han, Yu
Huang, Fei
Hui, Binyuan
Ji, Luo
Li, Mei
Lin, Junyang
Lin, Runji
Liu, Dayiheng
Liu, Gao
Lu, Chengqiang
Lu, Keming
Ma, Jianxin
Men, Rui
Ren, Xingzhang
Ren, Xuancheng
Tan, Chuanqi
Tan, Sinan
Tu, Jianhong
Wang, Peng
Wang, Shijie
Wang, Wei
Wu, Shengguang
Xu, Benfeng
Xu, Jin
Yang, An
Yang, Hao
Yang, Jian
Yang, Shusheng
Yao, Yang
Yu, Bowen
Yuan, Hongyi
Yuan, Zheng
Zhang, Jianwei
Zhang, Xingxuan
Zhang, Yichang
Zhang, Zhenru
Zhou, Chang
Zhou, Jingren
Zhou, Xiaohuan
Zhu, Tianhang
Publication Year:
2023

Abstract

Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models fine-tuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to larger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon the base language models. These models demonstrate significantly improved performance compared with open-source models, while falling only slightly behind proprietary models.

Comment: 59 pages, 5 figures
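For readers who want to try the chat models the abstract describes, a minimal Python sketch follows. It assumes the publicly released Qwen/Qwen-7B-Chat checkpoint on Hugging Face and its custom chat() helper loaded via trust_remote_code; neither detail comes from this record, so treat both as assumptions.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen-7B-Chat"  # assumed public checkpoint name

# Qwen checkpoints ship custom modeling code, hence trust_remote_code=True.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",       # spread weights across available devices
    trust_remote_code=True,
).eval()

# The custom code exposes a chat() helper; feeding the returned history
# back in on the next call enables multi-turn conversation.
response, history = model.chat(tokenizer, "What is RLHF?", history=None)
print(response)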

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2309.16609
Document Type:
Working Paper