Birbal: An efficient 7B instruct-model fine-tuned with curated datasets

Authors:
Jindal, Ashvini Kumar
Rajpoot, Pawan Kumar
Parikh, Ankur
Publication Year:
2024

Abstract

LLMOps incur significant costs due to hardware requirements, hindering the widespread accessibility of large language models. In addition, a lack of transparency in model training methods and data makes most models non-reproducible. To tackle these challenges, the LLM Efficiency Challenge was introduced at the NeurIPS Workshop, aiming to adapt foundation models to a diverse set of tasks via fine-tuning on a single GPU (RTX 4090, or A100 with 40GB) within a 24-hour timeframe. In this system-description paper, we introduce Birbal, our Mistral-7B-based winning model, fine-tuned on a single RTX 4090 for 16 hours. Birbal's success lies in curating high-quality instructions covering diverse tasks, resulting in a 35% performance improvement over the second-best, Qwen-14B-based submission.
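The abstract attributes Birbal's result to curating high-quality instructions across diverse tasks under a tight compute budget, but does not describe the curation procedure itself. The following is a minimal, hypothetical sketch of what such a curation pass might look like: exact deduplication, length filtering, and a per-task cap to keep the mix diverse and small enough for a 24-hour single-GPU fine-tune. All function names and thresholds here are illustrative assumptions, not the authors' actual method.

```python
# Hypothetical instruction-curation sketch. The paper's abstract only says
# "curating high-quality instructions covering diverse tasks"; the filters
# and thresholds below are illustrative assumptions.

def curate(examples, min_len=10, max_len=2048, per_task_cap=1000):
    """Deduplicate, length-filter, and cap examples per task.

    `examples` is a list of dicts with "instruction", "output", and
    optionally "task" keys (an assumed schema).
    """
    seen = set()
    per_task = {}
    kept = []
    for ex in examples:
        key = ex["instruction"].strip().lower()
        if key in seen:
            continue  # drop exact duplicate instructions
        n = len(ex["instruction"]) + len(ex.get("output", ""))
        if not (min_len <= n <= max_len):
            continue  # drop degenerate (too short) or overlong examples
        task = ex.get("task", "unknown")
        if per_task.get(task, 0) >= per_task_cap:
            continue  # cap each task so no single task dominates the mix
        seen.add(key)
        per_task[task] = per_task.get(task, 0) + 1
        kept.append(ex)
    return kept
```

A per-task cap of this kind is one simple way to trade raw dataset size for task diversity when the fine-tuning time budget (here, 16 hours on one RTX 4090) bounds how many examples can be seen at all.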

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2403.02247
Document Type:
Working Paper