
Accelerating Neural Network Training: An Analysis of the AlgoPerf Competition

Authors :
Kasimbeg, Priya
Schneider, Frank
Eschenhagen, Runa
Bae, Juhan
Sastry, Chandramouli Shama
Saroufim, Mark
Feng, Boyuan
Wright, Less
Yang, Edward Z.
Nado, Zachary
Medapati, Sourabh
Hennig, Philipp
Rabbat, Michael
Dahl, George E.
Publication Year :
2025

Abstract

The goal of the AlgoPerf: Training Algorithms competition is to evaluate practical speed-ups in neural network training achieved solely by improving the underlying training algorithms. In the external tuning ruleset, submissions must provide workload-agnostic hyperparameter search spaces, while in the self-tuning ruleset they must be completely hyperparameter-free. In both rulesets, submissions are compared on time-to-result across multiple deep learning workloads, training on fixed hardware. This paper presents the inaugural AlgoPerf competition's results, which drew 18 diverse submissions from 10 teams. Our investigation reveals several key findings: (1) The winning submission in the external tuning ruleset, using Distributed Shampoo, demonstrates the effectiveness of non-diagonal preconditioning over popular methods like Adam, even when compared on wall-clock runtime. (2) The winning submission in the self-tuning ruleset, based on the Schedule Free AdamW algorithm, demonstrates a new level of effectiveness for completely hyperparameter-free training algorithms. (3) The top-scoring submissions were surprisingly robust to workload changes. We also discuss the engineering challenges encountered in ensuring a fair comparison between different training algorithms. These results highlight both the significant progress so far, and the considerable room for further improvements.

Comment :
ICLR 2025; 23 pages, 5 figures, 8 tables
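The scoring mechanic described in the abstract, timing a submission on fixed hardware until it reaches a fixed validation target, can be illustrated with a minimal sketch. This is not the AlgoPerf harness or its API; the names (`SEARCH_SPACE`, `train_step`, `time_to_result`, the target and budget values) are hypothetical stand-ins used only to show the idea of a workload-agnostic search space and time-to-result measurement.

```python
import time
import random

# Hypothetical workload-agnostic hyperparameter search space (external
# tuning ruleset style): the same ranges would be reused on every workload.
SEARCH_SPACE = {
    "learning_rate": (1e-4, 1e-1),   # sampled log-uniformly in practice
    "weight_decay": (1e-6, 1e-1),
}

TARGET_ACCURACY = 0.90      # fixed validation target for the workload
MAX_RUNTIME_SECONDS = 60.0  # per-workload wall-clock budget

def train_step(state):
    """Toy stand-in for one training step: nudges accuracy upward with noise."""
    state["accuracy"] += random.uniform(0.0, 0.02)
    time.sleep(0.01)  # stands in for real compute on fixed hardware
    return state

def time_to_result():
    """Measure wall-clock time until the validation target is reached."""
    state = {"accuracy": 0.0}
    start = time.perf_counter()
    while time.perf_counter() - start < MAX_RUNTIME_SECONDS:
        state = train_step(state)
        if state["accuracy"] >= TARGET_ACCURACY:
            return time.perf_counter() - start  # the score: time-to-result
    return float("inf")  # target not reached within the budget

if __name__ == "__main__":
    print(f"time-to-result: {time_to_result():.2f}s")
```

In the competition itself this measurement is repeated across multiple workloads and aggregated, so a submission is rewarded for consistently reaching targets quickly rather than excelling on a single task.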

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2502.15015
Document Type :
Working Paper