
MLPerf Tiny Benchmark

Authors :
Banbury, Colby
Reddi, Vijay Janapa
Torelli, Peter
Holleman, Jeremy
Jeffries, Nat
Kiraly, Csaba
Montino, Pietro
Kanter, David
Ahmed, Sebastian
Pau, Danilo
Thakker, Urmish
Torrini, Antonio
Warden, Peter
Cordaro, Jay
Di Guglielmo, Giuseppe
Duarte, Javier
Gibellini, Stephen
Parekh, Videet
Tran, Honson
Tran, Nhan
Niu, Wenxu
Xu, Xuesong
Publication Year :
2021

Abstract

Advancements in ultra-low-power tiny machine learning (TinyML) systems promise to unlock an entirely new class of smart applications. However, continued progress is limited by the lack of a widely accepted and easily reproducible benchmark for these systems. To meet this need, we present MLPerf Tiny, the first industry-standard benchmark suite for ultra-low-power tiny machine learning systems. The benchmark suite is the collaborative effort of more than 50 organizations from industry and academia and reflects the needs of the community. MLPerf Tiny measures the accuracy, latency, and energy of machine learning inference to properly evaluate the tradeoffs between systems. Additionally, MLPerf Tiny implements a modular design that enables benchmark submitters to show the benefits of their product, regardless of where it falls on the ML deployment stack, in a fair and reproducible manner. The suite features four benchmarks: keyword spotting, visual wake words, image classification, and anomaly detection.

Comment: TinyML Benchmark

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2106.07597
Document Type :
Working Paper