
NeuroBench: A Framework for Benchmarking Neuromorphic Computing Algorithms and Systems

Authors:
Yik, Jason
Van den Berghe, Korneel
den Blanken, Douwe
Bouhadjar, Younes
Fabre, Maxime
Hueber, Paul
Kleyko, Denis
Pacik-Nelson, Noah
Sun, Pao-Sheng Vincent
Tang, Guangzhi
Wang, Shenqi
Zhou, Biyan
Hasan Ahmed, Soikat
Vathakkattil Joseph, George
Leto, Benedetto
Micheli, Aurora
Mishra, Anurag Kumar
Lenz, Gregor
Sun, Tao
Ahmed, Zergham
Akl, Mahmoud
Anderson, Brian
Andreou, Andreas G.
Bartolozzi, Chiara
Basu, Arindam
Bogdan, Petrut
Bohte, Sander
Buckley, Sonia
Cauwenberghs, Gert
Chicca, Elisabetta
Corradi, Federico
de Croon, Guido
Danielescu, Andreea
Daram, Anurag
Davies, Mike
Demirag, Yigit
Eshraghian, Jason
Fischer, Tobias
Forest, Jeremy
Fra, Vittorio
Furber, Steve
Furlong, P. Michael
Gilpin, William
Gilra, Aditya
Gonzalez, Hector A.
Indiveri, Giacomo
Joshi, Siddharth
Karia, Vedant
Khacef, Lyes
Knight, James C.
Kriener, Laura
Kubendran, Rajkumar
Kudithipudi, Dhireesha
Liu, Yao-Hong
Liu, Shih-Chii
Ma, Haoyuan
Manohar, Rajit
Margarit-Taulé, Josep Maria
Mayr, Christian
Michmizos, Konstantinos
Muir, Dylan
Neftci, Emre
Nowotny, Thomas
Ottati, Fabrizio
Ozcelikkale, Ayca
Panda, Priyadarshini
Park, Jongkil
Payvand, Melika
Pehle, Christian
Petrovici, Mihai A.
Pierro, Alessandro
Posch, Christoph
Renner, Alpha
Sandamirskaya, Yulia
Schaefer, Clemens J.S.
van Schaik, André
Schemmel, Johannes
Schmidgall, Samuel
Schuman, Catherine
Seo, Jae-sun
Sheik, Sadique
Bam Shrestha, Sumit
Sifalakis, Manolis
Sironi, Amos
Stewart, Matthew
Stewart, Kenneth
Stewart, Terrence C.
Stratmann, Philipp
Timcheck, Jonathan
Tömen, Nergis
Urgese, Gianvito
Verhelst, Marian
Vineyard, Craig M.
Vogginger, Bernhard
Yousefzadeh, Amirreza
Tuz Zohora, Fatima
Frenkel, Charlotte
Janapa Reddi, Vijay
Source:
2024-01-17, 28 pp.
Publication Year:
2024

Abstract

Neuromorphic computing shows promise for advancing computing efficiency and capabilities of AI applications using brain-inspired principles. However, the neuromorphic research field currently lacks standardized benchmarks, making it difficult to accurately measure technological advancements, compare performance with conventional methods, and identify promising future research directions. Prior neuromorphic computing benchmark efforts have not seen widespread adoption due to a lack of inclusive, actionable, and iterative benchmark design and guidelines. To address these shortcomings, we present NeuroBench: a benchmark framework for neuromorphic computing algorithms and systems. NeuroBench is a collaboratively-designed effort from an open community of nearly 100 co-authors across over 50 institutions in industry and academia, aiming to provide a representative structure for standardizing the evaluation of neuromorphic approaches. The NeuroBench framework introduces a common set of tools and systematic methodology for inclusive benchmark measurement, delivering an objective reference framework for quantifying neuromorphic approaches in both hardware-independent (algorithm track) and hardware-dependent (system track) settings. In this article, we present initial performance baselines across various model architectures on the algorithm track and outline the system track benchmark tasks and guidelines. NeuroBench is intended to continually expand its benchmarks and features to foster and track the progress made by the research community.
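To make the algorithm-track idea concrete, the following is a minimal, hypothetical sketch of how hardware-independent metrics such as classification accuracy and activation sparsity might be computed for a model. It is not the official NeuroBench API; the model, dataset, and metric names are illustrative assumptions only.

```python
# Hypothetical sketch of algorithm-track-style, hardware-independent metrics
# (accuracy and activation sparsity). This is NOT the official NeuroBench API.
import torch
import torch.nn as nn


class TinyClassifier(nn.Module):
    """Toy stand-in for a benchmarked model (illustrative only)."""

    def __init__(self, n_in=16, n_hidden=32, n_out=4):
        super().__init__()
        self.fc1 = nn.Linear(n_in, n_hidden)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        return self.fc2(self.relu(self.fc1(x)))


def evaluate(model, data_loader):
    """Return accuracy and activation sparsity (fraction of zero activations)."""
    zero_acts, total_acts = 0, 0

    def hook(_module, _inp, out):
        nonlocal zero_acts, total_acts
        zero_acts += (out == 0).sum().item()
        total_acts += out.numel()

    handle = model.relu.register_forward_hook(hook)
    correct, total = 0, 0
    model.eval()
    with torch.no_grad():
        for x, y in data_loader:
            preds = model(x).argmax(dim=1)
            correct += (preds == y).sum().item()
            total += y.numel()
    handle.remove()
    return {"accuracy": correct / total,
            "activation_sparsity": zero_acts / total_acts}


if __name__ == "__main__":
    torch.manual_seed(0)
    # Synthetic stand-in data; a real benchmark would use a standard task dataset.
    xs = torch.randn(256, 16)
    ys = torch.randint(0, 4, (256,))
    loader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(xs, ys), batch_size=32)
    print(evaluate(TinyClassifier(), loader))
```

In this spirit, a harness reports such metrics alongside accuracy so that sparse, event-driven approaches can be compared on equal footing with conventional models, independently of the hardware they run on.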

Details

Database:
OAIster
Journal:
2024-01-17, 28 pp.
Notes:
Yik, Jason
Publication Type:
Electronic Resource
Accession Number:
edsoai.on1446904675
Document Type:
Electronic Resource