Using the IBM Analog In-Memory Hardware Acceleration Kit for Neural Network Training and Inference

Authors :
Le Gallo, Manuel
Lammie, Corey
Buechel, Julian
Carta, Fabio
Fagbohungbe, Omobayode
Mackin, Charles
Tsai, Hsinyu
Narayanan, Vijay
Sebastian, Abu
El Maghraoui, Kaoutar
Rasch, Malte J.
Source :
APL Machine Learning (2023) 1 (4): 041102
Publication Year :
2023

Abstract

Analog In-Memory Computing (AIMC) is a promising approach to reduce the latency and energy consumption of Deep Neural Network (DNN) inference and training. However, the noisy and non-linear device characteristics and the non-ideal peripheral circuitry in AIMC chips require adapting DNNs to be deployed on such hardware in order to achieve accuracy equivalent to that of digital computing. In this tutorial, we provide a deep dive into how such adaptations can be achieved and evaluated using the recently released IBM Analog Hardware Acceleration Kit (AIHWKit), freely available at https://github.com/IBM/aihwkit. The AIHWKit is a Python library that simulates inference and training of DNNs using AIMC. We present an in-depth description of the AIHWKit design, functionality, and best practices for properly performing inference and training. We also present an overview of the Analog AI Cloud Composer, a platform that provides the benefits of using the AIHWKit simulation in a fully managed cloud setting along with physical AIMC hardware access, freely available at https://aihw-composer.draco.res.ibm.com. Finally, we show examples of how users can expand and customize AIHWKit for their own needs. This tutorial is accompanied by comprehensive Jupyter Notebook code examples that can be run using AIHWKit, downloadable from https://github.com/IBM/aihwkit/tree/master/notebooks/tutorial.

Details

Database :
arXiv
Journal :
APL Machine Learning (2023) 1 (4): 041102
Publication Type :
Report
Accession number :
edsarx.2307.09357
Document Type :
Working Paper
Full Text :
https://doi.org/10.1063/5.0168089