Hyperdimensional Computing Exploiting Carbon Nanotube FETs, Resistive RAM, and Their Monolithic 3D Integration.
- Source :
- IEEE Journal of Solid-State Circuits; Nov 2018, Vol. 53, Issue 11, p3183-3196, 14p
- Publication Year :
- 2018
Abstract
- The field of machine learning is witnessing rapid advances along several fronts: new machine learning models, new machine learning algorithms utilizing these models, new hardware architectures for these algorithms, and new technologies for creating energy-efficient implementations of such hardware architectures. Hyperdimensional (HD) computing represents one such model. Emerging nanotechnologies, such as carbon nanotube field-effect transistors (CNFETs), resistive random-access memory (RRAM), and their monolithic 3D integration, enable energy- and area-efficient hardware implementations of HD computing architectures. Such efficient implementations are achieved by exploiting several characteristics of the component nanotechnologies (e.g., energy-efficient logic circuits, dense memory, and incrementers naturally enabled by gradual reset of RRAM cells) and their monolithic 3D integration (enabling tight integration of logic and memory), as well as various characteristics of the HD computing model (e.g., embracing randomness, which allows us to utilize rather than avoid inherent variations in RRAM and CNFETs, and resilience to errors in the underlying hardware). We experimentally demonstrate and characterize an end-to-end HD computing nanosystem built using monolithic 3D integration of CNFETs and RRAM. Using our nanosystem, we experimentally demonstrate the pairwise classification of 21 languages with measured mean accuracy of up to 98% on >20 000 sentences (6.4 million characters), training using one text sample (~100 000 characters) per language, and resilient operation (98% accuracy) despite 78% of bits in the HD representation being stuck at 0 or 1 in hardware. We also show that monolithic 3D implementations of HD computing can have 35× improved energy-execution time product for training and inference on language classification data sets (while using 3× less area) compared to silicon CMOS implementations.
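The classification pipeline the abstract describes follows the standard software formulation of HD computing for language identification: assign each character a random hypervector, bind the characters of each n-gram by rotation and elementwise multiplication, bundle all n-grams into one prototype per language, and classify by nearest Hamming distance. The sketch below illustrates that flow in plain Python; the dimensionality, the trigram size, the training snippets, and the function names are illustrative assumptions, not the paper's measured 3D nanosystem.

```python
# Minimal sketch of HD-computing language classification (assumed
# parameters; not the paper's hardware implementation).
import numpy as np

D = 10000          # hypervector dimensionality (a typical HD choice)
N = 3              # n-gram size; trigrams are common for language ID
rng = np.random.default_rng(0)

# One random bipolar "item" hypervector per character. HD computing
# embraces this randomness, which the paper maps onto device variation.
item = {c: rng.choice((-1, 1), size=D) for c in "abcdefghijklmnopqrstuvwxyz "}

def encode(text):
    """Encode text: bind each n-gram's characters by rotation and
    elementwise multiplication, then bundle (sum) all n-grams."""
    acc = np.zeros(D)
    for i in range(len(text) - N + 1):
        gram = np.ones(D, dtype=int)
        for j, c in enumerate(text[i:i + N]):
            gram *= np.roll(item.get(c, item[" "]), N - 1 - j)
        acc += gram
    return np.where(acc >= 0, 1, -1)   # binarize the bundled sum

def classify(query, prototypes):
    """Pick the language whose prototype is nearest in Hamming distance."""
    return min(prototypes, key=lambda lang: np.sum(prototypes[lang] != query))

# Training: one text sample per language yields one prototype hypervector.
prototypes = {
    "en": encode("the quick brown fox jumps over the lazy dog"),
    "nl": encode("de snelle bruine vos springt over de luie hond"),
}
print(classify(encode("the dog jumps over the fox"), prototypes))  # likely "en"
```

Because information is spread roughly uniformly across thousands of dimensions, sticking a large fraction of hypervector bits at 0 or 1 degrades Hamming distances only gradually; this holographic property underlies the resilient operation the abstract reports (98% accuracy despite 78% of bits stuck in hardware).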
- Subjects :
- Carbon nanotubes
- Random access memory
Details
- Language :
- English
- ISSN :
- 0018-9200
- Volume :
- 53
- Issue :
- 11
- Database :
- Complementary Index
- Journal :
- IEEE Journal of Solid-State Circuits
- Publication Type :
- Academic Journal
- Accession number :
- 132807408
- Full Text :
- https://doi.org/10.1109/JSSC.2018.2870560