
Comparing the performance of Hebbian against backpropagation learning using convolutional neural networks.

Authors :
Lagani, Gabriele
Falchi, Fabrizio
Gennaro, Claudio
Amato, Giuseppe
Source :
Neural Computing & Applications; 4/15/2022, Vol. 34 Issue 8, p6503-6519, 17p
Publication Year :
2022

Abstract

In this paper, we investigate Hebbian learning strategies applied to Convolutional Neural Network (CNN) training. We consider two unsupervised learning approaches: Hebbian Winner-Takes-All (HWTA) and Hebbian Principal Component Analysis (HPCA). The Hebbian learning rules are used to train the layers of a CNN in order to extract features that are then used for classification, without requiring backpropagation (backprop). Experimental comparisons are made with state-of-the-art unsupervised (but backprop-based) Variational Auto-Encoder (VAE) training. For completeness, we consider two supervised Hebbian learning variants (Supervised Hebbian Classifiers—SHC, and Contrastive Hebbian Learning—CHL) for training the final classification layer, which are compared to Stochastic Gradient Descent training. We also investigate hybrid learning methodologies, where some network layers are trained following the Hebbian approach and others are trained by backprop. We tested our approaches on the MNIST, CIFAR10, and CIFAR100 datasets. Our results suggest that Hebbian learning is generally suitable for training early feature extraction layers, or for retraining higher network layers in fewer training epochs than backprop. Moreover, our experiments show that Hebbian learning outperforms VAE training, with HPCA performing generally better than HWTA. [ABSTRACT FROM AUTHOR]
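The Hebbian Winner-Takes-All idea mentioned in the abstract can be illustrated with a minimal sketch: units compete on an input, and only the winning unit moves its weight vector toward that input. This is an illustrative toy implementation for a single fully connected layer, not the authors' actual method; the function name, learning rate, and weight normalization step are assumptions for the example.

```python
import numpy as np

def hwta_update(W, x, lr=0.01):
    """One Hebbian Winner-Takes-All step (illustrative sketch):
    only the most active unit adapts, pulling its weight vector
    toward the input and renormalizing to keep weights bounded."""
    y = W @ x                                # unit activations
    winner = np.argmax(y)                    # competition: single winner
    W[winner] += lr * (x - W[winner])        # Hebbian pull toward input
    W[winner] /= np.linalg.norm(W[winner])   # renormalize winner's weights
    return W

# Toy usage: 4 competing units over an 8-dimensional input.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))
W /= np.linalg.norm(W, axis=1, keepdims=True)  # start with unit-norm rows
x = rng.normal(size=8)
W = hwta_update(W, x)
```

Repeated over many inputs, such competitive updates make each unit's weight vector a prototype of a cluster of inputs, which is why the extracted features can feed a downstream classifier without backprop through those layers.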

Details

Language :
English
ISSN :
0941-0643
Volume :
34
Issue :
8
Database :
Complementary Index
Journal :
Neural Computing & Applications
Publication Type :
Academic Journal
Accession number :
156934557
Full Text :
https://doi.org/10.1007/s00521-021-06701-4