Effective Surrogate Gradient Learning With High-Order Information Bottleneck for Spike-Based Machine Intelligence.

Authors :
Yang S
Chen B
Source :
IEEE transactions on neural networks and learning systems [IEEE Trans Neural Netw Learn Syst] 2023 Nov 22; Vol. PP. Date of Electronic Publication: 2023 Nov 22.
Publication Year :
2023
Publisher :
Ahead of Print

Abstract

Brain-inspired computing presents a promising approach to advancing artificial general intelligence (AGI). Spiking neural networks (SNNs), one of its most critical components, have demonstrated advantages for AGI such as low power consumption. Training SNNs that simultaneously achieve high generalization ability, high robustness, and low power consumption is a significant challenge for the development and application of spike-based machine intelligence. In this research, we present a novel and flexible learning framework, termed high-order spike-based information bottleneck (HOSIB), that leverages the surrogate gradient technique. The HOSIB framework, in its second-order and third-order formulations, i.e., the second-order information bottleneck (SOIB) and the third-order information bottleneck (TOIB), comprehensively explores the common latent architecture and the intrinsic spike-based information while discarding superfluous information in the data, which improves the generalization capability and robustness of SNN models. Specifically, HOSIB relies on the information bottleneck (IB) principle to promote sparse spike-based information representation and to flexibly balance its exploitation and loss. Extensive classification experiments empirically demonstrate the promising generalization ability of HOSIB. Furthermore, we apply the SOIB and TOIB algorithms in deep spiking convolutional networks to demonstrate their improved robustness under various categories of noise. The experimental results show that the HOSIB framework, especially TOIB, achieves better generalization ability, robustness, and power efficiency than current representative approaches.

Details

Language :
English
ISSN :
2162-2388
Volume :
PP
Database :
MEDLINE
Journal :
IEEE transactions on neural networks and learning systems
Publication Type :
Academic Journal
Accession number :
37991917
Full Text :
https://doi.org/10.1109/TNNLS.2023.3329525