1. Spiking neural network with local plasticity and sparse connectivity for audio classification
- Author
- Rybka, Roman Borisovich, Vlasov, Danila Sergeevich, Manzhurov, Alexander Igorevich, Serenko, Alexey Vyacheslavovich, and Sboev, Alexander Georgievich
- Subjects
spiking neural network, stdp, sparse connectivity, free spoken digits dataset, audio classification, Physics, QC1-999
- Abstract
Purpose. To study the feasibility of a data classification method based on a spiking neural network that has a low number of connections and is trained with local plasticity rules such as Spike-Timing-Dependent Plasticity (STDP). Methods. As the base architecture we use a spiking neural network comprising an input layer and layers of excitatory and inhibitory Leaky Integrate-and-Fire neurons. Various options for organizing connections in this network are explored. We propose a method of organizing connectivity between layers of neurons in which synaptic connections are formed with a probability computed from the spatial arrangement of the neurons in the layers. Restricting the connectivity area in this way yields a higher sparsity of connections in the overall network. Data are encoded into spike trains with frequency-based (rate) coding, and logistic regression is used for decoding. Results. Based on the proposed method of organizing connections, a set of spiking neural network architectures with different connectivity coefficients for the different layers of the original network was implemented. The resulting architectures were evaluated on the Free Spoken Digits dataset, which consists of 3000 audio recordings covering the 10 digit classes from 0 to 9. Conclusion. The proposed method of organizing connections for the selected spiking neural network reduces the number of connections by up to 60% compared to a fully connected architecture. At the same time, classification accuracy does not deteriorate, reaching 0.92 to 0.95 by the F1 metric, which matches standard support vector machine, k-nearest neighbors, and random forest classifiers. The source code for this article is publicly available at https://github.com/sag111/Sparse-WTA-SNN.
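The abstract describes forming synaptic connections with a probability derived from the spatial arrangement of neurons in adjacent layers. The paper's exact probability function is not given in the abstract, so the sketch below is only an illustration of the general idea, assuming neurons laid out on a normalized 1-D axis and a Gaussian decay of connection probability with inter-neuron distance; the function name, layer sizes, and the `sigma` parameter are hypothetical.

```python
import numpy as np

def distance_based_connections(n_pre, n_post, sigma=0.1, seed=0):
    """Connect two layers with probability decaying with distance.

    Neurons of each layer are placed at evenly spaced positions in
    [0, 1]; the probability of a pre->post connection is a Gaussian of
    the distance between the two positions (an assumed kernel, not
    necessarily the one used in the paper). Because distant pairs are
    rarely connected, the resulting boolean connectivity mask is sparse.
    """
    rng = np.random.default_rng(seed)
    pre_pos = np.linspace(0.0, 1.0, n_pre)
    post_pos = np.linspace(0.0, 1.0, n_post)
    # Pairwise distances between pre- and post-synaptic neuron positions.
    d = np.abs(pre_pos[:, None] - post_pos[None, :])
    # Connection probability decays with distance; p = 1 at d = 0.
    p = np.exp(-(d ** 2) / (2.0 * sigma ** 2))
    # Sample the actual connections from the probability matrix.
    mask = rng.random((n_pre, n_post)) < p
    return mask

mask = distance_based_connections(100, 100, sigma=0.1)
sparsity = 1.0 - mask.mean()
print(f"fraction of absent connections: {sparsity:.2f}")
```

A smaller `sigma` shrinks the connectivity area around each neuron and therefore increases sparsity, which is the trade-off the article studies against classification accuracy.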
- Published
- 2024