12 results for "Zhang, Malu"
Search Results
2. The maximum points-based supervised learning rule for spiking neural networks.
- Author
- Xie, Xiurui, Liu, Guisong, Cai, Qing, Qu, Hong, and Zhang, Malu
- Published
- 2019
- Full Text
- View/download PDF
3. Federated learning for spiking neural networks by hint-layer knowledge distillation.
- Author
- Xie, Xiurui, Feng, Jingxuan, Liu, Guisong, Zhan, Qiugang, Liu, Zhetong, and Zhang, Malu
- Subjects
- ARTIFICIAL neural networks, FEDERATED learning, KNOWLEDGE representation (Information theory), DISTILLATION
- Abstract
In recent years, research on federated Spiking Neural Networks (SNNs) has attracted increasing attention because of their advantages in low power consumption and privacy protection. However, existing federated SNN research relies primarily on the Federated Averaging (FedAvg) strategy, transferring full network parameters between the server and clients and incurring substantial communication costs. To address this issue, we propose a Hint-layer Distillation-based Spiking Federated Learning (HDSFL) framework that reduces the communication cost by transferring knowledge and losslessly compressing the spiking tensor. To compensate for the information loss caused by the binary representation of spikes in knowledge distillation, we introduce hint-layer information rather than just the soft-label distribution. To aggregate knowledge effectively across clients on the server, we perform weighted knowledge aggregation on the spiking knowledge representation based on local performance. Experimental results on four classical and large-scale benchmarks show that our method reduces the communication cost by about 1–2 orders of magnitude compared to conventional methods while achieving comparable accuracy. On the CIFAR-10 dataset in particular, our method matches the accuracy of FedAvg using only 20% of its total communication, and consumes only 5.8% of FedAvg's communication in a single round. • We propose a new federated SNN training framework (HDSFL) that reduces communication costs by about 1–2 orders of magnitude without sacrificing accuracy. • We design a distillation loss function built on hint-layer knowledge distillation. • We propose a new federated knowledge aggregation strategy based on the confidence of each client. • We design a spike tensor compression strategy for spike features. [ABSTRACT FROM AUTHOR] A bit-packing sketch of the spike-tensor compression idea follows this entry.
- Published
- 2024
- Full Text
- View/download PDF
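The abstract above mentions losslessly compressing binary spike tensors to cut communication. As a rough illustration of that general idea (not the paper's actual codec), a {0, 1} spike tensor can be bit-packed for an immediate 8x reduction over a byte-per-spike layout; the tensor shape and firing rate below are hypothetical.

```python
import numpy as np

# A binary spike tensor: (time steps, batch, neurons), values in {0, 1}.
spikes = (np.random.rand(8, 32, 256) < 0.1).astype(np.uint8)

def pack_spikes(spikes: np.ndarray):
    """Losslessly pack a binary spike tensor into bits (8x smaller than uint8)."""
    return np.packbits(spikes.ravel()), spikes.shape

def unpack_spikes(packed: np.ndarray, shape):
    """Recover the original binary tensor from the packed bits."""
    n = int(np.prod(shape))
    return np.unpackbits(packed, count=n).reshape(shape)

packed, shape = pack_spikes(spikes)
restored = unpack_spikes(packed, shape)
assert np.array_equal(spikes, restored)   # lossless round trip
print(packed.nbytes / spikes.nbytes)      # 0.125: an 8x reduction
```

Because spikes are sparse as well as binary, an entropy coder on top of bit-packing could push further; the sketch shows only the lossless-by-construction baseline.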
4. Supervised learning in spiking neural networks with synaptic delay-weight plasticity.
- Author
- Zhang, Malu, Wu, Jibin, Belatreche, Ammar, Pan, Zihan, Xie, Xiurui, Chua, Yansong, Li, Guoqi, Qu, Hong, and Li, Haizhou
- Subjects
- SUPERVISED learning, NEUROPLASTICITY, ACTION potentials, ARTIFICIAL neural networks, OPEN-ended questions, TEMPORAL databases
- Abstract
Spiking neurons encode information through their temporal spiking patterns. Although precise spike-timing-based encoding schemes have long been recognised, the exact mechanism that underlies the learning of such precise spike timing in the brain remains an open question. Most existing learning methods for spiking neurons are based on synaptic weight adjustment. However, biological evidence suggests that synaptic delays can also be modulated to play an important role in the learning process. This paper investigates the viability of integrating synaptic delay plasticity into supervised learning and proposes a novel learning method, referred to as synaptic delay-weight plasticity, that adjusts both the synaptic delays and the weights of the learning neurons to make them fire precisely timed spikes. The Remote Supervised Method (ReSuMe) and the Perceptron-Based Spiking Neuron Learning Rule (PBSNLR), two representative supervised learning methods, are studied to illustrate how synaptic delay-weight plasticity works. The performance of the proposed learning method is thoroughly evaluated on synthetic data and further demonstrated on real-world classification tasks. The experiments show that the synaptic delay-weight learning method outperforms traditional synaptic weight learning methods in many ways. [ABSTRACT FROM AUTHOR] A toy delay-and-weight update sketch follows this entry.
- Published
- 2020
- Full Text
- View/download PDF
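As a toy sketch of the delay-weight idea in the entry above, both synaptic weights and per-synapse delays can be nudged so the membrane potential reaches threshold at a target time. The exponential PSP kernel, learning rates, and update heuristics here are our own illustrative assumptions, not the paper's ReSuMe/PBSNLR-based formulation.

```python
import numpy as np

TAU = 10.0  # PSP decay constant (ms); illustrative value

def psp(t, t_spike, delay):
    """Exponentially decaying postsynaptic potential of a delayed input spike."""
    s = t - (t_spike + delay)
    return np.where(s >= 0, np.exp(-s / TAU), 0.0)

def membrane_potential(t, in_spikes, weights, delays):
    """Sum of delayed, weighted PSPs at time t."""
    return sum(w * psp(t, ts, d) for w, ts, d in zip(weights, in_spikes, delays))

# One output neuron, three input spikes; we want threshold crossing at t_target.
in_spikes = np.array([2.0, 5.0, 9.0])   # input spike times (ms)
weights   = np.array([0.5, 0.5, 0.5])
delays    = np.array([1.0, 1.0, 1.0])
t_target, theta, lr_w, lr_d = 12.0, 1.0, 0.1, 0.5

for _ in range(200):
    v = membrane_potential(t_target, in_spikes, weights, delays)
    err = theta - v                      # drive v(t_target) up to the threshold
    # Weight update: strengthen each synapse in proportion to its PSP.
    weights += lr_w * err * psp(t_target, in_spikes, delays)
    # Delay update: shift each spike's arrival slightly toward the target time.
    delays += lr_d * err * np.sign(t_target - (in_spikes + delays)) * 0.01

print(membrane_potential(t_target, in_spikes, weights, delays))  # ~= theta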
5. An Efficient Threshold-Driven Aggregate-Label Learning Algorithm for Multimodal Information Processing.
- Author
- Zhang, Malu, Luo, Xiaoling, Chen, Yi, Wu, Jibin, Belatreche, Ammar, Pan, Zihan, Qu, Hong, and Li, Haizhou
- Abstract
The aggregate-label learning paradigm tackles the long-standing temporal credit assignment (TCA) problem in neuroscience and machine learning, enabling spiking neural networks to learn multimodal sensory clues with delayed feedback signals. However, existing aggregate-label learning algorithms work only for single spiking neurons and suffer from low learning efficiency, which limits their real-world applicability. To address these limitations, in this article we first propose an efficient threshold-driven plasticity algorithm for spiking neurons, namely ETDP. It enables spiking neurons to generate the desired number of spikes matching the magnitude of delayed feedback signals and to learn useful multimodal sensory clues embedded within spontaneous spiking activities. Furthermore, we extend the ETDP algorithm to support multi-layer spiking neural networks (SNNs), which significantly improves the applicability of aggregate-label learning algorithms. We also validate the multi-layer ETDP learning algorithm in a multimodal computation framework for audio-visual pattern recognition. Experimental results on both synthetic and realistic datasets show significant improvements in learning efficiency and model capacity over existing aggregate-label learning algorithms. ETDP therefore provides many opportunities for solving real-world multimodal pattern recognition tasks with spiking neural networks. [ABSTRACT FROM AUTHOR] A threshold-driven sketch follows this entry.
- Published
- 2020
- Full Text
- View/download PDF
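To make the aggregate-label setting concrete, here is a minimal threshold-driven sketch in the spirit of the entry above: the neuron is trained until its output spike count matches a delayed scalar label, by potentiating inputs at the closest sub-threshold approach or depressing them at the weakest emitted spike. The LIF dynamics, constants, and selection heuristic are illustrative assumptions, not the ETDP rule itself.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 100, 50                                   # time steps, input neurons
x = (rng.random((T, N)) < 0.05).astype(float)    # fixed random input spike trains
w = rng.normal(0.0, 0.1, N)
theta, lr, target = 1.0, 0.05, 5                 # threshold, step, desired count

def run(w):
    """One leaky integrate-and-fire pass; returns spike times and potential trace."""
    v, spikes, trace = 0.0, [], []
    for t in range(T):
        v = 0.9 * v + x[t] @ w                   # leaky integration of input
        trace.append(v)
        if v >= theta:                           # fire, then reset
            spikes.append(t)
            v = 0.0
    return spikes, np.array(trace)

for _ in range(500):
    spikes, trace = run(w)
    diff = target - len(spikes)                  # aggregate-label error (count only)
    if diff == 0:
        break
    if diff > 0:                                 # too few spikes: potentiate at the
        sub = trace.copy()                       # closest sub-threshold approach
        sub[spikes] = -np.inf
        t_star = int(np.argmax(sub))
    else:                                        # too many: depress the weakest spike
        t_star = spikes[int(np.argmin(trace[spikes]))]
    w += lr * np.sign(diff) * x[t_star]

print(len(run(w)[0]), "spikes; target", target)
```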
6. A Highly Effective and Robust Membrane Potential-Driven Supervised Learning Method for Spiking Neurons.
- Author
- Zhang, Malu, Qu, Hong, Belatreche, Ammar, Chen, Yi, and Yi, Zhang
- Subjects
- SUPERVISED learning, ARTIFICIAL neural networks
- Abstract
Spiking neurons are becoming increasingly popular owing to their biological plausibility and promising computational properties. Unlike traditional rate-based neural models, spiking neurons encode information in the temporal patterns of the transmitted spike trains, which makes them more suitable for processing spatiotemporal information. One of the fundamental computations of spiking neurons is to transform streams of input spike trains into precisely timed firing activity. However, the existing learning methods used to realize such computation often result in relatively low accuracy and poor robustness to noise. To address these limitations, we propose a novel, highly effective and robust membrane potential-driven supervised learning (MemPo-Learn) method, which enables trained neurons to generate desired spike trains with higher precision, higher efficiency, and better noise robustness than current state-of-the-art spiking neuron learning methods. While traditional spike-driven learning methods use an error function based on the difference between the actual and desired output spike trains, the proposed MemPo-Learn method employs an error function based on the difference between the output neuron's membrane potential and its firing threshold. The efficiency of the proposed learning method is further improved through the introduction of an adaptive strategy, called the skip-scan training strategy, that selectively identifies the time steps at which to apply weight adjustment. The proposed strategy enables the MemPo-Learn method to effectively and efficiently learn the desired output spike train even when much smaller time steps are used. In addition, the learning rule of MemPo-Learn is further improved to help mitigate the impact of input noise on the timing accuracy and reliability of the neuron's firing dynamics. The proposed learning method is thoroughly evaluated on synthetic data and further demonstrated on real-world classification tasks. Experimental results show that the proposed method can achieve high learning accuracy with a significant improvement in learning time and better robustness to different types of noise. [ABSTRACT FROM AUTHOR] One way to write the membrane potential-driven error follows this entry.
- Published
- 2019
- Full Text
- View/download PDF
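The membrane potential-driven error described above can be formalized as a per-time-step quantity that is nonzero only where the potential sits on the wrong side of the threshold; the notation below is ours, a plausible reading rather than the paper's exact definition.

```latex
% e(t) penalizes missed desired spikes and spurious actual spikes.
e(t) =
\begin{cases}
\theta - v(t), & t \in T^{d},\ v(t) < \theta \ \text{(desired spike missed)} \\
\theta - v(t), & t \notin T^{d},\ v(t) \ge \theta \ \text{(spurious spike)} \\
0, & \text{otherwise,}
\end{cases}
\qquad
\Delta w_i = \eta \sum_{t} e(t)\, \varepsilon_i(t)
```

Here $T^{d}$ is the set of desired firing times, $\theta$ the firing threshold, $v(t)$ the membrane potential, $\varepsilon_i(t)$ the accumulated postsynaptic potential contributed by input $i$, and $\eta$ a learning rate. By contrast, spike-driven rules such as ReSuMe define the error on the timing difference between actual and desired spikes.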
7. A universal ANN-to-SNN framework for achieving high accuracy and low latency deep Spiking Neural Networks.
- Author
- Wang, Yuchen, Liu, Hanwen, Zhang, Malu, Luo, Xiaoling, and Qu, Hong
- Subjects
- ARTIFICIAL neural networks, OBJECT recognition (Computer vision), ACTION potentials, BIOLOGICAL models
- Abstract
Spiking Neural Networks (SNNs) have become one of the most prominent next-generation computational models owing to their biological plausibility, low power consumption, and potential for neuromorphic hardware implementation. Among the various methods for obtaining usable SNNs, converting Artificial Neural Networks (ANNs) into SNNs is the most cost-effective approach. The early challenges in ANN-to-SNN conversion revolved around the susceptibility of converted SNNs to conversion errors. Some recent endeavors have attempted to mitigate these conversion errors by altering the original ANNs. Despite their ability to enhance the accuracy of SNNs, these methods lack generality and cannot be directly applied to convert the majority of existing ANNs. In this paper, we present a framework named DNISNM for converting ANNs to SNNs, aiming to address conversion errors arising from differences in the discreteness and asynchrony of network transmission between ANNs and SNNs. DNISNM consists of two mechanisms, Data-based Neuronal Initialization (DNI) and Signed Neuron with Memory (SNM), designed to address errors stemming from discreteness and asynchrony disparities, respectively. The framework requires no additional modifications to the original ANN and yields SNNs with improved accuracy, simultaneously ensuring universality, high precision, and low inference latency. We verify the framework experimentally on challenging object recognition datasets, including CIFAR10, CIFAR100, and ImageNet-1k. Experimental results show that SNNs converted by our framework achieve very high accuracy even at extremely low latency. [ABSTRACT FROM AUTHOR] A generic rate-coding conversion sketch follows this entry.
- Published
- 2024
- Full Text
- View/download PDF
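For context on the conversion pipeline itself, here is a generic rate-coding sketch: integrate-and-fire neurons with soft reset approximate ReLU activations, and the output layer reads out its accumulated potential. This shows only the baseline conversion scheme; the paper's DNI initialization and SNM neuron are not reproduced here, and the tiny random network stands in for a pretrained ANN.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny stand-in for a pretrained ReLU ANN (weights random for illustration).
W1 = rng.normal(0, 0.5, (8, 4))
W2 = rng.normal(0, 0.5, (2, 8))
x = rng.random(4)
ann_out = W2 @ np.maximum(W1 @ x, 0.0)

def snn_forward(x, T=2000):
    """Rate-coding conversion: IF neurons with soft reset approximate ReLU rates."""
    v1, v2 = np.zeros(8), np.zeros(2)
    for _ in range(T):
        v1 += W1 @ x                     # constant input current each step
        s1 = (v1 >= 1.0).astype(float)   # hidden-layer spikes
        v1 -= s1                         # soft reset keeps the sub-threshold residue
        v2 += W2 @ s1                    # output layer accumulates, never spikes
    return v2 / T                        # mean input current ~= ANN output

print(ann_out)
print(snn_forward(x))                    # approaches ann_out as T grows
```

The residual error of this baseline shrinks as T grows, which is exactly the accuracy/latency trade-off that conversion methods like the one above aim to improve.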
8. Efficient spiking neural network design via neural architecture search.
- Author
- Yan, Jiaqi, Liu, Qianhui, Zhang, Malu, Feng, Lang, Ma, De, Li, Haizhou, and Pan, Gang
- Subjects
- ARTIFICIAL neural networks, MACHINE learning, ACTION potentials, DEEP learning, BUDGET, ENERGY consumption
- Abstract
Spiking neural networks (SNNs) are brain-inspired models that transmit information using discrete and sparse spikes, and are therefore energy-efficient. Recent advances in learning algorithms have greatly improved SNN performance through the automation of feature engineering. While the choice of neural architecture plays a significant role in deep learning, current SNN architectures are mainly designed manually, which is a time-consuming and error-prone process. In this paper, we propose a spiking neural architecture search (NAS) method that can automatically find efficient SNNs. To tackle the long search times SNNs face when using NAS, the proposed method encodes candidate architectures in a branchless spiking supernet, which significantly reduces the computation required in the search process. Considering that real-world tasks prefer efficient networks with optimal accuracy under a limited computational budget, we propose a Synaptic Operation (SynOps)-aware optimization to automatically find the computationally efficient subspace of the supernet. Experimental results show that, in less search time, our proposed NAS can find SNNs with higher accuracy and lower computational cost than state-of-the-art SNNs. We also conduct experiments to validate the search process and the trade-off between accuracy and computational cost. [ABSTRACT FROM AUTHOR] A SynOps-budget sketch follows this entry.
- Published
- 2024
- Full Text
- View/download PDF
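The SynOps budget in the entry above can be made concrete with a small sketch: a candidate's cost is the number of spikes each layer emits times that layer's fan-out, and the search keeps the most accurate candidate that fits the budget. The two candidates, their accuracies, spike counts, and the budget below are all made-up numbers.

```python
import numpy as np

def synops(spike_counts, fan_outs):
    """Synaptic operations: every spike triggers one update per outgoing synapse."""
    return sum(float(np.sum(s)) * f for s, f in zip(spike_counts, fan_outs))

# Hypothetical candidates sampled from a supernet:
# (estimated accuracy, per-layer output spike counts, per-layer fan-out).
candidates = [
    (0.83, [np.array([30, 12, 50]), np.array([10, 22])], [64, 10]),
    (0.85, [np.array([90, 70, 80]), np.array([60, 75])], [64, 10]),
]
budget = 7000.0

# SynOps-aware selection: highest accuracy among candidates within the budget.
scored = [(acc, synops(sc, fo)) for acc, sc, fo in candidates]
best = max((c for c in scored if c[1] <= budget), default=None)
print(scored, "->", best)   # the cheaper 0.83 candidate wins under this budget
```

Note how the slightly more accurate candidate is rejected purely on cost; that is the accuracy/SynOps trade-off the abstract refers to.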
9. A new recursive least squares-based learning algorithm for spiking neurons.
- Author
- Zhang, Yun, Qu, Hong, Luo, Xiaoling, Chen, Yi, Wang, Yuchen, Zhang, Malu, and Li, Zefang
- Subjects
- MACHINE learning, ACTION potentials, SUPERVISED learning, ERROR functions, SPATIOTEMPORAL processes
- Abstract
Spiking neural networks (SNNs) are regarded as effective models for processing spatio-temporal information. However, the inherent complexity of their temporal coding makes devising an effective supervised learning algorithm an arduous task that continues to challenge researchers in this area. In this paper, we propose a Recursive Least Squares-Based Learning Rule (RLSBLR) for SNNs to generate desired spatio-temporal spike trains. During the learning process, the weight update is driven by a cost function defined as the difference between the membrane potential and the firing threshold. The amount of weight modification depends not only on the current error function but also on previous error functions evaluated with the current weights. To further improve learning performance, we integrate modified synaptic delay learning into the proposed RLSBLR. We conduct experiments under different settings, such as spike train lengths, numbers of inputs, firing rates, noise levels and learning parameters, to thoroughly investigate the performance of this learning algorithm. The proposed RLSBLR is compared with the competitive Perceptron-Based Spiking Neuron Learning Rule (PBSNLR) and Remote Supervised Method (ReSuMe) algorithms. Experimental results demonstrate that the proposed RLSBLR achieves higher learning accuracy, higher efficiency and better robustness against different types of noise. In addition, we apply the proposed RLSBLR to the open-source TIDIGITS database, and the results show that our algorithm performs well in practical applications. [ABSTRACT FROM AUTHOR] An RLS update sketch follows this entry.
- Published
- 2021
- Full Text
- View/download PDF
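As a minimal sketch of the recursive least squares machinery in the entry above: at each selected time point the weights are updated toward making the membrane potential hit the firing threshold, with the gain computed from a running inverse-correlation estimate. The PSP features, forgetting factor, and target scheme are illustrative assumptions, not the exact RLSBLR cost.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 20                                 # synapses
w = np.zeros(N)
P = np.eye(N) * 100.0                  # running inverse-correlation estimate
lam = 0.99                             # forgetting factor
theta = 1.0                            # firing threshold

def rls_step(w, P, x, d):
    """One recursive least squares update pushing v = w.x toward target d."""
    Px = P @ x
    k = Px / (lam + x @ Px)            # gain vector
    e = d - w @ x                      # a-priori error (threshold minus potential)
    w = w + k * e
    P = (P - np.outer(k, Px)) / lam
    return w, P

for _ in range(300):
    x = rng.random(N)                  # accumulated PSPs at a desired spike time
    w, P = rls_step(w, P, x, theta)    # drive the potential to the threshold

x_new = rng.random(N)
print(w @ x_new)                       # close to theta on similar inputs
```

Because each step folds the error into P, earlier errors keep influencing later updates under the current weights, which is the property the abstract highlights over purely instantaneous rules.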
10. A two-stage spiking meta-learning method for few-shot classification.
- Author
- Zhan, Qiugang, Wang, Bingchao, Jiang, Anning, Xie, Xiurui, Zhang, Malu, and Liu, Guisong
- Subjects
- ARTIFICIAL neural networks, MACHINE learning, ENERGY consumption, ELECTRICITY pricing, CLASSIFICATION
- Abstract
In recent years, deep spiking neural networks (SNNs) have demonstrated promising performance across various applications owing to their low-power characteristics. Research on SNN meta-learning has enabled SNNs to reduce both labeling cost and computational power consumption in few-shot classification tasks. However, current SNN meta-learning methods still lag behind traditional artificial neural networks (ANNs) in accuracy. In this work, we explore a two-stage metric-based SNN meta-learning framework that achieves the highest accuracy among SNN meta-learning methods. The framework comprises a pre-training stage and a meta-training stage. During pre-training, a classification embedding SNN model (CESM) is trained to extract image features. Subsequently, in the meta-training stage, the meta embedding SNN model (MESM) employs the centered kernel alignment (CKA) method to measure the similarity between these learned features for meta-learning. We conduct extensive experiments on the Omniglot, tieredImageNet, and miniImageNet datasets, evaluating both the CESM and MESM models. Experimental results demonstrate that the proposed framework improves performance by 5% on average over previous SNN meta-learning approaches. The proposed method surpasses early classical ANN methods and further closes the gap with state-of-the-art ANN methods. • We explore an effective two-stage SNN meta-learning framework. • A CKA-based non-parametric classifier is proposed to better capture spiking temporal feature similarities. • The framework outperforms existing SNN meta-learning methods and has lower energy consumption than ANNs. [ABSTRACT FROM AUTHOR] A linear-CKA sketch follows this entry.
- Published
- 2024
- Full Text
- View/download PDF
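The CKA similarity used in the meta-training stage above has a compact linear form. Below is a standard linear-CKA sketch applied to hypothetical spiking features unrolled over time steps; treating each time step as a sample and classifying a query by its most CKA-similar class prototype is our illustrative reading, not the paper's exact pipeline.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between feature matrices.

    X: (n, d1), Y: (n, d2) with n matched samples. Returns similarity in [0, 1].
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

rng = np.random.default_rng(3)
T, D, C = 10, 64, 5                    # time steps, feature width, classes
query = rng.random((T, D))             # query embedding over time steps
protos = {c: rng.random((T, D)) for c in range(C)}   # class prototype embeddings

# Non-parametric few-shot classification: pick the class whose prototype's
# temporal features align best with the query's.
pred = max(protos, key=lambda c: linear_cka(query, protos[c]))
print(pred)
```

Because CKA is invariant to orthogonal transformations and isotropic scaling of the features, it compares representations without any trained classifier head, which is what makes it attractive as a non-parametric metric here.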
11. Efficient training of supervised spiking neural networks via the normalized perceptron based learning rule.
- Author
- Xie, Xiurui, Qu, Hong, Liu, Guisong, and Zhang, Malu
- Subjects
- ARTIFICIAL neural networks, PERCEPTRONS, PATTERN perception, ALGORITHMS, ELECTRIC potential, ROBUST statistics
- Abstract
Spiking neural networks (SNNs) are the third generation of artificial neural networks and have made great achievements in the field of pattern recognition. However, existing supervised training methods for SNNs are, in most cases, not efficient enough to meet real-time requirements. To address this issue, this paper proposes the normalized perceptron-based learning rule (NPBLR) for the supervised training of multi-layer SNNs. Different from traditional methods, our algorithm trains only the selected misclassified time points and the target ones, employing a perceptron-based neuron model. Furthermore, the weight modification in our algorithm is normalized by a voltage-based function, which is more efficient than the traditional time-based method because the firing time is calculated from the voltage value. Unlike traditional multi-layer algorithms, which ignore the temporal accumulation of spikes, our algorithm defines the spiking activity of a postsynaptic neuron as the rate accumulation function of all presynaptic neurons within a specific time-frame. With these strategies, our algorithm overcomes several difficulties in the training of SNNs, e.g., inefficiency and the no-fire problem. Comprehensive simulations are conducted on both single- and multi-layer networks to investigate the learning performance of our algorithm; the results demonstrate that it possesses higher learning efficiency and stronger parameter robustness than traditional algorithms. [ABSTRACT FROM AUTHOR] A normalized-update sketch follows this entry.
- Published
- 2017
- Full Text
- View/download PDF
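A minimal sketch of the normalized perceptron idea in the entry above: only misclassified time points are trained, and each step is normalized by a voltage-based factor so the potential is moved just past the threshold. The PSP matrix, margin, and constants are illustrative assumptions, and the sketch assumes the target spike pattern is realizable.

```python
import numpy as np

rng = np.random.default_rng(4)
T, N = 40, 40
psp = rng.random((T, N))               # accumulated PSP of each input per time step
w = rng.normal(0, 0.1, N)
theta, margin = 1.0, 0.05
desired = np.zeros(T, dtype=bool)
desired[[10, 25, 33]] = True           # target firing time points

for _ in range(1000):
    wrong = ((psp @ w) >= theta) != desired   # misclassified time points only
    if not wrong.any():
        break
    for t in np.flatnonzero(wrong):
        v_t = psp[t] @ w
        sign = 1.0 if desired[t] else -1.0
        # Normalized step: dividing by ||psp[t]||^2 moves v(t) exactly past
        # the threshold by `margin`, regardless of the input's magnitude.
        w += sign * (abs(theta - v_t) + margin) * psp[t] / (psp[t] @ psp[t])

print(np.flatnonzero((psp @ w) >= theta))     # ideally [10 25 33]
```

The normalization is the point of the sketch: an un-normalized perceptron step of fixed size can overshoot on strong inputs and barely move on weak ones, whereas the voltage-scaled step corrects each time point in one move.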
12. Bio-inspired Active Learning method in spiking neural network.
- Author
- Zhan, Qiugang, Liu, Guisong, Xie, Xiurui, Zhang, Malu, and Sun, Guolin
- Subjects
- ARTIFICIAL neural networks, ACTIVE learning
- Abstract
Spiking neural networks (SNNs) have recently gained considerable attention and achieved notable results because of their low-power advantages on neuromorphic hardware. However, training deep SNNs still requires a large amount of labeled data, which is expensive to obtain. To address this issue, we propose an effective Bio-inspired Active Learning (BAL) method to reduce the training cost of SNN models. Specifically, bio-inspired behavior patterns of spiking neurons are defined to represent the internal states of SNN models for active learning. Then, an active learning sample selection strategy is proposed by leveraging the divergence between empirical and generalization patterns in SNNs. By labeling the selected samples and adding them to training, the behavior patterns can be optimized to improve the performance of the neural network. Comprehensive experiments are conducted on the CIFAR-10, SVHN, and Fashion-MNIST datasets with various sample proportions. The experimental results demonstrate that the proposed BAL achieves state-of-the-art performance among active learning methods for SNNs. • BAL demonstrates the feasibility of active learning in spiking neural networks. • We propose neuron behavior patterns based on the inner states of spiking neurons. • Experiments are conducted to demonstrate the effectiveness of BAL. [ABSTRACT FROM AUTHOR] A divergence-based selection sketch follows this entry.
- Published
- 2023
- Full Text
- View/download PDF
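To illustrate divergence-driven sample selection in the spirit of the entry above, the sketch below scores unlabeled candidates by how far their normalized output "behavior pattern" diverges from the average pattern on the labeled pool, and labels the most divergent ones. The KL score, pattern definition, and all data are generic illustrative assumptions, not the paper's empirical/generalization pattern construction.

```python
import numpy as np

rng = np.random.default_rng(5)

def pattern(logits):
    """Normalize model outputs (e.g., spike counts) into probability patterns."""
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical outputs of one SNN on the labeled pool and on unlabeled candidates.
empirical = pattern(rng.random((200, 10)))
candidates = pattern(rng.random((1000, 10)))
mean_emp = empirical.mean(axis=0)      # average behavior pattern on labeled data

def kl(p, q, eps=1e-12):
    """Row-wise KL divergence KL(p || q)."""
    return np.sum(p * np.log((p + eps) / (q + eps)), axis=-1)

# Select the unlabeled samples whose behavior pattern diverges most from the
# empirical pattern; these are labeled and added to the training set.
scores = kl(candidates, mean_emp[None, :])
budget = 16
picked = np.argsort(scores)[-budget:]
print(picked)
```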