1. A regularization perspective based theoretical analysis for adversarial robustness of deep spiking neural networks.
- Authors
- Zhang, Hui; Cheng, Jian; Zhang, Jun; Liu, Hongyi; Wei, Zhihui
- Subjects
- *ARTIFICIAL neural networks; *POISSON processes; *SUM of squares; *MNEMONICS; *STOCHASTIC processes
- Abstract
Spiking Neural Networks (SNNs) have been recognized as the third generation of neural networks. Conventionally, an SNN can be converted from a pre-trained Artificial Neural Network (ANN) with less computation and memory than training from scratch, but these converted SNNs are vulnerable to adversarial attacks. Numerical experiments demonstrate that SNNs trained directly by optimizing the loss function are more adversarially robust, yet a theoretical analysis of the mechanism behind this robustness has been lacking. In this paper, we provide a theoretical explanation by analyzing the expected risk function. Starting from a model of the stochastic process introduced by the Poisson encoder, we prove that the expected risk contains a positive semidefinite regularizer. Perhaps surprisingly, this regularizer drives the gradients of the output with respect to the input closer to zero, resulting in inherent robustness against adversarial attacks. Extensive experiments on the CIFAR10 and CIFAR100 datasets support this view. For example, we find that the sum of squares of the input gradients of converted SNNs is 13 ∼ 160 times that of directly trained SNNs, and the smaller this sum of squares, the smaller the degradation of accuracy under adversarial attack. • The adversarial robustness of SNNs based on a rate encoder is explained mathematically. • A regularizer distinguishes the directly trained SNN from the converted SNN. • The coding length can also change the gradient with respect to the input, affecting robustness. • We compare more types of directly trained SNNs with converted SNNs. [ABSTRACT FROM AUTHOR]
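The Poisson (rate) encoder the abstract refers to can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses the common Bernoulli-per-timestep approximation of Poisson rate coding, and the function name and parameters are my own.

```python
import numpy as np

def poisson_encode(image, T=100, rng=None):
    """Rate-code pixel intensities in [0, 1] into a binary spike train of length T.

    At each of T timesteps, a pixel emits a spike with probability equal to
    its intensity, so the empirical firing rate over the train approximates
    the pixel value. (Bernoulli approximation of a Poisson rate encoder;
    illustrative only, not the paper's code.)
    """
    rng = np.random.default_rng(rng)
    image = np.asarray(image, dtype=float)
    # spikes[t] is one binary frame; firing probability per step = intensity
    return (rng.random((T,) + image.shape) < image).astype(np.uint8)

# Toy input: two "pixels" with intensities 0.2 and 0.9
x = np.array([0.2, 0.9])
spikes = poisson_encode(x, T=2000, rng=0)
rates = spikes.mean(axis=0)  # empirical firing rates, close to [0.2, 0.9]
```

Because the encoder is stochastic, each forward pass sees a different spike realization of the same input; the paper's analysis models this stochastic process to derive the regularizer in the expected risk.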
- Published
- 2023