
Analysis of Power-Oriented Fault Injection Attacks on Spiking Neural Networks

Authors:
Nagarajan, Karthikeyan
Li, Junde
Ensan, Sina Sayyah
Khan, Mohammad Nasim Imtiaz
Kannan, Sachhidh
Ghosh, Swaroop
Publication Year:
2022

Abstract

Spiking Neural Networks (SNNs) are quickly gaining traction as a viable alternative to Deep Neural Networks (DNNs). Compared to DNNs, SNNs are computationally more powerful and more energy efficient. While promising, SNNs contain security-sensitive assets (e.g., the neuron threshold voltage) and vulnerabilities (e.g., the sensitivity of classification accuracy to changes in neuron threshold voltage) that adversaries can exploit. We investigate global fault injection attacks that employ external power supplies and laser-induced local power glitches to corrupt crucial training parameters, such as the spike amplitude and the neuron's membrane threshold potential, in SNNs built from common analog neurons. We also evaluate the impact of power-based attacks on individual SNN layers, with coverage ranging from 0% (i.e., no attack) to 100% (i.e., the whole layer under attack). On digit classification tasks, we find that in the worst-case scenario classification accuracy is reduced by 85.65%. We further propose defenses: a robust current driver design that is immune to power-oriented attacks, and improved circuit sizing of neuron components that reduces or recovers the adversarial accuracy degradation at the cost of negligible area and 25% power overhead. We also present a dummy-neuron-based voltage fault injection detection system with 1% power and area overhead.

Comment: Design, Automation and Test in Europe Conference (DATE) 2022
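To illustrate why the membrane threshold potential is a security-sensitive asset, here is a minimal sketch (not from the paper; the model, parameter values, and function name are illustrative assumptions) of a discrete-time leaky integrate-and-fire neuron. Shifting the threshold, as a power glitch might, visibly changes the neuron's output spike train:

```python
# Hypothetical sketch: a discrete-time leaky integrate-and-fire (LIF)
# neuron. A fault that shifts the firing threshold alters the spike
# count, which is why threshold corruption degrades SNN accuracy.

def lif_spike_count(input_current, threshold, leak=0.9, v_reset=0.0):
    """Count output spikes for a sequence of input-current samples."""
    v = v_reset
    spikes = 0
    for i in input_current:
        v = leak * v + i      # leaky integration of the input current
        if v >= threshold:    # fire when the membrane potential crosses threshold
            spikes += 1
            v = v_reset       # reset the membrane potential after a spike
    return spikes

inputs = [0.3] * 100  # constant drive for 100 time steps

baseline = lif_spike_count(inputs, threshold=1.0)
glitched = lif_spike_count(inputs, threshold=2.0)  # threshold raised by a hypothetical fault
print(baseline, glitched)  # the glitched neuron fires noticeably fewer spikes
```

The same mechanism works in the other direction: lowering the threshold makes the neuron fire spuriously, so a glitch in either polarity perturbs the spike encoding that downstream layers depend on.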

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2204.04768
Document Type:
Working Paper