Defensive approximation: securing CNNs using approximate computing
- Source :
- 26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS'21, Apr 2021, Virtual, USA. pp. 990-1003, ⟨10.1145/3445814.3446747⟩
- Publication Year :
- 2021
- Publisher :
- ACM, 2021.
-
Abstract
- In the past few years, an increasing number of machine learning and deep learning architectures, such as Convolutional Neural Networks (CNNs), have been applied to a wide range of real-life problems. However, these architectures are vulnerable to adversarial attacks. In this paper, we propose for the first time to use hardware-supported approximate computing to improve the robustness of machine learning classifiers. We show that our approximate computing implementation achieves robustness across a wide range of attack scenarios. Specifically, for black-box and grey-box attack scenarios, we show that successful adversarial attacks against the exact classifier transfer poorly to the approximate implementation. Surprisingly, the robustness advantages also apply to white-box attacks, where the attacker has access to the internal implementation of the approximate classifier. We explain some of the possible reasons for this robustness through an analysis of the internal operation of the approximate implementation. Furthermore, our approximate computing model maintains the same level of classification accuracy, does not require retraining, and reduces the resource utilization and energy consumption of the CNN. We conducted extensive experiments with a set of strong adversarial attacks; we empirically show that the proposed implementation increases the robustness of a LeNet-5 and an AlexNet CNN by up to 99% and 87%, respectively, against strong grey-box adversarial attacks, along with up to 67% savings in energy consumption due to the simpler nature of the approximate logic. We also show that a white-box attack requires a remarkably higher noise budget to fool the approximate classifier, causing an average 4 dB degradation in the PSNR of the input image relative to the images that succeed in fooling the exact classifier.
- Subjects :
- FOS: Computer and information sciences
Computer Science - Machine Learning
Computer Science - Cryptography and Security
Computer systems organization
Network reliability
Computer science
Embedded systems
0211 other engineering and technologies
02 engineering and technology
Convolutional neural network
[INFO.INFO-AI]Computer Science [cs]/Artificial Intelligence [cs.AI]
Machine Learning (cs.LG)
Image (mathematics)
[INFO.INFO-NI]Computer Science [cs]/Networking and Internet Architecture [cs.NI]
[SPI]Engineering Sciences [physics]
Redundancy
Robustness (computer science)
Classifier (machine learning)
0202 electrical engineering, electronic engineering, information engineering
[INFO]Computer Science [cs]
021110 strategic, defence & security studies
business.industry
Embedded and cyber-physical systems
Deep learning
Robotics
Energy consumption
[SPI.TRON]Engineering Sciences [physics]/Electronics
020202 computer hardware & architecture
Dependable and fault-tolerant systems and networks
Range (mathematics)
Noise (video)
Artificial intelligence
Networks
business
[SPI.SIGNAL]Engineering Sciences [physics]/Signal and Image processing
Cryptography and Security (cs.CR)
Algorithm
Network properties
Details
- Database :
- OpenAIRE
- Journal :
- Proceedings of the 26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems
- Accession number :
- edsair.doi.dedup.....4a4479b95de917d413f00f551ac66f7a
- Full Text :
- https://doi.org/10.1145/3445814.3446747