
Defensive approximation: securing CNNs using approximate computing

Authors :
Mouna Baklouti
Mohamed Abid
Nael Abu-Ghazaleh
Khaled N. Khasawneh
Amira Guesmi
Tarek Frikha
Ihsen Alouani

Affiliations :
Université de Sfax - University of Sfax
Institut d’Électronique, de Microélectronique et de Nanotechnologie - UMR 8520 (IEMN) (Centrale Lille, Université de Lille, CNRS, Université Polytechnique Hauts-de-France (UPHF), JUNIA)
Institut d’Électronique, de Microélectronique et de Nanotechnologie - Département Opto-Acousto-Électronique - UMR 8520 (IEMN-DOAE)
COMmunications NUMériques - IEMN (COMNUM - IEMN)
INSA Institut National des Sciences Appliquées Hauts-de-France (INSA Hauts-de-France)
Université catholique de Lille (UCL)
George Mason University [Fairfax]
University of California [Riverside] (UC Riverside)

Partially supported by NSF grants CNS-1646641, CNS-1619322 and CNS-1955650
Source :
26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS'21), Apr 2021, Virtual, United States. pp. 990-1003, ⟨10.1145/3445814.3446747⟩
Publication Year :
2021
Publisher :
ACM, 2021.

Abstract

In the past few years, an increasing number of machine-learning and deep-learning architectures, such as Convolutional Neural Networks (CNNs), have been applied to solve a wide range of real-life problems. However, these architectures are vulnerable to adversarial attacks. In this paper, we propose for the first time to use hardware-supported approximate computing to improve the robustness of machine-learning classifiers. We show that our approximate computing implementation achieves robustness across a wide range of attack scenarios. Specifically, for black-box and grey-box attack scenarios, we show that successful adversarial attacks against the exact classifier transfer poorly to the approximate implementation. Surprisingly, the robustness advantages also apply to white-box attacks, where the attacker has access to the internal implementation of the approximate classifier. We explain some of the possible reasons for this robustness through an analysis of the internal operation of the approximate implementation. Furthermore, our approximate computing model maintains the same level of classification accuracy, does not require retraining, and reduces the resource utilization and energy consumption of the CNN. We conducted extensive experiments on a set of strong adversarial attacks and empirically show that the proposed implementation increases the robustness of LeNet-5 and AlexNet CNNs by up to 99% and 87%, respectively, against strong grey-box adversarial attacks, along with up to 67% savings in energy consumption due to the simpler nature of the approximate logic. We also show that a white-box attack requires a remarkably higher noise budget to fool the approximate classifier, causing an average 4 dB degradation of the PSNR of the input image relative to the images that succeed in fooling the exact classifier.

Comment: ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS 2021)
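To illustrate the general idea behind the abstract's defense, the sketch below simulates one common flavor of approximate computing, truncating low-order mantissa bits of float32 operands before multiplying, and compares an exact and an approximate dot product (a single neuron's pre-activation). This is only a hedged, illustrative toy: the paper uses hardware-supported approximate multipliers, and the `approx_mul` function, the `drop_bits` parameter, and the mantissa-truncation scheme here are assumptions chosen for demonstration, not the authors' actual design.

```python
import numpy as np

def approx_mul(a, b, drop_bits=8):
    """Toy approximate multiplier: zero out the low `drop_bits` bits of each
    float32 operand's bit pattern (i.e., truncate the mantissa) before an
    exact multiply. Stand-in for a hardware approximate multiplier."""
    def truncate(x):
        bits = np.float32(x).view(np.uint32)              # reinterpret as raw bits
        mask = np.uint32(0xFFFFFFFF) << np.uint32(drop_bits)
        return (bits & mask).view(np.float32)             # back to float32
    return float(truncate(a)) * float(truncate(b))

# Exact vs. approximate dot product for one neuron's pre-activation.
rng = np.random.default_rng(0)
w = rng.standard_normal(16).astype(np.float32)  # toy weights
x = rng.standard_normal(16).astype(np.float32)  # toy input

exact = float(np.dot(w, x))
approx = sum(approx_mul(wi, xi) for wi, xi in zip(w, x))
err = abs(exact - approx)
print(f"exact={exact:.6f} approx={approx:.6f} |err|={err:.6f}")
```

The small, data-dependent perturbation `|err|` is the kind of built-in noise the abstract credits with degrading the transferability of adversarial examples, while leaving the dominant bits, and hence classification accuracy, essentially intact.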

Details

Database :
OpenAIRE
Journal :
Proceedings of the 26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems
Accession number :
edsair.doi.dedup.....4a4479b95de917d413f00f551ac66f7a
Full Text :
https://doi.org/10.1145/3445814.3446747