Are Adversarial Robustness and Common Perturbation Robustness Independent Attributes?
- Publication Year :
- 2019
Abstract
- Neural networks have been shown to be sensitive to common perturbations such as blur, Gaussian noise, and rotations. They are also vulnerable to artificial malicious corruptions called adversarial examples. The study of adversarial examples has recently become so popular that the term "robustness" is sometimes reduced to "adversarial robustness". Yet, we do not know to what extent adversarial robustness is related to global robustness. Similarly, we do not know whether robustness to various common perturbations, such as translations or contrast losses, could help against adversarial corruptions. We study the links between neural network robustness to both kinds of perturbations. With our experiments, we provide one of the first benchmarks designed to estimate the robustness of neural networks to common perturbations. We show that increasing robustness to carefully selected common perturbations can make neural networks more robust to unseen common perturbations. We also show that adversarial robustness and robustness to common perturbations are independent. Our results suggest that neural network robustness should be addressed in a broader sense.
Comment: To appear in ICCV Workshop on Real-World Recognition from Low-Quality Images and Videos (RLQ) 2019
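To make the two notions of robustness concrete, the sketch below (not the authors' benchmark; the model, data, and severity values are placeholders assumed for illustration) measures a classifier's accuracy on clean inputs, under a common perturbation (additive Gaussian noise), and under an adversarial perturbation (single-step FGSM).

```python
# Illustrative sketch: accuracy under a common perturbation vs. an adversarial one.
# A real evaluation would use a trained network and a genuine test set; here a
# random-weight linear model and random tensors stand in as placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


def accuracy(model, x, y):
    # Fraction of examples whose predicted class matches the label.
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()


def gaussian_noise(x, sigma):
    # Common perturbation: additive Gaussian noise with standard deviation sigma.
    return (x + sigma * torch.randn_like(x)).clamp(0.0, 1.0)


def fgsm(model, x, y, eps):
    # Adversarial perturbation: one-step FGSM with L-infinity budget eps.
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()


if __name__ == "__main__":
    torch.manual_seed(0)
    # Placeholder classifier and data: 3x32x32 images, 10 classes.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()
    x, y = torch.rand(64, 3, 32, 32), torch.randint(0, 10, (64,))

    print("clean accuracy:        ", accuracy(model, x, y))
    print("Gaussian-noise accuracy:", accuracy(model, gaussian_noise(x, 0.1), y))
    print("FGSM accuracy:          ", accuracy(model, fgsm(model, x, y, 8 / 255), y))
```

A model can score well on one of these perturbed accuracies and poorly on the other, which is the kind of independence the paper investigates across many common and adversarial corruptions.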
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.1909.02436
- Document Type :
- Working Paper