
Understanding Generalization in Neural Networks for Robustness against Adversarial Vulnerabilities

Authors :
Subhajit Chaudhury
Source :
AAAI
Publication Year :
2020
Publisher :
Association for the Advancement of Artificial Intelligence (AAAI), 2020.

Abstract

Neural networks have driven tremendous progress in computer vision, speech processing, and other real-world applications. However, recent studies have shown that these state-of-the-art models can be easily compromised by adding small, imperceptible perturbations to their inputs. My thesis summary frames adversarial robustness as an equivalent problem of learning suitable features that lead to good generalization in neural networks. This framing is motivated by human learning, which is not trivially fooled by such perturbations because it relies on robust features that generalize well out of sample.
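
As a concrete illustration of such perturbations (not part of this record), the Fast Gradient Sign Method (FGSM) of Goodfellow et al. crafts an input change bounded by a small epsilon in the direction that increases the classifier's loss. A minimal PyTorch sketch, assuming a hypothetical classifier `model`, input batch `x` in [0, 1], and labels `y`:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return an adversarially perturbed copy of the input batch x."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)   # loss w.r.t. the true labels
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()     # keep pixels in a valid range
```

A perturbation of this size is typically invisible to a human observer yet often changes the model's prediction, which is the vulnerability the thesis addresses.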

Details

ISSN :
2374-3468 and 2159-5399
Volume :
34
Database :
OpenAIRE
Journal :
Proceedings of the AAAI Conference on Artificial Intelligence
Accession number :
edsair.doi...........61fb5752223c2258d028f519d5bbd003
Full Text :
https://doi.org/10.1609/aaai.v34i10.7129