Understanding How Image Quality Affects Transformer Neural Networks
- Source :
- Signals, Vol. 5, Iss. 3, pp. 562-579 (2024)
- Publication Year :
- 2024
- Publisher :
- MDPI AG, 2024.
Abstract
- Deep learning models, particularly transformer architectures, have revolutionized various computer vision tasks, including image classification. However, their performance under different types and levels of noise remains a crucial area of investigation. In this study, we explore the noise sensitivity of prominent transformer models trained on the ImageNet dataset. We systematically evaluate 22 transformer variants, ranging from state-of-the-art large-scale models to compact versions tailored for mobile applications, under five common types of image distortion. Our findings reveal diverse sensitivities across transformer architectures, with notable variations in performance observed under additive Gaussian noise, multiplicative Gaussian noise, Gaussian blur, salt-and-pepper noise, and JPEG compression. Interestingly, we observe consistent robustness of transformer models to JPEG compression, with top-5 accuracies exhibiting greater resilience to noise than top-1 accuracies. Furthermore, our analysis highlights the vulnerability of mobile-oriented transformer variants to various noise types, underscoring the importance of noise-robustness considerations in model design and deployment for real-world applications. These insights contribute to a deeper understanding of transformer model behavior under noisy conditions and have implications for improving the robustness and reliability of deep learning systems in practical scenarios.
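The abstract names five standard distortion types and contrasts top-1 with top-5 accuracy. As an illustrative sketch only (the paper's actual noise parameters, distortion levels, and evaluation code are not given in this record), the snippet below shows one common way such corruptions might be applied to a uint8 RGB image with NumPy and Pillow; the sigma, radius, fraction, and quality values are assumptions, not the authors' settings.

```python
import io
import numpy as np
from PIL import Image, ImageFilter

def additive_gaussian(img, sigma=25.0):
    # x' = x + N(0, sigma^2), clipped back to the valid pixel range
    noisy = img.astype(np.float32) + np.random.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def multiplicative_gaussian(img, sigma=0.2):
    # x' = x * (1 + N(0, sigma^2))
    noisy = img.astype(np.float32) * (1.0 + np.random.normal(0.0, sigma, img.shape))
    return np.clip(noisy, 0, 255).astype(np.uint8)

def gaussian_blur(img, radius=2.0):
    return np.array(Image.fromarray(img).filter(ImageFilter.GaussianBlur(radius)))

def salt_and_pepper(img, fraction=0.05):
    # Set a random fraction of pixels to pure black or pure white.
    out = img.copy()
    mask = np.random.rand(*img.shape[:2])
    out[mask < fraction / 2] = 0        # pepper
    out[mask > 1 - fraction / 2] = 255  # salt
    return out

def jpeg_compress(img, quality=30):
    # Round-trip through an in-memory JPEG encode/decode at the given quality.
    buf = io.BytesIO()
    Image.fromarray(img).save(buf, format="JPEG", quality=quality)
    return np.array(Image.open(buf).convert("RGB"))

def topk_correct(logits, label, k=5):
    # Top-k accuracy counts a prediction as correct when the true label is
    # among the k highest-scoring classes (k=1 gives ordinary top-1 accuracy).
    return label in np.argsort(logits)[-k:]
```

A distorted image produced this way (e.g. `jpeg_compress(np.array(Image.open("example.jpg").convert("RGB")), quality=20)`) would then be resized and normalized as usual before being passed to whichever classifier is under evaluation.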
Details
- Language :
- English
- ISSN :
- 2624-6120
- Volume :
- 5
- Issue :
- 3
- Database :
- Directory of Open Access Journals
- Journal :
- Signals
- Publication Type :
- Academic Journal
- Accession number :
- edsdoj.7e05115dddb54439b0000bb6471df338
- Document Type :
- article
- Full Text :
- https://doi.org/10.3390/signals5030031