An Impartial Take to the CNN vs Transformer Robustness Contest
- Source : ECCV 2022
- Publication Year : 2022
Abstract
- Following the surge of popularity of Transformers in Computer Vision, several studies have attempted to determine whether they could be more robust to distribution shifts and provide better uncertainty estimates than Convolutional Neural Networks (CNNs). The almost unanimous conclusion is that they are, and it is often conjectured, more or less explicitly, that this supposed superiority stems from the self-attention mechanism. In this paper we perform extensive empirical analyses showing that recent state-of-the-art CNNs (particularly, ConvNeXt) can be as robust and reliable as, and sometimes even more so than, the current state-of-the-art Transformers. However, there is no clear winner. Therefore, although it is tempting to declare the definitive superiority of one family of architectures over the other, both seem to achieve similarly strong performance on a variety of tasks while also suffering from similar vulnerabilities, such as texture, background, and simplicity biases.
Details
- Database : arXiv
- Journal : ECCV 2022
- Publication Type : Report
- Accession number : edsarx.2207.11347
- Document Type : Working Paper