A lightweight transformer with linear self-attention for defect recognition.
- Source: Electronics Letters (Wiley-Blackwell), Sep 2024, Vol. 60, Issue 17, p. 1-4 (4 pp.)
- Publication Year: 2024
Abstract
- Visual defect recognition techniques based on deep learning models are crucial for modern industrial quality inspection. The backbone, which serves as the primary feature extraction component of a defect recognition model, has not been thoroughly explored. High-performance vision transformers (ViTs) are rarely adopted because of their high computational complexity and the limited computational and storage resources in industrial scenarios. This paper presents LSA-Former, a lightweight transformer backbone architecture that integrates the benefits of convolution and the ViT. LSA-Former introduces a novel self-attention mechanism with linear computational complexity, enabling it to capture local and global semantic features with fewer parameters. Pre-trained on ImageNet-1K, LSA-Former surpasses state-of-the-art methods. It is then employed as the backbone for various detectors and evaluated on the PCB defect detection task, where it reduces the parameter count by at least 18M and exceeds the baseline by more than 2.2 mAP.
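
The abstract states that LSA-Former's self-attention has linear computational complexity but does not spell out its formulation here. As a rough illustration of how linear self-attention can be achieved, the sketch below implements the kernel-based variant of Katharopoulos et al. (2020) in PyTorch; the class name, the elu(x)+1 feature map, and all hyperparameters are illustrative assumptions, not the authors' design.

```python
# Minimal sketch of kernel-based linear self-attention (Katharopoulos et al.,
# 2020). Illustrative stand-in only: the actual LSA-Former attention mechanism
# is not specified in this record.
import torch
import torch.nn as nn


class LinearSelfAttention(nn.Module):
    """Self-attention with O(N) complexity via a kernel feature map."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        assert dim % heads == 0, "dim must be divisible by heads"
        self.heads = heads
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        self.to_out = nn.Linear(dim, dim)

    @staticmethod
    def feature_map(x: torch.Tensor) -> torch.Tensor:
        # elu(x) + 1 keeps features positive, a common choice of kernel map
        return torch.nn.functional.elu(x) + 1.0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, n, d = x.shape
        h = self.heads
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        # reshape to (batch, heads, seq_len, head_dim)
        q, k, v = (t.view(b, n, h, d // h).transpose(1, 2) for t in (q, k, v))
        q, k = self.feature_map(q), self.feature_map(k)
        # Aggregate keys and values first: K^T V is (head_dim x head_dim),
        # so cost grows linearly with sequence length N, not quadratically.
        kv = torch.einsum("bhnd,bhne->bhde", k, v)
        z = 1.0 / (torch.einsum("bhnd,bhd->bhn", q, k.sum(dim=2)) + 1e-6)
        out = torch.einsum("bhnd,bhde,bhn->bhne", q, kv, z)
        out = out.transpose(1, 2).reshape(b, n, d)
        return self.to_out(out)


if __name__ == "__main__":
    attn = LinearSelfAttention(dim=64, heads=4)
    tokens = torch.randn(2, 196, 64)  # e.g. a 14x14 patch grid
    print(attn(tokens).shape)  # torch.Size([2, 196, 64])
```

Because the key-value product is computed before involving the queries, memory and compute scale with the number of tokens N rather than N^2, which is the property the abstract attributes to LSA-Former.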
Details
- Language: English
- ISSN: 0013-5194
- Volume: 60
- Issue: 17
- Database: Academic Search Index
- Journal: Electronics Letters (Wiley-Blackwell)
- Publication Type: Academic Journal
- Accession number: 179639995
- Full Text: https://doi.org/10.1049/ell2.13292