
Single image super-resolution based on trainable feature matching attention network.

Authors :
Chen, Qizhou
Shao, Qing
Source :
Pattern Recognition. May 2024, Vol. 149.
Publication Year :
2024

Abstract

Convolutional Neural Networks (CNNs) have been widely employed for image Super-Resolution (SR) in recent years. Various techniques enhance SR performance by altering CNN structures or incorporating improved self-attention mechanisms. Interestingly, these advancements share a common trait. Instead of explicitly learning high-frequency details, they learn an implicit feature processing mode that utilizes weighted sums of a feature map's own elements for reconstruction, akin to convolution and non-local operations. In contrast, early dictionary-based approaches learn feature decompositions explicitly to match and rebuild Low-Resolution (LR) features. Building on this analysis, we introduce Trainable Feature Matching (TFM) to amalgamate this explicit feature learning into CNNs, augmenting their representation capabilities. Within TFM, trainable feature sets are integrated to explicitly learn features from training images through feature matching. Furthermore, we integrate non-local and channel attention into our proposed Trainable Feature Matching Attention Network (TFMAN) to further enhance SR performance. To alleviate the computational demands of non-local operations, we propose a streamlined variant called Same-size-divided Region-level Non-Local (SRNL). SRNL conducts non-local computations in parallel on blocks uniformly divided from the input feature map. The efficacy of TFM and SRNL is validated through ablation studies and module explorations. We employ a recurrent convolutional network as the backbone of our TFMAN to optimize parameter utilization. Comprehensive experiments on benchmark datasets demonstrate that TFMAN achieves superior results in most comparisons while using fewer parameters. The code is available at https://github.com/qizhou000/tfman.
• A novel Trainable Feature Matching (TFM) module for super-resolution is proposed.
• A lightweight improvement of Non-Local (self-attention) is proposed.
• Comprehensive experiments are conducted with BI, BD, and DN degradation models.
• Our method achieves the best super-resolution results on multiple benchmarks.
[ABSTRACT FROM AUTHOR]
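
The abstract describes TFM as a dictionary-style module in which a trainable feature set is matched against input features and the matching weights drive reconstruction. The following is only a minimal illustrative sketch of that idea, not the authors' implementation: the dictionary size, similarity measure, residual connection, and all layer names are assumptions.

```python
# Hedged sketch of the Trainable Feature Matching (TFM) idea from the abstract:
# a trainable feature set ("dictionary") is matched against each spatial feature,
# and the matching weights recombine the dictionary atoms for reconstruction,
# analogous to early dictionary-based SR. Hyperparameters and names are assumed.
import torch
import torch.nn as nn

class TrainableFeatureMatching(nn.Module):
    def __init__(self, channels: int, num_atoms: int = 256):
        super().__init__()
        # trainable feature set learned end-to-end from training images
        self.atoms = nn.Parameter(torch.randn(num_atoms, channels) * 0.02)
        self.proj = nn.Conv2d(channels, channels, 1)  # output projection (assumed)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        feats = x.permute(0, 2, 3, 1).reshape(b, h * w, c)   # one feature per pixel
        # match every pixel feature against every trainable atom
        scores = torch.softmax(feats @ self.atoms.t() / (c ** 0.5), dim=-1)  # (B, HW, K)
        matched = scores @ self.atoms                          # weighted recombination of atoms
        matched = matched.reshape(b, h, w, c).permute(0, 3, 1, 2)
        return x + self.proj(matched)                          # residual connection (assumed)
```

The abstract also outlines SRNL: non-local attention computed in parallel on same-size blocks uniformly divided from the feature map, to reduce the quadratic cost of global non-local operations. Below is a hedged sketch under the same caveat; the block size, embedding widths, padding strategy, and residual connection are assumptions rather than the published design.

```python
# Hedged sketch of Same-size-divided Region-level Non-Local (SRNL) as described in
# the abstract: self-attention restricted to equal-size blocks of the feature map,
# computed for all blocks in parallel. Details are assumed, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionNonLocal(nn.Module):
    def __init__(self, channels: int, block: int = 16):
        super().__init__()
        self.block = block                                   # region side length (assumed)
        self.theta = nn.Conv2d(channels, channels // 2, 1)   # query embedding
        self.phi = nn.Conv2d(channels, channels // 2, 1)     # key embedding
        self.g = nn.Conv2d(channels, channels // 2, 1)       # value embedding
        self.out = nn.Conv2d(channels // 2, channels, 1)     # restore channel count

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        s = self.block
        # pad so height and width are divisible by the block size
        pad_h, pad_w = (-h) % s, (-w) % s
        x_p = F.pad(x, (0, pad_w, 0, pad_h), mode="replicate")
        hp, wp = x_p.shape[-2:]

        q, k, v = self.theta(x_p), self.phi(x_p), self.g(x_p)

        def to_blocks(t: torch.Tensor) -> torch.Tensor:
            # (B, C', Hp, Wp) -> (B * num_blocks, s*s, C')
            cc = t.shape[1]
            t = t.view(b, cc, hp // s, s, wp // s, s)
            return t.permute(0, 2, 4, 3, 5, 1).reshape(-1, s * s, cc)

        qb, kb, vb = to_blocks(q), to_blocks(k), to_blocks(v)
        attn = torch.softmax(qb @ kb.transpose(1, 2) / (qb.shape[-1] ** 0.5), dim=-1)
        yb = attn @ vb                                       # per-block non-local aggregation

        # reassemble blocks into a feature map and crop the padding
        cc = yb.shape[-1]
        y = yb.view(b, hp // s, wp // s, s, s, cc).permute(0, 5, 1, 3, 2, 4)
        y = y.reshape(b, cc, hp, wp)[..., :h, :w]
        return x + self.out(y)                               # residual connection (assumed)
```

Because attention is confined to s x s regions, the cost grows with the number of blocks rather than with the square of the full spatial resolution, which is the efficiency motivation stated in the abstract.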

Details

Language :
English
ISSN :
0031-3203
Volume :
149
Database :
Academic Search Index
Journal :
Pattern Recognition
Publication Type :
Academic Journal
Accession number :
175681427
Full Text :
https://doi.org/10.1016/j.patcog.2024.110289