1. Single-image super-resolution based on an improved asymmetric Laplacian pyramid structure.
- Author
- Liu, Xue; Qiao, Shuang; Zhang, Tian; Zhao, Chenyi; Yao, Xiangyu
- Subjects
- *CONVOLUTIONAL neural networks, *PYRAMIDS, *HIGH resolution imaging, *FEATURE extraction, *TRANSFORMER models, *ASYMMETRIC synthesis
- Abstract
• An asymmetric Laplacian pyramid structure is proposed in which features of different sizes are processed by different network architectures.
• Improved dense skip connections and recursive operations are combined into a deep dense recursive convolutional neural network that properly integrates low-level and high-level features while expanding the network's receptive field.
• A low-high-medium strategy is adopted to design the number of channels in the convolutional layers of each pyramid level.
• Visual assessment and quantitative analysis show that the proposed network is superior to other classical methods.

Large-scale-factor image super-resolution, where the scale factor is greater than 4, is significant in real-world applications of single-image super-resolution. Current super-resolution techniques for large scale factors, however, frequently upsample low-resolution images in a single pass, leading to edge artifacts in the reconstructed images. In this article, we present an improved asymmetric Laplacian pyramid network to further realize large-scale-factor image super-resolution and fully utilize features of various sizes. A distinct architecture is applied at each level of the pyramid to improve feature extraction. We extend the first level of the pyramid with a lightweight transformer design, which enables the model to efficiently collect contextual information in the sequence by utilizing the multi-head attention mechanism. Additionally, we combine improved dense skip connections with recursive operations to form a deep dense recursive convolutional neural network that fuses low-level and high-level features while broadening the network's receptive field. Quantitative and qualitative analyses on benchmark datasets demonstrate that our method provides superior performance in both PSNR and SSIM and is more in line with human visual perception. [ABSTRACT FROM AUTHOR]
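The abstract's two key ingredients, a Laplacian pyramid (multi-scale residual decomposition) and PSNR as a quality metric, can be illustrated with a minimal NumPy sketch. This is not the authors' network; the `downsample`/`upsample` helpers are crude stand-ins (average pooling and nearest-neighbour repetition) for the blur/decimate and learned upsampling operators a real model would use, chosen only to show how residual levels are formed and how reconstruction proceeds coarse-to-fine.

```python
import numpy as np

def downsample(img):
    # 2x average-pool downsampling (stand-in for blur + decimate).
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img, shape):
    # Nearest-neighbour 2x upsampling (stand-in for a learned upsampler),
    # cropped to the target shape.
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    # Each level stores the residual between the image and the upsampled
    # coarse approximation; the final entry is the coarsest approximation.
    pyramid, current = [], img
    for _ in range(levels):
        coarse = downsample(current)
        pyramid.append(current - upsample(coarse, current.shape))
        current = coarse
    pyramid.append(current)
    return pyramid

def reconstruct(pyramid):
    # Coarse-to-fine reconstruction: upsample, then add the stored residual.
    current = pyramid[-1]
    for residual in reversed(pyramid[:-1]):
        current = upsample(current, residual.shape) + residual
    return current

def psnr(x, y, peak=1.0):
    # Peak signal-to-noise ratio in dB for images in [0, peak].
    mse = np.mean((x - y) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)
```

Because each level stores an exact residual, `reconstruct(laplacian_pyramid(img, n))` recovers `img` exactly whatever down/up operators are used; a super-resolution network instead predicts the missing high-frequency residuals at each level.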
- Published
- 2024