CGP-Uformer: A low-dose CT image denoising Uformer based on channel graph perception.
- Source :
- Journal of X-Ray Science & Technology; 2023, Vol. 31 Issue 6, p1189-1205, 17p
- Publication Year :
- 2023
Abstract
- BACKGROUND: An effective way to achieve low-dose CT is to keep the number of projection angles constant while reducing the radiation dose at each angle. However, this introduces high-intensity noise into the reconstructed image, adversely affecting subsequent image processing, analysis, and diagnosis.
- OBJECTIVE: This paper proposes a novel Channel Graph Perception based U-shaped Transformer (CGP-Uformer) network for high-performance denoising of low-dose CT images.
- METHODS: The network consists of convolutional feed-forward Transformer (ConvF-Transformer) blocks, a channel graph perception block (CGPB), and spatial cross-attention (SC-Attention) blocks. The ConvF-Transformer blocks strengthen feature representation and information transmission through a CNN-based feed-forward network. The CGPB introduces a Graph Convolutional Network (GCN) for channel-to-channel feature extraction, propagating information across distinct channels and enabling inter-channel information interchange. The SC-Attention blocks reduce the semantic gap in feature fusion between the encoder and decoder by computing spatial cross-attention.
- RESULTS: On the 2016 NIH AAPM-Mayo LDCT challenge dataset, CGP-Uformer achieves a peak signal-to-noise ratio of 35.56 and a structural similarity of 0.9221.
- CONCLUSIONS: Compared with four other representative denoising networks, the proposed network demonstrates superior denoising performance and better preservation of image details. [ABSTRACT FROM AUTHOR]
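The abstract describes the CGPB only at a high level, so the following is a minimal sketch of the general idea of one graph-convolution step over feature channels: build an adjacency matrix from channel-to-channel similarity, symmetrically normalize it, and propagate features across channels. The shapes, the cosine-similarity adjacency, and the ReLU activation are all assumptions for illustration, not the authors' exact design.

```python
import numpy as np

def channel_graph_conv(x, w, eps=1e-8):
    """One illustrative GCN layer over channels (not the paper's exact CGPB).

    x: (C, N) feature map, one row per channel (N = H*W flattened pixels).
    w: (N, N_out) learnable projection weights.
    Adjacency between channels is built from cosine similarity (assumed),
    clipped to nonnegative, then symmetrically normalized:
    A_hat = D^{-1/2} (A + I) D^{-1/2}.
    """
    # Cosine similarity between channel feature vectors -> channel adjacency.
    xn = x / (np.linalg.norm(x, axis=1, keepdims=True) + eps)
    a = np.maximum(xn @ xn.T, 0.0)            # (C, C), keep nonnegative edges
    a = a + np.eye(x.shape[0])                # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    a_hat = d_inv_sqrt @ a @ d_inv_sqrt       # normalized adjacency
    # Propagate: mix information across channels, then project and activate.
    return np.maximum(a_hat @ x @ w, 0.0)     # ReLU

# Toy example: 4 channels of a 4x4 feature map, projected to 8 features.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 16))
w = rng.standard_normal((16, 8))
y = channel_graph_conv(x, w)
print(y.shape)  # (4, 8)
```

The key point this illustrates is the channel-to-channel information interchange claimed for the CGPB: each output row is a weighted mixture of all channels' features, with the mixing weights coming from the learned-or-derived channel graph rather than from a fixed spatial kernel.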
Details
- Language :
- English
- ISSN :
- 08953996
- Volume :
- 31
- Issue :
- 6
- Database :
- Complementary Index
- Journal :
- Journal of X-Ray Science & Technology
- Publication Type :
- Academic Journal
- Accession number :
- 173929629
- Full Text :
- https://doi.org/10.3233/XST-230158