101. DIIK-Net: A full-resolution cross-domain deep interaction convolutional neural network for MR image reconstruction.
- Authors
- Liu, Yu; Pang, Yanwei; Liu, Xiaohan; Liu, Yiming; Nie, Jing
- Subjects
- Convolutional neural networks; Magnetic resonance imaging; Image reconstruction; Deep learning; Feature extraction
- Abstract
Acquiring incomplete k-space matrices is an effective way to accelerate Magnetic Resonance Imaging (MRI). Accurately reconstructing images from such under-sampled k-space matrices is an important and challenging task. On the one hand, neither image-domain-oriented nor frequency-domain-oriented deep Convolutional Neural Networks can employ frequency features and spatial features simultaneously to cooperatively improve reconstruction accuracy. On the other hand, existing dual-domain reconstruction methods adopt heavy encoder-decoder frameworks, resulting in low efficiency and in information loss during pooling. To deal with these problems, this paper proposes a full-resolution dual-domain reconstruction network, called DIIK-Net. DIIK-Net consists of a full-resolution frequency-domain branch, a full-resolution image-domain branch, and cross-domain interaction modules between the two branches. The first novelty of the proposed method is that the features of each block of the frequency-domain branch are extracted by 1 × 1 filters, which reduces computational cost while capturing rich contextual information. Because each element in the frequency domain conveys information about the whole image, the 1 × 1 convolutional blocks can extract large contextual information through interaction with the image domain. The second novelty is that the image-domain branch consists of a very small number of 3 × 3 convolutional blocks, each of which has a very large receptive field owing to the integration of frequency-domain information. The third novelty lies in the simple and effective cross-domain interaction module. Experimental results on the challenging fastMRI dataset demonstrate that the proposed method achieves higher reconstruction accuracy with a small number of parameters. [ABSTRACT FROM AUTHOR]
- Published
- 2023
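The abstract describes a dual-branch design: 1 × 1 convolutions on k-space, 3 × 3 convolutions in the image domain, and cross-domain exchange between the two. The record gives no layer configuration, so the following PyTorch snippet is only a minimal sketch of one such interaction block; all class names, channel counts, and the concatenation-based fusion are assumptions, not the authors' actual DIIK-Net implementation.

```python
# Minimal sketch of a dual-domain interaction block in the spirit of the
# abstract above. All names, channel counts, and wiring are assumptions;
# the actual DIIK-Net layer configuration is not given in this record.
import torch
import torch.nn as nn


class DualDomainBlock(nn.Module):
    """One full-resolution block: a 1x1-conv k-space path, a 3x3-conv
    image path, and a simple cross-domain exchange via FFT/iFFT."""

    def __init__(self, channels: int = 2):
        super().__init__()
        # Frequency-domain path: 1x1 convolutions (each k-space sample
        # already mixes information from the whole image).
        self.freq_conv = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.ReLU(inplace=True),
        )
        # Image-domain path: ordinary 3x3 convolutions.
        self.img_conv = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    @staticmethod
    def to_image(kspace: torch.Tensor) -> torch.Tensor:
        # kspace: (N, 2, H, W) with real/imag stacked in the channel dim.
        complex_k = torch.complex(kspace[:, 0], kspace[:, 1])
        img = torch.fft.ifft2(complex_k)
        return torch.stack([img.real, img.imag], dim=1)

    @staticmethod
    def to_kspace(image: torch.Tensor) -> torch.Tensor:
        complex_i = torch.complex(image[:, 0], image[:, 1])
        k = torch.fft.fft2(complex_i)
        return torch.stack([k.real, k.imag], dim=1)

    def forward(self, kspace: torch.Tensor, image: torch.Tensor):
        # Cross-domain interaction: each branch also sees the other
        # branch's features mapped into its own domain (concatenation
        # is just one plausible fusion choice).
        freq_in = torch.cat([kspace, self.to_kspace(image)], dim=1)
        img_in = torch.cat([image, self.to_image(kspace)], dim=1)
        return self.freq_conv(freq_in), self.img_conv(img_in)


if __name__ == "__main__":
    k = torch.randn(1, 2, 128, 128)   # under-sampled k-space (real/imag)
    x = torch.randn(1, 2, 128, 128)   # zero-filled image estimate
    block = DualDomainBlock()
    k_out, x_out = block(k, x)
    print(k_out.shape, x_out.shape)   # both torch.Size([1, 2, 128, 128])
```

Stacking several such blocks, each kept at full resolution (no pooling), would mirror the paper's stated motivation of avoiding encoder-decoder information loss; how the branches are ultimately merged into a reconstructed image is not specified in this record.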