Cross-Attention and Cycle-Consistency-Based Haptic to Image Inpainting
- Source :
- IEEE Signal Processing Letters; 2024, Vol. 31 Issue: 1 p1650-1654, 5p
- Publication Year :
- 2024
Abstract
- With the rapid advancement of deep learning and multimedia technologies, image inpainting has made significant progress in generating desirable content for damaged images. To fill in the corrupted part of an image, existing methods either exploit intrinsic information within the visual modality for single-modal image inpainting or draw on semantic correlations from non-visual modalities for cross-modal image inpainting. However, when the corrupted region is extensive, intrinsic information within the visual modality alone is insufficient to inpaint the entire image. Existing cross-modal inpainting methods, in turn, cannot guarantee image quality, as they do not model the relationships among the corrupted part of the image, the remaining part of the image, and the non-visual modality. To address this issue, a haptic-to-image inpainting scheme is proposed. Specifically, feature extraction and a cross-attention-based feature index are first constructed to establish correspondence between the corrupted part of the image and the haptic modality. Then, cycle-consistency-based feature translation is applied to capture the intrinsic correlations between the two modalities, guiding the synthesis of the corrupted part. Finally, the synthesized content is combined with the remaining part of the image to complete inpainting. Experimental results demonstrate the effectiveness of the proposed scheme.
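- The two core operations named in the abstract — cross-attention between image and haptic features, and a cycle-consistency constraint on the translated features — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the feature dimensions, the linear translation maps, and all function names are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_img, k_hap, v_hap):
    """Scaled dot-product cross-attention: queries from the corrupted image
    region index key/value features from the haptic modality."""
    d = q_img.shape[-1]
    scores = q_img @ k_hap.T / np.sqrt(d)   # (n_img, n_hap) similarities
    weights = softmax(scores, axis=-1)      # attention over haptic tokens
    return weights @ v_hap                  # haptic-informed image features

def cycle_consistency_loss(feats, to_hap, to_img):
    """L1 cycle loss: translate visual -> haptic -> visual (here with
    hypothetical linear maps) and compare against the original features."""
    reconstructed = (feats @ to_hap) @ to_img
    return np.abs(feats - reconstructed).mean()

rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))    # stand-in features of the corrupted region
k = rng.standard_normal((6, 8))    # stand-in haptic keys
v = rng.standard_normal((6, 8))    # stand-in haptic values
attended = cross_attention(q, k, v)
loss = cycle_consistency_loss(attended,
                              rng.standard_normal((8, 8)),
                              rng.standard_normal((8, 8)))
```

- In the paper's pipeline the translation step would be learned (driving the cycle loss toward zero) and the attended features would then be decoded into the missing image content; the sketch above only shows the shape of the two operations.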
Details
- Language :
- English
- ISSN :
- 1070-9908 and 1558-2361
- Volume :
- 31
- Issue :
- 1
- Database :
- Supplemental Index
- Journal :
- IEEE Signal Processing Letters
- Publication Type :
- Periodical
- Accession number :
- ejs66751246
- Full Text :
- https://doi.org/10.1109/LSP.2024.3414949