Multi-level adversarial attention cross-modal hashing.
- Source :
- Signal Processing: Image Communication. Sep 2023, Vol. 117.
- Publication Year :
- 2023
Abstract
- Deep cross-modal hashing has made great progress in recent years due to the development of deep learning and efficient hashing algorithms. However, most existing methods focus only on the feature distribution between modalities and ignore the fine-grained information within each modality. To address this problem, we propose a multi-level adversarial attention cross-modal hashing (MAAH) method. First, we design a modality-attention module to capture the fine-grained information of each modality. Specifically, we use a channel attention mechanism to divide modality information into relevant and irrelevant representations, where the irrelevant representation carries the fine-grained information of the modality. Then, we design a modality-adversary module to supplement the fine-grained information of each modality. In this module, intra-modal adversarial learning supplements the relevant representation of each modality, and inter-modal adversarial learning makes the distribution of the relevant representations across modalities more uniform. Experimental results on three widely used datasets demonstrate the superiority of the proposed method.
- • We design a modality-attention module to separate the relevant and irrelevant representations.
- • We design a modality-adversary module to supplement the relevant representation information.
- • Our method shows superior experimental results on three widely used datasets. [ABSTRACT FROM AUTHOR]
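The channel-attention split described in the abstract can be sketched as follows. This is a generic squeeze-and-excitation style channel gate in NumPy, not the authors' MAAH implementation; the weight matrices `w1`, `w2` and the reduction ratio are illustrative assumptions. The gate weights each channel by a value in (0, 1), so the "relevant" and "irrelevant" parts sum back to the original features.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention_split(features, w1, w2):
    """Split features into 'relevant' and 'irrelevant' parts via a
    channel gate (a sketch, not the exact MAAH architecture).

    features: array of shape (batch, channels, height, width)
    w1, w2:   excitation MLP weights, shapes (C, C//r) and (C//r, C)
    """
    s = features.mean(axis=(2, 3))        # squeeze: global average pool -> (B, C)
    hidden = np.maximum(0.0, s @ w1)      # excitation MLP, ReLU
    gate = sigmoid(hidden @ w2)           # per-channel weights in (0, 1) -> (B, C)
    g = gate[:, :, None, None]            # broadcast over spatial dims
    relevant = features * g               # channels the gate emphasises
    irrelevant = features * (1.0 - g)     # remainder: candidate fine-grained signal
    return relevant, irrelevant

# Usage: the two parts reconstruct the input exactly, since g + (1 - g) = 1.
rng = np.random.default_rng(0)
x = rng.normal(size=(2, 8, 4, 4))
w1 = rng.normal(size=(8, 2))
w2 = rng.normal(size=(2, 8))
rel, irr = channel_attention_split(x, w1, w2)
assert np.allclose(rel + irr, x)
```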
- Subjects :
- *DEEP learning
*PROBLEM solving
Details
- Language :
- English
- ISSN :
- 0923-5965
- Volume :
- 117
- Database :
- Academic Search Index
- Journal :
- Signal Processing: Image Communication
- Publication Type :
- Academic Journal
- Accession number :
- 169787387
- Full Text :
- https://doi.org/10.1016/j.image.2023.117017