
Improvement of deep cross-modal retrieval by generating real-valued representation

Authors :
Amit Ganatra
Nikita Bhatt
Source :
PeerJ Computer Science, Vol 7, p e491 (2021)
Publication Year :
2021
Publisher :
PeerJ, 2021.

Abstract

Cross-modal retrieval (CMR) has attracted much attention in the research community because it enables flexible and comprehensive retrieval across modalities. The core challenge in CMR is the heterogeneity gap, which arises from the differing statistical properties of multi-modal data. The most common solution for bridging the heterogeneity gap is representation learning, which generates a common sub-space. In this work, we propose a framework called “Improvement of Deep Cross-Modal Retrieval (IDCMR)”, which generates real-valued representations. IDCMR preserves both intra-modal and inter-modal similarity. Intra-modal similarity is preserved by selecting an appropriate training model for the text and image modalities, while inter-modal similarity is preserved by reducing a modality-invariance loss. Mean average precision (mAP) is used as the performance measure for the CMR system. Extensive experiments show that IDCMR outperforms state-of-the-art methods by margins of 4% and 2% in mAP on the text-to-image and image-to-text retrieval tasks on the MSCOCO and Xmedia datasets, respectively.
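To illustrate the general idea of common sub-space learning with a modality-invariance loss described in the abstract, the following is a minimal sketch, not the authors' implementation. It assumes precomputed image and text features; names such as CommonSubspaceNet and modality_invariance_loss are hypothetical, and the specific losses used in IDCMR may differ.

```python
# Hypothetical sketch of a two-branch common-subspace model for CMR.
# Assumes precomputed image/text features; not the IDCMR implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CommonSubspaceNet(nn.Module):
    """Projects image and text features into a shared real-valued space."""
    def __init__(self, img_dim=2048, txt_dim=300, common_dim=128):
        super().__init__()
        self.img_proj = nn.Sequential(nn.Linear(img_dim, 512), nn.ReLU(),
                                      nn.Linear(512, common_dim))
        self.txt_proj = nn.Sequential(nn.Linear(txt_dim, 512), nn.ReLU(),
                                      nn.Linear(512, common_dim))

    def forward(self, img_feat, txt_feat):
        # L2-normalise so cosine similarity reduces to a dot product
        return (F.normalize(self.img_proj(img_feat), dim=1),
                F.normalize(self.txt_proj(txt_feat), dim=1))

def modality_invariance_loss(img_emb, txt_emb):
    """Pulls paired image/text embeddings together in the common space."""
    return F.mse_loss(img_emb, txt_emb)

# Toy usage: random tensors stand in for real image/text encoder outputs
model = CommonSubspaceNet()
img_feat = torch.randn(8, 2048)   # e.g. CNN image features
txt_feat = torch.randn(8, 300)    # e.g. word-embedding text features
img_emb, txt_emb = model(img_feat, txt_feat)
loss = modality_invariance_loss(img_emb, txt_emb)
loss.backward()
```

At retrieval time, a query from one modality is projected into the common space and ranked against the other modality by cosine similarity; mAP is then computed over the ranked lists.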

Details

ISSN :
2376-5992
Volume :
7
Database :
OpenAIRE
Journal :
PeerJ Computer Science
Accession number :
edsair.doi.dedup.....2b2ebfcd3bacfdd65d6447147889d5d6
Full Text :
https://doi.org/10.7717/peerj-cs.491