
A Framework for Enabling Unpaired Multi-Modal Learning for Deep Cross-Modal Hashing Retrieval

Authors :
Mikel Williams-Lekuona
Georgina Cosma
Iain Phillips
Source :
Journal of Imaging, Vol 8, Iss 12, p 328 (2022)
Publication Year :
2022
Publisher :
MDPI AG, 2022.

Abstract

Cross-Modal Hashing (CMH) retrieval methods have garnered increasing attention within the information retrieval research community due to their capability to handle large amounts of data, owing to the computational efficiency of hash-based methods. To date, the focus of cross-modal hashing methods has been on training with paired data. Paired data refers to samples with one-to-one correspondence across modalities, e.g., image and text pairs where the text sample describes the image. However, real-world applications produce unpaired data that cannot be utilised by most current CMH methods during the training process. Models that can learn from unpaired data are crucial for real-world applications such as cross-modal neural information retrieval, where paired data is limited or unavailable for training the model. This paper (1) provides an overview of CMH methods when applied to unpaired datasets, (2) proposes a framework that enables pairwise-constrained CMH methods to train with unpaired samples, and (3) evaluates the performance of state-of-the-art CMH methods across different pairing scenarios.
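The efficiency the abstract refers to comes from representing both modalities as short binary codes in a shared Hamming space, so retrieval reduces to cheap bitwise comparisons. A minimal illustrative sketch (not the paper's method; the codes and names below are hypothetical) of ranking text hashes against an image-query hash:

```python
# Illustrative sketch of hash-based cross-modal retrieval: both modalities
# are assumed to have been mapped to binary codes in a shared Hamming space,
# and retrieval ranks database items by Hamming distance to the query code.

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two binary hash codes."""
    return bin(a ^ b).count("1")

def retrieve(query_code: int, db_codes: dict) -> list:
    """Rank database items (e.g. text hashes) by distance to a query hash."""
    return sorted(db_codes, key=lambda k: hamming_distance(query_code, db_codes[k]))

# Hypothetical 8-bit codes: an image-query hash vs. three text-sample hashes.
texts = {"t1": 0b10110100, "t2": 0b10110110, "t3": 0b01001011}
ranking = retrieve(0b10110111, texts)  # nearest code first
```

Because XOR and bit-counting are constant-time on short codes, this scales to large databases far better than comparing dense real-valued embeddings, which is the motivation for hashing-based retrieval.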

Details

Language :
English
ISSN :
2313-433X
Volume :
8
Issue :
12
Database :
Directory of Open Access Journals
Journal :
Journal of Imaging
Publication Type :
Academic Journal
Accession number :
edsdoj.90fd7251bb544012a48c428c36aadd2e
Document Type :
Article
Full Text :
https://doi.org/10.3390/jimaging8120328