
Learning shape retrieval from different modalities.

Authors :
Tabia, Hedi
Laga, Hamid
Source :
Neurocomputing, Aug 2017, Vol. 253, p. 24-33. 10 pp.
Publication Year :
2017

Abstract

We propose in this paper a new framework for 3D shape retrieval using queries of different modalities, which can include 3D models, images and sketches. The main scientific challenge is that different modalities have different representations and thus lie in different spaces. Moreover, the features that can be extracted from 2D images or 2D sketches are often different from those that can be computed from 3D models. Our solution is a new method based on Convolutional Neural Networks (CNN) that embeds all these entities into a common space. We propose a novel 3D shape descriptor based on local CNN features encoded using vectors of locally aggregated descriptors (VLAD) instead of conventional global CNN features. Using a kernel function computed from 3D shape similarity, we build a target space into which in-the-wild images and sketches can be projected via two different CNNs. With this construction, matching can be performed in the common target space between entities of the same kind (sketch-sketch, image-image and 3D shape-3D shape) and, more importantly, across different kinds of entities (sketch-image, sketch-3D shape and image-3D shape). We demonstrate the performance of the proposed framework on different benchmarks, including large-scale SHREC 3D datasets. [ABSTRACT FROM AUTHOR]
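To make the two central ideas of the abstract concrete, the following is a minimal, illustrative sketch (not the authors' implementation): (1) aggregating local CNN features with VLAD rather than using a single global CNN descriptor, and (2) scoring a query against a target once both have been projected into the common embedding space. The function names, the NumPy-based encoding, and the cosine-similarity matching are assumptions made for illustration; the paper's actual networks, kernel construction, and training procedure are not reproduced here.

```python
import numpy as np

def vlad_encode(local_features: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Encode local descriptors (n x d) against a visual codebook (k x d) with VLAD.

    This is a generic VLAD aggregation sketch, not the paper's exact descriptor.
    """
    # Assign each local feature to its nearest codebook centre.
    dists = np.linalg.norm(local_features[:, None, :] - codebook[None, :, :], axis=2)
    assignments = dists.argmin(axis=1)

    k, d = codebook.shape
    vlad = np.zeros((k, d))
    for i, centre in enumerate(codebook):
        members = local_features[assignments == i]
        if len(members):
            # Accumulate residuals between assigned features and their centre.
            vlad[i] = (members - centre).sum(axis=0)

    vlad = vlad.ravel()
    # Power- and L2-normalisation, a common post-processing step for VLAD.
    vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))
    norm = np.linalg.norm(vlad)
    return vlad / norm if norm > 0 else vlad

def cross_modal_similarity(query_emb: np.ndarray, target_emb: np.ndarray) -> float:
    """Cosine similarity between two entities already embedded in the common space.

    `query_emb` could come from a sketch or image CNN, `target_emb` from the
    3D shape descriptor; both are hypothetical embeddings for this sketch.
    """
    denom = np.linalg.norm(query_emb) * np.linalg.norm(target_emb) + 1e-12
    return float(query_emb @ target_emb / denom)
```

Once every modality is mapped into the shared target space, retrieval reduces to ranking targets by such a similarity score, which is what allows sketch-to-3D-shape or image-to-3D-shape matching in the same pipeline as same-modality matching.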

Details

Language :
English
ISSN :
0925-2312
Volume :
253
Database :
Academic Search Index
Journal :
Neurocomputing
Publication Type :
Academic Journal
Accession number :
123257162
Full Text :
https://doi.org/10.1016/j.neucom.2017.01.101