
MVCLN: Multi-View Convolutional LSTM Network for Cross-Media 3D Shape Recognition

Authors :
Yixin Wang
Qiang Li
Qi Liang
Weizhi Nie
Source :
IEEE Access, Vol. 8, pp. 139792-139802 (2020)
Publication Year :
2020
Publisher :
Institute of Electrical and Electronics Engineers (IEEE), 2020.

Abstract

Cross-media 3D model recognition is an important and challenging task in computer vision that can be applied in many settings, such as landmark detection and image set classification. In recent years, with the development of deep learning, many approaches have been proposed to handle the 3D model recognition problem. However, these methods focus on structural information representation and multi-view information fusion while ignoring spatial and temporal information, which makes them ill-suited to cross-media 3D model recognition. In this paper, we represent each 3D model by a sequence of views and propose a novel Multi-View Convolutional LSTM Network (MVCLN), which uses an LSTM structure to extract temporal information and convolutional operations to extract spatial information. More specifically, both spatial and temporal information are considered during training, which effectively exploits the differences between the views' spatial information to improve the final performance. Meanwhile, we introduce a classic attention model to weight each view, which reduces redundant spatial information in the fusion step. We evaluate the proposed method on ModelNet40 for 3D model classification and retrieval tasks. We also construct a dataset from the overlapping categories of MV-RED, ShapeNetCore, and ModelNet to demonstrate the effectiveness of our approach for cross-media 3D model recognition. Experimental results and comparisons with state-of-the-art methods demonstrate that our framework achieves superior performance.
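The abstract outlines the MVCLN pipeline at a high level: encode each rendered view with a CNN, pass the view sequence through a convolutional LSTM so that spatial and temporal structure are captured together, and fuse the per-view features with attention weights before classification. The record carries no code, so the following PyTorch sketch only illustrates that idea under assumed details; the layer sizes, view count, and the ConvLSTMCell and MVCLNSketch classes are hypothetical stand-ins, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvLSTMCell(nn.Module):
    """Single ConvLSTM cell: LSTM gates computed with a 2D convolution."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        # One conv produces all four gates (input, forget, cell, output).
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, g, o = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class MVCLNSketch(nn.Module):
    """Hypothetical multi-view ConvLSTM with attention-weighted view fusion."""
    def __init__(self, num_classes, hid_ch=64):
        super().__init__()
        self.encoder = nn.Sequential(  # shared per-view CNN feature extractor
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=2, padding=1), nn.ReLU())
        self.cell = ConvLSTMCell(32, hid_ch)
        self.attn = nn.Linear(hid_ch, 1)        # scores each view's feature
        self.head = nn.Linear(hid_ch, num_classes)

    def forward(self, views):                   # views: (B, V, 3, H, W)
        B, V = views.shape[:2]
        h = c = None
        feats = []
        for v in range(V):                      # iterate over the view sequence
            x = self.encoder(views[:, v])
            if h is None:
                h = x.new_zeros(B, self.cell.hid_ch, *x.shape[2:])
                c = torch.zeros_like(h)
            h, c = self.cell(x, (h, c))
            feats.append(h.mean(dim=(2, 3)))    # global-average-pool each view
        feats = torch.stack(feats, dim=1)       # (B, V, hid_ch)
        w = F.softmax(self.attn(feats), dim=1)  # attention weight per view
        fused = (w * feats).sum(dim=1)          # weighted fusion of views
        return self.head(fused)

model = MVCLNSketch(num_classes=40)             # e.g. the 40 ModelNet40 classes
logits = model(torch.randn(2, 12, 3, 64, 64))   # 2 models, 12 views each
print(logits.shape)                             # torch.Size([2, 40])

The design choice mirrored here is that the ConvLSTM gates are computed by a 2D convolution rather than a matrix product, so the recurrent state keeps its spatial layout across the view sequence; the attention weights then decide how much each view contributes to the fused descriptor, echoing the abstract's point about suppressing redundant views.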

Details

ISSN :
2169-3536
Volume :
8
Database :
OpenAIRE
Journal :
IEEE Access
Accession number :
edsair.doi.dedup.....44fe05f80836969d384a5dd5b9db5a42