Two-Stream Network Based on Visual Saliency Sharing for 3D Model Recognition
- Authors
Weizhi Nie, Lu Qu, Minjie Ren, Qi Liang, Yuting Su, Yangyang Li, and Hao Jin
- Subjects
3D model, view-based, classification, retrieval, MVCNN, LSTM, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Shape representation for 3D models is an important topic in computer vision, multimedia analysis, and computer graphics. Recent multiview-based methods demonstrate promising performance for 3D model recognition and retrieval. However, most multiview-based methods focus only on the visual information of the captured views and ignore the correlation among these views, so the similarity and differentiation of the multiple views are lost. To address this issue, we propose a novel two-stream network architecture for 3D model recognition and retrieval. The proposed network includes two sub-networks: a multi-view convolutional neural network (MVCNN) that extracts visual information from the captured views, and a Visual Saliency model that assigns weights to the views based on the similarity and differentiation among the multiple views. Specifically, the view weights defined by the Visual Saliency model effectively guide the fusion of visual information in the MVCNN model, which allows the MVCNN model to preserve both the visual information and the correlation information of the views during learning. Finally, we employ an early-fusion method to combine the feature vectors from the MVCNN model and the Visual Saliency model into a shape descriptor for 3D model recognition and retrieval. Experimental results on two public datasets, ModelNet40 and ShapeNetCore55, demonstrate that the correlation information of multiple views is crucial for view-based 3D model recognition and that the proposed method achieves state-of-the-art performance on both 3D object classification and retrieval.
- Published
2020
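
The abstract describes the architecture only at a high level: per-view features from an MVCNN stream, per-view weights from a Visual Saliency stream, weight-guided view pooling, and early fusion of the two descriptors. The sketch below is a minimal illustration of that general idea, not the authors' implementation; the PyTorch framing, the ResNet-18 per-view backbone, the small MLP used as the saliency scorer, and the max-pooled second-stream descriptor are all assumptions introduced here for clarity.

```python
# Illustrative sketch only; module names and backbone choices are assumed,
# not taken from the paper.
import torch
import torch.nn as nn
import torchvision.models as models


class TwoStreamViewNet(nn.Module):
    """Saliency-weighted multi-view aggregation with early fusion."""

    def __init__(self, num_classes=40, feat_dim=512):
        super().__init__()
        backbone = models.resnet18(weights=None)              # per-view CNN (assumed backbone)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])
        self.saliency = nn.Sequential(                         # hypothetical saliency scorer
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 1))
        self.classifier = nn.Linear(feat_dim * 2, num_classes)

    def forward(self, views):                                  # views: (B, V, 3, H, W)
        b, v = views.shape[:2]
        f = self.cnn(views.flatten(0, 1)).flatten(1)           # (B*V, feat_dim) per-view features
        f = f.view(b, v, -1)                                   # (B, V, feat_dim)
        w = torch.softmax(self.saliency(f), dim=1)             # per-view weights, (B, V, 1)
        visual = (w * f).sum(dim=1)                            # weight-guided view pooling (MVCNN stream)
        saliency_feat = f.max(dim=1).values                    # second-stream descriptor (assumed pooling)
        descriptor = torch.cat([visual, saliency_feat], dim=1) # early fusion into the shape descriptor
        return self.classifier(descriptor), descriptor


# Example usage with a batch of 2 models, 12 rendered views each:
# logits, desc = TwoStreamViewNet()(torch.randn(2, 12, 3, 224, 224))
```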