
Image Feature Matching Based on Semantic Fusion Description and Spatial Consistency

Authors :
Wei Zhang
Guoying Zhang
Source :
Symmetry, Vol 10, Iss 12, p 725 (2018)
Publication Year :
2018
Publisher :
MDPI AG, 2018.

Abstract

Image feature description and matching are widely used in computer vision, for example in camera pose estimation. Traditional feature descriptors lack semantic and spatial information, which gives rise to a large number of feature mismatches. To improve the accuracy of image feature matching, this paper proposes a feature description and matching method based on local semantic information fusion and feature spatial consistency. Object detection is first applied to the images; feature points are then extracted, and image patches of various sizes surrounding these points are clipped. These patches are fed into a Siamese convolutional network to obtain their semantic vectors. The semantic fusion description of each feature point is then computed as a weighted sum of these semantic vectors, with the weights optimized by the particle swarm optimization (PSO) algorithm. When matching feature points using these descriptions, feature spatial consistency is evaluated from the spatial consistency of matched objects and from the orientation and distance constraints of adjacent points within matched objects. With this description and matching method, feature points are matched accurately and effectively, and the experimental results demonstrate the effectiveness of the proposed approach.
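To illustrate the fusion-description step outlined in the abstract, the following is a minimal sketch (not the authors' code): multi-scale patches are clipped around a keypoint, each patch is mapped to a semantic vector, and the vectors are combined by a weighted sum. The embedding function here is only a placeholder for the paper's Siamese CNN, and the fixed weights stand in for the PSO-optimized weights; all function names and parameter values are illustrative assumptions.

```python
import numpy as np

def extract_patches(image, point, sizes=(16, 32, 64)):
    """Clip square patches of several sizes centered on a feature point (x, y)."""
    x, y = point
    patches = []
    for s in sizes:
        half = s // 2
        # Reflect-pad so patches near the border keep a constant size.
        padded = np.pad(image, half, mode="reflect")
        px, py = x + half, y + half
        patches.append(padded[py - half:py + half, px - half:px + half])
    return patches

def embed(patch, dim=128, seed=0):
    """Placeholder for the Siamese-CNN semantic vector: a fixed random
    projection of the normalized patch (illustration only)."""
    rng = np.random.default_rng(seed)
    flat = patch.astype(np.float32).ravel()
    flat = (flat - flat.mean()) / (flat.std() + 1e-8)
    proj = rng.standard_normal((dim, flat.size)).astype(np.float32)
    v = proj @ flat
    return v / (np.linalg.norm(v) + 1e-8)

def fusion_descriptor(image, point, weights=(0.5, 0.3, 0.2)):
    """Weighted sum of the multi-scale semantic vectors.
    The weights here are arbitrary; the paper optimizes them with PSO."""
    vectors = [embed(p) for p in extract_patches(image, point)]
    desc = sum(w * v for w, v in zip(weights, vectors))
    return desc / (np.linalg.norm(desc) + 1e-8)

# Example: describe one keypoint of a synthetic grayscale image.
img = np.random.rand(240, 320).astype(np.float32)
d = fusion_descriptor(img, point=(100, 60))
print(d.shape)  # (128,)
```

In practice, the placeholder embedding would be replaced by the trained Siamese network's output for each patch, and candidate matches produced from these descriptors would then be filtered by the spatial-consistency constraints described in the abstract.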

Details

Language :
English
ISSN :
2073-8994
Volume :
10
Issue :
12
Database :
Directory of Open Access Journals
Journal :
Symmetry
Publication Type :
Academic Journal
Accession number :
edsdoj.5db99a55e64841ebb453ebe2cf12414c
Document Type :
article
Full Text :
https://doi.org/10.3390/sym10120725