301. Fusion of Deep Learning and Compressed Domain Features for Content-Based Image Retrieval.
- Author
- Peizhong Liu, Jing-Ming Guo, Chi-Yi Wu, and Danlin Cai
- Abstract
This paper presents an effective image retrieval method that combines high-level features from a convolutional neural network (CNN) model with low-level features from dot-diffused block truncation coding (DDBTC). The low-level features, e.g., texture and color, are constructed as vector quantization (VQ)-indexed histograms from the DDBTC bitmap and the maximum and minimum quantizers. Conversely, the high-level CNN features can effectively capture human perception. By fusing the DDBTC and CNN features, the extended deep learning two-layer codebook features are generated using the proposed two-layer codebook, dimension reduction, and similarity reweighting to improve the overall retrieval rate. Two metrics, average precision rate (APR) and average recall rate (ARR), are employed to evaluate various data sets. As documented in the experimental results, the proposed schemes achieve superior retrieval rates compared with state-of-the-art methods based on either low- or high-level features. Thus, the method can be a strong candidate for various image retrieval related applications.
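As a rough illustration of the general idea described in the abstract, the sketch below fuses a low-level histogram descriptor with a high-level deep feature by L2-normalized concatenation and ranks database images by cosine similarity. This is only a minimal, generic feature-fusion example: the feature dimensions, the random stand-ins for DDBTC histograms and CNN activations, and the function names are assumptions, and it does not implement the paper's two-layer codebook, dimension reduction, or similarity reweighting steps.

```python
import numpy as np


def fuse_features(low_level: np.ndarray, high_level: np.ndarray) -> np.ndarray:
    """L2-normalize each descriptor separately, then concatenate into one vector."""
    def l2norm(v: np.ndarray) -> np.ndarray:
        n = np.linalg.norm(v)
        return v / n if n > 0 else v
    return np.concatenate([l2norm(low_level), l2norm(high_level)])


def retrieve(query: np.ndarray, database: np.ndarray, top_k: int = 10) -> np.ndarray:
    """Rank database images by cosine similarity to the fused query descriptor."""
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    scores = db @ q
    return np.argsort(-scores)[:top_k]


# Toy usage with random placeholders (hypothetical sizes, not from the paper):
# a 256-bin histogram standing in for DDBTC/VQ features and a 4096-dim vector
# standing in for CNN activations.
rng = np.random.default_rng(0)
query_vec = fuse_features(rng.random(256), rng.random(4096))
db_vecs = np.stack([fuse_features(rng.random(256), rng.random(4096)) for _ in range(100)])
print(retrieve(query_vec, db_vecs, top_k=5))
```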
- Published
- 2017