
Objects Discovery Based on Co-Occurrence Word Model With Anchor-Box Polishing.

Authors :
Zhang, Zhewei
Jing, Tao
Tian, Chunhua
Cui, Pengfei
Li, Xuejing
Gao, Meilin
Source :
IEEE Transactions on Circuits & Systems for Video Technology; Mar2020, Vol. 30 Issue 3, p632-645, 14p
Publication Year :
2020

Abstract

State-of-the-art objects discovery approaches fall into two categories: deep learning methods based on convolutional neural networks with region proposals, and conventional machine learning methods based on topic models, low-rank matrix factorization, or image processing. The deep learning methods trade computational cost for precision, and training can be slow without GPU platforms, whereas the conventional methods usually lack detection accuracy. To address both training complexity and detection speed, we present a new objects discovery approach organized as a two-stage (training and verification) method. In the training stage, a topic model with a word co-occurrence prior is built on the basis of the Latent Dirichlet Allocation (LDA) model, so that the co-occurrence information among features is fully exploited. In the verification stage, we propose an anchor-box polishing algorithm that fine-tunes the detection results, produced by conventional algorithms with fast detection time, against the pre-trained topic model. Experiments on various datasets demonstrate that the proposed approach improves detection performance in terms of efficiency and computing cost, and that it is robust to variation in object color, lighting, scale, and so on. Notably, the proposed method can be combined with many fast but inaccurate detection algorithms, which enhances its flexibility. [ABSTRACT FROM AUTHOR]
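The abstract does not give the polishing algorithm itself, but the idea of refining a fast detector's coarse box against a set of candidate anchor boxes can be sketched in a minimal, hypothetical form. The IoU criterion, the `polish` function, and the mixing weight `alpha` below are all illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch: refine a coarse detection box by blending it toward
# the candidate anchor box with the highest overlap (IoU). This is NOT the
# paper's anchor-box polishing algorithm, only an illustration of the idea.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def polish(coarse_box, anchors, alpha=0.5):
    """Blend the coarse box toward its best-overlapping anchor.
    alpha is an assumed mixing weight, not taken from the paper."""
    best = max(anchors, key=lambda anc: iou(coarse_box, anc))
    return tuple(alpha * c + (1 - alpha) * a
                 for c, a in zip(coarse_box, best))

coarse = (10, 10, 50, 50)                      # coarse box from a fast detector
anchors = [(0, 0, 40, 40), (12, 8, 52, 48), (100, 100, 140, 140)]
print(polish(coarse, anchors))                 # → (11.0, 9.0, 51.0, 49.0)
```

In the paper, the choice among candidate boxes is guided by the pre-trained co-occurrence topic model rather than by geometry alone; the sketch only shows the refinement mechanics.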

Details

Language :
English
ISSN :
1051-8215
Volume :
30
Issue :
3
Database :
Complementary Index
Journal :
IEEE Transactions on Circuits & Systems for Video Technology
Publication Type :
Academic Journal
Accession number :
143312874
Full Text :
https://doi.org/10.1109/TCSVT.2019.2894363