
Dynamic Deep Pixel Distribution Learning for Background Subtraction.

Authors :
Zhao, Chenqiu
Basu, Anup
Source :
IEEE Transactions on Circuits & Systems for Video Technology. Nov2020, Vol. 30 Issue 11, p4192-4206. 15p.
Publication Year :
2020

Abstract

Previous approaches to background subtraction usually approximate the distribution of pixels with artificial models. In this paper, we focus on automatically learning the distribution, using a novel background subtraction model named Dynamic Deep Pixel Distribution Learning (D-DPDL). In our D-DPDL model, a distribution descriptor named Random Permutation of Temporal Pixels (RPoTP) is dynamically generated as the input to a convolutional neural network for learning the statistical distribution, and a Bayesian refinement model is tailored to handle the random noise introduced by the random permutation. Because the temporal pixels are randomly permuted to guarantee that only statistical information is retained in RPoTP features, the network is forced to learn the pixel distribution. Moreover, since the noise is random, Bayes' theorem is a natural choice for an empirical compensation model based on the similarity between pixels. Evaluations on standard benchmarks demonstrate the superiority of the proposed approach over the state-of-the-art, including both traditional and deep learning methods. [ABSTRACT FROM AUTHOR]
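The core idea behind the RPoTP descriptor can be sketched as follows. This is a minimal illustration based only on the abstract's description, not the authors' implementation: the patch size, tiling scheme, and function name are assumptions.

```python
import numpy as np

def rpotp_descriptor(temporal_pixels, patch_size=8, rng=None):
    """Illustrative sketch of an RPoTP-style feature: randomly permute a
    pixel's temporal intensity sequence and tile it into a square patch,
    so only the statistical distribution (not the temporal order) remains
    for a CNN to learn from. Details here are assumptions."""
    rng = np.random.default_rng() if rng is None else rng
    permuted = rng.permutation(temporal_pixels)  # destroy temporal order
    n = patch_size * patch_size
    # Repeat or truncate the permuted samples to fill the patch exactly.
    tiled = np.resize(permuted, n)
    return tiled.reshape(patch_size, patch_size)

# Example: 100 intensity observations of one pixel over time.
history = np.random.default_rng(0).integers(0, 256, size=100)
patch = rpotp_descriptor(history, patch_size=8)
print(patch.shape)  # (8, 8)
```

Because the permutation discards ordering, any network trained on such patches can only exploit distributional statistics of the pixel's history, which is the constraint the abstract describes.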

Details

Language :
English
ISSN :
10518215
Volume :
30
Issue :
11
Database :
Academic Search Index
Journal :
IEEE Transactions on Circuits & Systems for Video Technology
Publication Type :
Academic Journal
Accession number :
146783106
Full Text :
https://doi.org/10.1109/TCSVT.2019.2951778