
DeepVS2.0: A Saliency-Structured Deep Learning Method for Predicting Dynamic Visual Attention.

Authors: Jiang, Lai; Xu, Mai; Wang, Zulin; Sigal, Leonid
Source: International Journal of Computer Vision. 2021, Vol. 129 Issue 1, p203-224. 22p.
Publication Year: 2021

Abstract

Deep neural networks (DNNs) have achieved great success in image saliency prediction. However, few works have applied DNNs to saliency prediction for generic videos. In this paper, we propose a novel DNN-based video saliency prediction method, called DeepVS2.0. Specifically, we establish a large-scale eye-tracking database of videos (LEDOV), which provides sufficient data to train DNN models for predicting video saliency. Through statistical analysis of LEDOV, we find that human attention is normally attracted by objects, particularly moving objects or the moving parts of objects. Accordingly, we propose an object-to-motion convolutional neural network (OM-CNN) in DeepVS2.0 that learns spatio-temporal features for intra-frame saliency prediction by exploiting both objectness and object-motion information. We further find from our database that human attention is temporally correlated, with smooth saliency transitions across video frames. Therefore, a saliency-structured convolutional long short-term memory network (SS-ConvLSTM) is developed in DeepVS2.0 to predict inter-frame saliency, taking the features extracted by OM-CNN as input. Moreover, center-bias dropout and a sparsity-weighted loss are embedded in SS-ConvLSTM to account for the center bias and sparsity of human attention maps. Finally, experimental results show that our DeepVS2.0 method advances the state of the art in video saliency prediction.
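The two SS-ConvLSTM ingredients named in the abstract, center-bias dropout and the sparsity-weighted loss, are the most code-amenable parts of the method. Below is a minimal PyTorch sketch of one plausible reading of each: a dropout mask whose keep-probability decays away from the frame center, and a KL-divergence loss weighted by how concentrated the ground-truth map is. The function names (center_bias_mask, sparsity_weighted_kl), the Gaussian prior, and the entropy-based weighting are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def center_bias_mask(h, w, sigma=0.3, drop_scale=0.5):
    # Keep-probability follows a centered 2-D Gaussian, so units near the
    # frame border are dropped more often than central ones. The Gaussian
    # prior and its parameters are assumptions for illustration; the paper
    # derives its center-bias map from eye-tracking statistics.
    ys = torch.linspace(-1.0, 1.0, h).view(h, 1).expand(h, w)
    xs = torch.linspace(-1.0, 1.0, w).view(1, w).expand(h, w)
    prior = torch.exp(-(xs ** 2 + ys ** 2) / (2 * sigma ** 2))  # in (0, 1]
    keep_prob = 1.0 - drop_scale * (1.0 - prior)                # highest at center
    return torch.bernoulli(keep_prob) / keep_prob               # inverted-dropout scaling

def sparsity_weighted_kl(pred, gt, eps=1e-8):
    # KL-divergence saliency loss, weighted per frame by how concentrated
    # (sparse) the ground-truth attention map is, measured via negative
    # entropy. This weighting is an illustrative stand-in, not the exact
    # formulation from DeepVS2.0.
    p = pred.flatten(1)
    p = p / (p.sum(dim=1, keepdim=True) + eps)   # normalize to a distribution
    g = gt.flatten(1)
    g = g / (g.sum(dim=1, keepdim=True) + eps)
    kl = (g * (torch.log(g + eps) - torch.log(p + eps))).sum(dim=1)
    entropy = -(g * torch.log(g + eps)).sum(dim=1)
    max_entropy = torch.log(torch.tensor(float(g.shape[1])))
    weight = max_entropy - entropy               # large for sparse (peaked) maps
    return (weight * kl).mean()
```

In training, the mask would multiply the ConvLSTM hidden state (hidden = hidden * center_bias_mask(h, w)), so spatial units far from the frame center are suppressed more often, mimicking the center bias observed in the LEDOV fixation data.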

Details

Language: English
ISSN: 0920-5691
Volume: 129
Issue: 1
Database: Academic Search Index
Journal: International Journal of Computer Vision
Publication Type: Academic Journal
Accession Number: 148190618
Full Text: https://doi.org/10.1007/s11263-020-01371-6