
Three-Dimensional Attention-Based Deep Ranking Model for Video Highlight Detection.

Authors :
Jiao, Yifan
Li, Zhetao
Huang, Shucheng
Yang, Xiaoshan
Liu, Bin
Zhang, Tianzhu
Source :
IEEE Transactions on Multimedia; Oct 2018, Vol. 20, Issue 10, p2693-2705, 13p
Publication Year :
2018

Abstract

The video highlight detection task is to localize key elements (moments of major or special interest to the user) in a video. Most existing highlight detection approaches extract features from a video segment as a whole, without considering the differences among local features in either the temporal or the spatial dimension. Because video content is complex, such mixed features degrade the final highlight prediction. Temporally, not all frames are worth watching; some contain only the background of the environment, without humans or other moving objects. Spatially, the situation is similar: not all regions in each frame are highlights, especially when the background is heavily cluttered. To solve this problem, we propose a novel three-dimensional (3-D) (spatial + temporal) attention model that automatically localizes the key elements in a video without any extra supervised annotations. Specifically, the proposed attention model produces attention weights for local regions along both the spatial and temporal dimensions of the video segment. The regions containing key elements are strengthened with large weights, yielding a more effective segment feature for predicting the highlight score. The proposed 3-D attention scheme can be easily integrated into a conventional end-to-end deep ranking model that learns a deep neural network to compute the highlight score of each video segment. Extensive experimental results on the YouTube and SumMe datasets demonstrate that the proposed approach achieves significant improvement over state-of-the-art methods. With the proposed 3-D attention model, video highlights can be accurately retrieved in the spatial and temporal dimensions without human supervision in several domains on the public datasets, such as gymnastics, parkour, skating, skiing, surfing, and dog activities.
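To make the abstract's mechanism concrete, below is a minimal PyTorch-style sketch of spatio-temporal ("3-D") attention pooling followed by a pairwise ranking objective, assuming per-region CNN features have already been extracted for each segment. The module names, tensor shapes, layer sizes (e.g., a 512-dimensional feature and a 7x7 region grid), and the margin value are illustrative assumptions, not the authors' implementation.

```python
# Sketch: spatial attention weights regions within each frame, temporal attention
# weights frames within a segment, and the pooled feature is scored for highlights.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatioTemporalAttention(nn.Module):
    """Pools region features into a segment descriptor via spatial + temporal attention."""

    def __init__(self, feat_dim: int = 512, hidden_dim: int = 128):
        super().__init__()
        self.spatial_fc = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.Tanh(), nn.Linear(hidden_dim, 1)
        )
        self.temporal_fc = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.Tanh(), nn.Linear(hidden_dim, 1)
        )
        self.score_fc = nn.Linear(feat_dim, 1)  # highlight-score head

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, R, D) = batch, frames per segment, regions per frame, feature dim
        # Spatial attention: one weight per region, normalized within each frame.
        spatial_w = F.softmax(self.spatial_fc(x).squeeze(-1), dim=2)            # (B, T, R)
        frame_feat = (spatial_w.unsqueeze(-1) * x).sum(dim=2)                   # (B, T, D)
        # Temporal attention: one weight per frame, normalized within each segment.
        temporal_w = F.softmax(self.temporal_fc(frame_feat).squeeze(-1), dim=1)  # (B, T)
        seg_feat = (temporal_w.unsqueeze(-1) * frame_feat).sum(dim=1)            # (B, D)
        return self.score_fc(seg_feat).squeeze(-1)                               # (B,) scores


def pairwise_ranking_loss(pos_scores, neg_scores, margin: float = 1.0):
    """Hinge loss pushing highlight segments to score above non-highlight ones."""
    return F.relu(margin - pos_scores + neg_scores).mean()


if __name__ == "__main__":
    model = SpatioTemporalAttention()
    pos = torch.randn(4, 8, 49, 512)  # 4 highlight segments: 8 frames x 7x7 regions
    neg = torch.randn(4, 8, 49, 512)  # 4 non-highlight segments
    loss = pairwise_ranking_loss(model(pos), model(neg))
    loss.backward()
```

In this reading, the learned weights themselves serve as the spatial and temporal localization of key elements, so no extra region- or frame-level annotations are required; only segment-level ranking pairs supervise training.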

Details

Language :
English
ISSN :
1520-9210
Volume :
20
Issue :
10
Database :
Complementary Index
Journal :
IEEE Transactions on Multimedia
Publication Type :
Academic Journal
Accession number :
131794364
Full Text :
https://doi.org/10.1109/TMM.2018.2815998