
Semantics-Aware Spatial-Temporal Binaries for Cross-Modal Video Retrieval.

Authors:
Qi, Mengshi
Qin, Jie
Yang, Yi
Wang, Yunhong
Luo, Jiebo
Source:
IEEE Transactions on Image Processing, 2021, Vol. 30, pp. 2989-3004 (16 pp.)
Publication Year:
2021

Abstract

With the current exponential growth of video-based social networks, video retrieval using natural language is receiving ever-increasing attention. Most existing approaches tackle this task by extracting individual frame-level spatial features to represent the whole video, while ignoring visual pattern consistencies and intrinsic temporal relationships across different frames. Furthermore, the semantic correspondence between natural language queries and person-centric actions in videos has not been fully explored. To address these problems, we propose a novel binary representation learning framework, named Semantics-aware Spatial-temporal Binaries (S²Bin), which simultaneously considers spatial-temporal context and semantic relationships for cross-modal video retrieval. By exploiting the semantic relationships between the two modalities, S²Bin can efficiently and effectively generate binary codes for both videos and texts. In addition, we adopt an iterative optimization scheme to learn deep encoding functions with attribute-guided stochastic training. We evaluate our model on three video datasets, and the experimental results demonstrate that S²Bin outperforms state-of-the-art methods on various cross-modal video retrieval tasks. [ABSTRACT FROM AUTHOR]
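To make the retrieval mechanism the abstract describes more concrete, below is a minimal Python sketch of binary-code (hashing-based) cross-modal retrieval: both modalities are mapped to fixed-length binary codes and candidates are ranked by Hamming distance. This is an illustration of the general technique, not the authors' S²Bin model; the random projections standing in for the learned deep encoders, and the names and dimensions (CODE_BITS, W_video, W_text, encode, hamming, VIDEO_DIM, TEXT_DIM), are assumptions made for the example.

# Minimal sketch of binary-code cross-modal retrieval. Both modalities
# are mapped to fixed-length binary codes and matched by Hamming
# distance. Random projections below are hypothetical stand-ins for
# the learned deep encoding functions (not the authors' S²Bin model).
import numpy as np

rng = np.random.default_rng(0)
CODE_BITS = 64                   # length of the binary hash code (assumed)
VIDEO_DIM, TEXT_DIM = 2048, 300  # assumed feature dimensions per modality

# Hypothetical modality-specific projection matrices.
W_video = rng.standard_normal((VIDEO_DIM, CODE_BITS))
W_text = rng.standard_normal((TEXT_DIM, CODE_BITS))

def encode(features, W):
    """Project real-valued features and binarize by sign -> codes in {0, 1}."""
    return (features @ W > 0).astype(np.uint8)

def hamming(query, database):
    """Hamming distance between one query code and every database code."""
    return np.count_nonzero(query != database, axis=1)

# Text-to-video retrieval: rank videos by Hamming distance to the query.
video_codes = encode(rng.standard_normal((1000, VIDEO_DIM)), W_video)
query_code = encode(rng.standard_normal(TEXT_DIM), W_text)
ranking = np.argsort(hamming(query_code, video_codes))
print("Top-5 retrieved video indices:", ranking[:5])

The appeal of binary codes in this setting is efficiency: Hamming distance reduces to XOR plus popcount, so ranking a large video database against a text query is far cheaper than computing real-valued similarities.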

Details

Language:
English
ISSN:
1057-7149
Volume:
30
Database:
Complementary Index
Journal:
IEEE Transactions on Image Processing
Publication Type:
Academic Journal
Accession Number:
170077679
Full Text:
https://doi.org/10.1109/TIP.2020.3048680