1. Semantics-Aware Spatial-Temporal Binaries for Cross-Modal Video Retrieval
- Authors
- Jie Qin, Yunhong Wang, Yi Yang, Jiebo Luo, and Mengshi Qi
- Subjects
- feature extraction, semantics, visualization, natural language, binary codes, cross-modal hashing, video retrieval, spatial-temporal features, binary representation, task analysis, stochastic processes
- Abstract
- With the exponential growth of video-based social networks, video retrieval using natural language is receiving ever-increasing attention. Most existing approaches tackle this task by extracting individual frame-level spatial features to represent the whole video, while ignoring visual pattern consistencies and intrinsic temporal relationships across different frames. Furthermore, the semantic correspondence between natural language queries and person-centric actions in videos has not been fully explored. To address these problems, we propose a novel binary representation learning framework, named Semantics-aware Spatial-temporal Binaries ($\text{S}^{2}$Bin), which simultaneously considers spatial-temporal context and semantic relationships for cross-modal video retrieval. By exploiting the semantic relationships between the two modalities, $\text{S}^{2}$Bin can efficiently and effectively generate binary codes for both videos and texts. In addition, we adopt an iterative optimization scheme to learn deep encoding functions with attribute-guided stochastic training. We evaluate our model on three video datasets, and the experimental results demonstrate that $\text{S}^{2}$Bin outperforms state-of-the-art methods on various cross-modal video retrieval tasks.
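As context for how such binary codes are used at query time, below is a minimal NumPy sketch of cross-modal retrieval in a shared Hamming space. This is not the authors' implementation: the random projections `W_vid` and `W_txt` merely stand in for the learned deep video and text encoders, and all names (`encode`, `hamming_rank`, the feature dimensions) are hypothetical, chosen only for illustration.

```python
import numpy as np

# Hypothetical sketch, not the S^2Bin release: assume video and text encoders
# have been trained to map both modalities into a shared K-bit Hamming space;
# random projections stand in for those learned deep encoders here.

rng = np.random.default_rng(0)
K = 64                    # binary code length in bits
D_VID, D_TXT = 512, 300   # toy video/text feature dimensions

W_vid = rng.standard_normal((D_VID, K))  # stand-in for the video encoder
W_txt = rng.standard_normal((D_TXT, K))  # stand-in for the text encoder

def encode(features, W):
    """Binarize real-valued features into {0,1}^K via the sign of a projection."""
    return (features @ W > 0).astype(np.uint8)

def hamming_rank(query_code, db_codes):
    """Rank database items by Hamming distance to the query code."""
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(dists), dists

# A natural-language query retrieving from a database of 1000 video codes.
video_codes = encode(rng.standard_normal((1000, D_VID)), W_vid)
query_code = encode(rng.standard_normal(D_TXT), W_txt)

order, dists = hamming_rank(query_code, video_codes)
print("top-5 videos:", order[:5], "distances:", dists[order[:5]])
```

The appeal of binary codes for retrieval, which motivates hashing methods in general, is that Hamming distance reduces to XOR plus a population count on packed bit vectors, making large-scale nearest-neighbor search both fast and memory-efficient.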
- Published
- 2021