SST-Sal: A spherical spatio-temporal approach for saliency prediction in 360° videos.
- Author
- Bernal-Berdun, Edurne; Martin, Daniel; Gutierrez, Diego; Masia, Belen
- Subjects
- Recurrent neural networks; Convolutional neural networks; Artificial neural networks; Optical flow; Feature extraction; Video coding; Virtual reality
- Abstract
Virtual reality (VR) has the potential to change the way people consume content, and has been predicted to become the next big computing paradigm. However, much remains unknown about the grammar and visual language of this new medium, and understanding and predicting how humans behave in virtual environments remains an open problem. In this work, we propose a novel saliency prediction model which exploits the joint potential of spherical convolutions and recurrent neural networks to extract and model the inherent spatio-temporal features from 360° videos. We employ Convolutional Long Short-Term Memory cells (ConvLSTMs) to account for temporal information at the time of feature extraction rather than to post-process spatial features as in previous works. To facilitate spatio-temporal learning, we provide the network with an estimation of the optical flow between 360° frames, since motion is known to be a highly salient feature in dynamic content. Our model is trained with a novel spherical Kullback–Leibler Divergence (KLDiv) loss function specifically tailored for saliency prediction in 360° content. Our approach outperforms previous state-of-the-art works, being able to mimic human visual attention when exploring dynamic 360° videos.
• We propose a saliency prediction model for 360° videos based on spherical ConvLSTMs.
• ConvLSTMs leverage temporal information when encoding and decoding features.
• We present a spherical Kullback–Leibler Divergence loss tailored to 360° content.
• Our model exploits optical flow to learn the relationship between motion and saliency.
• Our approach outperforms state-of-the-art works for saliency prediction in 360° videos. [ABSTRACT FROM AUTHOR]
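The abstract mentions a spherical KLDiv loss tailored to 360° content but does not give its formulation. The sketch below is a hedged illustration, not the authors' implementation: it assumes the common approach of weighting each pixel's KL term by the solid angle it covers on the sphere (sin of the polar angle), so the over-sampled polar rows of an equirectangular saliency map do not dominate the loss. The function name spherical_kldiv_loss and all parameters are illustrative assumptions.

```python
# Hedged sketch of a solid-angle-weighted KL-divergence loss for
# equirectangular saliency maps (B, H, W). Assumed formulation, not the
# paper's exact loss.
import torch


def spherical_kldiv_loss(pred: torch.Tensor, target: torch.Tensor,
                         eps: float = 1e-8) -> torch.Tensor:
    """KL(target || pred) with per-row sin(theta) solid-angle weighting."""
    h = pred.shape[-2]

    # Polar angle of each row's center; rows near the poles cover less area.
    theta = (torch.arange(h, dtype=pred.dtype, device=pred.device) + 0.5) / h * torch.pi
    weight = torch.sin(theta).view(1, h, 1)  # broadcast over batch and width

    # Renormalize both maps to probability distributions over the weighted sphere.
    pred = pred * weight
    target = target * weight
    pred = pred / (pred.sum(dim=(1, 2), keepdim=True) + eps)
    target = target / (target.sum(dim=(1, 2), keepdim=True) + eps)

    kl = target * torch.log((target + eps) / (pred + eps))
    return kl.sum(dim=(1, 2)).mean()


# Usage example with random maps (batch of 2, 64x128 equirectangular grid):
pred = torch.rand(2, 64, 128)
target = torch.rand(2, 64, 128)
print(spherical_kldiv_loss(pred, target))
```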
- Published
- 2022