
Motion Context guided Edge-preserving network for video salient object detection.

Authors :
Huang, Kan
Tian, Chunwei
Xu, Zhijing
Li, Nannan
Lin, Jerry Chun-Wei
Source :
Expert Systems with Applications. Dec 2023, Vol. 233.
Publication Year :
2023

Abstract

Video salient object detection aims to extract the most conspicuous objects in a video sequence, which facilitates various video processing tasks, e.g., video compression, video recognition, etc. Although remarkable progress has been made in video salient object detection, most existing methods still suffer from coarse object boundaries, which may hinder their usage in real-world applications. To alleviate this problem, in this paper, we propose a Motion Context guided Edge-preserving network (MCE-Net) for video salient object detection. MCE-Net can generate temporally consistent salient edges, which are then leveraged to refine the salient object regions completely and uniformly. The core innovation in MCE-Net is an Asymmetric Cross-Reference Module (ACRM), which is designed to exploit the cross-modal complementarity between spatial structure and motion flow, facilitating robust salient object edge extraction. To leverage the extracted edge features for salient object refinement, we fuse them with multi-level spatial–temporal embeddings in a parallel guidance manner, generating the final saliency results. The proposed method is end-to-end trainable, and the edge annotations are generated automatically from ground-truth saliency maps. Experimental evaluations on five widely used benchmarks demonstrate that our proposed method achieves superior performance compared with other state-of-the-art methods. Moreover, the experimental results indicate that our method can preserve salient objects with clear boundary structures in video sequences.

• We propose to address the coarse boundary issue in video salient object detection.
• A novel method that uses object boundaries to refine salient objects is presented.
• The complementarity between spatial and motion cues is exploited to generate edges.
• Evaluations on five benchmarks verify the efficacy of the edge refinement strategy.
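The abstract states that edge annotations are generated automatically from the ground-truth saliency maps rather than labeled by hand. The sketch below shows one common way such edge labels can be derived, via a morphological gradient on the binary mask; the function name, the operator choice, and the edge width parameter are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np
from scipy import ndimage


def saliency_mask_to_edge_label(mask: np.ndarray, width: int = 1) -> np.ndarray:
    """Derive a binary edge map from a ground-truth saliency mask.

    Hypothetical illustration: the paper only says edge annotations are
    generated automatically from ground-truth saliency maps, so a
    morphological gradient (dilation XOR erosion) is assumed here.
    """
    binary = mask > 0.5                                        # binarize the saliency map
    dilated = ndimage.binary_dilation(binary, iterations=width)
    eroded = ndimage.binary_erosion(binary, iterations=width)
    return (dilated ^ eroded).astype(np.float32)               # thin ring along the object boundary


# Example: a filled 3x3 square in a 5x5 mask yields its boundary ring.
gt = np.zeros((5, 5), dtype=np.float32)
gt[1:4, 1:4] = 1.0
edge_label = saliency_mask_to_edge_label(gt)
```

Such edge maps could then supervise the edge branch alongside the saliency supervision, keeping the whole network end-to-end trainable as described.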

Details

Language :
English
ISSN :
09574174
Volume :
233
Database :
Academic Search Index
Journal :
Expert Systems with Applications
Publication Type :
Academic Journal
Accession number :
171113411
Full Text :
https://doi.org/10.1016/j.eswa.2023.120739