
On guiding video object segmentation

Authors :
Ortego, Diego
McGuinness, Kevin
SanMiguel, Juan C.
Arazo, Eric
Martínez, José M.
O'Connor, Noel E.
Publication Year :
2019

Abstract

This paper presents a novel approach for segmenting moving objects in unconstrained environments using guided convolutional neural networks. The guiding process relies on foreground masks from independent (i.e. state-of-the-art) algorithms to implement an attention mechanism that incorporates the spatial locations of foreground and background to compute their separate representations. Our approach initially extracts two kinds of features for each frame using colour and optical flow information. These features are combined via a multiplicative scheme to benefit from their complementarity. The unified colour and motion features are then processed to obtain the separate foreground and background representations. Finally, both independent representations are concatenated and decoded to perform foreground segmentation. Experiments conducted on the challenging DAVIS 2016 dataset demonstrate that our guided representations outperform not only non-guided representations, but also recent top-performing video object segmentation algorithms.
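The data flow described in the abstract can be sketched as follows. This is a minimal, illustrative NumPy version of the pipeline (multiplicative fusion of colour and motion features, mask-guided splitting into foreground and background representations, then concatenation and decoding), not the paper's actual CNN: the function name, feature shapes, and the channel-mean "decoder" stand-in are all assumptions made for this sketch.

```python
import numpy as np

def guided_segmentation_sketch(colour_feat, motion_feat, fg_mask):
    """Illustrative sketch of mask-guided foreground/background
    representation building (NOT the paper's exact network).

    colour_feat, motion_feat: (C, H, W) per-frame feature maps
    fg_mask: (H, W) foreground probabilities in [0, 1], produced by an
             independent segmentation algorithm (the "guide")
    """
    # Multiplicative fusion exploits the complementarity of
    # appearance (colour) and motion (optical flow) cues.
    fused = colour_feat * motion_feat                      # (C, H, W)

    # Spatial attention: the guiding mask splits the fused features
    # into separate foreground and background representations.
    fg_repr = fused * fg_mask                              # (C, H, W)
    bg_repr = fused * (1.0 - fg_mask)                      # (C, H, W)

    # Concatenate both representations along the channel axis; in the
    # real model a learned decoder maps this to per-pixel foreground
    # probabilities. Here a channel mean plus sigmoid stands in.
    combined = np.concatenate([fg_repr, bg_repr], axis=0)  # (2C, H, W)
    logits = combined.mean(axis=0)                         # toy decoder
    return 1.0 / (1.0 + np.exp(-logits))                   # (H, W) in (0, 1)
```

Note that the multiplicative fusion and the mask-based split are both elementwise, so the sketch preserves spatial resolution throughout; only the stand-in decoder collapses the channel dimension.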

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.1904.11256
Document Type :
Working Paper