
Temporally Coherent Person Matting Trained on Fake-Motion Dataset

Authors:
Molodetskikh, Ivan
Erofeev, Mikhail
Moskalenko, Andrey
Vatolin, Dmitry
Publication Year:
2021

Abstract

We propose a novel neural-network-based method for matting videos that depict people, requiring no additional user input such as trimaps. Our architecture achieves temporal stability of the resulting alpha mattes by combining motion-estimation-based smoothing of the outputs of an image-segmentation algorithm with convolutional-LSTM modules on the U-Net skip connections. We also propose a fake-motion algorithm that generates training clips for the video-matting network from photos with ground-truth alpha mattes and background videos. We apply random motion to the photos and their mattes to simulate the movement found in real videos and composite the result with the background clips. This lets us train a deep neural network that operates on videos despite the absence of a large annotated video dataset, and it provides ground-truth foreground optical flow for the training clips for use in loss functions.

Comment: 13 pages, 5 figures
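
The temporal-stability component pairs motion-estimation-based smoothing with convolutional-LSTM modules placed on the U-Net skip connections. A minimal PyTorch sketch of how such a module could sit on one skip connection is given below; the cell layout, channel counts, and the per-frame loop are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch (assumption): a standard ConvLSTM cell applied to the
# feature maps of a U-Net skip connection, one video frame at a time.
import torch
import torch.nn as nn


class ConvLSTMCell(nn.Module):
    def __init__(self, in_channels, hidden_channels, kernel_size=3):
        super().__init__()
        self.hidden_channels = hidden_channels
        # One convolution produces all four LSTM gates at once.
        self.gates = nn.Conv2d(in_channels + hidden_channels,
                               4 * hidden_channels,
                               kernel_size, padding=kernel_size // 2)

    def forward(self, x, state=None):
        if state is None:
            b, _, h, w = x.shape
            zeros = x.new_zeros(b, self.hidden_channels, h, w)
            state = (zeros, zeros)
        h_prev, c_prev = state
        i, f, o, g = self.gates(torch.cat([x, h_prev], dim=1)).chunk(4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c_prev + i * torch.tanh(g)
        h = o * torch.tanh(c)
        return h, (h, c)


# Hypothetical usage on one skip connection: encoder features of each frame
# pass through the ConvLSTM before reaching the decoder, so the skip carries
# temporally filtered features instead of independent per-frame ones.
if __name__ == "__main__":
    cell = ConvLSTMCell(in_channels=64, hidden_channels=64)
    state = None
    for _ in range(8):                        # 8-frame training clip
        skip_features = torch.randn(1, 64, 32, 32)
        filtered, state = cell(skip_features, state)
        print(filtered.shape)                 # torch.Size([1, 64, 32, 32])
```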
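
The fake-motion clip generation can likewise be sketched under simple assumptions: the affine shift/rotate/scale motion model, the function name make_fake_motion_clip, and the parameter ranges below are illustrative stand-ins for the paper's random-motion model rather than its actual code.

```python
# Minimal sketch (assumption): build a fake-motion training clip by moving a
# still photo and its alpha matte with a smoothly interpolated random affine
# transform and compositing the result over background video frames. The
# known per-frame transforms also yield an exact foreground flow field.
import numpy as np
import cv2


def make_fake_motion_clip(fg_rgb, alpha, bg_frames,
                          max_shift=30.0, max_angle=5.0, max_scale=0.05,
                          seed=None):
    """fg_rgb: HxWx3 float32 photo in [0, 1]; alpha: HxW float32 matte in
    [0, 1]; bg_frames: list of HxWx3 float32 frames in [0, 1].
    Returns (composites, mattes, flows)."""
    rng = np.random.default_rng(seed)
    h, w = alpha.shape
    n = len(bg_frames)

    # Random start/end motion parameters (dx, dy, angle, scale offset),
    # linearly interpolated so the simulated movement is temporally smooth.
    lo = [-max_shift, -max_shift, -max_angle, -max_scale]
    hi = [max_shift, max_shift, max_angle, max_scale]
    p0, p1 = rng.uniform(lo, hi), rng.uniform(lo, hi)

    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    composites, mattes, flows = [], [], []
    m_prev = None
    for t in range(n):
        dx, dy, angle, dscale = p0 + (p1 - p0) * (t / max(n - 1, 1))
        m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0 + dscale)
        m[:, 2] += (dx, dy)

        fg_t = cv2.warpAffine(fg_rgb, m, (w, h))
        a_t = cv2.warpAffine(alpha, m, (w, h))
        # Alpha-composite the moved foreground over the background frame.
        composites.append(a_t[..., None] * fg_t +
                          (1.0 - a_t[..., None]) * bg_frames[t])
        mattes.append(a_t)

        if m_prev is not None:
            # Flow from frame t-1 to t (valid on foreground pixels): map frame
            # t-1 coordinates back to photo space, then forward with the
            # frame-t affine transform.
            back = np.vstack([cv2.invertAffineTransform(m_prev), [0, 0, 1]])
            fwd = np.vstack([m, [0, 0, 1]]) @ back
            fx = fwd[0, 0] * grid_x + fwd[0, 1] * grid_y + fwd[0, 2] - grid_x
            fy = fwd[1, 0] * grid_x + fwd[1, 1] * grid_y + fwd[1, 2] - grid_y
            flows.append(np.stack([fx, fy], axis=-1))
        m_prev = m

    return composites, mattes, flows
```

Because the motion is synthetic, the same transforms that move the foreground also give an analytic flow field; this is the ground-truth training-clip foreground optical flow the abstract mentions as a loss-function target.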

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2109.04843
Document Type:
Working Paper