
Real-Time Selfie Video Stabilization

Authors :
Ravi Ramamoorthi
Jiyang Yu
Ke-Li Cheng
Ning Bi
Michel Sarkis
Source :
CVPR
Publication Year :
2020
Publisher :
arXiv, 2020.

Abstract

We propose a novel real-time selfie video stabilization method. Our method is completely automatic and runs at 26 fps. We use a 1D linear convolutional network to directly infer the rigid moving least squares warping, which implicitly balances global rigidity and local flexibility. Our network structure is specifically designed to stabilize the background and foreground at the same time, while providing optional control of stabilization focus (relative importance of foreground vs. background) to the users. To train our network, we collect a selfie video dataset with 1005 videos, which is significantly larger than previous selfie video datasets. We also propose a grid approximation to rigid moving least squares that enables real-time frame warping. Our method is fully automatic and produces visually and quantitatively better results than previous real-time general video stabilization methods. Compared to previous offline selfie video methods, our approach produces comparable quality with a speed improvement of orders of magnitude. Our code and selfie video dataset are available at https://github.com/jiy173/selfievideostabilization.
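For readers who want a concrete picture of the grid-approximated rigid moving least squares (MLS) warp the abstract refers to, the sketch below evaluates the classical Schaefer-style rigid MLS transform only at the vertices of a coarse grid; dense pixels would then be filled in by interpolating between vertices. This is an illustration of the general technique under assumed parameter names (grid_size, alpha, eps), not the authors' released implementation.

import numpy as np

def rigid_mls_grid(p, q, image_shape, grid_size=16, alpha=1.0, eps=1e-8):
    # Hypothetical helper (not from the paper's code): warp only the vertices
    # of a coarse grid with rigid moving least squares (Schaefer et al. 2006).
    # p, q: (N, 2) float arrays of source / target control points (x, y).
    h, w = image_shape
    gy, gx = np.mgrid[0:h - 1:grid_size * 1j, 0:w - 1:grid_size * 1j]
    verts = np.stack([gx.ravel(), gy.ravel()], axis=1)   # (M, 2) grid vertices
    warped = np.empty_like(verts)
    for k, v in enumerate(verts):
        w_i = 1.0 / (np.sum((p - v) ** 2, axis=1) ** alpha + eps)  # MLS weights
        p_star = (w_i[:, None] * p).sum(0) / w_i.sum()             # weighted centroids
        q_star = (w_i[:, None] * q).sum(0) / w_i.sum()
        ph, qh = p - p_star, q - q_star
        # closed-form best weighted 2D rotation (the "rigid" in rigid MLS)
        a = np.sum(w_i * (ph[:, 0] * qh[:, 0] + ph[:, 1] * qh[:, 1]))
        b = np.sum(w_i * (ph[:, 0] * qh[:, 1] - ph[:, 1] * qh[:, 0]))
        r = np.hypot(a, b) + eps
        c, s = a / r, b / r
        warped[k] = np.array([[c, -s], [s, c]]) @ (v - p_star) + q_star
    return verts.reshape(grid_size, grid_size, 2), warped.reshape(grid_size, grid_size, 2)

Evaluating MLS at a few hundred grid vertices instead of at every pixel is what makes this style of per-frame warping cheap enough for real-time use; the paper's actual approximation and network-predicted control points may differ in detail.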

Details

Database :
OpenAIRE
Journal :
CVPR
Accession number :
edsair.doi.dedup.....0910717ef32fdd968179e632ad4d7708
Full Text :
https://doi.org/10.48550/arxiv.2009.02007