
Motion-Aware Feature Enhancement Network for Video Prediction.

Authors :
Lin, Xue
Zou, Qi
Xu, Xixia
Huang, Yaping
Tian, Yi
Source :
IEEE Transactions on Circuits & Systems for Video Technology. Feb2021, Vol. 31 Issue 2, p688-700. 13p.
Publication Year :
2021

Abstract

Video prediction is challenging due to the pixel-level precision requirement and the difficulty of capturing scene dynamics. Most approaches tackle these problems with pixel-level reconstruction objectives and two decomposed branches, yet they still suffer from blurry generations or dramatic degradation in long-term prediction. In this paper, we propose a Motion-Aware Feature Enhancement (MAFE) network for video prediction to produce realistic future frames and achieve relatively long-term predictions. First, a Channel-wise and Spatial Attention (CSA) module is designed to extract motion-aware features, which enhances the contribution of important motion details during encoding and subsequently improves the discriminability of the attention map for frame refinement. Second, a Motion Perceptual Loss (MPL) is proposed to guide the learning of temporal cues, which benefits robust long-term video prediction. Extensive experiments on three human activity video datasets: KTH, Human3.6M, and PennAction demonstrate the effectiveness of the proposed video prediction model compared with state-of-the-art approaches. [ABSTRACT FROM AUTHOR]
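
The abstract describes the CSA module only at a high level. As a rough, hedged illustration of combining channel-wise and spatial attention over encoder features, a minimal CBAM-style sketch in PyTorch might look like the following; the class name, reduction ratio, and kernel size are assumptions, not the authors' exact design.

import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    # Illustrative channel-wise + spatial attention block.
    # Not the paper's exact CSA module; hyperparameters are assumed.
    def __init__(self, channels, reduction=8):
        super().__init__()
        # Channel attention: squeeze spatial dimensions, re-weight channels.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: 7x7 conv over pooled channel statistics.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Emphasize channels carrying important motion details.
        x = x * self.channel_gate(x)
        # Build a spatial attention map from average- and max-pooled channel maps.
        avg_map = x.mean(dim=1, keepdim=True)
        max_map = x.max(dim=1, keepdim=True).values
        attn = self.spatial_gate(torch.cat([avg_map, max_map], dim=1))
        return x * attn

The Motion Perceptual Loss is likewise only sketched in the abstract; a common realization of a motion-sensitive perceptual term is to compare deep features of temporal frame differences between predicted and ground-truth sequences, though the paper's exact formulation may differ.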

Details

Language :
English
ISSN :
1051-8215
Volume :
31
Issue :
2
Database :
Academic Search Index
Journal :
IEEE Transactions on Circuits & Systems for Video Technology
Publication Type :
Academic Journal
Accession number :
148595704
Full Text :
https://doi.org/10.1109/TCSVT.2020.2987141