1. Channel Attention Is All You Need for Video Frame Interpolation
- Authors
Myungsub Choi, Ning Xu, Heewon Kim, Bohyung Han, and Kyoung Mu Lee
- Subjects
Artificial neural network, Computer science, Computer vision, Image processing, Frame (networking), Motion (physics), Optical flow estimation, Feature (computer vision), Motion estimation, Motion interpolation, Benchmark (computing), Artificial intelligence, Communication channel - Abstract
Prevailing video frame interpolation techniques rely heavily on optical flow estimation, which requires additional model complexity and computational cost and is susceptible to error propagation in challenging scenarios with large motion and heavy occlusion. To alleviate these limitations, we propose a simple but effective deep neural network for video frame interpolation that is end-to-end trainable and free from any motion estimation component. Our algorithm employs a special feature reshaping operation, referred to as PixelShuffle, together with channel attention, which replaces the optical flow computation module. The main idea behind this design is to distribute the information in a feature map across multiple channels and to extract motion information by attending to those channels for pixel-level frame synthesis. The model built on this principle proves effective in the presence of challenging motion and occlusion. We construct a comprehensive evaluation benchmark and demonstrate that the proposed approach achieves outstanding performance compared to existing models that include an optical flow computation component.
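The two building blocks the abstract names — PixelShuffle-style feature reshaping and channel attention — can be illustrated in isolation. The sketch below is a minimal NumPy illustration, not the authors' implementation: `pixel_unshuffle` folds spatial neighborhoods into channels (so cross-frame displacement becomes cross-channel variation), `pixel_shuffle` inverts it, and `channel_attention` is a squeeze-and-excitation-style gate with hypothetical weight matrices `w1` and `w2` standing in for learned parameters.

```python
import numpy as np

def pixel_unshuffle(x, r):
    """Space-to-depth: (C, H, W) -> (C*r*r, H//r, W//r)."""
    C, H, W = x.shape
    x = x.reshape(C, H // r, r, W // r, r)
    x = x.transpose(0, 2, 4, 1, 3)          # gather the r x r neighborhood into channels
    return x.reshape(C * r * r, H // r, W // r)

def pixel_shuffle(x, r):
    """Depth-to-space, the inverse: (C*r*r, h, w) -> (C, h*r, w*r)."""
    Crr, h, w = x.shape
    C = Crr // (r * r)
    x = x.reshape(C, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)          # scatter channels back into spatial positions
    return x.reshape(C, h * r, w * r)

def channel_attention(x, w1, w2):
    """Gate each channel by a weight computed from global channel statistics.

    x: (C, H, W); w1: (C_mid, C); w2: (C, C_mid) -- illustrative learned weights.
    """
    s = x.mean(axis=(1, 2))                 # squeeze: global average pool per channel
    z = np.maximum(w1 @ s, 0.0)             # excitation: ReLU bottleneck
    a = 1.0 / (1.0 + np.exp(-(w2 @ z)))     # sigmoid gate in (0, 1) per channel
    return x * a[:, None, None]             # reweight channels
```

With motion folded into channels by `pixel_unshuffle`, the attention weights can emphasize the channels (i.e., spatial offsets) most useful for synthesizing the intermediate frame, which is the intuition behind replacing an explicit flow module.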
- Published
- 2020