
Insights from Generative Modeling for Neural Video Compression

Authors :
Yang, Ruihan
Yang, Yibo
Marino, Joseph
Mandt, Stephan
Publication Year :
2021

Abstract

While recent machine learning research has revealed connections between deep generative models such as VAEs and rate-distortion losses used in learned compression, most of this work has focused on images. In a similar spirit, we view recently proposed neural video coding algorithms through the lens of deep autoregressive and latent variable modeling. We present these codecs as instances of a generalized stochastic temporal autoregressive transform, and propose new avenues for further improvements inspired by normalizing flows and structured priors. We propose several architectures that yield state-of-the-art video compression performance on high-resolution video and discuss their tradeoffs and ablations. In particular, we propose (i) improved temporal autoregressive transforms, (ii) improved entropy models with structured and temporal dependencies, and (iii) variable bitrate versions of our algorithms. Since our improvements are compatible with a large class of existing models, we provide further evidence that the generative modeling viewpoint can advance the neural video coding field.

Comment: This work has been submitted to the IEEE for publication as an extension of arXiv:2010.10258. Copyright may be transferred without notice, after which this version may no longer be accessible. arXiv admin note: text overlap with arXiv:2010.10258
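The "generalized stochastic temporal autoregressive transform" mentioned in the abstract can be illustrated with a toy sketch. The rough idea (hedged: this is a simplified illustration under assumed forms, not the paper's actual architecture) is that a frame is reconstructed as a learned shift of previously decoded frames plus a learned per-pixel scale applied to a decoded stochastic latent; `shift_fn` and `scale_fn` below are hypothetical stand-ins for the deep networks a real codec would learn.

```python
import numpy as np

# Hypothetical stand-ins for learned networks; in a real neural codec these
# would be deep CNNs conditioned on previously decoded frames.
def shift_fn(x_prev):
    # Placeholder for a motion-compensated prediction of the next frame.
    return x_prev

def scale_fn(x_prev):
    # Placeholder for a per-pixel scale that modulates the latent residual.
    return np.full_like(x_prev, 0.5)

def decode_frame(x_prev, z):
    """Toy transform: x_t = shift(x_<t) + scale(x_<t) * z,
    where z is the decoded stochastic latent for the current frame."""
    return shift_fn(x_prev) + scale_fn(x_prev) * z

# Tiny demo on a 2x2 "frame" with a small latent residual.
x_prev = np.ones((2, 2))
z = np.array([[0.2, -0.2], [0.0, 0.4]])
x_t = decode_frame(x_prev, z)
```

Under this view, deterministic residual coding is the special case where the scale is constant, which is one way the framework unifies several existing video codecs.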

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2107.13136
Document Type :
Working Paper