Neural Video Coding using Multiscale Motion Compensation and Spatiotemporal Context Model

07/09/2020
by Haojie Liu, et al.

Over the past two decades, traditional block-based video coding has made remarkable progress and spawned a series of well-known standards such as MPEG-4, H.264/AVC and H.265/HEVC. Meanwhile, deep neural networks (DNNs) have demonstrated powerful capabilities for visual content understanding, feature extraction and compact representation. Several previous works have explored learnt video coding in an end-to-end manner and shown great potential compared with traditional methods. In this paper, we propose an end-to-end deep neural video coding framework (NVC) that uses variational autoencoders (VAEs) with joint spatial and temporal prior aggregation (PA) to exploit the correlations in intra-frame pixels, inter-frame motions and inter-frame compensation residuals, respectively. Novel features of NVC include: 1) an unsupervised multiscale motion compensation network (MS-MCN), together with a pyramid decoder in the motion VAE that generates multiscale flow fields, to estimate and compensate motion over a large range of magnitudes; 2) a novel adaptive spatiotemporal context model for efficient entropy coding of motion information; 3) nonlocal attention modules (NLAM) at the bottlenecks of the VAEs for implicit adaptive feature extraction and activation, leveraging their high transformation capacity and unequal weighting of joint global and local information; and 4) multi-module optimization and a multi-frame training strategy that minimize temporal error propagation among P-frames. NVC is evaluated under low-delay causal settings and compared with H.265/HEVC, H.264/AVC and other learnt video compression methods following common test conditions, demonstrating consistent gains across all popular test sequences under both PSNR and MS-SSIM distortion metrics.
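
The paper itself ships no code, but feature 1) is concrete enough to illustrate. Below is a minimal PyTorch sketch of backward-warping a reference-frame feature pyramid with per-level flow fields, which is the core operation behind multiscale motion compensation. The helper names (`warp`, `multiscale_compensate`) and the pyramid representation are assumptions for illustration, not the authors' MS-MCN implementation.

```python
import torch
import torch.nn.functional as F

def warp(x, flow):
    """Backward-warp features x with a dense flow field.

    x:    (N, C, H, W) reference features
    flow: (N, 2, H, W) per-pixel (dx, dy) displacements in pixels
    """
    n, _, h, w = x.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=x.dtype, device=x.device),
        torch.arange(w, dtype=x.dtype, device=x.device),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=0).unsqueeze(0)  # (1, 2, H, W)
    coords = grid + flow                              # displaced sample positions
    # Normalize to [-1, 1] as required by grid_sample.
    cx = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    cy = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    return F.grid_sample(x, torch.stack((cx, cy), dim=-1), align_corners=True)

def multiscale_compensate(ref_pyramid, flow_pyramid):
    """Warp each pyramid level with the flow field decoded for that scale."""
    return [warp(feat, flow) for feat, flow in zip(ref_pyramid, flow_pyramid)]
```

Coarse pyramid levels absorb large displacements cheaply while fine levels refine small residual motion, which is what lets a single motion VAE cover a large range of motion magnitudes.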
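
Feature 2), the adaptive spatiotemporal context model, can likewise be sketched. The idea is to condition the entropy model for the current motion latent on already-decoded spatial neighbours (a causal masked convolution, PixelCNN-style) fused with a temporal prior such as the previous frame's latent. The module below is a hedged approximation; the class names, channel widths and the Gaussian parameterization are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class MaskedConv2d(nn.Conv2d):
    """Causal (type-A) convolution: each position sees only latents
    that the decoder has already reconstructed in raster order."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        mask = torch.ones_like(self.weight)
        _, _, kh, kw = self.weight.shape
        mask[:, :, kh // 2, kw // 2:] = 0   # mask the center and everything to its right
        mask[:, :, kh // 2 + 1:, :] = 0     # mask all rows below
        self.register_buffer("mask", mask)

    def forward(self, x):
        self.weight.data *= self.mask
        return super().forward(x)

class SpatiotemporalContext(nn.Module):
    """Fuse causal spatial context with a temporal prior (previous latent)
    to predict per-element Gaussian entropy-model parameters."""
    def __init__(self, ch):
        super().__init__()
        self.spatial = MaskedConv2d(ch, ch, kernel_size=5, padding=2)
        self.temporal = nn.Conv2d(ch, ch, kernel_size=3, padding=1)
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * ch, 2 * ch, 1), nn.ReLU(inplace=True),
            nn.Conv2d(2 * ch, 2 * ch, 1),
        )

    def forward(self, y_hat, y_prev):
        ctx = torch.cat([self.spatial(y_hat), self.temporal(y_prev)], dim=1)
        mean, scale = self.fuse(ctx).chunk(2, dim=1)
        return mean, scale  # parameters for a conditional Gaussian entropy coder
```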
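
Finally, feature 4) hinges on training through the coding loop rather than on isolated frame pairs: decoded P-frames are fed back as references so that gradients see the error each frame propagates to its successors. A sketch of such an unrolled rate-distortion objective follows; the `model(ref, target)` interface returning a reconstruction and an estimated bit cost, and the `lmbda` trade-off weight, are assumptions, not NVC's actual API.

```python
import torch

def multiframe_rd_loss(model, frames, lmbda=0.01):
    """Unrolled multi-frame rate-distortion loss over a group of pictures.

    frames: list of (N, 3, H, W) tensors; frames[0] is the intra frame.
    """
    ref = frames[0]
    total = 0.0
    for target in frames[1:]:             # P-frames in coding order
        recon, bits = model(ref, target)  # assumed API: reconstruction + rate
        dist = torch.mean((recon - target) ** 2)
        total = total + dist + lmbda * bits
        ref = recon                       # propagate the decoded frame, not the original
    return total / (len(frames) - 1)
```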


Related research

End-to-end Neural Video Coding Using a Compound Spatiotemporal Representation (08/05/2021)
Learned Video Compression via Joint Spatial-Temporal Correlation Exploration (12/13/2019)
DVMark: A Deep Multiscale Framework for Video Watermarking (04/26/2021)
Spatial-Temporal Transformer based Video Compression Framework (09/21/2023)
Adaptation and Attention for Neural Video Coding (12/16/2021)
Spatiotemporal Entropy Model is All You Need for Learned Video Compression (04/13/2021)
Combining Progressive Rethinking and Collaborative Learning: A Deep Framework for In-Loop Filtering (01/16/2020)
