Learned Block-based Hybrid Image Compression
Learned image compression based on neural networks has made great progress thanks to its superiority in learning better representations through non-linear transforms. Unlike traditional hybrid coding frameworks, which are typically block-based, existing learned image codecs usually process images at full resolution and therefore do not support acceleration via parallelism or explicit prediction. Conversely, traditional hybrid coding frameworks are largely hand-crafted and lack the adaptability to be optimized for heterogeneous quality metrics. To combine the strengths of both paradigms and offset their weaknesses, we explore a learned block-based hybrid image compression (LBHIC) framework, which achieves a win-win between coding performance and efficiency. Specifically, we introduce block partitioning and explicit learned predictive coding into the learned image compression framework. Whereas traditional codecs predict a block by linearly weighting neighboring pixels, our contextual prediction module (CPM) is designed to better capture long-range correlations by using strip pooling to extract the most relevant information from the neighboring latent space. Moreover, to alleviate blocking artifacts, we further propose a boundary-aware post-processing module (BPM) that takes the importance of edges into account. Extensive experiments demonstrate that the proposed LBHIC codec outperforms state-of-the-art image compression methods in terms of both PSNR and MS-SSIM, and promises a clear reduction in coding time.
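The abstract does not specify the CPM's exact architecture, so the following is only a minimal sketch of the strip-pooling primitive it builds on, written in PyTorch; the module name `StripPooling`, the `channels` parameter, and the gated fusion are illustrative assumptions rather than the authors' implementation. Strip pooling averages over full-height and full-width strips, so each position is modulated by statistics of its entire row and column, which is how long-range correlations across neighboring latents can be captured.

```python
import torch
import torch.nn as nn

class StripPooling(nn.Module):
    """Illustrative strip-pooling block (after Hou et al., CVPR 2020).

    Pools features along full-width and full-height strips to gather
    long-range row/column context, then fuses the two views with a
    gating signal. Names and layer sizes are assumptions, not the
    paper's actual CPM.
    """
    def __init__(self, channels: int):
        super().__init__()
        # Average over width -> (N, C, H, 1); over height -> (N, C, 1, W).
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))
        # 1-D convolutions along the remaining spatial axis of each strip.
        self.conv_h = nn.Conv2d(channels, channels, (3, 1), padding=(1, 0))
        self.conv_w = nn.Conv2d(channels, channels, (1, 3), padding=(0, 1))
        self.fuse = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, _, h, w = x.shape
        # Broadcast column statistics across the width, row statistics across the height.
        xh = self.conv_h(self.pool_h(x)).expand(-1, -1, h, w)
        xw = self.conv_w(self.pool_w(x)).expand(-1, -1, h, w)
        # Gated fusion: modulate the input by the combined strip context.
        return x * torch.sigmoid(self.fuse(torch.relu(xh + xw)))

# Usage sketch: in a block-based setting, such a block would operate on the
# latent representation of already-decoded neighboring blocks to inform the
# prediction of the current block's latent.
feats = torch.randn(1, 64, 16, 16)
out = StripPooling(64)(feats)
assert out.shape == feats.shape
```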