Towards Conceptual Compression

04/29/2016
by Karol Gregor, et al.

We introduce a simple recurrent variational auto-encoder architecture that significantly improves image modeling. The system achieves state-of-the-art performance among latent variable models on both the ImageNet and Omniglot datasets. We show that it naturally separates global conceptual information from lower-level details, addressing one of the fundamental goals of unsupervised learning. Furthermore, restricting the representation to only the global information about an image allows us to achieve high-quality 'conceptual compression'.
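The iterative refinement idea behind the abstract can be sketched with a toy linear model. This is purely illustrative, not the paper's architecture: the random matrix `W` stands in for the learned recurrent encoder/decoder, and the prior mean is taken to be zero. At each step the encoder explains the residual the canvas has not yet captured, so early latents carry global structure and later ones fine detail; "conceptual compression" then amounts to keeping only the first few latents.

```python
import numpy as np

rng = np.random.default_rng(0)

D, Z, T = 16, 4, 8   # toy image size, latent size, number of recurrent steps

# Tied linear encoder/decoder: a stand-in for the learned recurrent
# networks in the paper (illustrative assumption, not the real model).
W = rng.normal(size=(Z, D)) * 0.1   # encoder weights; decoder uses W.T

def decode(latents):
    """Additive canvas: each latent writes one refinement onto the image."""
    canvas = np.zeros(D)
    for z in latents:
        canvas += W.T @ z
    return canvas

x = rng.normal(size=D)   # a toy "image"

# Inference: each step encodes the residual the canvas has not yet
# explained, so the first latents capture the most global structure.
latents, canvas = [], np.zeros(D)
for t in range(T):
    z = W @ (x - canvas)   # latent for this refinement step
    latents.append(z)
    canvas += W.T @ z

full_err = np.linalg.norm(x - decode(latents))

# "Conceptual compression": store only the first k (global) latents and
# replace the remaining ones with the prior mean (zero here).
k = 2
truncated = latents[:k] + [np.zeros(Z)] * (T - k)
trunc_err = np.linalg.norm(x - decode(truncated))

print(f"error with all {T} latents:   {full_err:.3f}")
print(f"error with first {k} latents: {trunc_err:.3f}")
```

Dropping the later latents costs some reconstruction fidelity, but each refinement step only shrinks the residual, so the truncated code still recovers the coarse content of the input.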


