Processing Simple Geometric Attributes with Autoencoders

by Alasdair Newson, et al.

Image synthesis is a core problem in modern deep learning, and many recent architectures, such as autoencoders and Generative Adversarial Networks, produce spectacular results on highly complex data, such as images of faces or landscapes. While these results open up a wide range of new, advanced synthesis applications, there is also a severe lack of theoretical understanding of how these networks work. This results in a wide range of practical problems, such as difficulties in training, the tendency to sample images with little or no variability, and generalisation problems. In this paper, we propose to analyse the ability of the simplest generative network, the autoencoder, to encode and decode two simple geometric attributes: size and position. We believe that, in order to understand more complicated tasks, it is necessary to first understand how these networks process simple attributes. For the first attribute, we analyse the case of images of centred disks with variable radii. We explain how the autoencoder projects these images to and from a latent space of the smallest possible dimension, a scalar. In particular, we describe a closed-form solution to the decoding training problem in a network without biases, and show that during training the network indeed finds this solution. We then investigate the regularisation approaches that best yield networks which generalise well. For the second attribute, position, we look at the encoding and decoding of Dirac delta functions, also known as 'one-hot' vectors. We describe a hand-crafted filter that achieves perfect encoding, and show that the network naturally finds this filter during training. We also show experimentally that decoding can be achieved if the dataset is sampled in an appropriate manner.
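Both attributes admit hand-crafted scalar encoders of the kind the abstract alludes to, even before any training. A minimal NumPy sketch (image size, helper names, and the exact form of the linear-ramp filter are our own illustrative choices, not taken from the paper): a centred disk's radius can be read off from its pixel area, since area ≈ πr², and a one-hot vector's position can be read off by an inner product with a linear ramp filter.

```python
import numpy as np

def make_disk(radius, size=64):
    """Binary image of a centred disk of the given radius (toy dataset)."""
    yy, xx = np.mgrid[:size, :size]
    c = (size - 1) / 2.0  # centre of the image grid
    return ((xx - c) ** 2 + (yy - c) ** 2 <= radius ** 2).astype(np.float32)

def encode_radius(img):
    """Scalar latent code: recover the radius from the disk area (area = pi * r^2)."""
    return float(np.sqrt(img.sum() / np.pi))

def one_hot(position, length):
    """Discrete Dirac delta: a vector that is 1 at `position` and 0 elsewhere."""
    x = np.zeros(length, dtype=np.float32)
    x[position] = 1.0
    return x

def encode_position(x):
    """Inner product with a linear ramp maps a one-hot vector to its index."""
    ramp = np.arange(len(x), dtype=np.float32)
    return int(round(float(x @ ramp)))
```

Both encoders compress their input to a single scalar, which is why a latent space of dimension one suffices for each attribute; the paper's point is that a trained autoencoder converges to solutions of essentially this form.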



