Category-Learning with Context-Augmented Autoencoder

10/10/2020
by Denis Kuzminykh, et al.

Finding an interpretable, non-redundant representation of real-world data is one of the key problems in Machine Learning. Biological neural networks are known to solve this problem quite well in an unsupervised manner, yet unsupervised artificial neural networks either struggle to do so or require fine-tuning for each task individually. We attribute this to the fact that a biological brain learns in the context of the relationships between observations, while an artificial network does not. We also note that, although naive data augmentation can be very useful for supervised learning problems, autoencoders typically fail to generalize the transformations introduced by data augmentation. We therefore believe that providing additional knowledge about the relationships between data samples will improve a model's ability to find a useful inner representation of the data. More formally, we consider a dataset not as a manifold but as a category whose objects are the examples; two objects are connected by a morphism if they represent different transformations of the same entity. Following this formalism, we propose a novel method of using data augmentations when training autoencoders. We train a Variational Autoencoder in such a way that the outcome of a transformation is predictable by an auxiliary network in terms of the hidden representation. We believe that the classification accuracy of a linear classifier on the learned representation is a good metric of its interpretability. In our experiments, the proposed approach outperforms β-VAE and is comparable with a Gaussian-mixture VAE.
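To make the idea concrete, below is a minimal sketch, assuming a simple MLP-based VAE on flattened 28x28 images and an auxiliary MLP that predicts the latent code of an augmented sample from the latent code of the original plus the augmentation parameters. The class names (ContextVAE, AuxPredictor), the loss weights beta and gamma, and the choice of the posterior mean as the prediction target are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of context-augmented VAE training (PyTorch).
# Assumptions: x and x_aug are flattened images in [0, 1]; t encodes the
# augmentation parameters (e.g. shift, rotation angle) as a small vector.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContextVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(
            nn.Linear(z_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim)
        )

    def encode(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.dec(z), mu, logvar, z


class AuxPredictor(nn.Module):
    """Predicts the latent code of the transformed sample from the latent
    code of the original sample and the transformation parameters t."""

    def __init__(self, z_dim=16, t_dim=4, h_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + t_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, z_dim)
        )

    def forward(self, z, t):
        return self.net(torch.cat([z, t], dim=-1))


def training_step(vae, aux, x, x_aug, t, beta=1.0, gamma=1.0):
    # Standard VAE terms on the original sample.
    recon_logits, mu, logvar, z = vae(x)
    rec = F.binary_cross_entropy_with_logits(recon_logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # Context term: the latent code of the augmented sample should be
    # predictable from (z, t), tying related observations together.
    # Predicting the posterior mean of x_aug is an assumption of this sketch.
    mu_aug, _ = vae.encode(x_aug)
    pred = F.mse_loss(aux(z, t), mu_aug, reduction="sum")
    return rec + beta * kl + gamma * pred


if __name__ == "__main__":
    vae, aux = ContextVAE(), AuxPredictor()
    opt = torch.optim.Adam(list(vae.parameters()) + list(aux.parameters()), lr=1e-3)
    x = torch.rand(32, 784)      # batch of original samples
    x_aug = torch.rand(32, 784)  # corresponding augmented views
    t = torch.rand(32, 4)        # augmentation parameters for each pair
    loss = training_step(vae, aux, x, x_aug, t)
    loss.backward()
    opt.step()
```

After training, the interpretability metric described in the abstract amounts to a linear probe: freeze the encoder, fit a linear classifier (e.g. logistic regression) on the latent means, and report its classification accuracy.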


Related research

02/13/2018 · TVAE: Triplet-Based Variational Autoencoder using Metric Learning
Deep metric learning has been demonstrated to be highly effective in lea...

04/02/2020 · Guided Variational Autoencoder for Disentanglement Learning
We propose an algorithm, guided variational autoencoder (Guided-VAE), th...

02/08/2022 · TransformNet: Self-supervised representation learning through predicting geometric transformations
Deep neural networks need a big amount of training data, while in the re...

06/26/2017 · Dr.VAE: Drug Response Variational Autoencoder
We present two deep generative models based on Variational Autoencoders ...

12/28/2020 · Data augmentation and image understanding
Interdisciplinary research is often at the core of scientific progress. ...

12/21/2017 · Deep Unsupervised Clustering Using Mixture of Autoencoders
Unsupervised clustering is one of the most fundamental challenges in mac...
