Simplex Autoencoders

by Aymene Mohammed Bouayed, et al.

Synthetic data generation is increasingly important due to privacy concerns. While Autoencoder-based approaches have been widely used for this purpose, sampling from their latent spaces can be challenging, and mixture models are currently the most effective way to do so. In this work, we propose a new approach that models the latent space of an Autoencoder as a simplex, which yields a novel heuristic for determining the number of components in the mixture model. This heuristic is independent of the number of classes and produces comparable results. We also introduce a sampling method based on probability mass functions that takes advantage of the compactness of the latent space. We evaluate our approaches on a synthetic dataset and demonstrate their performance on three benchmark datasets: MNIST, CIFAR-10, and CelebA. Our approach achieves image-generation FIDs of 4.29, 13.55, and 11.90 on MNIST, CIFAR-10, and CelebA, respectively (the lower the FID, the better). The best Autoencoder-based FID results to date on those datasets are 6.3, 85.3, and 35.6, respectively, so we substantially improve on those figures. However, Autoencoders are not the best-performing models on these datasets, and all FID records are currently held by GANs. While we do not outperform GANs on CIFAR-10 and CelebA, we do squeeze out a non-negligible improvement (of 0.21) over the current GAN-held record on MNIST.
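To make the probability-mass-function sampling idea concrete, here is a minimal sketch of the general technique: quantize a compact latent space into grid cells, estimate a PMF over those cells from encoded training data, and sample new latent vectors from it. This is an illustrative assumption, not the paper's exact procedure; the random `latents` array stands in for codes produced by a trained encoder, and `n_bins` is an arbitrary choice.

```python
import numpy as np

# Hypothetical latent codes: in practice these would come from applying a
# trained autoencoder's encoder to the training set.
rng = np.random.default_rng(0)
latents = rng.normal(size=(5000, 2))  # 5000 codes in a 2-D latent space

# Quantize each latent dimension into bins and estimate a probability
# mass function (PMF) over the resulting grid cells.
n_bins = 20  # illustrative choice
hist, edges = np.histogramdd(latents, bins=n_bins)
pmf = hist / hist.sum()

# Sample grid cells according to the PMF, then draw a uniform point
# inside each chosen cell to obtain new latent vectors; decoding them
# (decoder(samples)) would yield synthetic data points.
flat_idx = rng.choice(pmf.size, size=100, p=pmf.ravel())
cells = np.unravel_index(flat_idx, pmf.shape)
samples = np.stack(
    [rng.uniform(edges[d][c], edges[d][c + 1]) for d, c in enumerate(cells)],
    axis=1,
)
print(samples.shape)  # (100, 2)
```

Because the latent space of the proposed model is compact (a simplex), such a histogram covers it with finitely many cells, which is what makes PMF-based sampling tractable.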



