Topological Autoencoders

by Michael Moor et al.

We propose a novel approach for preserving topological structures of the input space in latent representations of autoencoders. Using persistent homology, a technique from topological data analysis, we calculate topological signatures of both the input and latent space to derive a topological loss term. Under weak theoretical assumptions, we can construct this loss in a differentiable manner, such that the encoding learns to retain multi-scale connectivity information. We show that our approach is theoretically well-founded, while exhibiting favourable latent representations on synthetic manifold data sets. Moreover, on real-world data sets, introducing our topological loss leads to more meaningful latent representations while preserving low reconstruction errors.
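To make the idea concrete, here is a minimal sketch of the 0-dimensional case of such a loss. In dimension 0, the persistence pairs of a Vietoris–Rips filtration correspond to the edges of a minimum spanning tree of the pairwise distance matrix, so the loss can compare the distances selected by the input-space pairing against the same entries in the latent space, and vice versa. The function names (`persistence_pairs_0d`, `topological_loss`) are illustrative, and a real training setup would compute this with a differentiable framework rather than NumPy:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist

def persistence_pairs_0d(dist):
    """Return the (i, j) index pairs of the MST edges of a distance matrix.

    In dimension 0, these edges are exactly the persistence pairs of the
    Vietoris-Rips filtration built on the point cloud.
    """
    mst = minimum_spanning_tree(dist).tocoo()
    return np.stack([mst.row, mst.col], axis=1)

def topological_loss(x, z):
    """Symmetric 0-dimensional topological loss between a batch of inputs
    x (n, d_in) and their latent codes z (n, d_latent)."""
    dx = cdist(x, x)  # pairwise distances in input space
    dz = cdist(z, z)  # pairwise distances in latent space
    px = persistence_pairs_0d(dx)  # pairs selected by the input space
    pz = persistence_pairs_0d(dz)  # pairs selected by the latent space
    # Compare the topologically relevant distances across the two spaces.
    lx = np.sum((dx[px[:, 0], px[:, 1]] - dz[px[:, 0], px[:, 1]]) ** 2)
    lz = np.sum((dz[pz[:, 0], pz[:, 1]] - dx[pz[:, 0], pz[:, 1]]) ** 2)
    return 0.5 * (lx + lz)
```

By construction the loss vanishes when the two spaces induce identical pairwise distances, and only the distances singled out by the persistence pairings enter the gradient, which is what makes the term tractable and multi-scale at the same time.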




Related papers:

- Disentanglement Learning via Topology (TopDis)
- Mapper Based Classifier
- On the Convergence of Optimizing Persistent-Homology-Based Losses
- TOAST: Topological Algorithm for Singularity Tracking
- Connectivity-Optimized Representation Learning via Persistent Homology
- Capturing Shape Information with Multi-Scale Topological Loss Terms for 3D Reconstruction
