Neural Decomposition: Functional ANOVA with Variational Autoencoders

06/25/2020
by Kaspar Märtens, et al.

Variational Autoencoders (VAEs) have become a popular approach for dimensionality reduction. However, despite their ability to identify latent low-dimensional structures embedded within high-dimensional data, these latent representations are typically hard to interpret on their own. Due to the black-box nature of VAEs, their utility for healthcare and genomics applications has been limited. In this paper, we focus on characterising the sources of variation in Conditional VAEs. Our goal is to provide a feature-level variance decomposition, i.e., to decompose variation in the data by separating out the marginal additive effects of latent variables z and fixed inputs c from their non-linear interactions. We propose to achieve this through what we call Neural Decomposition: an adaptation of the well-known concept of functional ANOVA variance decomposition from classical statistics to deep learning models. We show how identifiability can be achieved by training models subject to constraints on the marginal properties of the decoder networks. We demonstrate the utility of our Neural Decomposition on a series of synthetic examples as well as high-dimensional genomics data.
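To make the decomposition concrete, the sketch below illustrates one way a Conditional VAE decoder could be split into additive terms f(z, c) = f0 + f_z(z) + f_c(c) + f_zc(z, c), with the zero-mean marginal constraints used for identifiability approximated by a soft penalty evaluated on reference grids. This is a minimal PyTorch illustration under those assumptions: the class name DecomposedDecoder, the network sizes, and the penalty-based handling of the constraints are illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DecomposedDecoder(nn.Module):
    """Decoder mean split into additive effects and an interaction:
    f(z, c) = f0 + f_z(z) + f_c(c) + f_zc(z, c), one value per output feature."""

    def __init__(self, z_dim, c_dim, out_dim, hidden=64):
        super().__init__()
        self.f0 = nn.Parameter(torch.zeros(out_dim))           # feature-wise intercept
        self.f_z = nn.Sequential(nn.Linear(z_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, out_dim))   # marginal effect of z
        self.f_c = nn.Sequential(nn.Linear(c_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, out_dim))   # marginal effect of c
        self.f_zc = nn.Sequential(nn.Linear(z_dim + c_dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, out_dim))  # z-c interaction

    def forward(self, z, c):
        return (self.f0 + self.f_z(z) + self.f_c(c)
                + self.f_zc(torch.cat([z, c], dim=-1)))

    def constraint_penalty(self, z_grid, c_grid):
        """Soft version of the identifiability constraints: each marginal effect
        should average to zero over its input, and the interaction should have
        zero marginal effect when averaged over either z or c."""
        pen = self.f_z(z_grid).mean(0).pow(2).sum()
        pen = pen + self.f_c(c_grid).mean(0).pow(2).sum()
        # Evaluate the interaction on the z x c product grid.
        zz = z_grid.unsqueeze(1).expand(-1, c_grid.size(0), -1)
        cc = c_grid.unsqueeze(0).expand(z_grid.size(0), -1, -1)
        fzc = self.f_zc(torch.cat([zz, cc], dim=-1))           # (n_z, n_c, out_dim)
        pen = pen + fzc.mean(0).pow(2).sum() + fzc.mean(1).pow(2).sum()
        return pen
```

In training, a penalty of this kind would simply be added to the usual conditional VAE objective (reconstruction term plus KL), pushing each marginal network towards zero mean over its grid and the interaction network towards zero marginal effect in both arguments; the paper achieves the same end through explicit constraints on the decoder's marginal properties.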

Related research

BasisVAE: Translation-invariant feature-level clustering with Variational Autoencoders (03/06/2020)
Covariate Gaussian Process Latent Variable Models (10/16/2018)
Non-linear, Sparse Dimensionality Reduction via Path Lasso Penalized Autoencoders (02/22/2021)
Interpretable Approximation of High-Dimensional Data (03/25/2021)
Supervised Autoencoders Learn Robust Joint Factor Models of Neural Activity (04/10/2020)
NashAE: Disentangling Representations through Adversarial Covariance Minimization (09/21/2022)
Shapley Decomposition of R-Squared in Machine Learning Models (08/26/2019)
