Is Independence all you need? On the Generalization of Representations Learned from Correlated Data

06/14/2020
by Frederik Träuble, et al.

Despite impressive progress in the last decade, it remains an open challenge to build models that generalize well across multiple tasks and datasets. One path toward this goal is to learn meaningful and compact representations in which different semantic aspects of the data are structurally disentangled. Disentanglement approaches have focused on separating independent factors of variation, despite the fact that real-world observations are often not structured into meaningful independent causal variables to begin with. In this work we bridge the gap to real-world scenarios by analyzing the behavior of the most prominent disentanglement methods and scores on correlated data in a large-scale empirical study (including 3900 models). We show that correlations systematically induced in the dataset are learned and reflected in the latent representations, while widely used disentanglement scores fall short of capturing these latent correlations. Finally, we demonstrate how to disentangle these latent correlations using weak supervision, even when this supervision is constrained to be causally plausible. Our results thus support the argument to learn independent mechanisms rather than independent factors of variation.
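To make the abstract's central observation concrete, here is a minimal, hypothetical sketch (not the paper's code): when two ground-truth factors are correlated in the data, even a representation that assigns one factor per latent dimension inherits that correlation, which a simple pairwise correlation check over the latents can reveal.

```python
# Hypothetical sketch: correlated ground-truth factors surface as
# correlations between latent dimensions. All names here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Two ground-truth factors with a systematically induced correlation.
n = 10_000
f1 = rng.normal(size=n)
f2 = 0.8 * f1 + 0.6 * rng.normal(size=n)  # corr(f1, f2) is approx. 0.8

# A toy "perfectly disentangled" encoder: one factor per latent dimension.
# Even with this ideal one-to-one alignment, the latents inherit the
# correlation present in the data-generating process.
z = np.stack([f1, f2], axis=1)

corr = np.corrcoef(z, rowvar=False)
print(np.round(corr, 2))  # off-diagonal entries are approx. 0.8
```

An axis-aligned disentanglement score can still rate such a representation highly, since each latent captures exactly one factor; the correlation lives in the joint distribution of the latents, which is what the paper's analysis targets.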


Related research

On Causally Disentangled Representations (12/10/2021)
Weakly-Supervised Disentanglement Without Compromises (02/07/2020)
Disentanglement and Generalization Under Correlation Shifts (12/29/2021)
Hierarchical Disentangled Representations (04/06/2018)
On the Transfer of Inductive Bias from Simulation to the Real World: a New Disentanglement Dataset (06/07/2019)
Disentanglement of Correlated Factors via Hausdorff Factorized Support (10/13/2022)
Modular Representations for Weak Disentanglement (09/12/2022)
