Weakly-Supervised Disentanglement Without Compromises

02/07/2020
by Francesco Locatello et al.

Intelligent agents should be able to learn useful representations by observing changes in their environment. We model such observations as pairs of non-i.i.d. images sharing at least one of the underlying factors of variation. First, we theoretically show that only knowing how many factors have changed, but not which ones, is sufficient to learn disentangled representations. Second, we provide practical algorithms that learn disentangled representations from pairs of images without requiring annotation of groups, individual factors, or the number of factors that have changed. Third, we perform a large-scale empirical study and show that such pairs of observations are sufficient to reliably learn disentangled representations on several benchmark data sets. Finally, we evaluate our learned representations and find that they are simultaneously useful on a diverse suite of tasks, including generalization under covariate shifts, fairness, and abstract reasoning. Overall, our results demonstrate that weak supervision enables learning of useful disentangled representations in realistic scenarios.
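The practical algorithms referenced in the abstract operate only on pairs of observations that share some unknown subset of factors, with no labels for which factors changed. As a rough illustration of how such weak supervision can be exploited, the sketch below shows one way to guess which latent dimensions of two diagonal-Gaussian encodings are shared and to average their posterior parameters so that shared factors land in matching coordinates. This is a minimal NumPy sketch under assumptions of my own: the midpoint KL threshold and the parameter averaging are heuristics in the spirit of the paper's adaptive averaging idea, not a verbatim reproduction of its method, and names such as adaptive_average are illustrative.

```python
import numpy as np

def gaussian_kl(mu_a, var_a, mu_b, var_b):
    """Per-dimension KL divergence KL(N(mu_a, var_a) || N(mu_b, var_b))
    between two diagonal Gaussians."""
    return 0.5 * (var_a / var_b + (mu_b - mu_a) ** 2 / var_b
                  - 1.0 + np.log(var_b / var_a))

def adaptive_average(mu1, var1, mu2, var2):
    """Guess which latent dimensions are shared between two encodings and
    replace them with averaged posterior parameters.

    Heuristic (an assumption of this sketch): a dimension counts as shared
    when its symmetrised KL lies below the midpoint between the smallest
    and largest per-dimension KL of the pair."""
    kl = gaussian_kl(mu1, var1, mu2, var2) + gaussian_kl(mu2, var2, mu1, var1)
    threshold = 0.5 * (kl.min() + kl.max())
    shared = kl < threshold  # boolean mask over latent dimensions

    avg_mu = 0.5 * (mu1 + mu2)
    avg_var = 0.5 * (var1 + var2)
    mu1_new = np.where(shared, avg_mu, mu1)
    var1_new = np.where(shared, avg_var, var1)
    mu2_new = np.where(shared, avg_mu, mu2)
    var2_new = np.where(shared, avg_var, var2)
    return mu1_new, var1_new, mu2_new, var2_new

# Toy usage: two random 10-dimensional encodings standing in for an image pair.
rng = np.random.default_rng(0)
mu1, mu2 = rng.normal(size=10), rng.normal(size=10)
var1, var2 = np.exp(rng.normal(size=10)), np.exp(rng.normal(size=10))
mu1_new, var1_new, mu2_new, var2_new = adaptive_average(mu1, var1, mu2, var2)
```

In a full training loop, the averaged parameters would replace the original encodings before sampling and decoding both images of the pair, so the reconstruction loss pushes genuinely shared factors into the dimensions the heuristic marks as shared.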

Related research

- 05/29/2019: Are Disentangled Representations Helpful for Abstract Visual Reasoning?
  A disentangled representation encodes information about the salient fact...

- 10/23/2021: Group-disentangled Representation Learning with Weakly-Supervised Regularization
  Learning interpretable and human-controllable representations that uncov...

- 06/14/2020: Is Independence all you need? On the Generalization of Representations Learned from Correlated Data
  Despite impressive progress in the last decade, it still remains an open...

- 10/27/2020: On the Transfer of Disentangled Representations in Realistic Settings
  Learning meaningful representations that disentangle the underlying stru...

- 09/12/2022: Modular Representations for Weak Disentanglement
  The recently introduced weakly disentangled representations proposed to ...

- 02/26/2020: Representation Learning Through Latent Canonicalizations
  We seek to learn a representation on a large annotated data source that ...

- 10/22/2019: Weakly Supervised Disentanglement with Guarantees
  Learning disentangled representations that correspond to factors of vari...
