Invariant Causal Mechanisms through Distribution Matching

by Mathieu Chevalley et al.

Learning representations that capture the underlying data-generating process is a key problem for the data-efficient and robust use of neural networks. One property that such representations should capture, and that has recently received considerable attention, is invariance. In this work we provide a causal perspective on, and a new algorithm for, learning invariant representations. Empirically, we show that the algorithm performs well on a diverse set of tasks; in particular, it achieves state-of-the-art performance on domain generalization, where it significantly boosts the scores of existing models.
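The abstract does not spell out the algorithm, but "distribution matching" across environments is often implemented as a kernel maximum mean discrepancy (MMD) penalty on the learned representation: features from different environments are pushed toward the same distribution. The sketch below is only an illustration of that general idea, not the paper's method; the function names, the RBF bandwidth `gamma`, and the synthetic data are all assumptions.

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # Pairwise RBF kernel matrix between the rows of x and y.
    sq_dists = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def mmd2(x, y, gamma=1.0):
    """Biased estimate of the squared MMD between samples x and y.

    A value near zero indicates the two sample sets are hard to
    distinguish under the chosen kernel; this quantity can be added
    to a task loss as a distribution-matching regularizer.
    """
    kxx = rbf_kernel(x, x, gamma)
    kyy = rbf_kernel(y, y, gamma)
    kxy = rbf_kernel(x, y, gamma)
    return kxx.mean() + kyy.mean() - 2.0 * kxy.mean()

# Synthetic "representations" from two environments (illustrative only):
rng = np.random.default_rng(0)
z_env_a = rng.normal(0.0, 1.0, size=(200, 8))
z_env_b = rng.normal(0.0, 1.0, size=(200, 8))    # same mechanism
z_shifted = rng.normal(2.0, 1.0, size=(200, 8))  # environment-specific shift

print(mmd2(z_env_a, z_env_b))    # near zero: distributions match
print(mmd2(z_env_a, z_shifted))  # clearly larger: mismatch is penalized
```

Minimizing such a penalty alongside the task loss encourages the encoder to discard environment-specific factors while keeping task-relevant, invariant ones.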


Contrastive ACE: Domain Generalization Through Alignment of Causal Mechanisms

Domain generalization aims to learn knowledge invariant across different...

Generalized Invariant Matching Property via LASSO

Learning under distribution shifts is a challenging task. One principled...

Towards Robust and Adaptive Motion Forecasting: A Causal Representation Perspective

Learning behavioral patterns from observational data has been a de-facto...

Representation Learning via Invariant Causal Mechanisms

Self-supervised learning has emerged as a strategy to reduce the relianc...

Domain Generalization using Causal Matching

Learning invariant representations has been proposed as a key technique ...

Causal Theories and Structural Data Representations for Improving Out-of-Distribution Classification

We consider how human-centered causal theories and tools from the dynami...

PAC Generalization via Invariant Representations

One method for obtaining generalizable solutions to machine learning tas...
