Latent Variable Models for Bayesian Causal Discovery

by Jithendaraa Subramanian, et al.

Learning predictors that do not rely on spurious correlations requires building causal representations, but learning such representations is challenging. We therefore formulate the problem of learning a causal representation from high-dimensional data and study causal recovery with synthetic data. This work introduces a latent variable decoder model, Decoder BCD, for Bayesian causal discovery and performs experiments in mildly supervised and unsupervised settings. We present a series of synthetic experiments to characterize important factors for causal discovery, and show that using known intervention targets as labels helps in unsupervised Bayesian inference over the structure and parameters of linear Gaussian additive-noise latent structural causal models.
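To make the data-generating assumption concrete, the sketch below samples from a linear Gaussian additive-noise SCM over latent variables and maps them to high-dimensional observations through a linear projection. This is a minimal illustration of the abstract's setting, not the paper's actual model: the function names, the fixed upper-triangular weight matrix, and the toy linear "decoder" are all assumptions for the example.

```python
import numpy as np

def sample_linear_gaussian_scm(W, noise_std, n_samples, rng):
    """Ancestral sampling from a linear Gaussian additive-noise SCM.

    W[i, j] is the edge weight i -> j. W is assumed strictly upper
    triangular, so node order 0..d-1 is already a topological order.
    Each latent z_j is a weighted sum of its parents plus Gaussian noise.
    """
    d = W.shape[0]
    Z = np.zeros((n_samples, d))
    for j in range(d):
        eps = rng.normal(0.0, noise_std, size=n_samples)
        Z[:, j] = Z @ W[:, j] + eps  # parents of j contribute via column j
    return Z

rng = np.random.default_rng(0)
d, D = 3, 10                                # latent and observed dimensions
W = np.triu(rng.normal(size=(d, d)), k=1)   # random DAG weights (topologically ordered)
Z = sample_linear_gaussian_scm(W, noise_std=0.1, n_samples=500, rng=rng)
P = rng.normal(size=(d, D))                 # toy linear "decoder" to high dimensions
X = Z @ P                                   # observed high-dimensional data
```

Bayesian causal discovery in this setting amounts to inferring a posterior over the graph (the sparsity pattern of `W`) and its edge weights from `X` alone, which is why the abstract's synthetic experiments control the ground-truth SCM explicitly.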

