Identification of Latent Variables From Graphical Model Residuals

by Boris Hayete, et al.

Graph-based causal discovery methods aim to capture conditional independencies consistent with the observed data and to differentiate causal relationships from indirect or induced ones. Successful construction of graphical models of data depends on the assumption of causal sufficiency: that is, that all confounding variables are measured. When this assumption is not met, learned graphical structures may become arbitrarily incorrect, and effects implied by such models may be wrongly attributed, carry the wrong magnitude, or misrepresent the direction of correlation. The wide application of graphical models to increasingly less curated "big data" draws renewed attention to the unobserved-confounder problem. We present a novel method that aims to control for the latent space when estimating a DAG by iteratively deriving proxies for the latent space from the residuals of the inferred model. Under mild assumptions, our method improves structural inference of Gaussian graphical models and enhances identifiability of the causal effect. In addition, when the model is used to predict outcomes, it un-confounds the coefficients on the parents of the outcomes and leads to improved predictive performance when the out-of-sample regime differs substantially from the training data. We show that any improvement in the prediction of an outcome is intrinsically capped and cannot rise beyond a certain limit relative to the confounded model. We extend our methodology beyond GGMs to ordinal variables and nonlinear cases. Our R package provides both PCA and autoencoder implementations of the methodology: the former is suitable for GGMs and comes with some guarantees, while the latter offers better performance in general cases but without such guarantees.
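The core loop described above — fit a graphical model, take the residuals, and extract their leading principal components as proxies for the latent space — can be sketched as follows. This is a minimal one-round illustration in Python/NumPy, not the paper's R package: the function name, the `parents` mapping, and the simulated chain are all illustrative assumptions, and the full method additionally iterates and re-learns the structure with the proxies included.

```python
import numpy as np

def latent_proxies_from_residuals(X, parents, n_latent=1):
    """One round of the residual-PCA idea (illustrative sketch, not the
    paper's R API): regress each node on its DAG parents, then use the
    leading principal components of the stacked residuals as proxies
    for the unobserved confounders."""
    n, p = X.shape
    resid = np.empty_like(X, dtype=float)
    for j in range(p):
        pa = parents.get(j, [])
        # Design matrix: the node's parents plus an intercept column.
        D = np.column_stack([X[:, pa], np.ones(n)]) if pa else np.ones((n, 1))
        beta, *_ = np.linalg.lstsq(D, X[:, j], rcond=None)
        resid[:, j] = X[:, j] - D @ beta
    # PCA of the residual matrix: its top components proxy the latent space,
    # since confounder signal the parents cannot explain survives in residuals.
    resid_c = resid - resid.mean(axis=0)
    _, _, Vt = np.linalg.svd(resid_c, full_matrices=False)
    Z = resid_c @ Vt[:n_latent].T
    return Z, resid

# Toy check: a chain x0 -> x1 -> x2 confounded by a hidden u acting on x1, x2.
rng = np.random.default_rng(0)
n = 3000
u = rng.normal(size=n)                      # unobserved confounder
x0 = rng.normal(size=n)
x1 = x0 + u + rng.normal(size=n)
x2 = x1 + u + rng.normal(size=n)
X = np.column_stack([x0, x1, x2])
Z, _ = latent_proxies_from_residuals(X, parents={1: [0], 2: [1]})
print(abs(np.corrcoef(Z[:, 0], u)[0, 1]))  # proxy tracks the hidden u
```

In this toy example the recovered proxy correlates strongly with the hidden confounder even though only the observed chain was modeled, which is the signal the iterative procedure then feeds back into structure learning.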


