Amortized learning of neural causal representations

08/21/2020
by   Nan Rosemary Ke, et al.

Causal models can compactly and efficiently encode the data-generating process under all interventions, and hence may generalize better under changes in distribution. These models are often represented as Bayesian networks, and learning them scales poorly with the number of variables. Moreover, these approaches cannot leverage previously learned knowledge to help with learning new causal models. To tackle these challenges, we present a novel algorithm called causal relational networks (CRN) for learning causal models using neural networks. The CRN represents causal models using continuous representations and hence can scale much better with the number of variables. These models also take in previously learned information to facilitate learning of new causal models. Finally, we propose a decoding-based metric to evaluate causal models with continuous representations. We test our method on synthetic data, achieving high accuracy and quick adaptation to previously unseen causal models.
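To make the idea of a continuous causal representation concrete, here is a minimal illustrative sketch (not the paper's actual CRN architecture): each directed edge of a causal graph is given a real-valued logit, a sigmoid maps logits to edge beliefs, and a threshold "decodes" a discrete graph that can be scored against a ground-truth structure. The logit matrix, threshold, and structural-Hamming-distance score below are all assumptions made for illustration.

```python
import numpy as np

# Illustrative sketch only: a continuous representation of causal
# structure, in the spirit of CRN (not the paper's actual model).
rng = np.random.default_rng(0)
n_vars = 4

# Hypothetical learned edge logits; in practice these would be produced
# by a neural network trained on observational/interventional data.
edge_logits = rng.normal(size=(n_vars, n_vars))
np.fill_diagonal(edge_logits, -30.0)  # effectively forbid self-loops

edge_beliefs = 1.0 / (1.0 + np.exp(-edge_logits))  # sigmoid -> (0, 1)
decoded_graph = edge_beliefs > 0.5  # decode edges by thresholding

# A simple decoding-based evaluation: structural Hamming distance (SHD)
# between the decoded graph and a ground-truth adjacency matrix.
true_graph = np.zeros((n_vars, n_vars), dtype=bool)
true_graph[0, 1] = true_graph[1, 2] = true_graph[2, 3] = True  # chain 0->1->2->3
shd = int(np.sum(decoded_graph != true_graph))
print("decoded edges:", int(decoded_graph.sum()), "SHD:", shd)
```

Because the representation is a dense real-valued matrix rather than a discrete graph, it can be updated by gradient methods and reused as a starting point when adapting to a new causal model.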


