Exploiting Qualitative Knowledge in the Learning of Conditional Probabilities of Bayesian Networks

by Frank Wittig et al.

Algorithms for learning the conditional probabilities of Bayesian networks with hidden variables typically operate within a high-dimensional search space and yield only locally optimal solutions. One way of limiting the search space and avoiding local optima is to impose qualitative constraints that are based on background knowledge concerning the domain. We present a method for integrating formal statements of qualitative constraints into two learning algorithms, APN and EM. In our experiments with synthetic data, this method yielded networks that satisfied the constraints almost perfectly. The accuracy of the learned networks was consistently superior to that of corresponding networks learned without constraints. The exploitation of qualitative constraints therefore appears to be a promising way to increase both the interpretability and the accuracy of learned Bayesian networks with known structure.
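The qualitative constraints in question are monotonicity relations between a parent and a child variable (e.g., "a higher value of X makes Y=1 more likely"), which the learned conditional probability tables should respect. As a minimal illustration of this idea — not the paper's actual APN/EM integration, and with all function names hypothetical — the sketch below checks such a positive-influence constraint on a CPT column and, when it is violated, projects the probabilities onto the nearest nondecreasing sequence using the pool-adjacent-violators algorithm:

```python
import numpy as np

def satisfies_positive_influence(cpt):
    """cpt[i] = P(Y=1 | X=i), with parent states ordered low -> high.
    A positive qualitative influence requires this sequence to be
    nondecreasing in the parent value."""
    return all(cpt[i] <= cpt[i + 1] for i in range(len(cpt) - 1))

def project_to_positive_influence(cpt):
    """Pool adjacent violators: the least-squares projection of cpt
    onto the set of nondecreasing sequences."""
    blocks = [[v] for v in cpt]
    i = 0
    while i < len(blocks) - 1:
        if np.mean(blocks[i]) > np.mean(blocks[i + 1]):
            # Merge the violating pair into one block and back up,
            # since the merge may create a new violation to the left.
            blocks[i] = blocks[i] + blocks[i + 1]
            del blocks[i + 1]
            i = max(i - 1, 0)
        else:
            i += 1
    out = []
    for b in blocks:
        out.extend([float(np.mean(b))] * len(b))
    return out

# Example: a CPT column that violates the constraint.
raw = [0.9, 0.2, 0.6]
print(satisfies_positive_influence(raw))          # False
print(project_to_positive_influence(raw))         # nondecreasing repair
```

Applying such a projection after each parameter-update step is one simple way to keep an iterative learner (such as EM) inside the constrained region; the paper's method builds the constraints into the learning algorithms themselves.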

