Inference in Graded Bayesian Networks

by Robert Leppert, et al.

Machine learning provides algorithms that can learn from data and make inferences or predictions on data. Bayesian networks are a class of graphical models that represent a collection of random variables and their conditional dependencies by directed acyclic graphs. In this paper, an inference algorithm for the hidden random variables of a Bayesian network is given by using the tropicalization of the marginal distribution of the observed variables. By restricting the topological structure to graded networks, an inference algorithm for graded Bayesian networks is established that evaluates the hidden random variables rank by rank and in this way yields the most probable states of the hidden variables. This algorithm can be viewed as a generalized version of the Viterbi algorithm for graded Bayesian networks.
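To make the tropicalization idea concrete, here is a minimal sketch of the classic Viterbi algorithm on a chain-structured (hence graded, one variable per rank) Bayesian network, i.e. a hidden Markov model, worked in the tropical (max-plus) semiring: probabilities become log-probabilities, products become sums, and marginalizing sums become maxima. The two-state model and all numbers are hypothetical illustrations, not taken from the paper.

```python
# Viterbi on a chain-structured Bayesian network in the tropical
# (max-plus) semiring: log-probabilities, sums for products, max for sums.
# The model below is a hypothetical two-state example for illustration.
import math

def viterbi(obs, states, log_start, log_trans, log_emit):
    """Most probable hidden state sequence, computed rank by rank."""
    # Rank 0: combine start and emission scores.
    best = {s: log_start[s] + log_emit[s][obs[0]] for s in states}
    back = []
    # Each later observation index is one rank of the graded network.
    for o in obs[1:]:
        ptr, nxt = {}, {}
        for s in states:
            # Tropical matrix-vector product: max over predecessor states.
            p, score = max(
                ((q, best[q] + log_trans[q][s]) for q in states),
                key=lambda t: t[1],
            )
            ptr[s] = p
            nxt[s] = score + log_emit[s][o]
        back.append(ptr)
        best = nxt
    # Backtrack from the best final state to recover the argmax path.
    last = max(states, key=lambda s: best[s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path)), best[last]

# Toy two-state chain (hypothetical numbers).
states = ["H", "C"]
log_start = {s: math.log(0.5) for s in states}
log_trans = {"H": {"H": math.log(0.7), "C": math.log(0.3)},
             "C": {"H": math.log(0.4), "C": math.log(0.6)}}
log_emit = {"H": {1: math.log(0.1), 2: math.log(0.4), 3: math.log(0.5)},
            "C": {1: math.log(0.7), 2: math.log(0.2), 3: math.log(0.1)}}
path, logp = viterbi([3, 1, 3], states, log_start, log_trans, log_emit)
print(path)  # -> ['H', 'C', 'H']
```

The paper's algorithm generalizes this recursion from chains to graded directed acyclic graphs, where a whole set of hidden variables is evaluated at each rank rather than a single one.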




