Inference in Graded Bayesian Networks

12/23/2018
by Robert Leppert, et al.
TUHH

Machine learning provides algorithms that can learn from data and make inferences or predictions about new data. Bayesian networks are a class of graphical models that represent a collection of random variables and their conditional dependencies by a directed acyclic graph. In this paper, an inference algorithm for the hidden random variables of a Bayesian network is given using the tropicalization of the marginal distribution of the observed variables. By restricting the topological structure to graded networks, an inference algorithm for graded Bayesian networks is established that evaluates the hidden random variables rank by rank and in this way yields the most probable states of the hidden variables. This algorithm can be viewed as a generalized version of the Viterbi algorithm for graded Bayesian networks.
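
The paper's rank-by-rank algorithm for graded networks is not reproduced on this page. As a point of comparison only, the sketch below shows the classical Viterbi algorithm for a hidden Markov model written in the tropical (max-plus) semiring, i.e. log-probabilities combined by addition and maximization; this is the special case that the paper generalizes. All function and parameter names here are hypothetical and chosen for illustration.

```python
# Minimal sketch (not the paper's algorithm): Viterbi decoding for an HMM
# in the tropical (max-plus) semiring. Tropical "product" is +, tropical
# "sum" is max, applied to log-probabilities.
import math

def viterbi(obs, states, log_start, log_trans, log_emit):
    """Return the most probable hidden state sequence for `obs`.

    log_start[s]    -- log P(X_1 = s)
    log_trans[s][t] -- log P(X_{k+1} = t | X_k = s)
    log_emit[s][o]  -- log P(Y_k = o | X_k = s)
    """
    # Initialize scores for the first observation.
    score = {s: log_start[s] + log_emit[s][obs[0]] for s in states}
    back = []
    for o in obs[1:]:
        prev, score, ptr = score, {}, {}
        for t in states:
            # Tropical sum over predecessors: take the maximizing state.
            best_s = max(states, key=lambda s: prev[s] + log_trans[s][t])
            ptr[t] = best_s
            score[t] = prev[best_s] + log_trans[best_s][t] + log_emit[t][o]
        back.append(ptr)
    # Trace back the argmax path from the best final state.
    last = max(states, key=lambda s: score[s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path)), score[last]

if __name__ == "__main__":
    # Toy usage with made-up parameters.
    states = ["H", "L"]
    lg = math.log
    log_start = {"H": lg(0.6), "L": lg(0.4)}
    log_trans = {"H": {"H": lg(0.7), "L": lg(0.3)},
                 "L": {"H": lg(0.4), "L": lg(0.6)}}
    log_emit = {"H": {"a": lg(0.5), "b": lg(0.5)},
                "L": {"a": lg(0.1), "b": lg(0.9)}}
    print(viterbi(["a", "b", "b"], states, log_start, log_trans, log_emit))
```

In the graded setting described in the abstract, the single chain of hidden states is replaced by a directed acyclic graph whose hidden variables are processed rank by rank, with the same max-plus combination of tropicalized marginals at each rank.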


