Designing neural networks that process mean values of random variables

by Michael J. Barber et al.
AIT Austrian Institute of Technology GmbH

We introduce a class of neural networks derived from probabilistic models in the form of Bayesian networks. By imposing additional assumptions about the nature of the probabilistic models represented in the networks, we derive neural networks with standard dynamics that require no training to determine the synaptic weights, that perform accurate calculation of the mean values of the random variables, that can pool multiple sources of evidence, and that deal cleanly and consistently with inconsistent or contradictory evidence. The presented neural networks capture many properties of Bayesian networks, providing distributed versions of probabilistic models.
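The core idea of computing exact mean values without training can be illustrated on the smallest possible case. Below is a minimal sketch (not the paper's exact construction): a two-node Bayesian network X → Y over binary variables, mapped to a single linear layer whose synaptic weights are taken directly from the conditional probability table (CPT) of Y given X. All numerical values are illustrative assumptions; propagating the mean-value (probability) representation of X through the layer reproduces exact marginalization, and hence the mean of Y, with no learned parameters.

```python
import numpy as np

# Mean-value code for X: the activity vector is [P(X=0), P(X=1)].
p_x = np.array([0.3, 0.7])

# Synaptic weights W[y, x] = P(Y=y | X=x), copied from the CPT
# rather than learned. Columns sum to 1.
W = np.array([[0.9, 0.2],   # P(Y=0 | X=0), P(Y=0 | X=1)
              [0.1, 0.8]])  # P(Y=1 | X=0), P(Y=1 | X=1)

# One linear update implements the marginalization
#   P(Y=y) = sum_x P(Y=y | X=x) P(X=x).
p_y = W @ p_x

# For a 0/1-valued variable, the mean is just P(Y=1).
mean_y = p_y[1]

# Cross-check against the direct sum over X.
exact = 0.1 * 0.3 + 0.8 * 0.7
```

Because the weights come straight from the probabilistic model, the network computes the mean exactly rather than approximating it, which is the sense in which no training is required.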




Related research

Inference in Graded Bayesian Networks

Machine learning provides algorithms that can learn from data and make i...

Probabilistic Models for Computerized Adaptive Testing: Experiments

This paper follows previous research we have already performed in the ar...

PGMHD: A Scalable Probabilistic Graphical Model for Massive Hierarchical Data Problems

In the big data era, scalability has become a crucial requirement for an...

Certain Bayesian Network based on Fuzzy knowledge Bases

In this paper, we examine trade-offs between fuzzy logic a...

Bidirectional Inference Networks: A Class of Deep Bayesian Networks for Health Profiling

We consider the problem of inferring the values of an arbitrary set of v...

Probabilistic Meta-Representations Of Neural Networks

Existing Bayesian treatments of neural networks are typically characteri...

A Probabilistic Framework for Nonlinearities in Stochastic Neural Networks

We present a probabilistic framework for nonlinearities, based on doubly...
