Universal Approximation Depth and Errors of Narrow Belief Networks with Discrete Units

by Guido F. Montúfar, et al.

We generalize recent theoretical work on the minimal number of layers of narrow deep belief networks that can approximate any probability distribution on the states of their visible units arbitrarily well. We relax the setting of binary units (Sutskever and Hinton, 2008; Le Roux and Bengio, 2008, 2010; Montúfar and Ay, 2011) to units with arbitrary finite state spaces, and the vanishing approximation error to an arbitrary approximation error tolerance. For example, we show that a q-ary deep belief network with L ≥ 2 + (q^(m−δ) − 1)/(q − 1) layers of width n ≤ m + log_q(m) + 1 for some m ∈ ℕ can approximate any probability distribution on {0,1,...,q−1}^n without exceeding a Kullback-Leibler divergence of δ. Our analysis covers discrete restricted Boltzmann machines and naïve Bayes models as special cases.
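To illustrate the scaling of the depth bound above, the following sketch evaluates the layer requirement L ≥ 2 + (q^(m−δ) − 1)/(q − 1) and the width bound n ≤ m + log_q(m) + 1 numerically. The function names `min_layers` and `max_width` are hypothetical, introduced here only for illustration; they are not from the paper.

```python
import math

def min_layers(q, m, delta):
    """Smallest integer L satisfying L >= 2 + (q^(m - delta) - 1)/(q - 1)."""
    return math.ceil(2 + (q ** (m - delta) - 1) / (q - 1))

def max_width(q, m):
    """Largest integer n satisfying n <= m + log_q(m) + 1."""
    return math.floor(m + math.log(m, q) + 1)

# Binary units (q = 2), m = 3, exact approximation (delta = 0):
# the bound requires at least 2 + (2^3 - 1)/1 = 9 layers,
# for networks of width at most 3 + log_2(3) + 1, i.e. n <= 5.
print(min_layers(2, 3, 0))   # 9
print(max_width(2, 3))       # 5
```

Note how the required depth grows exponentially in m but shrinks as the tolerated divergence δ increases, which is the trade-off the abstract describes.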


