Emergence of Compositional Representations in Restricted Boltzmann Machines

11/21/2016
by Jérôme Tubiana, et al.

Automatically extracting the complex set of features composing real high-dimensional data is crucial for achieving high performance in machine-learning tasks. Restricted Boltzmann Machines (RBMs) are empirically known to be efficient for this purpose, and to generate distributed and graded representations of the data. We characterize the structural conditions (sparsity of the weights, low effective temperature, nonlinearities in the activation functions of the hidden units, and adaptation of the fields maintaining the activity in the visible layer) that allow RBMs to operate in such a compositional phase. Evidence is provided by the replica analysis of an appropriate statistical ensemble of random RBMs and by RBMs trained on the MNIST handwritten-digit dataset.
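
The structural conditions listed in the abstract (sparse weights, low effective temperature set by the weight scale, nonlinear hidden-unit activations, and visible fields) map directly onto the ingredients of an RBM implementation. As a rough illustration, here is a minimal NumPy sketch of one alternating-Gibbs step in an RBM with ReLU-like hidden units and sparse random weights; the layer sizes, sparsity level p, and weight scale are illustrative assumptions, not the parameters of the paper's statistical ensemble.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 784, 400   # visible / hidden layer sizes (MNIST-scale, illustrative)
p = 0.05          # fraction of nonzero weights: "sparsity of the weights"
w_scale = 1.0     # weight magnitude, which sets the effective temperature

# Sparse random weights: each entry is nonzero with probability p.
mask = rng.random((M, N)) < p
W = w_scale * rng.standard_normal((M, N)) * mask / np.sqrt(p * N)

# Visible fields; in the paper these adapt to maintain visible activity.
g = np.zeros(N)

def relu(x):
    return np.maximum(x, 0.0)

def sample_hidden(v):
    """ReLU-like hidden units: crude stand-in for the truncated-Gaussian
    conditional of a ReLU unit given the input I = W v."""
    I = W @ v
    return relu(I + rng.standard_normal(M))

def sample_visible(h):
    """Bernoulli visible units given the hidden activities and fields g."""
    prob = 1.0 / (1.0 + np.exp(-(W.T @ h + g)))
    return (rng.random(N) < prob).astype(float)

# One step of alternating Gibbs sampling from a sparse initial configuration.
v = (rng.random(N) < 0.1).astype(float)
h = sample_hidden(v)
v = sample_visible(h)
print(f"fraction of active hidden units: {(h > 0).mean():.3f}")
```

In the compositional phase described in the paper, such a model would activate a small but extensive fraction of hidden units at a time, each contributing a localized feature to the visible configuration; the sketch above only sets up the model class, not the training or the replica analysis.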

Related research

06/23/2022 - Disentangling representations in Restricted Boltzmann Machines without adversaries
A goal of unsupervised machine learning is to disentangle representation...

02/18/2019 - Learning Compositional Representations of Interacting Systems with Restricted Boltzmann Machines: Comparative Study of Lattice Proteins
A Restricted Boltzmann Machine (RBM) is an unsupervised machine-learning...

08/30/2010 - Sparse Group Restricted Boltzmann Machines
Since learning is typically very slow in Boltzmann machines, there is a ...

03/05/2018 - Thermodynamics of Restricted Boltzmann Machines and related learning dynamics
We analyze the learning process of the restricted Boltzmann machine (RBM...

05/04/2020 - Complex Amplitude-Phase Boltzmann Machines
We extend the framework of Boltzmann machines to a network of complex-va...

02/20/2017 - Phase Diagram of Restricted Boltzmann Machines and Generalised Hopfield Networks with Arbitrary Priors
Restricted Boltzmann Machines are described by the Gibbs measure of a bi...

07/20/2022 - Can a Hebbian-like learning rule be avoiding the curse of dimensionality in sparse distributed data?
It is generally assumed that the brain uses something akin to sparse dis...
