Occam learning

by Rongrong Xie et al.

We discuss probabilistic neural network models for unsupervised learning in which the distribution of the hidden layer is fixed. We argue that learning machines with this architecture enjoy a number of desirable properties: the model can be chosen to be simple and interpretable, it does not need to be over-parametrised, and training is argued to be efficient in a thermodynamic sense. When the hidden units are binary variables, these models have a natural interpretation in terms of features. We show that the featureless state corresponds to a state of maximal ignorance about the features, and that learning the first feature depends on non-Gaussian statistical properties of the data. We suggest that the distribution of hidden variables should be chosen according to the principle of maximal relevance. We introduce the Hierarchical Feature Model as an example of a model that satisfies this principle and encodes an a priori organisation of the feature space. We present extensive numerical experiments to test i) that the internal representation of learning machines can indeed be independent of the data with which they are trained, and ii) that only a finite number of features is needed to describe a dataset.
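To make the architecture concrete, here is a minimal NumPy sketch of a latent-variable model whose hidden-layer distribution is held fixed while only the decoder is trained. Everything in it is illustrative: the toy data, the decoder parametrisation, and the uniform prior over hidden states are assumptions for the example, not the paper's construction (the paper advocates a maximal-relevance prior such as the Hierarchical Feature Model instead of the uniform one used here).

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden, n_visible = 4, 8

# Enumerate all 2^n binary hidden states and FIX their distribution.
# Uniform is an illustrative stand-in for a maximal-relevance prior.
states = np.array([[(k >> i) & 1 for i in range(n_hidden)]
                   for k in range(2 ** n_hidden)], dtype=float)
prior = np.full(len(states), 1.0 / len(states))

# Decoder p(x|s): independent Bernoulli pixels with mean sigmoid(W s + b).
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
b = np.zeros(n_visible)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_lik_terms(X, W, b):
    """log p(x|s) for every sample (rows) and hidden state (columns)."""
    p = sigmoid(states @ W.T + b)                        # (2^n, n_visible)
    return X @ np.log(p).T + (1 - X) @ np.log(1 - p).T   # (N, 2^n)

def log_likelihood(X, W, b):
    """Mean log p(x) = log sum_s p(s) p(x|s) over the dataset."""
    return np.log(np.exp(log_lik_terms(X, W, b)) @ prior).mean()

# Toy data: two random prototypes corrupted by 5% bit-flip noise.
proto = rng.integers(0, 2, size=(2, n_visible)).astype(float)
X = np.abs(proto[rng.integers(0, 2, size=200)]
           - (rng.random((200, n_visible)) < 0.05))

ll_before = log_likelihood(X, W, b)

# EM-style gradient ascent on the decoder only: the hidden-layer
# distribution `prior` is never updated during training.
lr = 0.5
for _ in range(200):
    post = np.exp(log_lik_terms(X, W, b)) * prior
    post /= post.sum(axis=1, keepdims=True)              # p(s|x)
    p = sigmoid(states @ W.T + b)
    resid = post.T @ X - post.sum(axis=0)[:, None] * p   # Σ_x p(s|x)(x - p(s))
    W += lr * (resid.T @ states) / len(X)
    b += lr * resid.sum(axis=0) / len(X)

ll_after = log_likelihood(X, W, b)
```

Because the hidden-layer distribution never changes, the internal representation is data-independent by construction; training only fits the map from fixed hidden states to the data, which is the property the abstract's experiments probe.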




