A Mathematical Theory of Learning

05/07/2014
by Ibrahim Alabdulmohsin, et al.

In this paper, a mathematical theory of learning is proposed that has many parallels with information theory. We consider Vapnik's General Setting of Learning, in which the learning process is defined to be the act of selecting a hypothesis in response to a given training set. Such a hypothesis can, for example, be a decision boundary in classification, a set of centroids in clustering, or a set of frequent itemsets in association rule mining. Depending on the hypothesis space and how the final hypothesis is selected, we show that a learning process can be assigned a numeric score, called the learning capacity, which is analogous to Shannon's channel capacity and satisfies similar properties, such as the data-processing inequality and the information-cannot-hurt inequality. In addition, the learning capacity provides the tightest possible bound on the difference between the true risk and the empirical risk of the learning process for all loss functions that are parametrized by the chosen hypothesis. It is also shown that the learning capacity equivalently quantifies how sensitive the choice of the final hypothesis is to a small perturbation in the training set. Consequently, algorithmic stability is both necessary and sufficient for generalization. While the theory does not rely on concentration inequalities, we finally show that analogs of classical results in learning theory under the Probably Approximately Correct (PAC) model can be deduced immediately from this theory, and we conclude with information-theoretic bounds on the learning capacity.
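To make the central claim concrete, here is a minimal sketch of the stated bound; the notation below (training set S = (Z_1, ..., Z_n), learning rule L, parametric loss l_H, and the symbol C(L) for the learning capacity) is assumed shorthand for the abstract's claims, not notation taken from the paper itself.

% Sketch of the main claims under the assumed notation above:
% H = L(S) is the hypothesis selected from the i.i.d. training set S = (Z_1, ..., Z_n),
% and l_H : Z -> [0, 1] is any loss function parametrized by the chosen hypothesis H.
\[
  \hat{R}(H) = \frac{1}{n}\sum_{i=1}^{n} \ell_H(Z_i),
  \qquad
  R(H) = \mathbb{E}_{Z}\bigl[\ell_H(Z)\bigr].
\]
% The learning capacity C(L) is claimed to be the tightest quantity for which,
% uniformly over all such losses,
\[
  \bigl|\,\mathbb{E}\bigl[\hat{R}(H)\bigr] - \mathbb{E}\bigl[R(H)\bigr]\,\bigr| \;\le\; \mathcal{C}(\mathcal{L}),
\]
% and it equivalently measures how sensitive the choice of H is to a small perturbation of S,
% which is why algorithmic stability is both necessary and sufficient for generalization.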
