Realizable Learning is All You Need

by Max Hopkins et al.

The equivalence of realizable and agnostic learnability is a fundamental phenomenon in learning theory. With variants ranging from classical settings like PAC learning and regression to recent trends such as adversarially robust and private learning, it's surprising that we still lack a unified theory; traditional proofs of the equivalence tend to be disparate, and rely on strong model-specific assumptions like uniform convergence and sample compression. In this work, we give the first model-independent framework explaining the equivalence of realizable and agnostic learnability: a three-line blackbox reduction that simplifies, unifies, and extends our understanding across a wide variety of settings. This includes models with no known characterization of learnability such as learning with arbitrary distributional assumptions or general loss, as well as a host of other popular settings such as robust learning, partial learning, fair learning, and the statistical query model. More generally, we argue that the equivalence of realizable and agnostic learning is actually a special case of a broader phenomenon we call property generalization: any desirable property of a learning algorithm (e.g. noise tolerance, privacy, stability) that can be satisfied over finite hypothesis classes extends (possibly in some variation) to any learnable hypothesis class.
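The reduction itself appears in the paper; as a rough illustration of the relabel-and-select idea behind such blackbox reductions, here is a hedged Python sketch. The toy threshold hypotheses, the naive consistent learner, and the function names (`agnostic_to_realizable`, `make_threshold`, `consistent_learner`) are stand-ins invented for this example, not the paper's actual construction.

```python
def agnostic_to_realizable(hypotheses, realizable_learner, sample, fresh_sample):
    """Sketch: turn a realizable learner into an agnostic one.

    Step 1: relabel the (possibly noisy) sample's points with each
            hypothesis h, so each relabeled sample is realizable, and
            run the realizable learner on it to get a candidate.
    Step 2: run ERM over the resulting *finite* set of candidates,
            picking the one with lowest empirical error on fresh data.
    """
    candidates = [
        realizable_learner([(x, h(x)) for x, _ in sample]) for h in hypotheses
    ]

    def emp_err(f, data):
        return sum(f(x) != y for x, y in data) / len(data)

    return min(candidates, key=lambda f: emp_err(f, fresh_sample))


# Toy instance (illustrative assumption): threshold classifiers on {0,...,9}.
def make_threshold(t):
    return lambda x: 1 if x >= t else 0

H = [make_threshold(t) for t in range(11)]

def consistent_learner(labeled):
    # A minimal realizable learner: return any hypothesis in H that is
    # consistent with the labeled sample (one exists by realizability).
    return next(h for h in H if all(h(x) == y for x, y in labeled))

true_h = make_threshold(5)
sample = [(x, true_h(x)) for x in range(10)]
sample[2] = (2, 1)  # inject one noisy label to make the data agnostic
fresh = [(x, true_h(x)) for x in range(10)]

best = agnostic_to_realizable(H, consistent_learner, sample, fresh)
```

Note that the noisy labels are discarded in step 1 (only the unlabeled points are reused), and the finiteness of the candidate set is what lets plain ERM finish the job in step 2.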


A Characterization of List Learnability

A classical result in learning theory shows the equivalence of PAC learn...

A Computational Separation between Private Learning and Online Learning

A recent line of work has shown a qualitative equivalence between differ...

A New Lower Bound for Agnostic Learning with Sample Compression Schemes

We establish a tight characterization of the worst-case rates for the ex...

On the Equivalence between Online and Private Learnability beyond Binary Classification

Alon et al. [2019] and Bun et al. [2020] recently showed that online lea...

A Theory of PAC Learnability of Partial Concept Classes

We extend the theory of PAC learning in a way which allows to model a ri...

On the Complexity of Learning from Label Proportions

In the problem of learning with label proportions, which we call LLP lea...

Uniform Generalization, Concentration, and Adaptive Learning

One fundamental goal in any learning algorithm is to mitigate its risk f...