Realizable Learning is All You Need

11/08/2021
by Max Hopkins, et al.

The equivalence of realizable and agnostic learnability is a fundamental phenomenon in learning theory. With variants ranging from classical settings like PAC learning and regression to recent trends such as adversarially robust and private learning, it's surprising that we still lack a unified theory; traditional proofs of the equivalence tend to be disparate, and rely on strong model-specific assumptions like uniform convergence and sample compression. In this work, we give the first model-independent framework explaining the equivalence of realizable and agnostic learnability: a three-line blackbox reduction that simplifies, unifies, and extends our understanding across a wide variety of settings. This includes models with no known characterization of learnability such as learning with arbitrary distributional assumptions or general loss, as well as a host of other popular settings such as robust learning, partial learning, fair learning, and the statistical query model. More generally, we argue that the equivalence of realizable and agnostic learning is actually a special case of a broader phenomenon we call property generalization: any desirable property of a learning algorithm (e.g. noise tolerance, privacy, stability) that can be satisfied over finite hypothesis classes extends (possibly in some variation) to any learnable hypothesis class.
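To make the flavor of the reduction concrete, here is a minimal sketch of an agnostic-to-realizable reduction in the spirit of the "three-line blackbox reduction" described above. This is not the paper's exact construction: the helper names (realizable_learner, hypothesis_class, empirical_error) are illustrative assumptions, and the hypothesis class is taken to be finite purely for readability. The idea: label an unlabeled sample with every behavior the class induces on it, feed each labeling to the black-box realizable learner, and return whichever output hypothesis has the lowest empirical error on fresh labeled data.

# Hypothetical sketch (not the authors' exact algorithm); all helper names are illustrative.

def agnostic_learner(realizable_learner, hypothesis_class, unlabeled_sample, labeled_sample):
    """Reduce agnostic learning to realizable learning (sketch).

    realizable_learner: black box mapping a labeled sample to a hypothesis.
    hypothesis_class:   candidate hypotheses (finite list here, for illustration only).
    unlabeled_sample:   points drawn i.i.d. from the marginal distribution.
    labeled_sample:     fresh labeled points used for the final selection step.
    """
    # 1. Label the unlabeled sample with every behavior the class induces on it.
    labelings = {tuple(h(x) for x in unlabeled_sample) for h in hypothesis_class}

    # 2. Run the black-box realizable learner on each such (realizable) labeling.
    candidates = [realizable_learner(list(zip(unlabeled_sample, y))) for y in labelings]

    # 3. Return the candidate with the lowest empirical error on the fresh sample
    #    (empirical risk minimization over a finite set of hypotheses).
    def empirical_error(h):
        return sum(h(x) != y for x, y in labeled_sample) / len(labeled_sample)

    return min(candidates, key=empirical_error)

The selection step succeeds because the candidate set is finite, so uniform convergence over finite classes applies regardless of the structure of the original class; this is the sense in which properties satisfied over finite classes can be lifted to arbitrary learnable classes.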


