Approximate is Good Enough: Probabilistic Variants of Dimensional and Margin Complexity

03/09/2020
by Pritish Kamath, et al.

We present and study approximate notions of dimensional and margin complexity, which correspond to the minimal dimension or norm of an embedding required to approximate, rather than exactly represent, a given hypothesis class. We show that such notions are not only sufficient for learning with linear predictors or kernels, but, unlike the exact variants, are also necessary. They are therefore better suited for discussing the limitations of linear or kernel methods.
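
For context, here is a minimal sketch of the standard exact notions (not necessarily the paper's exact formalization): for a hypothesis class $\mathcal{H} \subseteq \{\pm 1\}^{\mathcal{X}}$, the dimensional complexity is the smallest dimension of an embedding under which every hypothesis becomes a linear separator, and the margin complexity is the smallest inverse margin achievable by a norm-bounded embedding:

\[
\mathrm{dc}(\mathcal{H}) = \min\Big\{ d \;:\; \exists\, \phi:\mathcal{X}\to\mathbb{R}^d,\ \{w_h\}_{h\in\mathcal{H}}\subset\mathbb{R}^d \ \text{ s.t. }\ \operatorname{sign}\langle w_h,\phi(x)\rangle = h(x)\ \ \forall x\in\mathcal{X},\, h\in\mathcal{H} \Big\},
\]
\[
\mathrm{mc}(\mathcal{H}) = \min\Big\{ \tfrac{1}{\gamma} \;:\; \exists\, \phi,\ \{w_h\}_{h\in\mathcal{H}} \ \text{ with }\ \|\phi(x)\|\le 1,\ \|w_h\|\le 1,\ h(x)\,\langle w_h,\phi(x)\rangle \ge \gamma\ \ \forall x\in\mathcal{X},\, h\in\mathcal{H} \Big\}.
\]

The approximate (probabilistic) variants studied in the paper presumably relax these pointwise requirements, asking the embedded linear predictor only to approximate each hypothesis, e.g. with high probability over an input distribution, rather than to represent it exactly on every point.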


Related research

03/02/2018
Optimality of 1-norm regularization among weighted 1-norms for sparse recovery: a case study on how to find optimal regularizations
The 1-norm was proven to be a good convex regularizer for the recovery o...

07/21/2020
On the Rademacher Complexity of Linear Hypothesis Sets
Linear predictors form a rich class of hypotheses used in a variety of l...

08/04/2023
Linear isomorphism testing of Boolean functions with small approximate spectral norm
Two Boolean functions f, g : F_2^n → {-1, 1} are called linearly isomorphic...

05/09/2022
Exponential tractability of L_2-approximation with function values
We study the complexity of high-dimensional approximation in the L_2-nor...

09/26/2022
Approximate Description Length, Covering Numbers, and VC Dimension
Recently, Daniely and Granot [arXiv:1910.05697] introduced a new notion ...

08/29/2019
Nearly Tight Bounds for Robust Proper Learning of Halfspaces with a Margin
We study the problem of properly learning large margin halfspaces in th...

05/24/2022
Embedding Neighborhoods Simultaneously t-SNE (ENS-t-SNE)
We propose an algorithm for visualizing a dataset by embedding it in 3-d...
