Sharp Lower Bounds on Interpolation by Deep ReLU Neural Networks at Irregularly Spaced Data

02/02/2023
by   Jonathan W. Siegel, et al.

We study the interpolation, or memorization, power of deep ReLU neural networks. Specifically, we consider the question of how efficiently, in terms of the number of parameters, deep ReLU networks can interpolate values at N datapoints in the unit ball that are pairwise separated by a distance of at least δ. We show that Ω(N) parameters are required in the regime where δ is exponentially small in N, which gives a sharp result in this regime since O(N) parameters are always sufficient. This also shows that the bit-extraction technique used to prove lower bounds on the VC dimension cannot be applied to irregularly spaced datapoints.
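To make the O(N) upper bound concrete, below is a minimal NumPy sketch (not the paper's construction) of a standard one-hidden-layer argument: project the datapoints onto a generic direction, sort the projections, and place one ReLU breakpoint per interval, so that N values are interpolated with N-1 hidden units, i.e., O(N) parameters. The function names are illustrative, and the construction assumes the chosen direction separates the projected points, which holds generically.

```python
# Sketch only: a shallow ReLU network with N-1 hidden units that interpolates
# N labelled points in the unit ball, illustrating the O(N)-parameter upper bound.
import numpy as np

def fit_relu_interpolant(X, y, rng):
    """Return (u, t, c, b) defining f(x) = b + sum_i c[i] * relu(u @ x - t[i])
    with f(X[j]) = y[j] for all j, using N-1 hidden ReLU units."""
    N, d = X.shape
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)                 # generic unit direction (assumed injective on X)
    p = X @ u                              # 1-D projections of the datapoints
    order = np.argsort(p)
    p, y = p[order], y[order]
    slopes = np.diff(y) / np.diff(p)       # target slope on each interval
    c = np.concatenate(([slopes[0]], np.diff(slopes)))  # slope change at each breakpoint
    t = p[:-1]                             # one ReLU breakpoint per interval
    b = y[0]                               # value at the leftmost projected point
    return u, t, c, b

def evaluate(params, X):
    u, t, c, b = params
    z = (X @ u)[:, None] - t[None, :]      # shape (num_points, num_hidden_units)
    return b + np.maximum(z, 0.0) @ c

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(50, 3))
    X /= np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1.0)  # keep points in the unit ball
    y = rng.normal(size=50)
    params = fit_relu_interpolant(X, y, rng)
    print(np.max(np.abs(evaluate(params, X) - y)))  # ~0 up to floating-point error
```

The paper's lower bound says that once the separation δ is exponentially small in N, no deep ReLU architecture can do asymptotically better than this linear parameter count.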


