The Connection Between Approximation, Depth Separation and Learnability in Neural Networks

01/31/2021
by Eran Malach, et al.

Several recent works have shown separation results between deep neural networks and hypothesis classes with inferior approximation capacity, such as shallow networks or kernel classes. On the other hand, the fact that deep networks can efficiently express a target function does not imply that the target function can be learned efficiently by deep neural networks. In this work we study the intricate connection between learnability and approximation capacity. We show that the learnability of a target function with deep networks depends on the ability of simpler classes to approximate the target. Specifically, we show that a necessary condition for a function to be learnable by gradient descent on deep neural networks is that the function can be approximated, at least in a weak sense, by shallow neural networks. We also show that a class of functions can be learned by an efficient statistical query algorithm if and only if it can be approximated in a weak sense by some kernel class. We give several examples of functions that demonstrate depth separation, and conclude that they cannot be learned efficiently, even by a hypothesis class that can efficiently approximate them.
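To make the notion of "weak approximation" concrete, the sketch below (our own illustration, not code from the paper) estimates the correlation that a random-features predictor, a common stand-in for a kernel class, achieves against a parity target on the hypercube; parities are a standard example of functions that kernel classes cannot weakly approximate. All parameter choices (dimension, feature count, sample sizes) are illustrative assumptions.

```python
# A minimal sketch of "weak approximation" by a kernel class: fit a
# random-features predictor to a full-parity target and measure the
# correlation E[f(x) h(x)] on held-out data. All parameters below are
# illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)
n, n_features = 20, 500          # input dimension, number of random features
n_train, n_test = 5000, 5000

def sample(m):
    """Uniform points on {-1, +1}^n with full-parity labels f(x) = prod_i x_i."""
    X = rng.choice([-1.0, 1.0], size=(m, n))
    return X, np.prod(X, axis=1)

X_tr, y_tr = sample(n_train)
X_te, y_te = sample(n_test)

# Random ReLU features phi(x) = relu(Gx + b), a proxy for a kernel class.
G = rng.normal(size=(n, n_features)) / np.sqrt(n)
b = rng.normal(size=n_features)

def phi(X):
    return np.maximum(X @ G + b, 0.0)

# Least-squares fit of h(x) = w . phi(x) to the parity labels.
w, *_ = np.linalg.lstsq(phi(X_tr), y_tr, rcond=None)
pred = phi(X_te) @ w

# Weak approximation asks whether this correlation is non-negligible;
# for parities it should be close to zero, in line with the paper's claims.
print(f"held-out correlation E[f(x) h(x)]: {np.mean(pred * y_te):.4f}")
```

Replacing the parity target with a function that the random-features class can weakly approximate would yield a visibly non-trivial correlation; this gap is the distinction on which the paper's necessary condition for learnability turns.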


