A Revision of Neural Tangent Kernel-based Approaches for Neural Networks

07/02/2020
by   Kyung-Su Kim, et al.

Recent theoretical works based on the neural tangent kernel (NTK) have shed light on the optimization and generalization of over-parameterized networks, partially bridging the gap between their practical success and classical learning theory. In particular, the NTK-based approach has produced three representative results: (1) a training error bound showing that networks can fit any finite training sample perfectly, with a tighter characterization of training speed that depends on the data complexity; (2) a generalization error bound, independent of network size, derived from a data-dependent complexity measure (CMD), from which it follows that networks can generalize arbitrary smooth functions; and (3) a simple, analytic kernel function shown to be equivalent to a fully-trained network, which outperforms both its corresponding network and the existing gold standard, Random Forests, in few-shot learning. For all of these results to hold, the network scaling factor κ must decrease with the sample size n. In this regime of decreasing κ, however, we prove that the aforementioned results are surprisingly erroneous, because the output of the trained network decreases to zero as κ decreases with n. To resolve this problem, we tighten the key bounds by essentially removing the κ-affected terms. Our tighter analysis fixes the scaling issue and validates the original NTK-based results.
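The scaling problem the abstract describes can be illustrated numerically. The sketch below is not the paper's code; it assumes the standard two-layer NTK parameterization f(x) = (κ/√m) Σ_r a_r·ReLU(w_r·x) with random first-layer weights and fixed ±1 second-layer signs, and simply shows that the network output shrinks in proportion to κ, so letting κ decrease with n drives the output toward zero.

```python
# Minimal sketch (assumed parameterization, not the paper's code):
# output magnitude of a random two-layer ReLU network under the NTK
# scaling f(x) = (kappa / sqrt(m)) * sum_r a_r * relu(w_r . x).
import numpy as np

rng = np.random.default_rng(0)

def two_layer_ntk_net(x, m=10_000, kappa=1.0):
    """Output of a randomly initialized two-layer ReLU net, NTK scaling."""
    d = x.shape[0]
    W = rng.normal(size=(m, d))          # first-layer weights ~ N(0, I)
    a = rng.choice([-1.0, 1.0], size=m)  # fixed second-layer signs
    return kappa / np.sqrt(m) * (a @ np.maximum(W @ x, 0.0))

x = rng.normal(size=16)
x /= np.linalg.norm(x)                   # unit-norm input, as in NTK analyses

for kappa in [1.0, 0.1, 0.01]:
    outs = [two_layer_ntk_net(x, kappa=kappa) for _ in range(20)]
    print(f"kappa={kappa:5.2f}  mean |f(x)| ~ {np.mean(np.abs(outs)):.4f}")
```

The printed magnitudes scale linearly with κ; since training in the NTK regime keeps the network close to its initialization, the same dependence on κ carries over to the trained network, which is the effect the paper argues invalidates the original bounds unless the κ-affected terms are removed.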


