Are deep ResNets provably better than linear predictors?

07/09/2019
by Chulhee Yun et al.

Recently, a residual network (ResNet) with a single residual block has been shown to outperform linear predictors, in the sense that all of its local minima are at least as good as the best linear predictor. We take a step towards extending this result to deep ResNets. As motivation, we first show that there exist datasets for which all local minima of a fully-connected ReLU network are no better than the best linear predictor, whereas a ResNet can have strictly better local minima. Second, we show that even at its global minimum, the representations obtained from the residual blocks of a 2-block ResNet do not necessarily improve monotonically from one block to the next, highlighting a fundamental difficulty in analyzing deep ResNets. Our main result on deep ResNets shows that, under some geometric conditions, any critical point either (i) is at least as good as the best linear predictor, or (ii) has a Hessian with a strictly negative eigenvalue. Finally, we complement our results by analyzing near-identity regions of deep ResNets, obtaining size-independent upper bounds for both the risk attained at critical points and the Rademacher complexity.
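
To make the setting concrete, the class of networks analyzed can be sketched as a chain of residual blocks, each adding a nonlinear correction to the identity, h_{l+1} = h_l + V_l ReLU(U_l h_l), followed by a linear output layer w^T h_L that is compared against the best linear predictor. The NumPy sketch below illustrates this forward pass; the function name resnet_forward and the parameter names U, V, w are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def resnet_forward(x, blocks, w):
    """Forward pass of a deep ResNet of the form sketched above:
    each block adds a nonlinear correction to the identity,
    h <- h + V @ relu(U @ h), and the output is the linear map w @ h_L.
    Names (U, V, w) are illustrative, not from the paper."""
    h = x
    for U, V in blocks:
        h = h + V @ relu(U @ h)  # residual block: identity plus a perturbation
    return w @ h                 # linear predictor on the final representation

# Toy usage: a 2-block ResNet on a 3-dimensional input.
rng = np.random.default_rng(0)
d, m = 3, 4                      # representation dimension d, hidden width m
blocks = [(rng.normal(size=(m, d)), rng.normal(size=(d, m))) for _ in range(2)]
w = rng.normal(size=d)
x = rng.normal(size=d)
print(resnet_forward(x, blocks, w))
```

Setting all U_l, V_l to zero recovers a purely linear predictor, which is why comparisons against the best linear predictor are natural for this architecture.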


Related research

A Critical View of Global Optimality in Deep Learning (02/10/2018)
We investigate the loss surface of deep linear and nonlinear neural netw...

Are ResNets Provably Better than Linear Predictors? (04/18/2018)
A residual network (or ResNet) is a standard deep neural net architectur...

The loss surface of deep linear networks viewed through the algebraic geometry lens (10/17/2018)
By using the viewpoint of modern computational algebraic geometry, we ex...

ResNEsts and DenseNEsts: Block-based DNN Models with Improved Representation Guarantees (11/10/2021)
Models recently used in the literature proving residual networks (ResNet...

Depth with Nonlinearity Creates No Bad Local Minima in ResNets (10/21/2018)
In this paper, we prove that depth with nonlinearity creates no bad loca...

Feature Embedding by Template Matching as a ResNet Block (10/03/2022)
Convolution blocks serve as local feature extractors and are the key to ...

Adversarial robustness of sparse local Lipschitz predictors (02/26/2022)
This work studies the adversarial robustness of parametric functions com...
