Nonlinear Collaborative Scheme for Deep Neural Networks

11/04/2018
by Hui-Ling Zhen, et al.

Conventional research attributes improvements in the generalization ability of deep neural networks either to powerful optimizers or to new network designs. In contrast, in this paper we aim to link the generalization ability of a deep network to the optimization of a new objective function. To this end, we propose a nonlinear collaborative scheme for deep network training, whose key technique is to combine different loss functions in a nonlinear manner. We find that, after adaptively tuning the weights of the different loss functions, the proposed objective function can efficiently guide the optimization process. Moreover, we demonstrate mathematically that the nonlinear collaborative scheme leads to (i) a smaller KL divergence with respect to optimal solutions, (ii) data-driven stochastic gradient descent, and (iii) a tighter PAC-Bayes bound. We also prove that this advantage is strengthened as the nonlinearity increases. To some extent, the new scheme bridges the gap between learning (i.e., minimizing the new objective function) and generalization (i.e., minimizing a PAC-Bayes bound). We support our findings with experiments on Residual Networks and DenseNet, showing that the new scheme outperforms both single-loss and multi-loss schemes, with or without randomization.
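To make the idea of a nonlinear loss combination concrete, the following is a minimal PyTorch sketch. It assumes a log-sum-exp coupling of two loss terms with learnable softmax weights; both the coupling rule and the weight parameterization are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class NonlinearCollaborativeLoss(nn.Module):
    """Combine several loss terms nonlinearly with adaptively tuned weights (illustrative sketch)."""
    def __init__(self, num_losses=2, temperature=1.0):
        super().__init__()
        # Learnable logits; softmax keeps the loss weights positive and normalized.
        self.log_weights = nn.Parameter(torch.zeros(num_losses))
        self.temperature = temperature

    def forward(self, losses):
        # losses: 1-D tensor stacking the individual scalar loss values.
        w = F.softmax(self.log_weights, dim=0)
        # Nonlinear coupling via log-sum-exp of the weighted losses, so the
        # currently dominant loss receives the largest share of the gradient.
        return self.temperature * torch.logsumexp(w * losses / self.temperature, dim=0)

# Toy usage: combine cross-entropy with a squared-error term on the softmax output.
model = nn.Linear(10, 3)
criterion = NonlinearCollaborativeLoss(num_losses=2)
optimizer = torch.optim.SGD(
    list(model.parameters()) + list(criterion.parameters()), lr=0.1)

x, y = torch.randn(8, 10), torch.randint(0, 3, (8,))
logits = model(x)
loss_a = F.cross_entropy(logits, y)
loss_b = F.mse_loss(F.softmax(logits, dim=1), F.one_hot(y, num_classes=3).float())
loss = criterion(torch.stack([loss_a, loss_b]))

optimizer.zero_grad()
loss.backward()
optimizer.step()

In this sketch, a plain weighted sum of the two terms would be the linear baseline; replacing it with log-sum-exp is one simple way to make the combination nonlinear while keeping both the loss weights and the network parameters trainable by the same optimizer.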


research 05/31/2019
PAC-Bayesian Transportation Bound
We present a new generalization error bound, the PAC-Bayesian transporta...

research 02/04/2022
Demystify Optimization and Generalization of Over-parameterized PAC-Bayesian Learning
PAC-Bayesian is an analysis framework where the training error can be ex...

research 05/15/2022
Analyzing Lottery Ticket Hypothesis from PAC-Bayesian Theory Perspective
The lottery ticket hypothesis (LTH) has attracted attention because it c...

research 05/30/2023
Auto-tune: PAC-Bayes Optimization over Prior and Posterior for Neural Networks
It is widely recognized that the generalization ability of neural networ...

research 11/04/2021
Recurrent Neural Network Training with Convex Loss and Regularization Functions by Extended Kalman Filtering
We investigate the use of extended Kalman filtering to train recurrent n...

research 11/29/2019
Barcodes as summary of objective function's topology
We apply the canonical forms (barcodes) of gradient Morse complexes to e...

research 05/04/2023
Stimulative Training++: Go Beyond The Performance Limits of Residual Networks
Residual networks have shown great success and become indispensable in r...
