Can we avoid Double Descent in Deep Neural Networks?

02/26/2023
by Victor Quétu, et al.

Finding the optimal size of deep learning models is a timely question of broad impact, especially for energy-saving schemes. Recently, an unexpected phenomenon, the “double descent”, has caught the attention of the deep learning community: as the model's size grows, the test performance first degrades and then improves again. This raises serious questions about the model size needed to maintain high generalization: the model must be sufficiently over-parametrized, but adding too many parameters wastes training resources. Is it possible to find the best trade-off efficiently? Our work shows that the double descent phenomenon is potentially avoidable with proper conditioning of the learning problem, although a definitive answer is yet to be found. We empirically observe that there is hope of dodging double descent in complex scenarios with proper regularization, as even a simple ℓ_2 penalty already contributes positively toward this goal.
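A minimal sketch of the kind of ℓ_2 regularization the abstract refers to, assuming a standard PyTorch training setup; the model, learning rate, and weight-decay strength below are placeholders, not the authors' exact configuration.

```python
# Illustrative sketch: adding an l2 penalty via the optimizer's weight_decay,
# the kind of regularization the abstract suggests may help dodge double descent.
# Model architecture and hyperparameters are hypothetical placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
)
criterion = nn.CrossEntropyLoss()

# weight_decay applies an l2 penalty on the weights (lambda * w is added to the
# gradient at each step); its strength would need tuning per model size.
optimizer = torch.optim.SGD(
    model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4
)

def train_step(inputs: torch.Tensor, targets: torch.Tensor) -> float:
    """One regularized training step on a mini-batch."""
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```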


Related research

The Quest of Finding the Antidote to Sparse Double Descent (08/31/2023)
In energy-efficient schemes, finding the optimal size of deep learning m...

Geometric Regularization from Overparameterization explains Double Descent and other findings (02/18/2022)
The volume of the distribution of possible weight configurations associa...

Sparse Double Descent in Vision Transformers: real or phantom threat? (07/26/2023)
Vision transformers (ViT) have been of broad interest in recent theoreti...

Dropout Drops Double Descent (05/25/2023)
In this paper, we find and analyze that we can easily drop the double de...

Unifying Grokking and Double Descent (03/10/2023)
A principled understanding of generalization in deep learning may requir...

VC Theoretical Explanation of Double Descent (05/31/2022)
There has been growing interest in generalization performance of large m...

Dodging the Sparse Double Descent (03/02/2023)
This paper presents an approach to addressing the issue of over-parametr...