On the Implicit Bias in Deep-Learning Algorithms

08/26/2022
by Gal Vardi et al.

Gradient-based deep-learning algorithms exhibit remarkable performance in practice, but it is not well understood why they are able to generalize despite having more parameters than training examples. Implicit bias is believed to be a key factor in their ability to generalize, and it has therefore been widely studied in recent years. In this short survey, we explain the notion of implicit bias, review the main results, and discuss their implications.

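To make the notion concrete, below is a minimal sketch (illustrative only, not taken from the survey) of the most classical instance of implicit bias: gradient descent on an underdetermined least-squares problem, started at the origin, converges to the interpolating solution of minimum Euclidean norm, even though infinitely many zero-loss solutions exist. The problem sizes, step size, and iteration count are arbitrary choices for the demonstration.

import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                              # fewer training examples than parameters
X = rng.standard_normal((n, d))             # overparameterized linear regression data
y = rng.standard_normal(n)

w = np.zeros(d)                             # initialize at the origin
lr = 0.5 / np.linalg.norm(X, ord=2) ** 2    # step size below 1/L for stable convergence
for _ in range(5000):
    w -= lr * X.T @ (X @ w - y)             # gradient step on 0.5 * ||Xw - y||^2

w_min_norm = np.linalg.pinv(X) @ y          # minimum-L2-norm interpolating solution

print("training loss:", 0.5 * np.sum((X @ w - y) ** 2))                   # ~0: many solutions fit
print("distance to min-norm solution:", np.linalg.norm(w - w_min_norm))   # ~0: GD picks this one

Initializing at the origin is what pins down the limit: the iterates stay in the row space of X, and the only interpolating solution in that subspace is the minimum-norm one.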

Related research

- The Double-Edged Sword of Implicit Bias: Generalization vs. Robustness in ReLU Networks (03/02/2023)
  In this work, we study the implications of the implicit bias of gradient...

- Can Implicit Bias Explain Generalization? Stochastic Convex Optimization as a Case Study (03/13/2020)
  The notion of implicit bias, or implicit regularization, has been sugges...

- How to Shift Bias: Lessons from the Baldwin Effect (12/10/2002)
  An inductive learning algorithm takes a set of data as input and generat...

- Ensemble Robustness and Generalization of Stochastic Deep Learning Algorithms (02/07/2016)
  The question why deep learning algorithms generalize so well has attract...

- Failures of Gradient-Based Deep Learning (03/23/2017)
  In recent years, Deep Learning has become the go-to solution for a broad...

- On Implicit Bias in Overparameterized Bilevel Optimization (12/28/2022)
  Many problems in machine learning involve bilevel optimization (BLO), in...

- The no-free-lunch theorems of supervised learning (02/09/2022)
  The no-free-lunch theorems promote a skeptical conclusion that all possi...
