Neural networks learn to magnify areas near decision boundaries

01/26/2023
by Jacob A. Zavatone-Veth et al.

We study how training molds the Riemannian geometry induced by neural network feature maps. At infinite width, neural networks with random parameters induce highly symmetric metrics on input space. Feature learning in networks trained to perform classification tasks magnifies local areas along decision boundaries. These changes are consistent with previously proposed geometric approaches for hand-tuning kernel methods to improve generalization.
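The geometric object in play here is standard: a feature map phi pulls back a Riemannian metric g(x) = J(x)^T J(x) onto input space, where J(x) is the Jacobian of phi at x, and the local area magnification factor is sqrt(det g(x)). Below is a minimal sketch of this computation in JAX for a toy two-layer feature map; the architecture, parameter names, and initialization are illustrative assumptions, not the paper's actual model.

```python
import jax
import jax.numpy as jnp

def phi(params, x):
    """Toy two-layer feature map phi: R^d -> R^h (illustrative only)."""
    W1, b1, W2, b2 = params
    return jnp.tanh(W2 @ jnp.tanh(W1 @ x + b1) + b2)

def pullback_metric(params, x):
    """Metric induced on input space: g(x) = J(x)^T J(x), with J the Jacobian of phi."""
    J = jax.jacobian(phi, argnums=1)(params, x)  # shape (h, d)
    return J.T @ J                               # shape (d, d)

def area_magnification(params, x):
    """Local area element sqrt(det g(x)) at input point x."""
    return jnp.sqrt(jnp.linalg.det(pullback_metric(params, x)))

# Example with random (untrained) parameters in d = 2 input dimensions.
key = jax.random.PRNGKey(0)
d, h = 2, 64
k1, k2 = jax.random.split(key)
params = (
    jax.random.normal(k1, (h, d)) / jnp.sqrt(d),  # W1
    jnp.zeros(h),                                 # b1
    jax.random.normal(k2, (h, h)) / jnp.sqrt(h),  # W2
    jnp.zeros(h),                                 # b2
)
print(area_magnification(params, jnp.array([0.5, -0.3])))
```

Evaluating area_magnification over a grid of inputs before and after training a classifier on top of phi would visualize, in spirit, the paper's claim: the learned map enlarges local areas along the decision boundary relative to the symmetric random-initialization metric.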

Related research

On the Decision Boundaries of Deep Neural Networks: A Tropical Geometry Perspective (02/20/2020)
This work tackles the problem of characterizing and understanding the de...

Feature Learning in Infinite-Width Neural Networks (11/30/2020)
As its width tends to infinity, a deep neural network's behavior under g...

Unsupervised learning of features and object boundaries from local prediction (05/27/2022)
A visual system has to learn both which features to extract from images ...

Loss Function Entropy Regularization for Diverse Decision Boundaries (04/30/2022)
Is it possible to train several classifiers to perform meaningful crowd-...

The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks (06/30/2023)
Do neural networks, trained on well-understood algorithmic tasks, reliab...

Self-Consistent Dynamical Field Theory of Kernel Evolution in Wide Neural Networks (05/19/2022)
We analyze feature learning in infinite width neural networks trained wi...

Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from the Decision Boundary Perspective (03/15/2022)
We discuss methods for visualizing neural network decision boundaries an...
