Understanding robustness and generalization of artificial neural networks through Fourier masks

03/16/2022
by Nikos Karantzas, et al.

Despite the enormous success of artificial neural networks (ANNs) in many disciplines, the characterization of their computations and the origin of key properties such as generalization and robustness remain open questions. Recent literature suggests that robust networks with good generalization properties tend to be biased towards processing low frequencies in images. To explore this frequency-bias hypothesis further, we develop an algorithm that learns modulatory masks highlighting the essential input frequencies needed to preserve a trained network's performance. We achieve this by imposing invariance in the loss with respect to such modulations of the input frequencies. We first use our method to test the low-frequency preference hypothesis for adversarially trained and data-augmented networks. Our results suggest that adversarially robust networks do exhibit a low-frequency bias, but we find this bias also depends on direction in frequency space; for other types of data augmentation, a low-frequency bias does not necessarily emerge. Our results further indicate that the essential frequencies in question are effectively the ones used to achieve generalization in the first place. Surprisingly, images seen through these modulatory masks are not recognizable and resemble texture-like patterns.
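The core idea above, learning a frequency-domain mask that keeps a frozen network's loss invariant while suppressing inessential frequencies, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sigmoid parameterization, the sparsity penalty, and the weight `lam` are assumptions introduced here for concreteness.

```python
import torch
import torch.nn as nn

class FourierMask(nn.Module):
    """Learnable modulatory mask applied in the 2D Fourier domain
    (hypothetical parameterization: one sigmoid gain per frequency)."""
    def __init__(self, height, width):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(height, width))

    def forward(self, x):
        # x: (batch, channels, H, W) image tensor
        mask = torch.sigmoid(self.logits)      # gains in (0, 1)
        freq = torch.fft.fft2(x)               # image -> frequency domain
        masked = freq * mask                   # modulate each frequency
        return torch.fft.ifft2(masked).real   # back to image domain

def mask_loss(frozen_model, mask_module, x, y, lam=1e-3):
    """Invariance term (task loss through the mask) plus an assumed
    sparsity penalty that pushes non-essential frequency gains to zero."""
    logits = frozen_model(mask_module(x))
    task = nn.functional.cross_entropy(logits, y)
    sparsity = torch.sigmoid(mask_module.logits).mean()
    return task + lam * sparsity
```

Training only `mask_module.logits` (with the network's weights frozen) then yields a mask whose surviving frequencies are, by construction, the ones the network needs to maintain its performance.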


Related research

- 10/06/2021 · Spectral Bias in Practice: The Role of Function Frequency in Generalization. Despite their ability to represent highly expressive functions, deep lea...
- 07/19/2023 · What do neural networks learn in image classification? A frequency shortcut perspective. Frequency analysis is useful for understanding the mechanisms of represe...
- 04/29/2020 · Rethink the Connections among Generalization, Memorization and the Spectral Bias of DNNs. Over-parameterized deep neural networks (DNNs) with sufficient capacity ...
- 07/19/2023 · Constructing Extreme Learning Machines with zero Spectral Bias. The phenomena of Spectral Bias, where the higher frequency components of...
- 05/16/2023 · A Scalable Walsh-Hadamard Regularizer to Overcome the Low-degree Spectral Bias of Neural Networks. Despite the capacity of neural nets to learn arbitrary functions, models...
- 08/12/2023 · DFM-X: Augmentation by Leveraging Prior Knowledge of Shortcut Learning. Neural networks are prone to learn easy solutions from superficial stati...
- 11/10/2015 · Analyzing Stability of Convolutional Neural Networks in the Frequency Domain. Understanding the internal process of ConvNets is commonly done using vi...
