Exact Feature Collisions in Neural Networks

05/31/2022
by Utku Ozbulak, et al.

Predictions made by deep neural networks have been shown to be highly sensitive to small changes in the input space; such maliciously crafted data points containing small perturbations are referred to as adversarial examples. On the other hand, recent research suggests that the same networks can also be extremely insensitive to changes of large magnitude, where the predictions for two largely different data points map to approximately the same output. In such cases, the features of the two data points are said to approximately collide, leading to largely similar predictions. Our results improve and extend the work of Li et al. (2019), laying out the theoretical grounds for data points with colliding features from the perspective of the weights of neural networks, and revealing that neural networks not only suffer from features that approximately collide but also from features that exactly collide. We identify the necessary conditions for the existence of such scenarios and investigate a large number of DNNs that have been used to solve various computer vision problems. Furthermore, we propose the Null-space search, a numerical approach that does not rely on heuristics, for creating data points with colliding features for any input and any task, including, but not limited to, classification, localization, and segmentation.
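
The intuition behind exact collisions can be illustrated for a single linear layer: if two inputs differ by a vector lying in the null space of the layer's weight matrix, the layer produces identical features for both, no matter how large that difference is. The sketch below (plain NumPy, with an arbitrarily chosen layer size) demonstrates only this underlying linear-algebra fact; it is not the authors' Null-space search procedure.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical wide layer: 64 output features from a 256-dimensional input,
# so W has a non-trivial null space (rank at most 64 < 256).
W = rng.standard_normal((64, 256))
b = rng.standard_normal(64)

def features(x):
    # Features of a single linear layer f(x) = W x + b.
    return W @ x + b

x = rng.standard_normal(256)

# Null-space basis of W from the SVD: right singular vectors beyond the rank.
_, _, Vt = np.linalg.svd(W)
rank = np.linalg.matrix_rank(W)
null_basis = Vt[rank:]                      # shape (256 - rank, 256)

# Build a large perturbation that is invisible to the layer.
v = 100.0 * (null_basis.T @ rng.standard_normal(null_basis.shape[0]))
x_collision = x + v

print(np.linalg.norm(v))                    # large change in the input space
print(np.max(np.abs(features(x) - features(x_collision))))  # numerically zero

In a deep network the same reasoning applies layer by layer, which is why the existence of exact collisions hinges on conditions on the network's weights, as the abstract describes.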
