
SNN under Attack: are Spiking Deep Belief Networks vulnerable to Adversarial Examples?

02/04/2019
by Alberto Marchisio, et al.
Politecnico di Torino
TU Wien

Recently, many adversarial examples have emerged for Deep Neural Networks (DNNs), causing misclassifications. However, in-depth work still needs to be performed to demonstrate such attacks and security vulnerabilities for Spiking Neural Networks (SNNs), i.e., the 3rd generation of NNs. This paper addresses the fundamental questions: "Are SNNs vulnerable to adversarial attacks as well?" and "If yes, to what extent?" Using a Spiking Deep Belief Network (SDBN) for MNIST classification, we show that the SNN's accuracy decreases with the noise magnitude in data-poisoning random attacks applied to the test images. Moreover, the SDBN's generalization capabilities increase when noise is applied to the training images. We develop a novel black-box attack methodology to automatically generate imperceptible and robust adversarial examples through a greedy algorithm, the first of its kind for SNNs.
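As a rough illustration of the data-poisoning random attack the abstract describes, the sketch below adds uniform random noise of a given magnitude to a batch of images normalized to [0, 1]. The noise model, magnitude, and function name are assumptions for illustration; the paper's exact setup may differ.

```python
import numpy as np

def random_noise_attack(images, delta, rng=None):
    # Add uniform noise in [-delta, delta] to images normalized to [0, 1],
    # then clip back to the valid pixel range. This is a generic sketch of
    # a random-noise data-poisoning attack, not the paper's exact method.
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.uniform(-delta, delta, size=images.shape)
    return np.clip(images + noise, 0.0, 1.0)

# Hypothetical usage on a batch of MNIST-shaped (28x28) test images.
batch = np.random.default_rng(0).random((8, 28, 28))
perturbed = random_noise_attack(batch, delta=0.1,
                                rng=np.random.default_rng(1))
```

In an experiment like the one described, one would sweep `delta` and measure the classifier's accuracy on the perturbed test set to see how it degrades with noise magnitude.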



Related research:

09/30/2018  Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Neural Networks
Deep neural networks have been shown to be vulnerable to adversarial exa...

09/07/2022  Securing the Spike: On the Transferability and Security of Spiking Neural Networks to Adversarial Examples
Spiking neural networks (SNNs) have attracted much attention for their h...

02/22/2018  Adversarial Training for Probabilistic Spiking Neural Networks
Classifiers trained using conventional empirical risk minimization or ma...

02/13/2023  Sneaky Spikes: Uncovering Stealthy Backdoor Attacks in Spiking Neural Networks with Neuromorphic Data
Deep neural networks (DNNs) have achieved excellent results in various t...

10/19/2015  Exploring the Space of Adversarial Images
Adversarial examples have raised questions regarding the robustness and ...

04/20/2018  ADef: an Iterative Algorithm to Construct Adversarial Deformations
While deep neural networks have proven to be a powerful tool for many re...