Survey of Attacks and Defenses on Edge-Deployed Neural Networks

11/27/2019
by Mihailo Isakov, et al.

Deep Neural Network (DNN) workloads are quickly moving from datacenters onto edge devices, for latency, privacy, or energy reasons. While datacenter networks can be protected using conventional cybersecurity measures, edge neural networks bring a host of new security challenges. Unlike classic IoT applications, edge neural networks are typically very compute- and memory-intensive, their execution is data-independent, and they are robust to noise and faults. Neural network models may be very expensive to develop, and can potentially reveal information about the private data they were trained on, requiring special care in distribution. The hidden states and outputs of the network can also be used to reconstruct user inputs, potentially violating users' privacy. Furthermore, neural networks are vulnerable to adversarial attacks, which may cause misclassifications and violate the integrity of the output. These properties add challenges when securing edge-deployed DNNs, requiring new considerations, threat models, priorities, and approaches for securely and privately deploying DNNs to the edge. In this work, we survey the landscape of attacks on, and defenses of, neural networks deployed on edge devices, and provide a taxonomy of attacks and defenses targeting edge DNNs.
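
As a concrete illustration of the adversarial attacks the abstract refers to, below is a minimal sketch of the Fast Gradient Sign Method (FGSM) in PyTorch. The model, inputs, labels, and epsilon value are placeholder assumptions for illustration, not details taken from the survey.

```python
# Minimal FGSM sketch (assumes a PyTorch classifier and inputs normalized to [0, 1]).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft an adversarial example by stepping the input along the sign of the
    loss gradient, which can push a correctly classified input across the
    decision boundary."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Move in the direction that increases the loss, then clamp to the valid input range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

A defender evaluating an edge-deployed model might run such a perturbation against held-out inputs to estimate how easily its outputs can be flipped; the attack shown here is the standard white-box FGSM, not a method proposed by this survey.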

research
10/23/2020

On Evaluating Neural Network Backdoor Defenses

Deep neural networks (DNNs) demonstrate superior performance in various ...
research
09/09/2020

SoK: Certified Robustness for Deep Neural Networks

Great advancement in deep neural networks (DNNs) has led to state-of-the...
research
11/26/2020

Exposing the Robustness and Vulnerability of Hybrid 8T-6T SRAM Memory Architectures to Adversarial Attacks in Deep Neural Networks

Deep Learning is able to solve a plethora of once impossible problems. H...
research
05/12/2020

Serdab: An IoT Framework for Partitioning Neural Networks Computation across Multiple Enclaves

Recent advances in Deep Neural Networks (DNN) and Edge Computing have ma...
research
08/25/2020

Rethinking Non-idealities in Memristive Crossbars for Adversarial Robustness in Neural Networks

Deep Neural Networks (DNNs) have been shown to be prone to adversarial a...
research
02/01/2021

Fast Training of Provably Robust Neural Networks by SingleProp

Recent works have developed several methods of defending neural networks...
research
08/04/2022

Padding-only defenses add delay in Tor

Website fingerprinting is an attack that uses size and timing characteri...
