Closeness and Uncertainty Aware Adversarial Examples Detection in Adversarial Machine Learning

12/11/2020
by   Omer Faruk Tuna, et al.

Deep neural network (DNN) architectures are considered robust to random perturbations. Nevertheless, they have been shown to be severely vulnerable to slight but carefully crafted perturbations of the input, termed adversarial samples. In recent years, numerous studies have sought to increase the reliability of DNN models by distinguishing adversarial samples from regular inputs. In this work, we explore and assess two groups of metrics for detecting adversarial samples: those based on uncertainty estimation using Monte-Carlo Dropout sampling, and those based on closeness measures in the subspace of deep features extracted by the model. We also introduce a new feature for adversarial detection, and we show that the performance of all these metrics depends heavily on the strength of the attack being used.
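The two metric families mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it is a toy NumPy model with an assumed linear-softmax head, showing (a) Monte-Carlo Dropout, where dropout is kept active over several stochastic forward passes and the predictive entropy of the averaged probabilities serves as an uncertainty score, and (b) a closeness score, taken here (as one common choice) to be the distance from a sample's feature vector to its nearest class centroid in feature space:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, W, T=50, p=0.5):
    """Run T stochastic forward passes of a toy linear-softmax model,
    dropping input units with probability p on each pass (MC Dropout).
    Returns an array of shape (T, num_classes)."""
    probs = []
    for _ in range(T):
        # Inverted-dropout mask: kept units are rescaled by 1/(1-p).
        mask = rng.binomial(1, 1 - p, size=x.shape[0]) / (1 - p)
        logits = (x * mask) @ W
        e = np.exp(logits - logits.max())        # numerically stable softmax
        probs.append(e / e.sum())
    return np.stack(probs)

def predictive_entropy(probs):
    """Uncertainty metric: entropy of the mean predictive distribution.
    Adversarial inputs tend to produce higher values."""
    mean_p = probs.mean(axis=0)
    return float(-(mean_p * np.log(mean_p + 1e-12)).sum())

def centroid_distance(feat, centroids):
    """Closeness metric: Euclidean distance from a deep-feature vector
    to the nearest class centroid (one simple closeness measure)."""
    return float(np.linalg.norm(centroids - feat, axis=1).min())
```

A detector built on these scores would flag an input when its predictive entropy or its centroid distance exceeds a threshold calibrated on clean data; as the abstract notes, how well either score separates clean from adversarial inputs depends on the attack strength.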


research
02/08/2021

Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples

Deep neural network architectures are considered to be robust to random ...
research
03/01/2017

Detecting Adversarial Samples from Artifacts

Deep neural networks (DNNs) are powerful nonlinear architectures that ar...
research
11/19/2019

Deep Detector Health Management under Adversarial Campaigns

Machine learning models are vulnerable to adversarial inputs that induce...
research
04/09/2019

Exploring Uncertainty Measures for Image-Caption Embedding-and-Retrieval Task

With the wide development of black-box machine learning algorithms, part...
research
04/05/2019

Minimum Uncertainty Based Detection of Adversaries in Deep Neural Networks

Despite their unprecedented performance in various domains, utilization ...
research
04/18/2022

UNBUS: Uncertainty-aware Deep Botnet Detection System in Presence of Perturbed Samples

A rising number of botnet families have been successfully detected using...
research
07/15/2021

Adversarial Attack for Uncertainty Estimation: Identifying Critical Regions in Neural Networks

We propose a novel method to capture data points near decision boundary ...
