Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach

01/31/2018
by Tsui-Wei Weng, et al.

The robustness of neural networks to adversarial examples has received great attention due to security implications. Despite various attack approaches to crafting visually imperceptible adversarial examples, little has been developed towards a comprehensive measure of robustness. In this paper, we provide a theoretical justification for converting robustness analysis into a local Lipschitz constant estimation problem, and propose to use extreme value theory for efficient evaluation. Our analysis yields a novel robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness. The proposed CLEVER score is attack-agnostic and computationally feasible for large neural networks. Experimental results on various networks, including ResNet, Inception-v3 and MobileNet, show that (i) CLEVER is aligned with the robustness indication measured by the ℓ_2 and ℓ_∞ norms of adversarial examples from powerful attacks, and (ii) defended networks using defensive distillation or bounded ReLU indeed achieve better CLEVER scores. To the best of our knowledge, CLEVER is the first attack-independent robustness metric that can be applied to any neural network classifier.
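For a concrete picture of the method, here is a minimal sketch of a CLEVER-style score on a toy two-layer network, following the construction summarized in the abstract: sample gradient norms of the classification margin g(x) = f_c(x) - f_j(x) in a ball around the input, fit a reverse Weibull distribution to the batch maxima (the extreme value step), and read off the fitted location parameter as the local cross-Lipschitz estimate. The toy network, sampling radius, and batch sizes below are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal CLEVER-style sketch (illustrative, not the authors' code).
# Requires numpy and scipy.
import numpy as np
from scipy.stats import weibull_max

rng = np.random.default_rng(0)
d, h, k = 784, 64, 10
W1 = rng.normal(scale=0.05, size=(h, d))  # toy 2-layer net: f(x) = W2 tanh(W1 x)
W2 = rng.normal(scale=0.05, size=(k, h))

def logits(x):
    return W2 @ np.tanh(W1 @ x)

def margin_grad(x, c, j):
    """Analytic gradient of the margin g(x) = f_c(x) - f_j(x) for the toy net."""
    a = np.tanh(W1 @ x)
    return W1.T @ ((W2[c] - W2[j]) * (1.0 - a ** 2))

def clever_score(x0, c, j, radius=2.0, n_batches=50, batch_size=128, p=2):
    """Estimated lower bound on the l_p distortion needed to flip class c to j."""
    q = np.inf if p == 1 else p / (p - 1)  # dual norm for the gradient
    batch_maxima = []
    for _ in range(n_batches):
        # sample perturbations uniformly from an l_2 ball of the given radius
        u = rng.normal(size=(batch_size, d))
        u /= np.linalg.norm(u, axis=1, keepdims=True)
        u *= radius * rng.uniform(size=(batch_size, 1)) ** (1.0 / d)
        norms = [np.linalg.norm(margin_grad(x0 + du, c, j), ord=q) for du in u]
        batch_maxima.append(max(norms))
    # EVT step: batch maxima of the gradient norm follow a reverse Weibull law;
    # the fitted location parameter estimates the local cross-Lipschitz constant
    _, loc, _ = weibull_max.fit(np.asarray(batch_maxima))
    g0 = logits(x0)[c] - logits(x0)[j]  # classification margin at x0
    return min(g0 / loc, radius)

x0 = rng.normal(size=d)
c = int(np.argmax(logits(x0)))
j = (c + 1) % k
print(f"CLEVER-style l_2 score for target class {j}: {clever_score(x0, c, j):.4f}")
```

In the full method, the gradients would come from backpropagation through the actual classifier, and an untargeted score is obtained by taking the minimum over all candidate target classes j.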

Related research

08/16/2016
Towards Evaluating the Robustness of Neural Networks
Neural networks provide state-of-the-art results for most machine learni...

10/19/2018
On Extensions of CLEVER: A Neural Network Robustness Evaluation Algorithm
CLEVER (Cross-Lipschitz Extreme Value for nEtwork Robustness) is an Extr...

04/22/2022
How Sampling Impacts the Robustness of Stochastic Neural Networks
Stochastic neural networks (SNNs) are random functions and predictions a...

03/01/2022
A Domain-Theoretic Framework for Robustness Analysis of Neural Networks
We present a domain-theoretic framework for validated robustness analysi...

03/14/2020
Minimum-Norm Adversarial Examples on KNN and KNN-Based Models
We study the robustness against adversarial examples of kNN classifiers ...

02/11/2020
Generalised Lipschitz Regularisation Equals Distributional Robustness
The problem of adversarial examples has highlighted the need for a theor...

09/20/2022
Audit and Improve Robustness of Private Neural Networks on Encrypted Data
Performing neural network inference on encrypted data without decryption...
