Towards Verifying the Geometric Robustness of Large-scale Neural Networks

by Fu Wang, et al.

Deep neural networks (DNNs) are known to be vulnerable to adversarial geometric transformations. This paper aims to verify the robustness of large-scale DNNs against combinations of multiple geometric transformations with a provable guarantee. Given a set of transformations (e.g., rotation, scaling, etc.), we develop GeoRobust, a black-box robustness analyser built upon a novel global optimisation strategy, for locating the worst-case combination of transformations that affects and can even alter a network's output. Building on recent advances in Lipschitzian theory, GeoRobust provides provable guarantees on finding the worst-case combination. Owing to its black-box nature, GeoRobust can be deployed on large-scale DNNs regardless of their architecture, activation functions, and number of neurons. In practice, GeoRobust locates the worst-case geometric transformation for the ResNet50 model on ImageNet with high precision in a few seconds on average. We examined 18 ImageNet classifiers, including the ResNet family and vision transformers, and found a positive correlation between a network's geometric robustness and its number of parameters. We also observed that increasing the depth of a DNN improves its geometric robustness more than increasing its width. Our tool GeoRobust is available at
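The core idea behind this style of analysis — deterministic global optimisation over a low-dimensional space of transformation parameters, using Lipschitz-based lower bounds to prune regions that cannot contain the worst case — can be sketched as follows. This is a minimal illustrative sketch, not GeoRobust's actual implementation: the margin function, parameter ranges, and the assumed-known Lipschitz constant are all hypothetical stand-ins for the black-box network query.

```python
import heapq
import itertools

def worst_case_search(f, bounds, lipschitz, budget=500):
    """Branch-and-bound minimisation of a black-box margin function f over a
    box of transformation parameters (e.g. rotation angle, scale factor).
    An assumed Lipschitz constant gives a lower bound on f over each sub-box,
    letting us discard boxes that provably cannot beat the incumbent."""
    centre = tuple((lo + hi) / 2 for lo, hi in bounds)
    half = tuple((hi - lo) / 2 for lo, hi in bounds)
    best_val, best_pt = f(centre), centre
    evals = 1
    tie = itertools.count()  # tie-breaker so heap never compares tuples of floats
    # Heap entry: (lower bound on f over box, tie, box centre, box half-widths)
    heap = [(best_val - lipschitz * sum(half), next(tie), centre, half)]
    while heap and evals < budget:
        lb, _, c, h = heapq.heappop(heap)
        if lb >= best_val:
            break  # no remaining box can improve on the incumbent
        dim = max(range(len(h)), key=lambda i: h[i])  # split widest dimension
        for sign in (-1.0, 1.0):
            nc = list(c)
            nc[dim] = c[dim] + sign * h[dim] / 2
            nc = tuple(nc)
            nh = tuple(hw / 2 if i == dim else hw for i, hw in enumerate(h))
            val = f(nc)
            evals += 1
            if val < best_val:
                best_val, best_pt = val, nc
            heapq.heappush(heap, (val - lipschitz * sum(nh), next(tie), nc, nh))
    return best_val, best_pt

# Toy stand-in for the network's classification margin as a function of
# (rotation in degrees, scale factor); in practice this would query the DNN.
def margin(p):
    theta, s = p
    return (theta - 10.0) ** 2 / 100.0 + (s - 0.8) ** 2

val, pt = worst_case_search(margin, [(-30.0, 30.0), (0.7, 1.3)], lipschitz=2.0)
```

On this toy margin the search converges towards the worst-case parameters near a 10-degree rotation at scale 0.8, where the margin is smallest. The provable-guarantee aspect comes from the lower bounds: when every remaining box has a bound no better than the incumbent, the incumbent is certified globally worst-case up to the assumed Lipschitz constant.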


Related papers:

- REST: Performance Improvement of a Black Box Model via RL-based Spatial Transformation
- Defend Deep Neural Networks Against Adversarial Examples via Fixed and Dynamic Quantized Activation Functions
- Geometric robustness of deep networks: analysis and improvement
- Scalable Quantitative Verification For Deep Neural Networks
- B-cos Alignment for Inherently Interpretable CNNs and Vision Transformers
- Linking average- and worst-case perturbation robustness via class selectivity and dimensionality
- On Extensions of CLEVER: A Neural Network Robustness Evaluation Algorithm
