ImageNet-E: Benchmarking Neural Network Robustness via Attribute Editing

03/30/2023
by Xiaodan Li, et al.

Recent studies have shown that higher accuracy on ImageNet usually leads to better robustness against common corruptions. In this paper, instead of following the traditional research paradigm of investigating new out-of-distribution corruptions or perturbations that deep models may encounter, we perform model debugging on in-distribution data to explore which object attributes a model is sensitive to. To this end, we build a toolkit for object editing with control over backgrounds, sizes, positions, and directions, and construct a rigorous benchmark named ImageNet-E(diting) for evaluating image classifier robustness to changes in object attributes. With ImageNet-E, we evaluate current deep learning models, including both convolutional neural networks and vision transformers. We find that most models are quite sensitive to attribute changes: a small change in the background alone causes an average drop of 9.23% in top-1 accuracy. We also evaluate robust models, including adversarially trained models and other robustly trained models, and find that some of them are even less robust to attribute changes than vanilla models. Based on these findings, we identify ways to enhance attribute robustness through preprocessing, architecture design, and training strategies. We hope this work provides insights to the community and opens up a new avenue for research in robust computer vision. The code and dataset are available at https://github.com/alibaba/easyrobust.
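To make the evaluation protocol concrete, below is a minimal sketch (not the authors' official easyrobust code) of how one might measure the top-1 accuracy drop between original images and an attribute-edited variant such as background changes. The directory names "imagenet_e/original" and "imagenet_e/background_edited" are placeholders, and the class folders are assumed to be named by ImageNet wnids for all 1000 classes so that ImageFolder's label indices line up with the model's output indices.

# Hypothetical evaluation sketch: top-1 accuracy drop under attribute edits.
import torch
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing for a pretrained torchvision classifier.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def top1_accuracy(model, root, device="cuda"):
    """Top-1 accuracy of `model` on an ImageFolder-style dataset rooted at `root`."""
    dataset = datasets.ImageFolder(root, transform=preprocess)
    loader = torch.utils.data.DataLoader(dataset, batch_size=64, num_workers=4)
    model.eval().to(device)
    correct = total = 0
    for images, labels in loader:
        preds = model(images.to(device)).argmax(dim=1).cpu()
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
acc_orig = top1_accuracy(model, "imagenet_e/original")           # unedited images
acc_edit = top1_accuracy(model, "imagenet_e/background_edited")  # background-changed images
print(f"top-1 drop under background edits: {100 * (acc_orig - acc_edit):.2f} points")

The same comparison can be repeated for each attribute (size, position, direction) to obtain a per-attribute robustness profile for a given model.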


