Improving robustness against common corruptions by covariate shift adaptation

06/30/2020
by Steffen Schneider, et al.

Today's state-of-the-art machine vision models are vulnerable to image corruptions like blurring or compression artefacts, limiting their performance in many real-world applications. We here argue that popular benchmarks to measure model robustness against common corruptions (like ImageNet-C) underestimate model robustness in many (but not all) application scenarios. The key insight is that in many scenarios, multiple unlabeled examples of the corruptions are available and can be used for unsupervised online adaptation. Replacing the activation statistics estimated by batch normalization on the training set with the statistics of the corrupted images consistently improves the robustness across 25 different popular computer vision models. Using the corrected statistics, ResNet-50 reaches 62.2% mCE on ImageNet-C compared to 76.7% without adaptation, and adaptation improves on the previous state of the art of 56.5% mCE. Even adapting to a single sample improves robustness for the ResNet-50 and AugMix models, and 32 samples are sufficient to improve the current state of the art for a ResNet-50 architecture. We argue that results with adapted statistics should be included whenever reporting scores in corruption benchmarks and other out-of-distribution generalization settings.
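The core idea translates into a few lines of code. Below is a minimal PyTorch sketch of the full-replacement variant: run the network in train mode so that its BatchNorm layers re-estimate their running statistics from unlabeled corrupted batches, then evaluate with the adapted statistics. The helper `adapt_bn_statistics` and the `corrupted_loader` are hypothetical names used for illustration, not the authors' reference implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

def adapt_bn_statistics(model, loader, device="cpu"):
    """Re-estimate BatchNorm running statistics from unlabeled batches.

    Forward passes in train mode update running_mean/running_var;
    no labels and no gradient updates are needed.
    """
    model.to(device).train()  # BN layers update their statistics in train mode
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            m.reset_running_stats()  # discard the training-set statistics
            m.momentum = None        # cumulative average over all adaptation batches
    with torch.no_grad():
        for images, _ in loader:     # labels are ignored
            model(images.to(device))
    return model.eval()              # inference now uses the adapted statistics

model = resnet50(weights="IMAGENET1K_V1")
# `corrupted_loader` (hypothetical) would yield batches of corrupted,
# unlabeled images, e.g. drawn from ImageNet-C:
# model = adapt_bn_statistics(model, corrupted_loader)
```

Note that this sketch replaces the training statistics entirely, which is appropriate when many corrupted samples are available; for very small sample sizes the paper instead interpolates between the training-set and test-time statistics, which is what makes adaptation useful even for a single sample.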


Related research

10/07/2020
Revisiting Batch Normalization for Improving Corruption Robustness
Modern deep neural networks (DNN) have demonstrated remarkable success i...

03/21/2022
Delving into the Estimation Shift of Batch Normalization in a Network
Batch normalization (BN) is a milestone technique in deep learning. It n...

10/06/2021
Test-time Batch Statistics Calibration for Covariate Shift
Deep neural networks have a clear degradation when applying to the unsee...

06/19/2020
Evaluating Prediction-Time Batch Normalization for Robustness under Covariate Shift
Covariate shift has been shown to sharply degrade both predictive accura...

02/09/2021
Adversarially Robust Classifier with Covariate Shift Adaptation
Existing adversarially trained models typically perform inference on tes...

03/30/2021
Improving robustness against common corruptions with frequency biased models
CNNs perform remarkably well when the training and test distributions ar...

04/06/2022
Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations
Neural network classifiers can largely rely on simple spurious features,...
