Adversarial Robustness under Long-Tailed Distribution

by Tong Wu, et al.

Adversarial robustness has attracted extensive study recently, revealing the vulnerability and intrinsic characteristics of deep networks. However, existing work on adversarial robustness mainly focuses on balanced datasets, while real-world data usually exhibits a long-tailed distribution. To push adversarial robustness towards more realistic scenarios, in this work we investigate adversarial vulnerability as well as defense under long-tailed distributions. In particular, we first reveal the negative impacts induced by imbalanced data on both recognition performance and adversarial robustness, uncovering the intrinsic challenges of this problem. We then perform a systematic study of existing long-tailed recognition methods in conjunction with the adversarial training framework. Several valuable observations emerge: 1) natural accuracy is relatively easy to improve; 2) a fake gain in robust accuracy appears under unreliable evaluation; and 3) boundary error limits further improvement of robustness. Inspired by these observations, we propose a clean yet effective framework, RoBal, which consists of two dedicated modules: a scale-invariant classifier, and data re-balancing via both margin engineering at the training stage and boundary adjustment during inference. Extensive experiments demonstrate the superiority of our approach over other state-of-the-art defense methods. To the best of our knowledge, we are the first to tackle adversarial robustness under long-tailed distributions, which we believe is a significant step towards real-world robustness. Our code is available at: .
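The two modules named in the abstract can be illustrated with a minimal numpy sketch. This is not the paper's exact formulation: the function names, the scale constant, and the prior-based logit adjustment used for boundary adjustment are assumptions for illustration. A scale-invariant classifier computes logits from the cosine of the angle between feature and class weight, so multiplying the feature by a constant does not change predictions; inference-time boundary adjustment then shifts decision boundaries toward tail classes by subtracting scaled log class priors.

```python
import numpy as np

def cosine_logits(features, weights, scale=16.0):
    """Scale-invariant classifier: normalize both features and class
    weights, so logits depend only on the angle between them, not on
    their norms. `scale` is an illustrative temperature constant."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    return scale * f @ w.T  # shape: (batch, num_classes)

def boundary_adjusted_predict(features, weights, class_counts, tau=1.0):
    """Inference-time boundary adjustment (sketched here as logit
    adjustment): subtract tau * log(class prior) from each logit, which
    moves decision boundaries away from head classes and toward tail
    classes."""
    logits = cosine_logits(features, weights)
    prior = class_counts / class_counts.sum()
    return np.argmax(logits - tau * np.log(prior), axis=1)
```

For example, with a 9:1 head/tail class imbalance, a sample lying slightly on the head-class side of the unadjusted boundary is reassigned to the tail class after adjustment, while scaling the input features leaves the cosine logits unchanged.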



Adversarial Training Over Long-Tailed Distribution

In this paper, we study adversarial training on datasets that obey the l...

Robust Long-Tailed Learning under Label Noise

Long-tailed learning has attracted much attention recently, with the goa...

Inverse Image Frequency for Long-tailed Image Recognition

The long-tailed distribution is a common phenomenon in the real world. E...

Balanced Product of Experts for Long-Tailed Recognition

Many real-world recognition problems suffer from an imbalanced or long-t...

Learning Imbalanced Data with Vision Transformers

The real-world data tends to be heavily imbalanced and severely skew the...

HeroLT: Benchmarking Heterogeneous Long-Tailed Learning

Long-tailed data distributions are prevalent in a variety of domains, in...

Identifying Hard Noise in Long-Tailed Sample Distribution

Conventional de-noising methods rely on the assumption that all samples ...
