Adversarial Training with Complementary Labels: On the Benefit of Gradually Informative Attacks

by Jianan Zhou, et al.
The University of Sydney
Hong Kong Baptist University
The University of Tokyo

Adversarial training (AT) with imperfect supervision is significant but has received limited attention. To push AT towards more practical scenarios, we explore a brand-new yet challenging setting, i.e., AT with complementary labels (CLs), where each label specifies a class that a data sample does not belong to. However, directly combining AT with existing methods for CLs fails consistently, whereas a simple two-stage training baseline does not. In this paper, we investigate this phenomenon further and identify the underlying challenges of AT with CLs as intractable adversarial optimization and low-quality adversarial examples. To address these problems, we propose a new learning strategy using gradually informative attacks, which consists of two critical components: 1) Warm-up Attack (Warm-up) gently raises the adversarial perturbation budgets to ease the adversarial optimization with CLs; 2) Pseudo-Label Attack (PLA) incorporates the progressively informative model predictions into a corrected complementary loss. Extensive experiments demonstrate the effectiveness of our method on a range of benchmark datasets. The code is publicly available at:
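The two components described above can be illustrated with a minimal sketch. This is a hypothetical rendering, not the authors' implementation: the linear budget ramp, the complementary-loss surrogate, and the pseudo-label weighting scheme (`alpha`) are all illustrative assumptions; the paper's exact schedules and loss correction may differ.

```python
import math
import numpy as np

def warmup_eps(step, warmup_steps, eps_max):
    """Warm-up Attack (sketch): linearly raise the perturbation
    budget from 0 to eps_max, then hold it fixed."""
    return eps_max * min(step, warmup_steps) / warmup_steps

def pla_loss(probs, comp_label, alpha):
    """Pseudo-Label Attack (sketch): blend a complementary-label
    loss with a pseudo-label cross-entropy term.

    probs      -- model's predicted class probabilities (1-D array)
    comp_label -- index of the class the sample does NOT belong to
    alpha      -- weight on the pseudo-label term; assumed to grow
                  over training as predictions become informative
    """
    # Complementary term: push down the probability of the
    # forbidden class (a common CL surrogate loss).
    comp = -math.log(1.0 - probs[comp_label] + 1e-12)
    # Pseudo-label term: cross-entropy against the model's most
    # confident prediction among the remaining classes.
    candidates = [i for i in range(len(probs)) if i != comp_label]
    pseudo = max(candidates, key=lambda i: probs[i])
    ce = -math.log(probs[pseudo] + 1e-12)
    return (1.0 - alpha) * comp + alpha * ce
```

With `alpha = 0` the loss reduces to the pure complementary term (early training, uninformative predictions); as `alpha` approaches 1, the loss is dominated by the model's own pseudo-labels.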




