Robust Pre-Training by Adversarial Contrastive Learning

10/26/2020
by   Ziyu Jiang, et al.
Recent work has shown that, when integrated with adversarial training, self-supervised pre-training can lead to state-of-the-art robustness. In this work, we improve robustness-aware self-supervised pre-training by learning representations that are consistent under both data augmentations and adversarial perturbations. Our approach leverages a recent contrastive learning framework, which learns representations by maximizing feature consistency under differently augmented views. This fits particularly well with the goal of adversarial robustness, as one cause of adversarial fragility is the lack of feature invariance, i.e., small input perturbations can result in undesirably large changes in features or even predicted labels. We explore various options for formulating the contrastive task, and demonstrate that by injecting adversarial perturbations, contrastive pre-training can lead to models that are both label-efficient and robust. We empirically evaluate the proposed Adversarial Contrastive Learning (ACL) and show that it consistently outperforms existing methods. For example, on the CIFAR-10 dataset, ACL outperforms the previous state-of-the-art unsupervised robust pre-training approach by 2.99% on robust accuracy and 2.14% on standard accuracy. We further demonstrate that ACL pre-training can improve semi-supervised adversarial training, even when only a few labeled examples are available. Our code and pre-trained models have been released at: https://github.com/VITA-Group/Adversarial-Contrastive-Learning.
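To make the core idea concrete, here is a minimal NumPy sketch of the mechanism the abstract describes: a contrastive (NT-Xent) loss that pulls two augmented views of each input together, plus an adversarial view generated by perturbing one view in the direction that increases that loss. The toy linear encoder, the random data, and the finite-difference gradient are all illustrative assumptions, not the authors' implementation (ACL uses deep networks and multi-step PGD attacks).

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Toy linear encoder followed by L2 normalization (a stand-in
    for the real network; ACL uses a deep encoder)."""
    z = x @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent contrastive loss over a batch of positive pairs
    (z1[i], z2[i]); all other samples in the batch act as negatives."""
    z = np.concatenate([z1, z2], axis=0)      # (2N, d), unit vectors
    sim = z @ z.T / tau                        # temperature-scaled cosine sims
    np.fill_diagonal(sim, -np.inf)             # exclude self-similarity
    n = z1.shape[0]
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return (-(sim[np.arange(2 * n), pos] - logsumexp)).mean()

# Toy batch: inputs and two randomly "augmented" views.
N, D, H = 8, 16, 4
W = rng.normal(size=(D, H))
x = rng.normal(size=(N, D))
view1 = x + 0.1 * rng.normal(size=x.shape)
view2 = x + 0.1 * rng.normal(size=x.shape)

clean_loss = nt_xent(encode(view1, W), encode(view2, W))

# FGSM-style adversarial view: step along the sign of the loss gradient
# w.r.t. view1 (gradient estimated here by finite differences for brevity).
eps, h = 1e-2, 1e-5
grad = np.zeros_like(view1)
for i in range(N):
    for j in range(D):
        bumped = view1.copy()
        bumped[i, j] += h
        grad[i, j] = (nt_xent(encode(bumped, W), encode(view2, W)) - clean_loss) / h
adv_view = view1 + eps * np.sign(grad)

adv_loss = nt_xent(encode(adv_view, W), encode(view2, W))
print(f"clean loss: {clean_loss:.4f}  adversarial loss: {adv_loss:.4f}")
```

Training on such adversarial views alongside (or instead of) the standard augmented views is what makes the learned features consistent under both augmentations and perturbations, which is the invariance the abstract argues adversarially fragile models lack.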

