Collaborative Adversarial Training

05/23/2022
by Qizhang Li, et al.

The vulnerability of deep neural networks (DNNs) to adversarial examples has attracted great attention in the machine learning community. The problem is related to the local non-smoothness and steepness of normally obtained loss landscapes. Training augmented with adversarial examples (a.k.a. adversarial training) is considered an effective remedy. In this paper, we highlight that some collaborative examples, which are nearly perceptually indistinguishable from both adversarial and benign examples yet show far lower prediction loss, can be utilized to enhance adversarial training. A novel method called collaborative adversarial training (CoAT) is thus proposed to achieve new state-of-the-art results.
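A collaborative example, as described above, is the mirror image of an adversarial one: instead of raising the prediction loss within a small perturbation budget, it lowers it. The abstract does not give CoAT's exact procedure, so the following is only a minimal sketch of how such an example could be searched for, by running signed gradient *descent* on the loss and projecting back into an L-infinity ball (a PGD attack with the sign flipped). The toy logistic-regression "model", step size `alpha`, and budget `eps` are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def collaborative_example(x, y, loss_grad, eps=8/255, alpha=2/255, steps=10):
    """Search the eps-ball around x for a point with *lower* loss.

    This is PGD with the step direction reversed: descend the loss
    instead of ascending it, then project back into the L-inf ball
    and the valid pixel range [0, 1].
    """
    x_col = x.copy()
    for _ in range(steps):
        g = loss_grad(x_col, y)
        x_col = x_col - alpha * np.sign(g)        # descend the loss
        x_col = np.clip(x_col, x - eps, x + eps)  # stay in the eps-ball
        x_col = np.clip(x_col, 0.0, 1.0)          # stay a valid image
    return x_col

# Hypothetical stand-in for a DNN: a fixed logistic-regression classifier.
rng = np.random.default_rng(0)
w = rng.normal(size=16)
x = rng.uniform(0.2, 0.8, size=16)  # toy "image" with pixels in [0, 1]
y = 1.0                             # label in {-1, +1}

def loss(x, y):
    return np.log1p(np.exp(-y * (w @ x)))  # logistic loss

def loss_grad(x, y):
    return -y * w / (1.0 + np.exp(y * (w @ x)))  # gradient w.r.t. the input

x_col = collaborative_example(x, y, loss_grad)
print(loss(x, y), "->", loss(x_col, y))  # loss drops inside the eps-ball
```

The resulting `x_col` differs from `x` by at most `eps` per pixel yet has a strictly lower loss, which is the property the paper exploits during training.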


