KATANA: Simple Post-Training Robustness Using Test Time Augmentations

09/16/2021
by   Gilad Cohen, et al.

Although Deep Neural Networks (DNNs) achieve excellent performance on many real-world tasks, they are highly vulnerable to adversarial attacks. A leading defense against such attacks is adversarial training, a technique in which a DNN is trained to be robust to adversarial attacks by introducing adversarial noise into its input. This procedure is effective but must be done during the training phase. In this work, we propose a new simple and easy-to-use technique, KATANA, for robustifying an existing pretrained DNN without modifying its weights. For every image, we generate N randomized Test Time Augmentations (TTAs) by applying diverse color, blur, noise, and geometric transforms. Next, we use the DNN's logits on these augmented views to train a simple random forest classifier that predicts the true class label. Our strategy achieves state-of-the-art adversarial robustness against diverse attacks with minimal compromise on natural-image classification accuracy. We also test KATANA against two adaptive white-box attacks, where it shows excellent results when combined with adversarial training. Code is available at https://github.com/giladcohen/KATANA.
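The pipeline described above can be sketched end to end: randomized TTAs of each image are fed through a frozen model, the resulting logits are concatenated into a feature vector, and a random forest is trained on those features. The toy "pretrained model", the specific augmentations, and all parameters below are illustrative assumptions for a minimal sketch, not the paper's actual setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
NUM_CLASSES = 3
N_TTA = 8  # number of randomized test-time augmentations per image

def pretrained_model(images):
    """Stand-in for a frozen pretrained DNN that outputs logits.

    Hypothetical toy model: each class logit is the mean intensity of a
    horizontal band of the (B, 16, 16) image batch."""
    return np.stack([images[:, :5].mean(axis=(1, 2)),
                     images[:, 5:11].mean(axis=(1, 2)),
                     images[:, 11:].mean(axis=(1, 2))], axis=1)

def augment(image):
    """One randomized TTA: additive noise, brightness scaling, random flip."""
    out = image + rng.normal(0.0, 0.05, image.shape)  # noise transform
    out = out * rng.uniform(0.8, 1.2)                 # color/brightness transform
    if rng.random() < 0.5:
        out = out[:, ::-1]                            # geometric transform (flip)
    return out

def tta_features(images):
    """Concatenate the frozen model's logits over N_TTA views of each image."""
    feats = []
    for img in images:
        views = np.stack([augment(img) for _ in range(N_TTA)])
        feats.append(pretrained_model(views).ravel())  # (N_TTA * NUM_CLASSES,)
    return np.array(feats)

def make_data(n):
    """Synthetic images whose class brightens one horizontal band."""
    labels = rng.integers(0, NUM_CLASSES, n)
    images = rng.normal(0.0, 0.1, (n, 16, 16))
    bands = [slice(0, 5), slice(5, 11), slice(11, 16)]
    for i, y in enumerate(labels):
        images[i, bands[y]] += 1.0
    return images, labels

# Train the random forest on TTA-logit features; the DNN's weights stay fixed.
X_train, y_train = make_data(200)
forest = RandomForestClassifier(n_estimators=50, random_state=0)
forest.fit(tta_features(X_train), y_train)

X_test, y_test = make_data(50)
acc = forest.score(tta_features(X_test), y_test)
```

Because only the lightweight forest is trained, this defense can be bolted onto any existing classifier post hoc; the randomness of the augmentations is what makes the combined system harder to attack with a single fixed perturbation.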

Related research:

- RobArch: Designing Robust Architectures against Adversarial Attacks (01/08/2023). "Adversarial Training is the most effective approach for improving the ro…"
- A Stochastic Neural Network for Attack-Agnostic Adversarial Robustness (10/17/2020). "Stochastic Neural Networks (SNNs) that inject noise into their hidden la…"
- Adversarial Framing for Image and Video Classification (12/11/2018). "Neural networks are prone to adversarial attacks. In general, such attac…"
- FDA: Feature Disruptive Attack (09/10/2019). "Though Deep Neural Networks (DNN) show excellent performance across vari…"
- Graph Interpolating Activation Improves Both Natural and Robust Accuracies in Data-Efficient Deep Learning (07/16/2019). "Improving the accuracy and robustness of deep neural nets (DNNs) and ada…"
- DetectX – Adversarial Input Detection using Current Signatures in Memristive XBar Arrays (06/22/2021). "Adversarial input detection has emerged as a prominent technique to hard…"
- DAD++: Improved Data-free Test Time Adversarial Defense (09/10/2023). "With the increasing deployment of deep neural networks in safety-critica…"
