Applying Tensor Decomposition to image for Robustness against Adversarial Attack

02/28/2020
by   Seungju Cho, et al.

Deep learning technology is advancing rapidly and shows dramatic performance in computer vision. However, it turns out that deep learning models are highly vulnerable to small perturbations known as adversarial attacks: adding a small, carefully crafted perturbation to an input can easily fool the model. Meanwhile, tensor decomposition is widely used to compress tensor data, including data matrices and images. In this paper, we propose combining tensor decomposition with the model as a defense against adversarial examples. We verify that this idea is simple and effective at resisting adversarial attacks, and that it rarely degrades performance on clean data. We experiment on MNIST, CIFAR10, and ImageNet, and show that our method is robust against state-of-the-art attack methods.
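The defense described above can be illustrated with a minimal sketch. The paper's exact decomposition is not given in the abstract, so this example assumes truncated SVD (a simple low-rank tensor decomposition applied per channel) as a preprocessing step; the function name `lowrank_denoise` and the `rank` parameter are hypothetical, for illustration only:

```python
import numpy as np

def lowrank_denoise(image, rank=20):
    """Low-rank reconstruction of an image via truncated SVD.

    Keeping only the top `rank` singular components preserves the
    dominant image structure while discarding much of a small
    adversarial perturbation. `image` is an (H, W) or (H, W, C)
    float array; `rank` is a tuning parameter (hypothetical here).
    """
    def _channel(ch):
        # Per-channel truncated SVD reconstruction.
        U, s, Vt = np.linalg.svd(ch, full_matrices=False)
        k = min(rank, len(s))
        return (U[:, :k] * s[:k]) @ Vt[:k, :]

    if image.ndim == 2:
        return _channel(image)
    # Apply the decomposition independently to each color channel.
    return np.stack(
        [_channel(image[..., c]) for c in range(image.shape[-1])],
        axis=-1,
    )
```

In use, the reconstructed image would be fed to the classifier in place of the (possibly adversarial) input, e.g. `model(lowrank_denoise(x))`; the rank trades off perturbation removal against loss of clean-image detail.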


Related research

- Adv-watermark: A Novel Watermark Perturbation for Adversarial Examples (08/05/2020)
- Adversarial Robustness in Deep Learning: Attacks on Fragile Neurons (01/31/2022)
- DAPAS: Denoising Autoencoder to Prevent Adversarial Attack in Semantic Segmentation (08/14/2019)
- Double Backpropagation for Training Autoencoders against Adversarial Attack (03/04/2020)
- Towards Deep Learning Models Resistant to Large Perturbations (03/30/2020)
- Inter-frame Accelerate Attack against Video Interpolation Models (05/11/2023)
- Associative Adversarial Learning Based on Selective Attack (12/28/2021)
