Adversarial Attacks on Transformers-Based Malware Detectors

10/01/2022
by Yash Jakhotiya, et al.

Signature-based malware detectors have proven insufficient, as even a small change in malicious executable code can bypass them. Many machine learning-based models have been proposed to detect a wide variety of malware efficiently, but many of these models are susceptible to adversarial attacks: intentionally crafted inputs that force a model to misclassify. Our work explores vulnerabilities of current state-of-the-art malware detectors to adversarial attacks. We train a Transformers-based malware detector and carry out adversarial attacks that result in a misclassification rate of 23.9%. An implementation of our work can be found at https://github.com/yashjakhotiya/Adversarial-Attacks-On-Transformers.
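The abstract does not reproduce the attack details, but the gradient-based evasion idea it refers to can be sketched in a few lines. The toy example below is a hypothetical stand-in (a logistic "malware" classifier over continuous features, not the paper's Transformer) and applies the fast gradient sign method (FGSM) to flip the model's decision:

```python
import numpy as np

# Hypothetical stand-in for a learned detector: a logistic classifier
# over a continuous feature vector (NOT the paper's Transformer model).
def predict_proba(w, b, x):
    """P(malware | x) for a logistic model with weights w and bias b."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm_perturb(w, b, x, y, eps):
    """One FGSM step: move x along the sign of the loss gradient."""
    p = predict_proba(w, b, x)
    grad_x = (p - y) * w  # d(binary cross-entropy)/dx for a logistic model
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0, 0.5, 3.0, -1.0, 2.0, -0.5, 1.5])
b = 0.0
x = 0.5 * w                     # a sample the model confidently flags as malware
x_adv = fgsm_perturb(w, b, x, y=1.0, eps=1.0)

print(predict_proba(w, b, x) > 0.5)      # True  (detected)
print(predict_proba(w, b, x_adv) > 0.5)  # False (evaded)
```

Note that real attacks on malware detectors cannot freely perturb every byte, since the adversarial executable must remain functional; practical attacks therefore restrict perturbations to regions such as appended or padded bytes, which this continuous sketch does not model.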

Related research

09/24/2020

Torchattacks: A PyTorch Repository for Adversarial Attacks

Torchattacks is a PyTorch library that contains adversarial attacks to g...

11/28/2021

MALIGN: Adversarially Robust Malware Family Detection using Sequence Alignment

We propose MALIGN, a novel malware family detection approach inspired by...

03/11/2021

Stochastic-HMDs: Adversarial Resilient Hardware Malware Detectors through Voltage Over-scaling

Machine learning-based hardware malware detectors (HMDs) offer a potenti...

01/31/2023

Inference Time Evidences of Adversarial Attacks for Forensic on Transformers

Vision Transformers (ViTs) are becoming a very popular paradigm for visi...

10/07/2020

Fortifying Toxic Speech Detectors Against Veiled Toxicity

Modern toxic speech detectors are incompetent in recognizing disguised o...

08/17/2023

Towards a Practical Defense against Adversarial Attacks on Deep Learning-based Malware Detectors via Randomized Smoothing

Malware detectors based on deep learning (DL) have been shown to be susc...

02/07/2020

Can't Boil This Frog: Robustness of Online-Trained Autoencoder-Based Anomaly Detectors to Adversarial Poisoning Attacks

In recent years, a variety of effective neural network-based methods for...
