FGAM: Fast Adversarial Malware Generation Method Based on Gradient Sign

05/22/2023
by Kun Li, et al.

Deep-learning-based malware detection models are widely deployed, but recent research shows that deep learning models are vulnerable to adversarial attacks, which deceive a model by generating adversarial samples. When attacking a malware detection model, the attacker generates adversarial malware that preserves the original malicious functionality while causing the detection model to classify it as benign. Studying adversarial malware generation helps model designers improve the robustness of malware detection models. Existing work on adversarial malware generation for byte-to-image malware detection models suffers mainly from large injected perturbations and low generation efficiency. This paper therefore proposes FGAM (Fast Generate Adversarial Malware), a method for quickly generating adversarial malware: it iteratively updates perturbation bytes according to the gradient sign, strengthening their adversarial effect until the adversarial malware is successfully generated. Experiments show that the success rate with which FGAM-generated adversarial malware deceives the detection model is about 84% higher than that of existing methods.
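The core idea, iterating injected perturbation bytes along the gradient sign until the detector is fooled, can be sketched in a few lines. The sketch below is illustrative, not the paper's implementation: it assumes a toy differentiable linear detector over byte features, appends a block of padding bytes (so the original malicious code is untouched), and nudges those bytes by the sign of the gradient of the malicious score; the function name `fgam_sketch` and all parameters are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgam_sketch(x, w, b, n_pad=16, alpha=8.0, max_iter=50):
    """Append n_pad perturbation bytes to the malware bytes x and update
    them along the gradient sign until the (toy linear) detector scores
    the sample as benign (< 0.5). Functionality is preserved because
    only the appended padding bytes are ever modified."""
    pad = np.zeros(n_pad)                        # injected perturbation bytes
    for _ in range(max_iter):
        z = np.dot(np.concatenate([x, pad]), w) + b
        if sigmoid(z) < 0.5:                     # detector fooled: benign
            return pad, True
        # gradient of the malicious score w.r.t. the padding bytes is
        # sigmoid'(z) * w_pad; only its sign is used for the update
        grad_pad = sigmoid(z) * (1.0 - sigmoid(z)) * w[len(x):]
        # step against the gradient, keep bytes in the valid [0, 255] range
        pad = np.clip(pad - alpha * np.sign(grad_pad), 0, 255)
    z = np.dot(np.concatenate([x, pad]), w) + b
    return pad, sigmoid(z) < 0.5
```

Using the sign rather than the raw gradient gives a fixed-size step per byte per iteration, which is what makes the generation fast: a few large, uniform updates usually suffice, instead of many small gradient-magnitude steps.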

Related research

- 07/11/2023  ATWM: Defense against adversarial malware based on adversarial training
  Deep learning technology has made great achievements in the field of ima...
- 11/09/2019  Protecting from Malware Obfuscation Attacks through Adversarial Risk Analysis
  Malware constitutes a major global risk affecting millions of users each...
- 01/20/2022  RoboMal: Malware Detection for Robot Network Systems
  Robot systems are increasingly integrating into numerous avenues of mode...
- 01/27/2021  Robust Android Malware Detection System against Adversarial Attacks using Q-Learning
  The current state-of-the-art Android malware detection systems are based...
- 09/07/2023  Adversarially Robust Deep Learning with Optimal-Transport-Regularized Divergences
  We introduce the ARMOR_D methods as novel approaches to enhancing the ad...
- 09/20/2019  COPYCAT: Practical Adversarial Attacks on Visualization-Based Malware Detection
  Despite many attempts, the state-of-the-art of adversarial machine learn...
- 08/31/2023  The Power of MEME: Adversarial Malware Creation with Model-Based Reinforcement Learning
  Due to the proliferation of malware, defenders are increasingly turning ...
