Single-Shot Black-Box Adversarial Attacks Against Malware Detectors: A Causal Language Model Approach

12/03/2021
by James Lee Hu, et al.

Deep Learning (DL)-based malware detectors are increasingly adopted for early detection of malicious behavior in cybersecurity. However, their sensitivity to adversarial malware variants has raised immense security concerns. Generating such adversarial variants on the defender's side is crucial to improving the resistance of DL-based malware detectors against them. This necessity has given rise to an emerging stream of machine learning research, Adversarial Malware example Generation (AMG), which aims to generate evasive adversarial malware variants that preserve the malicious functionality of a given malware. Within AMG research, black-box methods have gained more attention than white-box methods. However, most black-box AMG methods require numerous interactions with the malware detector to generate adversarial malware examples. Given that most malware detectors enforce a query limit, this can yield unrealistic adversarial examples that are likely to be detected in practice due to their lack of stealth. In this study, we show that a novel DL-based causal language model enables single-shot evasion (i.e., with only one query to the malware detector) by treating the content of the malware executable as a byte sequence and training a Generative Pre-trained Transformer (GPT) on it. Our proposed method, MalGPT, significantly outperformed the leading benchmark methods on a real-world malware dataset obtained from VirusTotal, achieving an evasion rate of over 24.51%. MalGPT enables cybersecurity researchers to develop advanced defense capabilities by emulating large-scale, realistic AMG.
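To make the single-shot setting concrete, the sketch below shows how a byte-level causal language model (a small GPT) could generate a benign-looking byte payload that is appended to a malware binary, after which the black-box detector is queried exactly once. The model configuration, the append-based perturbation, and the detector interface are assumptions chosen for illustration; the abstract does not specify MalGPT's actual architecture or perturbation strategy.

```python
# Illustrative sketch of single-shot, append-based adversarial malware
# generation with a byte-level causal language model (GPT-style).
# Assumptions: a GPT trained on benign executable bytes, an append-only
# perturbation, and a hypothetical black-box `detector` callable.

import torch
from transformers import GPT2Config, GPT2LMHeadModel

VOCAB_SIZE = 256          # one token per byte value
MAX_APPEND_BYTES = 1024   # length of the generated payload

# A small GPT over byte tokens (training on benign executables omitted;
# only the generation step is shown here).
config = GPT2Config(vocab_size=VOCAB_SIZE, n_positions=2048,
                    n_embd=256, n_layer=4, n_head=4)
model = GPT2LMHeadModel(config).eval()

def generate_adversarial_variant(malware_bytes: bytes) -> bytes:
    """Append model-generated bytes to the malware; appended bytes are
    never executed, so malicious functionality is preserved."""
    # Seed generation with the tail of the malware byte sequence.
    seed = list(malware_bytes[-128:])
    input_ids = torch.tensor([seed], dtype=torch.long)
    with torch.no_grad():
        output = model.generate(
            input_ids,
            max_new_tokens=MAX_APPEND_BYTES,
            do_sample=True,
            top_k=50,
            pad_token_id=0,
        )
    appended = bytes(output[0, len(seed):].tolist())
    return malware_bytes + appended

def single_shot_attack(malware_bytes: bytes, detector) -> bool:
    """Query the black-box detector exactly once with the variant."""
    variant = generate_adversarial_variant(malware_bytes)
    return detector(variant)  # hypothetical scoring function
```

Because the language model is trained offline on benign byte sequences, all of the attack's "knowledge" is baked into the generator, so only a single detector query is needed to test the variant, in contrast to iterative black-box methods that probe the detector many times.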

Related research

12/14/2020 · Binary Black-box Evasion Attacks Against Deep Learning-based Static Malware Detectors with Adversarial Byte-Level Language Model
Anti-malware engines are the first line of defense against malicious sof...

11/03/2020 · MalFox: Camouflaged Adversarial Malware Example Generation Based on C-GANs Against Black-Box Detectors
Deep learning is a thriving field currently stuffed with many practical ...

06/16/2023 · Query-Free Evasion Attacks Against Machine Learning-Based Malware Detectors with Generative Adversarial Networks
Malware detectors based on machine learning (ML) have been shown to be s...

04/14/2023 · Combining Generators of Adversarial Malware Examples to Increase Evasion Rate
Antivirus developers are increasingly embracing machine learning as a ke...

03/11/2021 · Stochastic-HMDs: Adversarial Resilient Hardware Malware Detectors through Voltage Over-scaling
Machine learning-based hardware malware detectors (HMDs) offer a potenti...

10/07/2021 · EvadeDroid: A Practical Evasion Attack on Machine Learning for Black-box Android Malware Detection
Over the last decade, several studies have investigated the weaknesses o...

10/25/2022 · Multi-view Representation Learning from Malware to Defend Against Adversarial Variants
Deep learning-based adversarial malware detectors have yielded promising...
