Attack and Defense of Dynamic Analysis-Based, Adversarial Neural Malware Classification Models

12/16/2017
by   Jack W. Stokes, et al.

Recently, researchers have proposed using deep learning-based systems for malware detection. Unfortunately, all deep learning classification systems are vulnerable to adversarial attacks. Previous work has studied adversarial attacks against static analysis-based malware classifiers, which classify only the content of the unknown file without executing it. However, since the majority of malware is either packed or encrypted, malware classification based on static analysis often fails to detect these types of files. To overcome this limitation, anti-malware companies typically perform dynamic analysis by emulating each file in the anti-malware engine or performing in-depth scanning in a virtual machine. These strategies allow the malware to be analyzed after unpacking or decryption. In this work, we study different strategies for crafting adversarial samples for dynamic analysis. These strategies operate on sparse, binary inputs, in contrast to continuous inputs such as pixels in images. We then study the effects of two previously proposed defensive mechanisms against crafted adversarial samples: the distillation and ensemble defenses. We also propose and evaluate the weight decay defense. Experiments show that with these three defensive strategies, the number of successfully crafted adversarial samples is reduced compared to a standard baseline system without any defenses. In particular, the ensemble defense is the most resilient to adversarial attacks. Importantly, none of the defenses significantly reduces the classification accuracy for detecting malware. Finally, we demonstrate that while adding additional hidden layers to neural models does not significantly improve the malware classification accuracy, it does significantly increase the classifier's robustness to adversarial attacks.
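The abstract does not spell out the crafting algorithm, but the key constraint it names, sparse binary inputs rather than continuous pixels, can be illustrated with a minimal sketch. The example below is not the authors' method: it is a greedy bit-flip attack against a hypothetical logistic-regression surrogate, where the attacker may only turn features on (e.g., add extra API calls during execution), never off.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def craft_adversarial(x, w, b, max_flips=10):
    """Greedy bit-flip attack on a sparse, binary feature vector.

    Assumes a logistic-regression surrogate f(x) = sigmoid(w.x + b)
    whose output is the malware score. Each step flips the 0-bit
    whose activation most reduces that score. Only 0 -> 1 flips are
    allowed, mirroring the constraint that an attacker can add
    behaviors but cannot easily remove the malicious ones.
    """
    x_adv = x.copy()
    for _ in range(max_flips):
        p = sigmoid(w @ x_adv + b)
        grad = p * (1.0 - p) * w  # gradient of the score w.r.t. each bit
        # candidate bits: currently 0, and flipping them lowers the score
        candidates = np.where((x_adv == 0) & (grad < 0))[0]
        if candidates.size == 0:
            break
        best = candidates[np.argmin(grad[candidates])]
        x_adv[best] = 1
        if sigmoid(w @ x_adv + b) < 0.5:  # crossed the decision boundary
            break
    return x_adv
```

Against a deep network the same greedy loop would rank bits by the network's input gradient instead of the closed-form logistic gradient, but the binary, add-only constraint stays the same.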

