A survey on practical adversarial examples for malware classifiers

11/06/2020
by   Daniel Park, et al.

Machine learning-based solutions have been very helpful in solving problems that involve immense amounts of data, such as malware detection and classification. However, deep neural networks have been found to be vulnerable to adversarial examples: inputs that have been purposefully perturbed to produce an incorrect label. Researchers have shown that this vulnerability can be exploited to create evasive malware samples. However, many proposed attacks do not generate an executable and instead generate a feature vector. To fully understand the impact of adversarial examples on malware detection, we review practical attacks against malware classifiers that generate executable adversarial malware examples. We also discuss current challenges in this area of research, as well as suggestions for improvement and future research directions.
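The distinction the abstract draws, between perturbing a feature vector and producing a working executable, can be illustrated with a minimal feature-space sketch. The following is not from the paper; it assumes a toy logistic "malware classifier" over a hypothetical 16-dimensional feature vector and applies a fast-gradient-sign-style step, which perturbs features directly and therefore does not yield a runnable binary:

```python
import numpy as np

# Hypothetical toy setup: a linear (logistic) "malware classifier".
# The attack below operates purely in feature space -- exactly the kind
# of attack the survey distinguishes from practical, executable ones.

rng = np.random.default_rng(0)
w = rng.normal(size=16)      # toy classifier weights
x = rng.normal(size=16)      # feature vector of a "malware" sample
label = 1.0                  # 1 = malicious

def score(v):
    """Sigmoid probability that v is malicious."""
    return 1.0 / (1.0 + np.exp(-w @ v))

# Gradient of the log-loss w.r.t. the input for a logistic model:
# dL/dx = (p - y) * w
grad = (score(x) - label) * w

# FGSM-style step: move the input in the direction that increases the loss,
# pushing the sample toward a "benign" prediction.
eps = 0.5
x_adv = x + eps * np.sign(grad)

print("original score:   ", score(x))
print("adversarial score:", score(x_adv))
```

The perturbed vector `x_adv` lowers the classifier's malicious score, but it is only a feature vector; mapping it back to a valid, still-functional executable is the open problem the surveyed practical attacks address.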

Related research

Short Paper: Creating Adversarial Malware Examples using Code Insertion (04/09/2019)
There has been an increased interest in the application of convolutional...

Adversarial Patterns: Building Robust Android Malware Classifiers (03/04/2022)
Deep learning-based classifiers have substantially improved recognition ...

Adversarial Detection of Flash Malware: Limitations and Open Issues (10/27/2017)
During the past two years, Flash malware has become one of the most insi...

Practical Attacks on Machine Learning: A Case Study on Adversarial Windows Malware (07/12/2022)
While machine learning is vulnerable to adversarial examples, it still l...

Mal2GCN: A Robust Malware Detection Approach Using Deep Graph Convolutional Networks With Non-Negative Weights (08/27/2021)
With the growing pace of using machine learning to solve various problem...

Adversarial Perturbations Against Deep Neural Networks for Malware Classification (06/14/2016)
Deep neural networks, like many other machine learning models, have rece...

Improving Robustness of Malware Classifiers using Adversarial Strings Generated from Perturbed Latent Representations (10/22/2021)
In malware behavioral analysis, the list of accessed and created files v...
