Adversarial Attacks for Tabular Data: Application to Fraud Detection and Imbalanced Data

01/20/2021
by Francesco Cartella et al.

Guaranteeing the security of transactional systems is a crucial priority for all institutions that process transactions, in order to protect their businesses against cyberattacks and fraud attempts. Adversarial attacks are novel techniques that, besides having been proven effective at fooling image classification models, can also be applied to tabular data. Adversarial attacks aim to produce adversarial examples: slightly modified inputs that induce an Artificial Intelligence (AI) system to return incorrect outputs that are advantageous to the attacker. In this paper we illustrate a novel approach for modifying and adapting state-of-the-art algorithms to imbalanced tabular data, in the context of fraud detection. Experimental results show that the proposed modifications lead to a perfect attack success rate, producing adversarial examples that are also less perceptible when analyzed by humans. Moreover, when applied to a real-world production system, the proposed techniques show that they can pose a serious threat to the robustness of advanced AI-based fraud detection procedures.
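To make the idea of an adversarial example on tabular data concrete, here is a minimal sketch, not the paper's actual method. It trains a simple logistic-regression "fraud" classifier on synthetic, imbalanced data and then nudges a flagged transaction against the gradient of the fraud score (an FGSM-style perturbation) until the model labels it legitimate. All data, the step size `eps`, and the class balance are illustrative assumptions.

```python
# Illustrative sketch only: a gradient-sign perturbation on a linear model,
# not the state-of-the-art attacks adapted in the paper. Synthetic data,
# class balance, and step size are assumptions made for the example.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Imbalanced synthetic data: ~10% of samples play the role of "fraud" (class 1).
X, y = make_classification(n_samples=2000, n_features=8, weights=[0.9],
                           class_sep=2.0, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Pick a transaction the model correctly flags as fraud.
fraud_idx = np.where((y == 1) & (clf.predict(X) == 1))[0][0]
x = X[fraud_idx].copy()

# For a logistic model the gradient of the decision function w.r.t. the
# input is simply coef_, so each step moves the features against it.
eps = 0.05
x_adv = x.copy()
for _ in range(200):
    if clf.predict([x_adv])[0] == 0:   # now classified "legitimate": done
        break
    x_adv -= eps * np.sign(clf.coef_[0])

print("original label:", clf.predict([x])[0])
print("adversarial label:", clf.predict([x_adv])[0])
print("L2 perturbation:", round(float(np.linalg.norm(x_adv - x)), 3))
```

On tabular data, unlike images, "imperceptibility" also depends on which features an analyst actually inspects; a realistic attack would constrain the perturbation to plausible values of each field rather than moving all features freely as this sketch does.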


Related research

02/07/2022  On The Empirical Effectiveness of Unrealistic Adversarial Hardening Against Realistic Adversarial Attacks
While the literature on security attacks and defense of Machine Learning...

12/06/2021  ML Attack Models: Adversarial Attacks and Data Poisoning Attacks
Many state-of-the-art ML models have outperformed humans in various task...

04/14/2020  Extending Adversarial Attacks to Produce Adversarial Class Probability Distributions
Despite the remarkable performance and generalization levels of deep lea...

10/26/2019  Detection of Adversarial Attacks and Characterization of Adversarial Subspace
Adversarial attacks have always been a serious threat for any data-drive...

09/26/2019  Adversarial ML Attack on Self Organizing Cellular Networks
Deep Neural Networks (DNN) have been widely adopted in self-organizing n...

10/18/2018  A Training-based Identification Approach to VIN Adversarial Examples
With the rapid development of Artificial Intelligence (AI), the problem ...

07/23/2020  AI Data poisoning attack: Manipulating game AI of Go
With the extensive use of AI in various fields, the issue of AI security...
