Gradient-based Data Subversion Attack Against Binary Classifiers

05/31/2021
by   Rosni K Vasu, et al.

Machine learning based data-driven technologies have shown impressive performance in a variety of application domains. Most enterprises use data from multiple sources to provide quality applications, but the reliability of these external data sources raises security concerns for the machine learning techniques adopted: an attacker can tamper with the training or test data to subvert the predictions of the resulting models. Data poisoning is one such attack, in which the attacker degrades the performance of a classifier by manipulating the training data. In this work, we focus on label contamination attacks, in which an attacker poisons the labels of training data to compromise the functionality of the system. We develop Gradient-based Data Subversion strategies that achieve model degradation under the assumption that the attacker has only limited knowledge of the victim model. We exploit the gradients of a differentiable convex loss function (the residual errors) with respect to the predicted label as a warm start, and formulate different strategies for selecting the set of data instances to contaminate. Further, we analyze the transferability of the attacks and the susceptibility of binary classifiers. Our experiments show that the proposed approach outperforms the baselines and is computationally efficient.
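To make the warm-start idea concrete, here is a minimal sketch of a gradient-guided label-flipping attack. This is not the paper's actual implementation: the logistic-regression surrogate, the squared-loss residual used as the gradient score, the smallest-residual flipping heuristic, and all names (`gradient_guided_label_flips`, `budget`) are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def gradient_guided_label_flips(X_train, y_train, budget):
    """Flip the labels of `budget` training points chosen by a gradient score.

    A surrogate logistic-regression model stands in for the limited-knowledge
    victim. For a squared loss (p - y)^2 / 2, the gradient with respect to the
    predicted label p is the residual p - y, used here as the per-instance score.
    """
    surrogate = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    p = surrogate.predict_proba(X_train)[:, 1]   # predicted P(y = 1)
    residuals = p - y_train                      # gradient signal (residual error)

    # Heuristic (an assumption, not necessarily the paper's strategy): points
    # the surrogate fits most confidently (smallest |residual|) anchor the
    # decision boundary, so flipping their labels should degrade it most.
    flip_idx = np.argsort(np.abs(residuals))[:budget]

    y_poisoned = y_train.copy()
    y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]  # flip 0 <-> 1
    return y_poisoned, flip_idx

# Example usage on synthetic data:
X, y = make_classification(n_samples=500, random_state=0)
y_poisoned, flipped = gradient_guided_label_flips(X, y, budget=50)
```

Ranking instances by a precomputed gradient score keeps the attack a single pass over the training set, which is consistent with the computational-efficiency claim above, though the paper's exact selection strategies may differ.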

Related research

04/03/2021 · Gradient-based Adversarial Deep Modulation Classification with Data-driven Subsampling
01/26/2020 · Ensemble Noise Simulation to Handle Uncertainty about Gradient-based Adversarial Attacks
04/24/2021 · Influence Based Defense Against Data Poisoning Attacks in Online Learning
03/02/2018 · Label Sanitization against Label Flipping Poisoning Attacks
03/23/2021 · The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers?
11/08/2022 · Inferring Class Label Distribution of Training Data from Classifiers: An Accuracy-Augmented Meta-Classifier Attack
10/24/2019 · Toward a view-based data cleaning architecture
