Hyperparameter Learning under Data Poisoning: Analysis of the Influence of Regularization via Multiobjective Bilevel Optimization

06/02/2023
by   Javier Carnerero-Cano, et al.

Machine Learning (ML) algorithms are vulnerable to poisoning attacks, where a fraction of the training data is manipulated to deliberately degrade the algorithms' performance. Optimal attacks can be formulated as bilevel optimization problems and help to assess their robustness in worst-case scenarios. We show that current approaches, which typically assume that hyperparameters remain constant, lead to an overly pessimistic view of the algorithms' robustness and of the impact of regularization. We propose a novel optimal attack formulation that considers the effect of the attack on the hyperparameters and models the attack as a multiobjective bilevel optimization problem. This allows us to formulate optimal attacks, learn hyperparameters, and evaluate robustness under worst-case conditions. We apply this attack formulation to several ML classifiers using L_2 and L_1 regularization. Our evaluation on multiple datasets confirms the limitations of previous strategies and demonstrates the benefits of using L_2 and L_1 regularization to dampen the effect of poisoning attacks.
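To make the bilevel structure concrete, the sketch below implements a toy gradient-based poisoning attack on an L_2-regularized logistic regression model. This is not the paper's algorithm: it keeps the regularization hyperparameter fixed rather than learning it in the multiobjective formulation, and it estimates the hypergradient by central finite differences (retraining the inner model for each probe) instead of implicit differentiation. All function names, the toy data, and the step sizes are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_lr(X, y, lam, lr=0.1, steps=400):
    """Inner problem: fit L2-regularised logistic regression by gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y) + lam * w
        w -= lr * grad
    return w

def val_loss(w, Xv, yv):
    """Outer objective: cross-entropy on clean validation data."""
    p = np.clip(sigmoid(Xv @ w), 1e-9, 1 - 1e-9)
    return -np.mean(yv * np.log(p) + (1 - yv) * np.log(1 - p))

def poison_attack(Xc, yc, Xv, yv, xp, yp, lam, out_steps=15, out_lr=0.5, eps=1e-3):
    """Bilevel attack sketch: move one poison point (xp, yp) to maximise the
    validation loss of the retrained model. The hypergradient is approximated
    by central finite differences, so each probe solves the inner problem."""
    xp = xp.astype(float).copy()
    for _ in range(out_steps):
        g = np.zeros_like(xp)
        for j in range(xp.size):
            e = np.zeros_like(xp)
            e[j] = eps
            w_hi = train_lr(np.vstack([Xc, xp + e]), np.append(yc, yp), lam)
            w_lo = train_lr(np.vstack([Xc, xp - e]), np.append(yc, yp), lam)
            g[j] = (val_loss(w_hi, Xv, yv) - val_loss(w_lo, Xv, yv)) / (2 * eps)
        xp += out_lr * g  # gradient *ascent*: the attacker degrades performance
    return xp

# Toy 2-D data: two Gaussian blobs, split into training and validation sets.
rng = np.random.default_rng(0)
X0 = rng.normal(-1.0, 0.5, (20, 2))
X1 = rng.normal(1.0, 0.5, (20, 2))
Xc = np.vstack([X0[:15], X1[:15]])
yc = np.array([0] * 15 + [1] * 15)
Xv = np.vstack([X0[15:], X1[15:]])
yv = np.array([0] * 5 + [1] * 5)

lam = 0.1                     # fixed here; the paper's point is that the
xp0 = np.zeros(2)             # defender should re-learn it under attack
xp_adv = poison_attack(Xc, yc, Xv, yv, xp0, 1, lam)
```

Comparing the validation loss of models retrained with `xp0` versus `xp_adv` shows the attack's effect; increasing `lam` in this sketch illustrates how stronger regularization dampens the influence of the single poison point.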

Related research

- 05/23/2021: Regularization Can Help Mitigate Poisoning Attacks... with the Right Hyperparameters
- 02/28/2020: Regularisation Can Mitigate Poisoning Attacks: A Novel Analysis Based on Multiobjective Bilevel Optimisation
- 04/24/2020: Systematic Evaluation of Backdoor Data Poisoning Attacks on Image Classifiers
- 02/09/2023: Hyperparameter Search Is All You Need For Training-Agnostic Backdoor Robustness
- 10/02/2022: Optimization for Robustness Evaluation beyond ℓ_p Metrics
- 01/29/2020: Regularization Helps with Mitigating Poisoning Attacks: Distributionally-Robust Machine Learning Using the Wasserstein Distance
- 06/10/2021: A Unified Framework for Task-Driven Data Quality Management
