Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples

The last decade has witnessed the proliferation of Deep Learning models in many applications, achieving unrivaled levels of predictive performance. Unfortunately, the black-box nature of Deep Learning models has left open questions about what they actually learn from data. Certain application scenarios have highlighted the importance of assessing the bounds under which Deep Learning models operate, a problem addressed by assorted approaches aimed at audiences from different domains. However, as the focus of the application shifts toward non-expert users, it becomes mandatory to provide such users with the means to trust the model, just as a human becomes familiar with a system or process: by understanding the hypothetical circumstances under which it fails. This is the cornerstone of this research work: to undertake an adversarial analysis of a Deep Learning model. The proposed framework constructs counterfactual examples while ensuring their plausibility, i.e., that there is a reasonable probability a human could have generated them without resorting to a computer program. This work should therefore be regarded as a valuable auditing exercise of the usable bounds within which a given model is constrained, allowing for a much greater understanding of the capabilities and pitfalls of a model deployed in a real application. To this end, a Generative Adversarial Network (GAN) and multi-objective heuristics are used to mount a plausible attack on the audited model, efficiently trading off among the confusion of this model, the intensity of the perturbation, and the plausibility of the generated counterfactual. The framework's utility is showcased on a human face classification task, unveiling its considerable potential.
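To make the trade-off described above concrete, the following is a minimal Python/PyTorch sketch of the idea: search a pretrained GAN's latent space for a counterfactual that simultaneously confuses the audited classifier, departs little from the original input (low intensity), and scores as realistic under the GAN discriminator (plausibility). The model handles (`generator`, `discriminator`, `classifier`), the trade-off weights, and the simple evolutionary loop are all assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of plausible-counterfactual search:
# optimize over a pretrained GAN's latent space so that the decoded image
# (i) confuses the audited classifier, (ii) stays close to the original
# sample, and (iii) looks realistic to the GAN discriminator. The model
# handles and trade-off weights below are hypothetical stand-ins.
import torch

@torch.no_grad()
def counterfactual_score(z, x_orig, target, generator, discriminator, classifier,
                         w_intensity=0.1, w_plaus=0.5):
    """Scalarized multi-objective score (lower is better) for latent codes z."""
    x_cf = generator(z)                                   # decoded counterfactual
    probs = torch.softmax(classifier(x_cf), dim=1)
    confusion = 1.0 - probs[:, target]                    # small => model fooled
    intensity = (x_cf - x_orig).flatten(1).norm(dim=1)    # small => subtle change
    # Assumes a critic-style discriminator whose output grows with realism.
    plausibility = -discriminator(x_cf).squeeze(1)        # small => realistic
    return confusion + w_intensity * intensity + w_plaus * plausibility

@torch.no_grad()
def search_counterfactual(x_orig, target, generator, discriminator, classifier,
                          latent_dim=128, pop=64, steps=200, sigma=0.1):
    """(1+lambda)-style evolutionary search over the GAN latent space."""
    z_best = torch.randn(1, latent_dim)
    best = counterfactual_score(z_best, x_orig, target,
                                generator, discriminator, classifier)
    for _ in range(steps):
        z_cand = z_best + sigma * torch.randn(pop, latent_dim)  # mutate around best
        s = counterfactual_score(z_cand, x_orig, target,
                                 generator, discriminator, classifier)
        i = int(s.argmin())
        if s[i] < best:
            best, z_best = s[i], z_cand[i:i + 1]
    return generator(z_best), float(best)
```

In the paper the three objectives are kept separate and explored with multi-objective heuristics, yielding a front of counterfactuals at different confusion/intensity/plausibility trade-offs; the weighted sum above merely collapses them into one score as the simplest way to expose the same tension.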
