Adversarial Robustness Toolbox v0.2.2

07/03/2018
by   Maria-Irina Nicolae, et al.
Adversarial examples have become an indisputable threat to the security of modern AI systems based on deep neural networks (DNNs). The Adversarial Robustness Toolbox (ART) is a Python library designed to support researchers and developers in creating novel defence techniques, as well as in deploying practical defences for real-world AI systems. Researchers can use ART to benchmark novel defences against the state of the art. For developers, the library provides interfaces that support the composition of comprehensive defence systems from individual methods used as building blocks. ART supports machine learning models, and DNNs in particular, implemented in any of the most popular deep learning frameworks (TensorFlow, Keras, PyTorch). Currently, the library is primarily intended to improve the adversarial robustness of visual recognition systems; however, future releases are envisioned to include adaptations to other data modalities, such as speech, text, or time series. The ART source code is released under an MIT license (https://github.com/IBM/adversarial-robustness-toolbox). The release includes code examples and extensive documentation (http://adversarial-robustness-toolbox.readthedocs.io) to help researchers and developers get started quickly.
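ART's own attack and defence APIs are not reproduced here (the interfaces vary across library versions). As an illustration of the simplest evasion attack that libraries like ART implement, the Fast Gradient Sign Method (FGSM), the following is a minimal plain-NumPy sketch; the toy model, weights, and variable names are hypothetical and not part of ART.

```python
import numpy as np

def fgsm(x, grad, eps):
    """Fast Gradient Sign Method: perturb the input one step in the
    direction of the sign of the loss gradient, within an
    L-infinity budget of eps."""
    return x + eps * np.sign(grad)

# Toy example: for a linear score w @ x, the gradient of the score
# with respect to the input is simply w.
w = np.array([0.5, -1.0, 0.25])   # hypothetical model weights
x = np.array([1.0, 2.0, 3.0])     # hypothetical clean input
x_adv = fgsm(x, grad=w, eps=0.1)  # crafted adversarial example
# Each feature moves by exactly eps in the sign direction:
# x_adv == [1.1, 1.9, 3.1]
```

In a full pipeline, the gradient would come from backpropagating the classifier's loss to the input; ART's framework wrappers exist precisely to expose such gradients uniformly across TensorFlow, Keras, and PyTorch models.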


