Revisiting Hilbert-Schmidt Information Bottleneck for Adversarial Robustness

06/04/2021
by   Zifeng Wang, et al.

We investigate the HSIC (Hilbert-Schmidt independence criterion) bottleneck as a regularizer for learning an adversarially robust deep neural network classifier. We show that the HSIC bottleneck enhances robustness to adversarial attacks both theoretically and experimentally. Our experiments on multiple benchmark datasets and architectures demonstrate that incorporating an HSIC bottleneck regularizer attains competitive natural accuracy and improves adversarial robustness, both with and without adversarial examples during training.
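The abstract describes regularizing a classifier with HSIC terms between the learned representation and the input/labels. As a rough illustration, here is a minimal NumPy sketch of the empirical (biased) HSIC estimator and the usual bottleneck-style penalty combination; the kernel choice (RBF), bandwidth, and the weights `lambda_x`, `lambda_y` are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    # Gaussian (RBF) kernel matrix from pairwise squared distances.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(X, Y, sigma=1.0):
    # Biased empirical HSIC estimator: tr(K H L H) / (n - 1)^2,
    # where H = I - (1/n) 1 1^T centers the kernel matrices.
    n = X.shape[0]
    K = rbf_kernel(X, sigma)
    L = rbf_kernel(Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def hsic_bottleneck_penalty(Z, X, Y, lambda_x=1.0, lambda_y=1.0):
    # Bottleneck-style penalty added to the task loss: compress away
    # input information HSIC(Z, X) while keeping label information
    # HSIC(Z, Y). lambda_x / lambda_y are illustrative hyperparameters.
    return lambda_x * hsic(Z, X) - lambda_y * hsic(Z, Y)

# Toy check: HSIC is larger for a dependent pair than an independent one.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 2))
Y_dep = X + 0.1 * rng.normal(size=(64, 2))   # strongly dependent on X
Y_ind = rng.normal(size=(64, 2))             # independent of X
print(hsic(X, Y_dep) > hsic(X, Y_ind))
```

In training, this penalty would be computed per mini-batch on the hidden representation `Z` and added to the cross-entropy loss; the sketch uses NumPy for clarity, whereas a practical implementation would use a differentiable framework.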

