On Visual Hallmarks of Robustness to Adversarial Malware

05/09/2018
by   Alex Huang, et al.

A central challenge of adversarial learning is interpreting the resulting hardened model. In this contribution, we ask how robust generalization can be visually discerned and whether a concise view of the interactions between a hardened decision map and input samples is possible. We first provide a means of visually comparing a hardened model's loss behavior on the adversarial variants generated during training against its loss behavior on adversarial variants generated from other sources. This allows us to confirm that the association between loss-landscape flatness and generalization observed in naturally trained models extends to adversarially hardened models and robust generalization. To complement this means of interpreting model parameter robustness, we also use self-organizing maps to visually superimpose adversarial and natural variants on a model's decision space, allowing the model's global robustness to be examined comprehensively.
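The abstract does not spell out its loss-comparison procedure, but the general technique it names can be illustrated with a minimal sketch: evaluate the loss along one random slice through parameter space, once on the adversarial variants used during hardening and once on variants from another source, and compare the flatness of the two curves around the trained weights. Everything below is an illustrative assumption rather than the authors' code: a toy PyTorch MLP stands in for the hardened detector, and random tensors stand in for malware feature vectors.

```python
# Minimal sketch of a 1-D loss-landscape slice (assumptions: PyTorch;
# a toy MLP stands in for the hardened model; random tensors stand in
# for feature vectors of adversarial variants from two sources).
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

# Hypothetical stand-ins for the two pools of adversarial variants.
x_train_adv = torch.randn(128, 64)   # variants generated during hardening
x_other_adv = torch.randn(128, 64)   # variants from an outside source
y = torch.randint(0, 2, (128,))

# One random direction in parameter space, scaled per-parameter so the
# slice is comparable across layers (a common normalization choice).
direction = []
for p in model.parameters():
    d = torch.randn_like(p)
    direction.append(d * p.detach().norm() / (d.norm() + 1e-10))

def loss_along_slice(x, y, alphas):
    """Loss at (weights + alpha * direction) for each alpha."""
    losses = []
    base = [p.detach().clone() for p in model.parameters()]
    with torch.no_grad():
        for a in alphas:
            for p, b, d in zip(model.parameters(), base, direction):
                p.copy_(b + a * d)
            losses.append(loss_fn(model(x), y).item())
        for p, b in zip(model.parameters(), base):
            p.copy_(b)  # restore the original weights
    return losses

alphas = torch.linspace(-1.0, 1.0, 41)
train_curve = loss_along_slice(x_train_adv, y, alphas)
other_curve = loss_along_slice(x_other_adv, y, alphas)

# A curve that stays flat near alpha = 0 on the outside-source variants
# is the visual hallmark the abstract associates with robust generalization.
for a, lt, lo in zip(alphas.tolist(), train_curve, other_curve):
    print(f"alpha={a:+.2f}  train-adv loss={lt:.3f}  other-adv loss={lo:.3f}")
```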
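The superimposition step can likewise be sketched. The following is a from-scratch NumPy self-organizing map, not the paper's implementation; the grid size, the learning-rate and neighborhood schedules, and the random stand-in data are all illustrative assumptions. It trains the map on "natural" samples and then counts which map cells the natural and adversarial samples fall into, which is the kind of overlaid view of the decision space the abstract describes.

```python
# Minimal self-organizing-map sketch in plain NumPy (no SOM library
# assumed). Random vectors stand in for natural and adversarial malware
# feature vectors; the goal is only to show the superimposition idea.
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 8, 8, 64
weights = rng.normal(size=(grid_h, grid_w, dim))

x_natural = rng.normal(size=(500, dim))        # stand-in natural samples
x_adv = rng.normal(loc=0.5, size=(200, dim))   # stand-in adversarial variants

def best_matching_unit(w, x):
    """Grid coordinates of the node whose weight vector is closest to x."""
    d = np.linalg.norm(w - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

# Classic SOM update: pull the winner and its grid neighborhood toward
# each sample, shrinking the learning rate and neighborhood over time.
n_iter = 3000
for t in range(n_iter):
    x = x_natural[rng.integers(len(x_natural))]
    lr = 0.5 * (1 - t / n_iter)
    sigma = max(1e-3, 3.0 * (1 - t / n_iter))
    bi, bj = best_matching_unit(weights, x)
    ii, jj = np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij")
    dist2 = (ii - bi) ** 2 + (jj - bj) ** 2
    h = np.exp(-dist2 / (2 * sigma ** 2))       # neighborhood kernel
    weights += lr * h[..., None] * (x - weights)

# Superimpose: count how many natural vs. adversarial samples each cell wins.
nat_hits = np.zeros((grid_h, grid_w), dtype=int)
adv_hits = np.zeros((grid_h, grid_w), dtype=int)
for x in x_natural:
    nat_hits[best_matching_unit(weights, x)] += 1
for x in x_adv:
    adv_hits[best_matching_unit(weights, x)] += 1

print("natural hit map:\n", nat_hits)
print("adversarial hit map:\n", adv_hits)
```

Comparing the two hit maps cell by cell shows where adversarial variants cluster relative to natural ones, which is one way to read a "global" view of robustness off a single picture.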

Related research

09/29/2018 · Interpreting Adversarial Robustness: A View from Decision Surface in Input Space
One popular hypothesis of neural network generalization is that the flat...

10/25/2022 · Multi-view Representation Learning from Malware to Defend Against Adversarial Variants
Deep learning-based adversarial malware detectors have yielded promising...

04/09/2021 · Relating Adversarially Robust Generalization to Flat Minima
Adversarial training (AT) has become the de-facto standard to obtain mod...

11/21/2022 · CLAWSAT: Towards Both Robust and Accurate Code Models
We integrate contrastive learning (CL) with adversarial learning to co-o...

02/05/2021 · Adversarial Training Makes Weight Loss Landscape Sharper in Logistic Regression
Adversarial training is actively studied for learning robust models agai...

04/10/2018 · Adversarial Training Versus Weight Decay
Performance-critical machine learning models should be robust to input p...
