Stationary Point Losses for Robust Model

02/19/2023
by Weiwei Gao, et al.

The inability to guarantee robustness is one of the major obstacles to applying deep learning models in security-demanding domains. We identify that the most commonly used cross-entropy (CE) loss does not guarantee a robust decision boundary for neural networks. CE loss sharpens the neural network at the decision boundary to achieve a lower loss, rather than pushing the boundary to a more robust position. A robust boundary should lie in the middle of the samples from different classes, maximizing the margins from the boundary to the samples. We attribute this behavior to the fact that CE loss has no stationary point. In this paper, we propose a family of new losses, called stationary point (SP) losses, which have at least one stationary point on the correct-classification side. We prove that a robust boundary can be guaranteed by SP loss without losing much accuracy. With SP loss, larger perturbations are required to generate adversarial examples. We demonstrate that applying SP loss improves robustness under a variety of adversarial attacks. Moreover, the robust boundary learned with SP loss also performs well on imbalanced datasets.
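The abstract does not give the functional form of the SP losses, so the sketch below uses a hypothetical stand-in (a squared hinge with margin threshold `tau`) purely to illustrate the stationary-point contrast with cross-entropy; `sp_loss`, `sp_grad`, and `tau` are illustrative names chosen for this example, not the paper's definitions. The point it demonstrates: on a signed margin m = y * f(x), the logistic form of CE has a nonzero gradient for every finite m, so training keeps pulling on correctly classified points, whereas a loss that goes flat past the margin threshold stops exerting force once samples are safely classified.

```python
# Minimal sketch, assuming a binary classifier with signed margin m = y * f(x).
# sp_loss below is a hypothetical squared-hinge stand-in for the paper's SP loss
# family, used only to contrast gradient behavior with cross-entropy.
import numpy as np

def ce_loss(m):
    # Logistic form of cross-entropy on the signed margin: log(1 + e^{-m}).
    return np.log1p(np.exp(-m))

def ce_grad(m):
    # d/dm log(1 + e^{-m}) = -1 / (1 + e^{m}); negative at every finite margin,
    # so there is no stationary point on the correct-classification side.
    return -1.0 / (1.0 + np.exp(m))

def sp_loss(m, tau=1.0):
    # Hypothetical SP-style loss: squared hinge, flat for m >= tau.
    return np.maximum(0.0, tau - m) ** 2

def sp_grad(m, tau=1.0):
    # Gradient vanishes identically once the margin exceeds tau.
    return -2.0 * np.maximum(0.0, tau - m)

print("margin   CE loss    CE grad    SP loss   SP grad")
for m in [0.5, 1.0, 2.0, 5.0]:
    print(f"{m:5.1f}  {ce_loss(m):9.4f} {ce_grad(m):10.6f} {sp_loss(m):9.4f} {sp_grad(m):9.3f}")

# CE's gradient stays negative at every margin, so the optimizer keeps
# sharpening the boundary; the SP-style gradient is exactly 0 for m >= 1,
# leaving the boundary mid-way between classes once all samples clear the
# margin.
```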
