Constraining the Attack Space of Machine Learning Models with Distribution Clamping Preprocessing

05/18/2022
by Ryan Feng, et al.

Preprocessing and outlier detection techniques have both been applied to neural networks to increase robustness with varying degrees of success. In this paper, we formalize the ideal preprocessor function as one that would take any input and map it to the nearest in-distribution input. In other words, we detect any anomalous pixels and reset them so that the new input is in-distribution. We then illustrate a relaxed solution to this problem in the context of patch attacks. Specifically, we demonstrate that we can model constraints on the patch attack by specifying certain regions of the input as out of distribution. With these constraints, we preprocess inputs successfully, increasing robustness on CARLA object detection.
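To make the clamping idea concrete, the sketch below is a rough illustration (not the authors' CARLA pipeline): it estimates per-pixel bounds from in-distribution data and projects each pixel of a new input onto those bounds, so anomalous values are set to the nearest in-distribution value. The function names, percentile choices, and stand-in data are assumptions for illustration only.

```python
# Illustrative sketch of distribution-clamping preprocessing (assumptions, not
# the paper's implementation): learn per-pixel bounds from in-distribution
# images, then clip out-of-range pixels back to the nearest bound.
import numpy as np

def fit_pixel_bounds(train_images, low_pct=1.0, high_pct=99.0):
    """Estimate per-pixel lower/upper bounds from in-distribution images.

    train_images: array of shape (N, H, W, C) with values in [0, 1].
    The percentile choices are arbitrary placeholders.
    """
    lower = np.percentile(train_images, low_pct, axis=0)
    upper = np.percentile(train_images, high_pct, axis=0)
    return lower, upper

def clamp_to_distribution(image, lower, upper):
    """Project an input onto the per-pixel bounds: pixels already inside the
    bounds are unchanged; out-of-range pixels are set to the nearest bound."""
    return np.clip(image, lower, upper)

# Example usage with random stand-in data.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.uniform(0.2, 0.8, size=(128, 32, 32, 3))  # in-distribution proxy
    lower, upper = fit_pixel_bounds(train)
    attacked = rng.uniform(0.0, 1.0, size=(32, 32, 3))     # input with anomalous pixels
    cleaned = clamp_to_distribution(attacked, lower, upper)
    assert (cleaned >= lower).all() and (cleaned <= upper).all()
```

Per-pixel clipping is only one way to encode "in-distribution" constraints; the paper's relaxation targets patch attacks specifically, where whole regions can be flagged as out of distribution rather than individual pixel values.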
