Robustness and invariance properties of image classifiers
Deep neural networks have achieved impressive results in many image classification tasks. However, since their performance is usually measured in controlled settings, it is important to ensure that their decisions remain correct when they are deployed in noisy environments. In fact, deep networks are not robust to a large variety of semantics-preserving image modifications, and even to imperceptible image changes known as adversarial perturbations. The poor robustness of image classifiers to small data distribution shifts raises serious concerns about their trustworthiness. To build reliable machine learning models, we must design principled methods to analyze and understand the mechanisms that shape robustness and invariance. This is exactly the focus of this thesis. First, we study the problem of computing sparse adversarial perturbations. We exploit the geometry of the decision boundaries of image classifiers to compute sparse perturbations very efficiently, and we reveal a qualitative connection between adversarial examples and the data features that image classifiers learn. Then, to better understand this connection, we propose a geometric framework that relates the distance of data samples from the decision boundary to the features present in the data. We show that deep classifiers have a strong inductive bias towards invariance to non-discriminative features, and that adversarial training exploits this property to confer robustness. Finally, we focus on the challenging problem of generalization to unforeseen corruptions of the data, and we propose a novel data augmentation scheme that achieves state-of-the-art robustness to common image corruptions. Overall, our results contribute to the understanding of the fundamental mechanisms of deep image classifiers, and pave the way for building more reliable machine learning systems that can be deployed in real-world environments.
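To make the notion of an adversarial perturbation concrete, the sketch below applies the standard fast gradient sign method (FGSM) in PyTorch. It is only a generic illustration of an imperceptible, loss-increasing image change, not the sparse, geometry-based attacks developed in the thesis; the model, input tensor, and perturbation budget `epsilon` are placeholders.

```python
# Illustrative only: a generic FGSM adversarial perturbation,
# not the sparse, geometry-based attacks studied in the thesis.
import torch
import torch.nn.functional as F

def fgsm_perturbation(model, x, label, epsilon=8 / 255):
    """Return x plus a small L-infinity perturbation that increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that maximally increases the loss, then keep pixels in [0, 1].
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

If the classifier's prediction changes on `x_adv` while the image looks unchanged to a human, the perturbation is a semantics-preserving modification of the kind whose effect on robustness the thesis analyzes.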