A Tale of Two Cities: Data and Configuration Variances in Robust Deep Learning

by Guanqin Zhang, et al.
University of Technology Sydney

Deep neural networks (DNNs) are widely used in many industries, such as image recognition, supply chain, medical diagnosis, and autonomous driving. However, prior work has shown that the high accuracy of a DNN model does not imply high robustness (i.e., consistent performance on new and future datasets), because the input data and external environment (e.g., software and model configurations) of a deployed model are constantly changing. Hence, ensuring the robustness of deep learning is not an option but a priority for enhancing business and consumer confidence. Previous studies mostly focus on the data aspect of model variance. In this article, we systematically summarize DNN robustness issues and formulate them in a holistic view through two important aspects: data and software configuration variances in DNNs. We also provide a predictive framework that generates representative variances (counterexamples) by considering both data and configurations for robust learning, through the lens of search-based optimization.
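The idea of jointly searching over data and configuration variances can be illustrated with a minimal sketch. Everything below (the toy linear model, the noise and threshold ranges, and the function names) is an illustrative assumption, not the authors' actual framework: a random search probes small input perturbations together with a perturbed decision threshold until the model's prediction flips, yielding a counterexample.

```python
import random

# Toy stand-in for a trained model: a fixed linear scorer whose decision
# threshold comes from the "configuration". Purely illustrative.
def toy_model(x, threshold=0.5):
    score = 0.8 * x[0] + 0.2 * x[1]
    return 1 if score > threshold else 0

def search_counterexample(x, label, trials=1000, seed=0):
    """Random (search-based) exploration of joint data/config variances."""
    rng = random.Random(seed)
    for _ in range(trials):
        # Data variance: small additive noise on the input.
        x_var = [xi + rng.uniform(-0.1, 0.1) for xi in x]
        # Configuration variance: perturb the decision threshold.
        thr = 0.5 + rng.uniform(-0.05, 0.05)
        if toy_model(x_var, thr) != label:
            return x_var, thr  # representative variance (counterexample)
    return None  # no variance found within the search budget

x = [0.6, 0.4]
label = toy_model(x)  # prediction under nominal data and configuration
result = search_counterexample(x, label)
```

In a real setting, the random sampler would be replaced by a guided search-based optimizer (e.g., one maximizing prediction disagreement), and the configuration space would cover software and model settings rather than a single threshold.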
