Paoding: Supervised Robustness-preserving Data-free Neural Network Pruning

04/02/2022
by Mark Huasong Meng, et al.

When deploying pre-trained neural network models in real-world applications, model consumers often encounter resource-constrained platforms such as mobile and smart devices. They typically use pruning techniques to reduce the size and complexity of the model, yielding a lighter one that consumes fewer resources. Nonetheless, most existing pruning methods are proposed on the premise that the pruned model can be fine-tuned or even retrained on the original training data. This may be unrealistic in practice, as data controllers are often reluctant to share the original data with model consumers. In this work, we study neural network pruning in the data-free context, aiming to yield lightweight models that are not only accurate in prediction but also robust against undesired inputs in open-world deployments. Because fine-tuning and retraining are unavailable to fix mis-pruned units, we replace the traditional aggressive one-shot strategy with a conservative one that treats pruning as a progressive process. We propose a pruning method based on stochastic optimization that uses robustness-related metrics to guide the pruning process. Our method is implemented as a Python package named Paoding and evaluated with a series of experiments on diverse neural network models. The experimental results show that it significantly outperforms existing one-shot data-free pruning approaches in terms of robustness preservation and accuracy.
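The abstract does not spell out the algorithm, but the sketch below illustrates what a conservative, progressive, data-free pruning loop of this kind might look like. Everything in it is an illustrative assumption rather than Paoding's actual API: the saliency heuristic (L1 norm of a unit's outgoing weights, standing in for the paper's robustness-related metrics), the prune_progressively function, and the simple sparsity-budget stopping rule are all hypothetical.

```python
import numpy as np

def saliency(W):
    # Heuristic unit importance: L1 norm of each unit's outgoing weights.
    # (An assumed stand-in for the robustness-related metrics the abstract
    # mentions but does not define.)
    return np.abs(W).sum(axis=1)

def prune_progressively(weights, target_sparsity=0.5, step_fraction=0.05):
    """Progressively zero out the least salient hidden units of one layer.

    weights: 2-D array, rows = hidden units, cols = outgoing connections.
    Rather than removing every candidate at once (one-shot), a small
    fraction is pruned per step, mimicking the conservative, progressive
    strategy the abstract describes. The sparsity budget used as a stopping
    rule here is a placeholder for a robustness-guided criterion.
    """
    W = weights.copy()
    n_units = W.shape[0]
    n_target = int(round(n_units * target_sparsity))
    step = max(1, int(round(n_units * step_fraction)))
    pruned = set()
    while len(pruned) < n_target:
        scores = saliency(W)
        scores[list(pruned)] = np.inf       # never re-select pruned units
        k = min(step, n_target - len(pruned))
        victims = np.argsort(scores)[:k]    # least important units this step
        W[victims, :] = 0.0                 # zero their outgoing weights
        pruned.update(victims.tolist())
    return W

# Example: prune half the 64 hidden units of a random layer, 5% at a time.
rng = np.random.default_rng(0)
layer = rng.normal(size=(64, 32))
pruned_layer = prune_progressively(layer, target_sparsity=0.5)
print(f"zeroed units: {(np.abs(pruned_layer).sum(axis=1) == 0).sum()} / 64")
```

Pruning a small fraction per step, and re-scoring the surviving units after every step, is what distinguishes this conservative schedule from the one-shot data-free baselines the abstract compares against.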


Related research

- Pruning On-the-Fly: A Recoverable Pruning Method without Fine-tuning (12/24/2022). Most existing pruning works are resource-intensive, requiring retraining...
- Towards Compact and Robust Deep Neural Networks (06/14/2019). Deep neural networks have achieved impressive performance in many applic...
- Unified Data-Free Compression: Pruning and Quantization without Fine-Tuning (08/14/2023). Structured pruning and quantization are promising approaches for reducin...
- "Understanding Robustness Lottery": A Comparative Visual Analysis of Neural Network Pruning Approaches (06/16/2022). Deep learning approaches have provided state-of-the-art performance in m...
- Sparsified Model Zoo Twins: Investigating Populations of Sparsified Neural Network Models (04/26/2023). With growing size of Neural Networks (NNs), model sparsification to redu...
- Towards Accurate Quantization and Pruning via Data-free Knowledge Transfer (10/14/2020). When large scale training data is available, one can obtain compact and ...
- Convolutional Network Fabric Pruning With Label Noise (02/15/2022). This paper presents an iterative pruning strategy for Convolutional Netw...
