Hessian-Aware Pruning and Optimal Neural Implant

by Shixing Yu et al.

Pruning is an effective method to reduce the memory footprint and FLOPs of neural network models. However, existing pruning methods often cause significant accuracy degradation even at moderate pruning levels. To address this problem, we introduce Hessian-Aware Pruning (HAP), a structured pruning method that uses second-order sensitivity as its pruning metric. In particular, we use the Hessian trace to identify insensitive parameters in the neural network; this differs from magnitude-based pruning methods, which simply prune weights with small values. We also propose a new neural implant technique that replaces pruned spatial convolutions with point-wise convolutions, improving the accuracy of pruned models while preserving model size. We evaluate HAP on multiple models (ResNet56, WideResNet32, PreResNet29, and VGG16) on CIFAR-10 and on ResNet50 on ImageNet, and we achieve new state-of-the-art results. Specifically, HAP reaches 94.3% accuracy (<0.1% degradation) on PreResNet29 (CIFAR-10) with more than 70% of parameters pruned. Compared to EigenDamage <cit.>, we achieve up to 1.2% higher accuracy with fewer parameters and FLOPs. Moreover, on ImageNet HAP achieves 75.1% top-1 accuracy (0.5% degradation) with ResNet50 after pruning more than half of its parameters. Compared to the prior state-of-the-art HRank <cit.>, we achieve up to 2% higher accuracy with fewer parameters and FLOPs. The framework is open source and available online.
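The Hessian trace that serves as the sensitivity metric here is generally too expensive to compute exactly for a large network, but it can be estimated from Hessian-vector products alone using Hutchinson's method, which exploits the identity E[zᵀHz] = tr(H) for random Rademacher vectors z. The sketch below is a minimal NumPy illustration of that estimator on a toy explicit matrix; the function name and the example matrix are hypothetical and are not taken from the authors' released code.

```python
import numpy as np

def hutchinson_trace(hvp, dim, n_samples=1000, seed=0):
    """Estimate tr(H) via Hutchinson's method: E[z^T H z] = tr(H)
    when z has i.i.d. Rademacher (+/-1) entries. Only Hessian-vector
    products are required, never the full Hessian matrix."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=dim)
        total += z @ hvp(z)  # one quadratic form z^T H z per sample
    return total / n_samples

# Toy symmetric matrix standing in for the Hessian of the loss
# restricted to one channel group (hypothetical numbers).
H = np.array([
    [4.0, 0.1, 0.0,  0.0],
    [0.1, 1.0, 0.0,  0.0],
    [0.0, 0.0, 0.25, 0.1],
    [0.0, 0.0, 0.1,  0.01],
])
trace_est = hutchinson_trace(lambda v: H @ v, dim=4)
# The true trace is 5.26; the estimate concentrates around it, so
# channel groups can be ranked by estimated trace and the least
# sensitive ones selected for pruning.
```

In a real network, the `hvp` callable would be realized with automatic differentiation (a double-backward pass through the training loss), and the per-group trace estimate would be combined with the magnitude of the weights being removed to score each candidate channel group.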




Related research:

- Resource Efficient Neural Networks Using Hessian Based Pruning
- EigenDamage: Structured Pruning in the Kronecker-Factored Eigenbasis
- MLPruning: A Multilevel Structured Pruning Framework for Transformer-based Models
- Efficient Stein Variational Inference for Reliable Distribution-lossless Network Pruning
- WoodFisher: Efficient second-order approximations for model compression
- SOSP: Efficiently Capturing Global Correlations by Second-Order Structured Pruning
- Knapsack Pruning with Inner Distillation
