SiPPing Neural Networks: Sensitivity-informed Provable Pruning of Neural Networks

10/11/2019
by Cenk Baykal, et al.

We introduce a pruning algorithm that provably sparsifies the parameters of a trained model in a way that approximately preserves the model's predictive accuracy. Our algorithm uses a small batch of input points to construct a data-informed importance sampling distribution over the network's parameters, and adaptively mixes a sampling-based and deterministic pruning procedure to discard redundant weights. Our pruning method is simultaneously computationally efficient, provably accurate, and broadly applicable to various network architectures and data distributions. Our empirical comparisons show that our algorithm reliably generates highly compressed networks that incur minimal loss in performance relative to that of the original network. We present experimental results that demonstrate our algorithm's potential to unearth essential network connections that can be trained successfully in isolation, which may be of independent interest.
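The abstract describes the high-level recipe: score each weight with a data-informed sensitivity computed from a small batch of inputs, then retain weights by combining deterministic selection of the most sensitive entries with importance sampling over the rest. Below is a minimal sketch of that idea for a single dense layer; the sensitivity formula, the even split between deterministic and sampled selections, and the function name sipp_prune_layer are illustrative assumptions, not the paper's exact procedure or guarantees.

```python
import numpy as np

def sipp_prune_layer(W, A, keep_budget, rng=None):
    """Illustrative sensitivity-informed pruning of one dense layer.

    W: (out, in) weight matrix; A: (batch, in) activations from a small
    batch of input points; keep_budget: number of weights to keep per neuron.
    """
    rng = np.random.default_rng() if rng is None else rng
    W_pruned = np.zeros_like(W)
    for i in range(W.shape[0]):                      # prune each neuron independently
        # Empirical "sensitivity" (a simplification): the maximum relative
        # contribution of each weight to the neuron's pre-activation over the batch.
        contrib = np.abs(A * W[i])                   # (batch, in)
        total = contrib.sum(axis=1, keepdims=True) + 1e-12
        sens = (contrib / total).max(axis=0)         # (in,)
        probs = sens / sens.sum()

        # Deterministically keep the highest-sensitivity weights, then fill
        # the remaining budget by importance sampling over all weights.
        n_det = keep_budget // 2
        keep = np.zeros(W.shape[1], dtype=bool)
        keep[np.argsort(-sens)[:n_det]] = True
        sampled = rng.choice(W.shape[1], size=keep_budget - n_det, replace=True, p=probs)
        keep[sampled] = True

        W_pruned[i, keep] = W[i, keep]               # discard all other weights
    return W_pruned

# Example usage: prune a random 64x256 layer to roughly 32 weights per neuron.
W = np.random.randn(64, 256)
A = np.abs(np.random.randn(128, 256))                # small batch of activations
W_small = sipp_prune_layer(W, A, keep_budget=32)
print(f"average nonzeros per neuron: {np.count_nonzero(W_small, axis=1).mean():.1f}")
```

In this sketch the deterministic step captures the few weights whose removal would change the output most, while the sampled step keeps the retained set unbiased over the remaining weights; the paper's adaptive mixing of the two procedures is what carries the approximation guarantee.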

Related research

Provable Filter Pruning for Efficient Neural Networks (11/18/2019)
We present a provable, sampling-based approach for generating compact Co...

Data-Dependent Coresets for Compressing Neural Networks with Applications to Generalization Bounds (04/15/2018)
The deployment of state-of-the-art neural networks containing millions o...

Data-Efficient Structured Pruning via Submodular Optimization (03/09/2022)
Structured pruning is an effective approach for compressing large pre-tr...

SNIP: Single-shot Network Pruning based on Connection Sensitivity (10/04/2018)
Pruning large neural networks while maintaining the performance is often...

Pruning Deep Neural Networks from a Sparsity Perspective (02/11/2023)
In recent years, deep network pruning has attracted significant attentio...

Good Subnetworks Provably Exist: Pruning via Greedy Forward Selection (03/03/2020)
Recent empirical works show that large deep neural networks are often hi...

Greedy Optimization Provably Wins the Lottery: Logarithmic Number of Winning Tickets is Enough (10/29/2020)
Despite the great success of deep learning, recent works show that large...
