Compressing Neural Networks: Towards Determining the Optimal Layer-wise Decomposition

07/23/2021
by   Lucas Liebenwein, et al.

We present a novel global compression framework for deep neural networks that automatically analyzes each layer to identify the optimal per-layer compression ratio, while simultaneously achieving the desired overall compression. Our algorithm hinges on the idea of compressing each convolutional (or fully-connected) layer by slicing its channels into multiple groups and decomposing each group via low-rank decomposition. At the core of our algorithm is the derivation of layer-wise error bounds from the Eckart-Young-Mirsky theorem. We then leverage these bounds to frame the compression problem as an optimization problem in which we minimize the maximum compression error across layers, and we propose an efficient algorithm to solve it. Our experiments indicate that our method outperforms existing low-rank compression approaches across a wide range of networks and data sets. We believe that our results open up new avenues for future research into the global performance-size trade-offs of modern neural networks. Our code is available at https://github.com/lucaslie/torchprune.
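The two ingredients the abstract describes can be illustrated in a few lines of NumPy. The sketch below is not the authors' implementation (their code is in the torchprune repository); `low_rank_compress` and `allocate_ranks` are hypothetical names, and the allocation step here is a simplified stand-in: it binary-searches a shared error threshold eps and picks, per layer, the smallest rank whose Eckart-Young-Mirsky error stays below eps, subject to a total parameter budget. This captures the min-max flavor of the optimization without the paper's channel-grouping or its exact algorithm.

```python
import numpy as np

def truncation_error(s, k):
    """Frobenius error of the best rank-k approximation (Eckart-Young-Mirsky)."""
    return float(np.sqrt(np.sum(s[k:] ** 2)))

def low_rank_compress(W, k):
    """Return the best rank-k approximation of W and its exact Frobenius error."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    W_hat = (U[:, :k] * s[:k]) @ Vt[:k]
    return W_hat, truncation_error(s, k)

def allocate_ranks(weights, budget):
    """Simplified min-max allocation (illustrative, not the paper's algorithm):
    bisect the smallest error threshold eps such that choosing, per layer,
    the smallest rank with truncation error <= eps fits the parameter budget.
    """
    svals = [np.linalg.svd(W, compute_uv=False) for W in weights]

    def ranks_for(eps):
        # smallest k per layer whose Eckart-Young-Mirsky error is <= eps
        return [max(next(k for k in range(len(s) + 1)
                         if truncation_error(s, k) <= eps), 1)
                for s in svals]

    def params(ranks):
        # a rank-k factorization of an m x n matrix stores k*(m + n) values
        return sum(k * (W.shape[0] + W.shape[1])
                   for k, W in zip(ranks, weights))

    lo, hi = 0.0, max(truncation_error(s, 0) for s in svals)
    for _ in range(50):  # bisection on the shared error threshold
        mid = (lo + hi) / 2
        if params(ranks_for(mid)) <= budget:
            hi = mid  # feasible: try a tighter (smaller) error threshold
        else:
            lo = mid
    return ranks_for(hi)
```

For a single layer, the returned error is exact, not just a bound: by Eckart-Young-Mirsky, SVD truncation achieves the minimum Frobenius error among all rank-k matrices, and that error equals the root-sum-square of the discarded singular values.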


