SeReNe: Sensitivity based Regularization of Neurons for Structured Sparsity in Neural Networks

02/07/2021
by Enzo Tartaglione, et al.

Deep neural networks include millions of learnable parameters, making their deployment on resource-constrained devices problematic. SeReNe (Sensitivity-based Regularization of Neurons) is a method for learning sparse topologies with a structure, exploiting neuron sensitivity as a regularizer. We define the sensitivity of a neuron as the variation of the network output with respect to the variation of the activity of the neuron: the lower a neuron's sensitivity, the less the network output is perturbed when that neuron's output changes. By including neuron sensitivity in the cost function as a regularization term, we are able to prune neurons with low sensitivity. Since entire neurons are pruned rather than single parameters, a practical reduction of the network footprint becomes possible. Our experimental results on multiple network architectures and datasets yield competitive compression ratios with respect to state-of-the-art references.
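To make the idea concrete, below is a minimal PyTorch sketch of one plausible instantiation: each hidden neuron's sensitivity is estimated as the mean absolute gradient of the network output with respect to its activation, the complement of that sensitivity modulates a weight-decay term on the neuron's incoming weights, and neurons whose sensitivity stays low are removed wholesale. The network, the layer sizes, the sensitivity proxy, `lam`, and the pruning threshold are all illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

# Toy two-layer classifier; the hidden units are the "neurons" whose
# sensitivity is estimated. Sizes and names are illustrative assumptions.
class TwoLayerNet(nn.Module):
    def __init__(self, d_in=784, d_hidden=300, d_out=10):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hidden)
        self.fc2 = nn.Linear(d_hidden, d_out)

    def forward(self, x):
        self.hidden = torch.relu(self.fc1(x))  # cached for the sensitivity term
        return self.fc2(self.hidden)

model = TwoLayerNet()
criterion = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
lam = 1e-4                           # regularization strength (assumed value)

x = torch.randn(64, 784)             # stand-in batch
y = torch.randint(0, 10, (64,))

out = model(x)

# Sensitivity proxy: mean |d(summed output) / d(activation)| per hidden neuron.
# retain_graph=True keeps the graph alive for the task-loss backward below.
grads = torch.autograd.grad(out.sum(), model.hidden, retain_graph=True)[0]
sensitivity = grads.abs().mean(dim=0)  # one value per hidden neuron

# Insensitivity-weighted decay: low-sensitivity neurons receive stronger
# shrinkage of their incoming weights, driving whole rows of fc1 toward zero.
insens = (1.0 - sensitivity.clamp(max=1.0)).detach()
reg = (insens.unsqueeze(1) * model.fc1.weight.pow(2)).sum()

loss = criterion(out, y) + lam * reg
opt.zero_grad()
loss.backward()
opt.step()

# Structured pruning: remove entire neurons (rows of fc1) whose sensitivity
# stays below a hypothetical threshold.
with torch.no_grad():
    dead = sensitivity < 1e-3
    model.fc1.weight[dead] = 0.0
    model.fc1.bias[dead] = 0.0
```

Because whole rows of `fc1` (and the matching bias entries) are zeroed rather than scattered individual weights, the pruned layer can actually be rebuilt with fewer units, which is what makes the footprint reduction practical in the abstract's sense.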

