DiSparse: Disentangled Sparsification for Multitask Model Compression

06/09/2022
by   Xinglong Sun, et al.

Despite the popularity of Model Compression and Multitask Learning, how to effectively compress a multitask model has been less thoroughly analyzed due to the challenging entanglement of tasks in the parameter space. In this paper, we propose DiSparse, a simple, effective, and first-of-its-kind multitask pruning and sparse training scheme. We consider each task independently by disentangling the importance measurement, and we prune or select parameters only when the decision is unanimous across all tasks. Our experimental results demonstrate superior performance on various configurations and settings compared to popular sparse training and pruning methods. Besides its effectiveness in compression, DiSparse also provides a powerful tool for the multitask learning community. Surprisingly, in several cases we even observed better performance than some dedicated multitask learning methods, despite the high model sparsity enforced by DiSparse. Analyzing the pruning masks generated by DiSparse, we observed strikingly similar sparse network architectures identified by each task, even before training starts. We also observe the existence of a "watershed" layer at which task relatedness drops sharply, implying no benefit from continued parameter sharing beyond it. Our code and models will be available at: https://github.com/SHI-Labs/DiSparse-Multitask-Model-Compression.
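To make the "disentangled importance, unanimous decision" idea above concrete, here is a minimal NumPy sketch. It is not the official implementation (see the repository linked above for that): the per-task importance score (|weight x gradient|), the top-k keep rule, the helper names per_task_keep_mask and unanimous_prune_mask, and the direction of the vote (a shared parameter is pruned only when every task marks it prunable) are all illustrative assumptions.

```python
# Minimal sketch of per-task (disentangled) importance scoring followed by a
# unanimous pruning vote on shared parameters. NOT the official DiSparse code;
# the importance criterion and the vote direction are assumptions.
import numpy as np

def per_task_keep_mask(importance: np.ndarray, sparsity: float) -> np.ndarray:
    """Keep the top (1 - sparsity) fraction of parameters for one task."""
    k = int(round((1.0 - sparsity) * importance.size))
    threshold = np.sort(importance.ravel())[::-1][max(k - 1, 0)]
    return importance >= threshold

def unanimous_prune_mask(task_importances: list, sparsity: float) -> np.ndarray:
    """Prune a shared parameter only if *every* task marks it prunable,
    i.e. keep it if at least one task wants to keep it."""
    keep_any = np.zeros_like(task_importances[0], dtype=bool)
    for imp in task_importances:
        keep_any |= per_task_keep_mask(imp, sparsity)
    return keep_any  # True = keep, False = prune

# Toy usage: one shared weight tensor, two tasks, one gradient per task.
rng = np.random.default_rng(0)
shared_w = rng.normal(size=(64, 64))
grads = [rng.normal(size=(64, 64)) for _ in range(2)]
importances = [np.abs(shared_w * g) for g in grads]  # disentangled, per-task scores
mask = unanimous_prune_mask(importances, sparsity=0.9)
print("kept fraction:", mask.mean())
```

Note that in this sketch the union of per-task keep masks can retain more parameters than any single task's budget, so the printed kept fraction is at least 1 - sparsity rather than exactly equal to it.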


Related research

Learning Compact Neural Networks with Deep Overparameterised Multitask Learning (08/25/2023)
Compact neural network offers many benefits for real-world applications....

Pruning Pretrained Encoders with a Multitask Objective (12/10/2021)
The sizes of pretrained language models make them challenging and expens...

Reduction of Class Activation Uncertainty with Background Information (05/05/2023)
Multitask learning is a popular approach to training high-performing neu...

Performance-aware Approximation of Global Channel Pruning for Multitask CNNs (03/21/2023)
Global channel pruning (GCP) aims to remove a subset of channels (filter...

FAMO: Fast Adaptive Multitask Optimization (06/06/2023)
One of the grand enduring goals of AI is to create generalist agents tha...

Auto-Compressing Subset Pruning for Semantic Image Segmentation (01/26/2022)
State-of-the-art semantic segmentation models are characterized by high ...

Extending Unsupervised Neural Image Compression With Supervised Multitask Learning (04/15/2020)
We focus on the problem of training convolutional neural networks on gig...
