Pruning Pretrained Encoders with a Multitask Objective

12/10/2021
by Patrick Xia, et al.

The sizes of pretrained language models make them challenging and expensive to use when there are multiple desired downstream tasks. In this work, we adopt recent strategies for model pruning during finetuning to explore the question of whether it is possible to prune a single encoder so that it can be used for multiple tasks. We allocate a fixed parameter budget and compare pruning a single model with a multitask objective against the best ensemble of single-task models. We find that under two pruning strategies (element-wise and rank pruning), the approach with the multitask objective outperforms training models separately when averaged across all tasks, and it is competitive on each individual one. Additional analysis finds that using a multitask objective during pruning can also be an effective method for reducing model sizes for low-resource tasks.
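As a rough illustration of the setup described in the abstract, the sketch below trains a single shared encoder with a summed multitask objective and then applies element-wise (magnitude) pruning to the encoder's weights. The toy encoder, task heads, sparsity level, and training loop are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed, not the paper's code): multitask finetuning of a shared
# encoder followed by element-wise magnitude pruning of the encoder's weights.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Stand-in for a pretrained encoder (e.g. a Transformer); here a small MLP."""
    def __init__(self, dim=128):
        super().__init__()
        self.layers = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return self.layers(x)

encoder = SharedEncoder()
heads = nn.ModuleDict({
    "task_a": nn.Linear(128, 3),   # hypothetical 3-class task head
    "task_b": nn.Linear(128, 2),   # hypothetical 2-class task head
})
optimizer = torch.optim.Adam(list(encoder.parameters()) + list(heads.parameters()), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def multitask_step(batches):
    """Sum the per-task losses so one backward pass updates the shared encoder."""
    optimizer.zero_grad()
    total = 0.0
    for task, (x, y) in batches.items():
        logits = heads[task](encoder(x))
        total = total + loss_fn(logits, y)
    total.backward()
    optimizer.step()
    return float(total)

def magnitude_prune(module, sparsity=0.5):
    """Element-wise pruning: zero the smallest-magnitude weights in each Linear layer."""
    with torch.no_grad():
        for m in module.modules():
            if isinstance(m, nn.Linear):
                k = max(1, int(sparsity * m.weight.numel()))
                threshold = m.weight.abs().flatten().kthvalue(k).values
                m.weight.mul_((m.weight.abs() > threshold).float())

# Toy usage: two tasks with random data sharing the encoder under one parameter budget.
batches = {
    "task_a": (torch.randn(8, 128), torch.randint(0, 3, (8,))),
    "task_b": (torch.randn(8, 128), torch.randint(0, 2, (8,))),
}
for _ in range(10):
    multitask_step(batches)
magnitude_prune(encoder, sparsity=0.5)
```

The key design choice mirrored here is that all tasks share one pruned encoder, rather than maintaining a separately pruned model per task; the paper's rank-pruning variant and pruning-during-finetuning schedule are not shown.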

Related research

03/21/2023 · Performance-aware Approximation of Global Channel Pruning for Multitask CNNs
Global channel pruning (GCP) aims to remove a subset of channels (filter...

06/09/2022 · DiSparse: Disentangled Sparsification for Multitask Model Compression
Despite the popularity of Model Compression and Multitask Learning, how ...

09/30/2022 · Depth-Wise Attention (DWAtt): A Layer Fusion Method for Data-Efficient Classification
Language Models pretrained on large textual data have been shown to enco...

03/06/2023 · Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning
Prompt tuning, in which a base pretrained model is adapted to each task ...

08/28/2023 · FonMTL: Towards Multitask Learning for the Fon Language
The Fon language, spoken by an average 2 million of people, is a truly l...

04/01/2021 · StyleML: Stylometry with Structure and Multitask Learning for Darkweb Markets
Darknet market forums are frequently used to exchange illegal goods and ...

02/23/2022 · Reconstruction Task Finds Universal Winning Tickets
Pruning well-trained neural networks is effective to achieve a promising...
