PEFT-Ref: A Modular Reference Architecture and Typology for Parameter-Efficient Finetuning Techniques

04/24/2023
by Mohammed Sabry et al.

Recent parameter-efficient finetuning (PEFT) techniques aim to reduce the considerable cost of fully finetuning large pretrained language models (PLMs). As different PEFT techniques proliferate, it is becoming difficult to compare them, in particular in terms of (i) the structure and functionality they add to the PLM, (ii) the different types and degrees of efficiency improvements achieved, (iii) performance on different downstream tasks, and (iv) how differences in structure and functionality relate to efficiency and task performance. To facilitate such comparisons, this paper presents a reference framework which standardises aspects shared by different PEFT techniques, while isolating differences to specific locations and interactions with the standard components. Through this process of standardising and isolating differences, a modular view of PEFT techniques emerges, supporting not only direct comparison of different techniques and their efficiency and task performance, but also systematic exploration of the reusability and composability of the different types of finetuned modules. We demonstrate how the reference framework can be applied to understand the properties and relative advantages of PEFT techniques, and hence to inform both the selection of techniques for specific tasks and design choices for new PEFT techniques.
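
To make concrete the kind of structure a PEFT technique adds to a PLM, below is a minimal PyTorch sketch of one representative technique, LoRA-style low-rank adaptation. This is an illustrative assumption for exposition, not the paper's reference architecture: the class name, rank r, and scaling factor alpha are hypothetical choices.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen pretrained linear layer with a trainable low-rank update."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pretrained weights stay frozen
        # Low-rank factors A (d_in x r) and B (r x d_out); only these are finetuned.
        # B starts at zero so the wrapped layer initially behaves exactly like the base.
        self.A = nn.Parameter(torch.randn(base.in_features, r) * 0.01)
        self.B = nn.Parameter(torch.zeros(r, base.out_features))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Pretrained output plus the scaled low-rank correction x @ A @ B.
        return self.base(x) + (x @ self.A @ self.B) * self.scale

Under the modular view the paper advocates, only A, B, and the scale constitute the finetuned module; the frozen base layer is shared across tasks, which is what makes such modules candidates for reuse and composition.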
