Task-Specific Skill Localization in Fine-tuned Language Models

by Abhishek Panigrahi, et al.

Pre-trained language models can be fine-tuned to solve diverse NLP tasks, including in few-shot settings. Thus fine-tuning allows the model to quickly pick up task-specific “skills,” but there has been limited study of where these newly-learnt skills reside inside the massive model. This paper introduces the term skill localization for this problem and proposes a solution. Given the downstream task and a model fine-tuned on that task, a simple optimization is used to identify a very small subset of parameters (∼0.01% of model parameters) responsible for (>95% of) the model’s performance, in the sense that grafting the fine-tuned values for just this tiny subset onto the pre-trained model gives performance almost as well as the fine-tuned model. While reminiscent of recent works on parameter-efficient fine-tuning, the novel aspects here are that: (i) No further re-training is needed on the subset (unlike, say, with lottery tickets). (ii) Notable improvements are seen over vanilla fine-tuning with respect to calibration of predictions in-distribution (40–90% error reduction), as well as the quality of predictions out-of-distribution (OOD). In models trained on multiple tasks, a stronger notion of skill localization is observed, where the sparse regions corresponding to different tasks are almost disjoint, and their overlap (when it happens) is a proxy for task similarity. Experiments suggest that localization via grafting can assist certain forms of continual learning.
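The grafting operation described in the abstract can be sketched in a few lines: given the pre-trained and fine-tuned parameter values plus a sparse binary mask over parameter locations, the grafted model takes the fine-tuned value wherever the mask is set and the pre-trained value everywhere else. This is a minimal NumPy illustration under assumed conventions (parameters as a dict of arrays, a `graft` helper name); it is not the authors' code.

```python
import numpy as np

def graft(pretrained, finetuned, mask):
    """Build a grafted model: fine-tuned values at masked ("skill")
    locations, pre-trained values everywhere else."""
    return {
        name: np.where(mask[name], finetuned[name], pretrained[name])
        for name in pretrained
    }

# Toy example: a single 4-element parameter tensor where only the
# first two coordinates belong to the identified sparse subset.
pre = {"w": np.array([0.1, 0.2, 0.3, 0.4])}
fin = {"w": np.array([0.9, 0.8, 0.7, 0.6])}
m = {"w": np.array([True, True, False, False])}
grafted = graft(pre, fin, m)
print(grafted["w"])  # → [0.9 0.8 0.3 0.4]
```

Note that, unlike lottery-ticket-style pruning, no re-training happens after this step: the grafted parameters are used as-is for evaluation.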




