LIFT: Language-Interfaced Fine-Tuning for Non-Language Machine Learning Tasks

06/14/2022
by Tuan Dinh, et al.

Fine-tuning pretrained language models (LMs) without making any architectural changes has become the norm for learning various language downstream tasks. However, for non-language downstream tasks, a common practice is to employ task-specific designs for the input layer, output layer, and loss function. For instance, it is possible to fine-tune an LM into an MNIST classifier by replacing the word embedding layer with an image patch embedding layer, the word token output layer with a 10-way output layer, and the word prediction loss with a 10-way classification loss. A natural question arises: can LM fine-tuning solve non-language downstream tasks without changing the model architecture or loss function? To answer this, we propose Language-Interfaced Fine-Tuning (LIFT) and study its efficacy and limitations by conducting an extensive empirical study on a suite of non-language classification and regression tasks. LIFT makes no changes to the model architecture or loss function and relies solely on the natural language interface, enabling "no-code machine learning with LMs." We find that LIFT performs relatively well across a wide range of low-dimensional classification and regression tasks, matching the performance of the best baselines in many cases, especially for classification tasks. We report experimental results on the fundamental properties of LIFT, including its inductive bias, sample efficiency, ability to extrapolate, robustness to outliers and label noise, and generalization. We also analyze several properties and techniques specific to LIFT, e.g., context-aware learning via appropriate prompting, quantification of predictive uncertainty, and two-stage fine-tuning. Our code is available at https://github.com/UW-Madison-Lee-Lab/LanguageInterfacedFineTuning.
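
To make the "natural language interface" concrete, here is a minimal sketch of how a single tabular example might be serialized into a prompt/completion text pair for standard LM fine-tuning. The feature names and the prompt template are illustrative assumptions, not the exact format used in the LIFT repository.

```python
# Minimal sketch of the LIFT idea: a non-language (tabular) example is
# serialized into plain text so a pretrained LM can be fine-tuned on it
# with its ordinary next-token objective, without new layers or a new loss.
# The feature names and template below are illustrative only.

def to_prompt(features, label=None):
    """Turn one example into a (prompt, completion) text pair."""
    described = ", ".join(f"{name} is {value}" for name, value in features.items())
    prompt = f"Given that {described}, what is the class?"
    completion = None if label is None else f" {label}"
    return prompt, completion

if __name__ == "__main__":
    # One Iris-style row, purely for illustration.
    x = {"sepal length": 5.1, "sepal width": 3.5,
         "petal length": 1.4, "petal width": 0.2}
    prompt, completion = to_prompt(x, label="setosa")
    print(prompt)      # text fed to the LM as input
    print(completion)  # text the LM is fine-tuned to generate
```

At inference time the same serialization is applied to a test example without the label, and the LM's generated completion is read back as the prediction.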

Related research

04/11/2023 · Towards Efficient Fine-tuning of Pre-trained Code Models: An Experimental Study and Beyond
Recently, fine-tuning pre-trained code models such as CodeBERT on downst...

06/07/2022 · DynaMaR: Dynamic Prompt with Mask Token Representation
Recent research has shown that large language models pretrained using un...

03/22/2022 · Task-guided Disentangled Tuning for Pretrained Language Models
Pretrained language models (PLMs) trained on large-scale unlabeled corpu...

05/02/2022 · Robust Fine-tuning via Perturbation and Interpolation from In-batch Instances
Fine-tuning pretrained language models (PLMs) on downstream tasks has be...

03/13/2023 · Model-tuning Via Prompts Makes NLP Models Adversarially Robust
In recent years, NLP practitioners have converged on the following pract...

03/23/2023 · MELTR: Meta Loss Transformer for Learning to Fine-tune Video Foundation Models
Foundation models have shown outstanding performance and generalization ...

11/28/2022 · Distance Metric Learning Loss Functions in Few-Shot Scenarios of Supervised Language Models Fine-Tuning
This paper presents an analysis regarding an influence of the Distance M...
