Offsite-Tuning: Transfer Learning without Full Model

by Guangxuan Xiao, et al.

Transfer learning is important for foundation models to adapt to downstream tasks. However, many foundation models are proprietary, so users must share their data with the model owners to fine-tune the models, which is costly and raises privacy concerns. Moreover, fine-tuning large foundation models is computation-intensive and impractical for most downstream users. In this paper, we propose Offsite-Tuning, a privacy-preserving and efficient transfer learning framework that can adapt billion-parameter foundation models to downstream data without access to the full model. In offsite-tuning, the model owner sends a lightweight adapter and a lossy compressed emulator to the data owner, who then fine-tunes the adapter on the downstream data with the emulator's assistance. The fine-tuned adapter is returned to the model owner, who plugs it into the full model to create an adapted foundation model. Offsite-tuning preserves both parties' privacy and is computationally more efficient than existing fine-tuning methods that require access to the full model weights. We demonstrate the effectiveness of offsite-tuning on various large language and vision foundation models: it achieves accuracy comparable to full-model fine-tuning while being privacy-preserving and efficient, with a 6.5x speedup and 5.6x memory reduction. Code is available at
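The protocol in the abstract (split off an adapter, compress the frozen middle into an emulator, fine-tune offsite, plug the adapter back) can be sketched with a toy model. Everything below is an illustrative assumption, not the paper's implementation: real offsite-tuning operates on transformer blocks, whereas here each "layer" is a single scalar gain and the emulator is built by uniform layer drop with a rescaling correction.

```python
import random

random.seed(0)

# Toy "foundation model": a chain of scalar linear layers (y = w * x).
# This stands in for a deep network purely for illustration.
full_weights = [1.1, 0.9, 1.05, 0.95, 1.0, 1.02, 0.98, 1.0]

def forward(weights, x):
    for w in weights:
        x = w * x
    return x

def make_emulator(middle, keep_every=2):
    """Lossy compression by uniform layer drop: keep every k-th layer,
    rescaled so the emulator's overall gain matches the full middle."""
    kept = middle[::keep_every]
    full_gain = 1.0
    for w in middle:
        full_gain *= w
    kept_gain = 1.0
    for w in kept:
        kept_gain *= w
    scale = (full_gain / kept_gain) ** (1.0 / len(kept))
    return [w * scale for w in kept]

# Model owner: split off the adapter (first and last layers) and build a
# compressed emulator of the frozen middle; only these two leave the owner.
adapter = [full_weights[0], full_weights[-1]]
emulator = make_emulator(full_weights[1:-1])

# Data owner: fine-tune the adapter on a downstream task (here, fitting
# y = 3x) with the emulator standing in for the frozen middle layers.
target_gain = 3.0
lr = 0.05
for _ in range(500):
    x = random.uniform(0.5, 1.5)
    mid_out = forward(emulator, adapter[0] * x)
    y = adapter[1] * mid_out
    err = y - target_gain * x
    adapter[0] -= lr * err * (y / adapter[0])  # d(0.5*err^2)/d adapter[0]
    adapter[1] -= lr * err * mid_out           # d(0.5*err^2)/d adapter[1]

# Model owner: plug the returned adapter back into the full model.
plugged = [adapter[0]] + full_weights[1:-1] + [adapter[1]]
print(forward(plugged, 1.0))  # close to 3.0
```

Because the emulator's overall gain is rescaled to match the dropped middle exactly, the adapter tuned against the emulator transfers to the full model; in practice the emulator is only an approximation, and the gap between emulator-side and plugged-in accuracy is what the paper measures.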


