SynBench: Task-Agnostic Benchmarking of Pretrained Representations using Synthetic Data

by   Ching-Yun Ko, et al.

The recent success of fine-tuning large models, pretrained on broad data at scale, on downstream tasks has led to a significant paradigm shift in deep learning: from task-centric model design to task-agnostic representation learning followed by task-specific fine-tuning. Since the representations of pretrained models serve as a foundation for many different downstream tasks, this paper proposes a new task-agnostic framework, SynBench, that measures the quality of pretrained representations using synthetic data. We establish a reference from the theoretically derived robustness-accuracy tradeoff of a class-conditional Gaussian mixture. Given a pretrained model, the representations of data synthesized from this Gaussian mixture are compared against the reference to infer representation quality. By taking the ratio of the areas under the robustness-accuracy curves of the raw data and of their representations, SynBench yields a quantifiable score for robustness-accuracy benchmarking. Our framework applies to a wide range of pretrained models that take continuous data inputs and is independent of downstream tasks and datasets. Evaluated on several pretrained vision transformers, our experiments show that the SynBench score matches well with the actual linear-probing performance of the pretrained model on downstream tasks. Moreover, our framework can inform the design of robust linear probing on pretrained representations to mitigate the robustness-accuracy tradeoff in downstream tasks.
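To make the idea concrete, here is a minimal, heavily simplified sketch of a SynBench-style score. It is not the authors' implementation: we assume a symmetric two-class Gaussian mixture N(±μ, I), for which the optimal linear classifier's robust accuracy under an ℓ2 perturbation of radius ε has the closed form Φ(‖μ‖ − ε); the representation-side curve reuses that formula after re-estimating the class-mean gap and crudely rescaling by the pooled standard deviation (an approximation, since representations need not stay Gaussian). The stand-in "pretrained model" is just a fixed random linear map.

```python
import numpy as np
from math import erf, sqrt

def robust_accuracy_curve(mu_norm, eps_grid):
    """Robust accuracy Phi(||mu|| - eps) for the symmetric Gaussian pair N(+-mu, I)."""
    def phi(z):  # standard normal CDF via erf
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))
    return np.array([phi(mu_norm - e) for e in eps_grid])

def synbench_like_score(model, mu, n=2000, eps_max=2.0, seed=0):
    rng = np.random.default_rng(seed)
    d = mu.shape[0]
    # Synthesize labelled data from the two-component Gaussian mixture.
    x_pos = rng.standard_normal((n, d)) + mu
    x_neg = rng.standard_normal((n, d)) - mu
    eps = np.linspace(0.0, eps_max, 50)
    # Reference: area under the raw-input robustness-accuracy curve
    # (rectangle rule: mean height times interval length).
    auc_raw = robust_accuracy_curve(np.linalg.norm(mu), eps).mean() * eps_max
    # Representation side: embed, re-estimate the class-mean gap, rescale
    # by the pooled per-dimension std, and reuse the same Gaussian formula.
    z_pos, z_neg = model(x_pos), model(x_neg)
    gap = (z_pos.mean(axis=0) - z_neg.mean(axis=0)) / 2.0
    scale = np.concatenate([z_pos, z_neg]).std(axis=0).mean() + 1e-12
    auc_rep = robust_accuracy_curve(np.linalg.norm(gap) / scale, eps).mean() * eps_max
    # SynBench-style score: ratio of representation AUC to the reference AUC.
    return auc_rep / auc_raw

# Toy "pretrained model": a fixed random linear map (a stand-in only).
rng = np.random.default_rng(1)
W = rng.standard_normal((8, 8)) / sqrt(8)
score = synbench_like_score(lambda x: x @ W.T, mu=np.full(8, 0.5))
print(score)
```

A score near 1 indicates the (toy) representation preserves roughly the raw data's robustness-accuracy tradeoff; lower scores indicate degradation. The actual paper's construction (reference derivation, probing setup, and scoring details) is richer than this sketch.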


