How to Estimate Model Transferability of Pre-Trained Speech Models?

by   Zih-Ching Chen, et al.

In this work, we introduce a “score-based assessment” framework for estimating the transferability of pre-trained speech models (PSMs) to fine-tuning target tasks. We leverage two representation theories, Bayesian likelihood estimation and optimal transport, to generate rank scores for candidate PSMs from their extracted representations. By making a temporal independence hypothesis, our framework efficiently computes transferability scores without actually fine-tuning the candidate models or layers. We evaluate popular supervised speech models (e.g., Conformer RNN-Transducer) and self-supervised speech models (e.g., HuBERT) in cross-layer and cross-model settings using public data. Experimental results show a high Spearman's rank correlation and low p-value between our estimated rankings and the fine-tuning ground truth. Our proposed transferability framework requires less computational time and fewer resources, making it a resource-saving and time-efficient approach for tuning speech foundation models.
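The evaluation metric described above, Spearman's rank correlation between estimated transferability scores and fine-tuning ground truth, can be illustrated with a small sketch. The model names and score values below are hypothetical, purely for demonstration; the sketch implements the rank correlation in plain Python rather than the paper's actual evaluation code.

```python
def average_ranks(values):
    """Return 1-based ranks for values; ties receive the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Group equal values so ties share an averaged rank
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical transferability scores for candidate PSMs (higher = better predicted transfer)
scores = [0.91, 0.82, 0.74, 0.88]
# Hypothetical fine-tuning ground-truth accuracies on the target task, same model order
finetune_acc = [0.78, 0.71, 0.65, 0.75]

rho = spearman_rho(scores, finetune_acc)
print(f"Spearman rho = {rho:.3f}")  # a rho near 1 means the ranking predicts fine-tuning well
```

A rho close to 1 indicates that ranking models by the estimated score reproduces the ranking obtained by exhaustive fine-tuning, which is the property the framework is evaluated on.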


