Using Language to Extend to Unseen Domains

10/18/2022
by Lisa Dunlap, et al.

It is expensive to collect training data for every possible domain that a vision model may encounter when deployed. We instead consider how simply verbalizing the training domain (e.g. "photos of birds") as well as domains we want to extend to but do not have data for (e.g. "paintings of birds") can improve robustness. Using a multimodal model with a joint image and language embedding space, our method LADS learns a transformation of the image embeddings from the training domain to each unseen test domain, while preserving task-relevant information. Without using any images from the unseen test domain, we show that over the extended domain containing both training and unseen test domains, LADS outperforms standard fine-tuning and ensemble approaches on a suite of four benchmarks targeting domain adaptation and dataset bias.
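To make the idea in the abstract concrete, the sketch below shows one way a transformation of image embeddings could be trained from text alone in a joint image-text embedding space. This is a minimal sketch, not the authors' released implementation: the two-layer MLP augmentation network, the two loss terms, the 0.07 temperature, and the random stand-in embeddings (which would come from a frozen CLIP-like encoder in practice) are all illustrative assumptions.

```python
# Minimal sketch of the LADS idea described in the abstract (not the official code).
# Assumes a frozen CLIP-like model providing a joint image/text embedding space;
# here the embeddings are random stand-in tensors.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMB = 512  # embedding dimension of the joint space (assumption)

# Stand-ins for frozen CLIP embeddings (replace with real encoder outputs).
train_img_emb = F.normalize(torch.randn(256, EMB), dim=-1)   # "photos of birds"
labels = torch.randint(0, 10, (256,))                        # bird-species labels
class_text_emb = F.normalize(torch.randn(10, EMB), dim=-1)   # "a photo of a <species>"
src_dom_emb = F.normalize(torch.randn(EMB), dim=-1)          # text: "a photo of a bird"
tgt_dom_emb = F.normalize(torch.randn(EMB), dim=-1)          # text: "a painting of a bird"

# Learned augmentation: maps training-domain image embeddings toward the unseen
# domain, guided only by the *text* description of that domain.
aug = nn.Sequential(nn.Linear(EMB, EMB), nn.ReLU(), nn.Linear(EMB, EMB))
opt = torch.optim.Adam(aug.parameters(), lr=1e-3)

for step in range(200):
    z = F.normalize(aug(train_img_emb), dim=-1)

    # Domain-alignment loss: the transformed embedding should be closer to the
    # target-domain text embedding than to the source-domain one.
    dom_logits = torch.stack([z @ src_dom_emb, z @ tgt_dom_emb], dim=-1)
    loss_dom = F.cross_entropy(dom_logits / 0.07,
                               torch.ones(len(z), dtype=torch.long))

    # Class-consistency loss: the transformation must preserve task-relevant
    # information, i.e. the embedding should still match its class text prompt.
    cls_logits = z @ class_text_emb.T / 0.07
    loss_cls = F.cross_entropy(cls_logits, labels)

    loss = loss_dom + loss_cls
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Under these assumptions, the original and augmented embeddings could then be used together to fine-tune the downstream classifier, so that it covers both the training domain and the verbalized unseen domain without ever seeing images from the latter.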

Related research

Adaptive Methods for Real-World Domain Generalization (03/29/2021)
Invariant approaches have been remarkably successful in tackling the pro...

Improving both domain robustness and domain adaptability in machine translation (12/15/2021)
We address two problems of domain adaptation in neural machine translati...

Domain Generalization in Biosignal Classification (11/12/2020)
Objective: When training machine learning models, we often assume that t...

PADA: A Prompt-based Autoregressive Approach for Adaptation to Unseen Domains (02/24/2021)
Natural Language Processing algorithms have made incredible progress rec...

Image Alignment in Unseen Domains via Domain Deep Generalization (05/28/2019)
Image alignment across domains has recently become one of the realistic ...

Beyond Domain Adaptation: Unseen Domain Encapsulation via Universal Non-volume Preserving Models (12/09/2018)
Recognition across domains has recently become an active topic in the re...

A Scalable Model Specialization Framework for Training and Inference using Submodels and its Application to Speech Model Personalization (03/23/2022)
Model fine-tuning and adaptation have become a common approach for model...
