Robustness and Adaptation to Hidden Factors of Variation

03/03/2022
by William Paul, et al.

We tackle a specific, still not widely addressed, aspect of AI robustness: seeking invariance (insensitivity) of model performance to hidden factors of variation in the data. To this end, we employ a two-step strategy that a) performs unsupervised discovery, via generative models, of sensitive factors that cause models to under-perform, and b) intervenes on models to make their performance invariant to the influence of these sensitive factors. We consider three separate interventions for robustness: data augmentation, semantic consistency, and adversarial alignment. We evaluate our method using metrics that measure the trade-off between invariance (insensitivity) and overall performance (utility), and show its benefits in three settings (unsupervised, semi-supervised, and generalization).
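The data-augmentation intervention from step b) can be illustrated with a minimal, purely hypothetical sketch: once step a) has identified a sensitive latent factor, counterfactual training samples are synthesized by shifting that factor and decoding. Here the linear `decode` is a stand-in for a pretrained generative model, and the factor index is assumed known; none of these names come from the paper itself.

```python
import numpy as np

# Illustrative setup: a fixed linear "decoder" stands in for a pretrained
# generative model G(z); latent coordinate 0 plays the role of the
# discovered sensitive factor (both are assumptions for this sketch).
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))           # hypothetical decoder weights

def decode(z):
    """G(z): map latents to samples (linear stand-in)."""
    return z @ W

def augment_along_factor(z, factor=0, deltas=(-1.0, 1.0)):
    """Synthesize counterfactuals by shifting one sensitive latent factor.

    For each delta, copy the latents, shift the sensitive coordinate,
    and decode; the augmented samples keep the original labels, which
    encourages the downstream model to become insensitive to the factor.
    """
    out = []
    for d in deltas:
        z_shift = z.copy()
        z_shift[:, factor] += d
        out.append(decode(z_shift))
    return np.concatenate(out, axis=0)

z = rng.normal(size=(4, 8))            # latents for 4 training samples
aug = augment_along_factor(z)
print(aug.shape)                       # (8, 16): 2 deltas x 4 samples
```

In practice the decoder would be a trained generative model and the shifts would be chosen to span the discovered factor's range, but the augmentation loop has the same shape.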

