DOI: Divergence-based Out-of-Distribution Indicators via Deep Generative Models

by Wenxiao Chen, et al.

To ensure robust and reliable classification results, OoD (out-of-distribution) indicators based on deep generative models have recently been proposed and shown to work well on small datasets. In this paper, we conduct the first large collection of benchmarks (containing 92 dataset pairs, an order of magnitude larger than previous ones) for existing OoD indicators and observe that none perform well. We thus advocate that a large collection of benchmarks is mandatory for evaluating OoD indicators. We propose a novel theoretical framework, DOI, for divergence-based (instead of traditional likelihood-based) out-of-distribution indicators in deep generative models. Following this framework, we further propose a simple and effective OoD detection algorithm: Single-shot Fine-tune. It significantly outperforms past works by 5–8 in AUROC, and its performance is close to optimal. Recently, the likelihood criterion has been shown to be ineffective at detecting OoD. Single-shot Fine-tune instead proposes a novel fine-tune criterion: a testing sample is flagged as OoD according to whether its likelihood improves after fine-tuning a well-trained model on it. The fine-tune criterion is clear and easy to follow, and may lead the OoD detection domain into a new stage.
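The fine-tune criterion described above can be illustrated with a minimal sketch. The paper's actual method operates on deep generative models; here, purely as an assumption for illustration, a 1-D Gaussian density stands in for the model, and "fine-tuning" is a single gradient step on the log-likelihood with respect to the mean. The function name `single_shot_finetune_score` and all parameters are hypothetical, not from the paper.

```python
import math

def log_likelihood(x, mu, sigma):
    """Log-density of x under a 1-D Gaussian N(mu, sigma^2)."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def single_shot_finetune_score(x, mu, sigma, lr=0.1, steps=1):
    """Likelihood improvement after fine-tuning the model on x alone.

    Toy stand-in for the fine-tune criterion: fine-tune a well-trained
    model on a single test sample and measure how much its likelihood
    under the model improves.
    """
    ll_before = log_likelihood(x, mu, sigma)
    m = mu
    for _ in range(steps):
        # Gradient of the log-likelihood w.r.t. mu is (x - mu) / sigma^2.
        m += lr * (x - m) / sigma ** 2
    ll_after = log_likelihood(x, m, sigma)
    return ll_after - ll_before

# An in-distribution sample near the training mean improves little;
# an OoD sample far away improves a lot, so a large improvement flags OoD.
in_dist_score = single_shot_finetune_score(0.1, mu=0.0, sigma=1.0)
ood_score = single_shot_finetune_score(5.0, mu=0.0, sigma=1.0)
print(in_dist_score < ood_score)  # True under this toy setup
```

The intuition this sketch captures: a well-trained model already fits in-distribution samples, so fine-tuning on one changes little, whereas an OoD sample leaves room for a large single-step likelihood gain.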


