Confidence-based Out-of-Distribution Detection: A Comparative Study and Analysis

07/06/2021
by Christoph Berger, et al.

Image classification models deployed in the real world may receive inputs outside the intended data distribution. For critical applications such as clinical decision making, it is important that a model can detect such out-of-distribution (OOD) inputs and express its uncertainty. In this work, we assess the capabilities of various state-of-the-art approaches to confidence-based OOD detection through a comparative study and in-depth analysis. First, we leverage a computer vision benchmark to reproduce and compare multiple OOD detection methods. We then evaluate their capabilities on the challenging task of disease classification using chest X-rays. Our study shows that high performance in a computer vision task does not directly translate to accuracy in a medical imaging task. We analyse the factors that affect the performance of these methods across the two tasks. Our results provide useful insights for developing the next generation of OOD detection methods.
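The common idea behind confidence-based OOD detection is to derive a scalar confidence score from the classifier's output and flag inputs whose score falls below a threshold as out-of-distribution. The sketch below illustrates this with the maximum softmax probability (MSP), a widely used baseline of this kind; it is not the paper's code, and the classifier, the threshold of 0.9, and the tensor shapes are assumptions for illustration.

```python
# Minimal sketch of confidence-based OOD detection using the maximum
# softmax probability (MSP) as the confidence score. Illustrative only;
# the classifier, threshold, and input shapes are assumptions.
import torch
import torch.nn.functional as F


@torch.no_grad()
def msp_confidence(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Return the maximum softmax probability per input as a confidence score."""
    logits = model(x)                   # shape: (batch, num_classes)
    probs = F.softmax(logits, dim=-1)
    return probs.max(dim=-1).values     # shape: (batch,)


def flag_ood(confidence: torch.Tensor, threshold: float = 0.9) -> torch.Tensor:
    """Flag inputs whose confidence falls below the threshold as OOD (True)."""
    return confidence < threshold


# Usage (illustrative): `classifier` is any trained image classifier and
# `batch` is a tensor of images shaped as the classifier expects.
# confidence = msp_confidence(classifier, batch)
# is_ood = flag_ood(confidence, threshold=0.9)
```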

Related research

- A Comparative Study of Confidence Calibration in Deep Learning: From Computer Vision to Medical Imaging (06/17/2022). Although deep learning prediction models have been successful in the dis...
- Failure Detection in Medical Image Classification: A Reality Check and Benchmarking Testbed (05/27/2022). Failure detection in automated image classification is a critical safegu...
- KS(conf): A Light-Weight Test if a ConvNet Operates Outside of Its Specifications (04/11/2018). Computer vision systems for automatic image categorization have become a...
- On the Impact of Spurious Correlation for Out-of-distribution Detection (09/12/2021). Modern neural networks can assign high confidence to inputs drawn from o...
- Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs (06/22/2023). The task of empowering large language models (LLMs) to accurately expres...
- AutoSeg – Steering the Inductive Biases for Automatic Pathology Segmentation (01/24/2022). In medical imaging, un-, semi-, or self-supervised pathology detection i...
- Expecting The Unexpected: Towards Broad Out-Of-Distribution Detection (08/22/2023). Improving the reliability of deployed machine learning systems often inv...
