Provable Worst Case Guarantees for the Detection of Out-of-Distribution Data

07/16/2020
by Julian Bitterwolf, et al.

Deep neural networks are known to be overconfident when applied to out-of-distribution (OOD) inputs that clearly do not belong to any class. This is a problem in safety-critical applications, since a reliable assessment of a classifier's uncertainty is a key property that allows the system to trigger human intervention or to transfer into a safe state. In this paper, we aim for certifiable worst case guarantees for OOD detection by enforcing low confidence not only at the OOD point itself but also in an l_∞-ball around it. For this purpose, we use interval bound propagation (IBP) to upper bound the maximal confidence in the l_∞-ball and minimize this upper bound during training. We show that non-trivial bounds on the confidence for OOD data, generalizing beyond the OOD dataset seen at training time, are possible. Moreover, in contrast to certified adversarial robustness, which typically comes with a significant loss in prediction performance, certified guarantees for worst case OOD detection are possible without much loss in accuracy.
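To make the mechanism concrete, the following is a minimal sketch (not the authors' implementation) of IBP for a small fully connected ReLU network in PyTorch, together with the resulting certified upper bound on the softmax confidence inside an l_∞-ball of radius eps. The architecture, the radius, and the helper names (SmallNet, ibp_forward, confidence_upper_bound) are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    """Toy fully connected ReLU classifier (an assumed architecture, not the paper's)."""
    def __init__(self, in_dim=784, hidden=512, num_classes=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, num_classes)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

def ibp_forward(net, x, eps):
    """Propagate the box [x - eps, x + eps] through the network with IBP.

    For a linear layer, the interval centre is mapped by W and b, while the
    radius is mapped by |W|; ReLU bounds are obtained by clamping at zero.
    Returns element-wise lower and upper bounds on the logits.
    """
    lb, ub = x - eps, x + eps
    for i, layer in enumerate([net.fc1, net.fc2]):
        mid, rad = (ub + lb) / 2, (ub - lb) / 2
        mid = layer(mid)                         # W @ mid + b
        rad = rad @ layer.weight.abs().t()       # |W| @ rad
        lb, ub = mid - rad, mid + rad
        if i == 0:                               # ReLU after the hidden layer
            lb, ub = lb.clamp(min=0), ub.clamp(min=0)
    return lb, ub

def confidence_upper_bound(logit_lb, logit_ub):
    """Upper bound on max_k softmax_k(z) over all logits z inside the IBP box.

    softmax_k(z) = 1 / (1 + sum_{j != k} exp(z_j - z_k))
                 <= 1 / (1 + sum_{j != k} exp(lb_j - ub_k)).
    """
    num_classes = logit_lb.shape[-1]
    eye = torch.eye(num_classes, dtype=torch.bool, device=logit_lb.device)
    per_class = []
    for k in range(num_classes):
        diff = logit_lb - logit_ub[..., k:k + 1]                      # lb_j - ub_k
        exp_sum = diff.masked_fill(eye[k], float("-inf")).exp().sum(-1)
        per_class.append(1.0 / (1.0 + exp_sum))
    return torch.stack(per_class, dim=-1).max(dim=-1).values

# During training, a term like the one below would be minimized on OOD batches
# alongside the usual cross-entropy loss on in-distribution data, keeping the
# certified worst-case confidence in the eps-ball around OOD points low.
net = SmallNet()
x_ood = torch.rand(8, 784)                       # stand-in OOD batch (hypothetical)
lb, ub = ibp_forward(net, x_ood, eps=0.01)
loss_ood = confidence_upper_bound(lb, ub).mean()
loss_ood.backward()
```

Because the interval bounds are element-wise, the confidence bound for each class is simply the softmax evaluated with that class's logit at its upper bound and all other logits at their lower bounds; minimizing this bound during training is what makes the certificate non-vacuous at test time.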
