Few-shot Deep Representation Learning based on Information Bottleneck Principle

11/25/2021
by Shin Ando, et al.

In a standard anomaly detection problem, a detection model is trained in an unsupervised setting, under the assumption that the samples were generated from a single source of normal data. In practice, however, normal data often consist of multiple classes. In such settings, learning to differentiate anomalies from normal instances amid the discrepancies between normal classes, without large-scale labeled data, presents a significant challenge. In this work, we attempt to overcome this challenge by preparing a few examples from each normal class, which is not excessively costly. This setting can also be described as few-shot learning over multiple normal classes, with the goal of learning a representation useful for anomaly detection. To exploit the limited labeled examples in training, we integrate the inter-class distances among the labeled examples in the deep feature space into the MAP loss, and derive their relation from an information-theoretic principle. Our empirical study shows that the proposed model improves the separation of the normal classes in the deep feature space, which contributes to identifying examples of the anomaly class.
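The abstract does not give the loss itself, so as a rough illustration only, below is a minimal PyTorch sketch of one way an inter-class distance term over the few-shot labeled embeddings could be combined with a base (e.g., MAP) loss. The function name, the negative-mean-distance penalty, and the trade-off weight `lam` are assumptions for illustration, not the authors' information-bottleneck-derived formulation.

```python
import torch

def inter_class_distance_penalty(features, labels):
    """Hypothetical regularizer: encourages the centroids of the few
    labeled normal classes to stay apart in the deep feature space.

    features: (n, d) embeddings of the few-shot labeled examples.
    labels:   (n,) integer class ids; assumes >= 2 normal classes.
    """
    classes = labels.unique()
    # Mean embedding (centroid) of each normal class.
    centroids = torch.stack(
        [features[labels == c].mean(dim=0) for c in classes]
    )
    # Pairwise Euclidean distances between class centroids.
    dists = torch.cdist(centroids, centroids, p=2)
    # Keep each pair once (upper triangle, excluding the diagonal).
    iu = torch.triu_indices(len(classes), len(classes), offset=1)
    pair_dists = dists[iu[0], iu[1]]
    # Larger inter-class separation -> smaller penalty.
    return (-pair_dists).mean()

# Usage sketch: `map_loss`, the encoder output `z`, and the weight
# `lam` are placeholders, not values from the paper.
# total_loss = map_loss + lam * inter_class_distance_penalty(z, y)
```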
