Projective Latent Space Decluttering

06/23/2020
by Andreas Hinterreiter, et al.

High-dimensional latent representations learned by neural network classifiers are notoriously hard to interpret. Especially in medical applications, model developers and domain experts desire a better understanding of how these latent representations relate to the resulting classification performance. We present a framework for retraining classifiers by backpropagating manual changes made to low-dimensional embeddings of the latent space. Our technique thus allows the practitioner to control the latent decision space in an intuitive way. The approach is based on parametric approximations of non-linear embedding techniques such as t-distributed stochastic neighbour embedding (t-SNE). Using this approach, it is possible to manually shape and declutter the latent space of image classifiers so that it better matches the expectations of domain experts or fulfils specific requirements of classification tasks. For instance, the performance for specific class pairs can be enhanced by manually separating the class clusters in the embedding, without significantly affecting the overall performance of the other classes. We evaluate our technique on a real-world scenario in fetal ultrasound imaging.
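The core idea of backpropagating embedding edits can be illustrated with a deliberately minimal sketch. The code below is not the authors' implementation: it stands in a linear projection for the parametric embedding network (the paper uses parametric approximations of t-SNE), and it updates the latent representations directly in place of the classifier weights that produce them. The shapes, learning rate, and the "manual edit" (shifting four points away from the rest) are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent representations, standing in for the output of a
# hypothetical classifier's penultimate layer (8 samples, 5 dims).
Z = rng.normal(size=(8, 5))

# Parametric embedding: a linear projection to 2-D, a stand-in for a
# parametric t-SNE network (assumption for this sketch).
W = rng.normal(size=(5, 2)) * 0.1

# Simulated manual edit: the user drags the first four points away
# from the rest of the embedding to declutter a class cluster.
Y_target = Z @ W
Y_target[:4] += 2.0

# Backpropagate the edit: gradient descent on the mean squared error
# between the current embedding and the edited positions, updating
# both the projection W and the latents Z (the latter standing in for
# the upstream classifier weights).
lr = 0.1
initial_loss = float(np.mean((Z @ W - Y_target) ** 2))
for _ in range(500):
    Y = Z @ W
    G = 2.0 * (Y - Y_target) / Y.size  # dLoss/dY for the MSE
    W -= lr * (Z.T @ G)                # chain rule through the projection
    Z -= lr * (G @ W.T)                # chain rule into the latents
final_loss = float(np.mean((Z @ W - Y_target) ** 2))
print(initial_loss, final_loss)
```

After the loop, the embedding has moved toward the user-edited layout, i.e. the loss drops well below its initial value; in the full method this gradient signal would continue through the classifier so that its decision space reflects the edited embedding.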

