Measuring Model Biases in the Absence of Ground Truth

03/05/2021
by Osman Aka, et al.

Recent advances in computer vision have led to the development of image classification models that can predict tens of thousands of object classes. Training these models can require millions of examples, leading to a demand for potentially billions of annotations. In practice, however, images are typically sparsely annotated, which can lead to problematic biases in the distribution of ground truth labels that are collected. This potential for annotation bias may then limit the utility of ground truth-dependent fairness metrics (e.g., Equalized Odds). To address this problem, in this work we introduce a new framing for the measurement of fairness and bias that does not rely on ground truth labels. Instead, we treat the model predictions for a given image as a set of labels, analogous to the 'bag of words' approach used in Natural Language Processing (NLP). This allows us to explore different association metrics between prediction sets in order to detect patterns of bias. We apply this approach to examine the relationship between identity labels and all other labels in the dataset, using labels associated with 'male' and 'female' as a concrete example. We demonstrate how the statistical properties (especially normalization) of the different association metrics can lead to different sets of labels detected as having "gender bias". We conclude by demonstrating that pointwise mutual information normalized by joint probability (nPMI) is able to detect many labels with significant gender bias despite differences in the labels' marginal frequencies. Finally, we announce an open-sourced nPMI visualization tool using TensorBoard.
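The core idea can be illustrated with a minimal sketch: treat each image's predicted labels as a set, count marginal and joint occurrences across the dataset, and compute nPMI between an identity label and every label it co-occurs with. The sketch below assumes nPMI is PMI normalized by the joint self-information, consistent with the abstract's "normalized by joint probability" description; the function name npmi_scores, the toy predictions data, and the use of the natural logarithm are illustrative assumptions, not the paper's implementation.

```python
import math
from collections import Counter

def npmi_scores(label_sets, identity_label):
    """Compute nPMI between identity_label and every co-occurring label.

    label_sets: list of sets, one per image, holding that image's predicted
    labels (the 'bag of words' view of model output).
    nPMI(x, y) = PMI(x, y) / -log p(x, y), where
    PMI(x, y)  = log( p(x, y) / (p(x) * p(y)) ).
    """
    n = len(label_sets)
    marginal = Counter()   # number of images containing each label
    joint = Counter()      # co-occurrence counts with identity_label
    for labels in label_sets:
        marginal.update(labels)
        if identity_label in labels:
            for other in labels:
                if other != identity_label:
                    joint[other] += 1

    p_x = marginal[identity_label] / n
    scores = {}
    for other, co_count in joint.items():
        p_y = marginal[other] / n
        p_xy = co_count / n
        if p_xy == 1.0:    # avoid dividing by -log(1) = 0
            continue
        pmi = math.log(p_xy / (p_x * p_y))
        scores[other] = pmi / -math.log(p_xy)
    return scores

# Toy example: three images with hypothetical predicted label sets.
predictions = [
    {"female", "dress", "smile"},
    {"male", "tie", "smile"},
    {"female", "smile"},
]
print(npmi_scores(predictions, "female"))
```

A positive score indicates a label that co-occurs with the identity label more often than its marginal frequency would suggest, and a negative score the opposite; because of the normalization, scores for labels with very different marginal frequencies remain comparable.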

