BiaScope: Visual Unfairness Diagnosis for Graph Embeddings

by Agapi Rissaki, et al.

The issue of bias (i.e., systematic unfairness) in machine learning models has recently attracted the attention of both researchers and practitioners. For the graph mining community in particular, an important goal toward algorithmic fairness is to detect and mitigate bias incorporated into graph embeddings, since they are commonly used in human-centered applications, e.g., social-media recommendations. However, simple analytical methods for detecting bias typically involve aggregate statistics, which do not reveal the sources of unfairness. Visual methods, in contrast, can provide a holistic fairness characterization of graph embeddings and help uncover the causes of observed bias. In this work, we present BiaScope, an interactive visualization tool that supports end-to-end visual unfairness diagnosis for graph embeddings. The tool is the product of a design study in collaboration with domain experts. It allows the user to (i) visually compare two embeddings with respect to fairness, (ii) locate nodes or graph communities that are unfairly embedded, and (iii) understand the source of bias by interactively linking the relevant embedding subspace with the corresponding graph topology. Experts' feedback confirms that our tool is effective at detecting and diagnosing unfairness. Thus, we envision our tool both as a companion for researchers designing their algorithms and as a guide for practitioners who use off-the-shelf graph embeddings.
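To make the abstract's notion of "unfairly embedded" nodes concrete, the sketch below scores each node by how far its embedding lies from the embeddings of its graph neighbors, in the spirit of individual-fairness criteria for graph embeddings (connected or similar nodes should receive similar embeddings). This is a hypothetical illustration, not BiaScope's exact metric; the function name, arguments, and the squared-distance aggregation are all assumptions for the example.

```python
import numpy as np

def node_unfairness(embeddings, adj):
    """Illustrative per-node unfairness score (not BiaScope's metric).

    For each node, sum the squared Euclidean distances between its
    embedding and the embeddings of its graph neighbors. A high score
    means the node is embedded far from the nodes it is connected to,
    i.e., it is a candidate for being "unfairly embedded".

    embeddings : (n, d) array of node embeddings
    adj        : (n, n) binary adjacency matrix
    """
    n = len(embeddings)
    scores = np.zeros(n)
    for i in range(n):
        neighbors = np.nonzero(adj[i])[0]
        if len(neighbors) > 0:
            diffs = embeddings[neighbors] - embeddings[i]
            scores[i] = np.sum(diffs ** 2)
    return scores
```

A tool like BiaScope could then highlight the highest-scoring nodes and link them back to their neighborhoods in the graph topology, which is the kind of embedding-to-graph linking the abstract describes.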



