VERB: Visualizing and Interpreting Bias Mitigation Techniques for Word Representations

04/06/2021
by   Archit Rathore, et al.

Word vector embeddings have been shown to contain and amplify biases in the data they are extracted from. Consequently, many techniques have been proposed to identify, mitigate, and attenuate these biases in word representations. In this paper, we utilize interactive visualization to increase the interpretability and accessibility of a collection of state-of-the-art debiasing techniques. To this end, we present the Visualization of Embedding Representations for deBiasing ("VERB") system, an open-source web-based visualization tool that helps users gain a technical understanding and visual intuition of the inner workings of debiasing techniques, with a focus on their geometric properties. In particular, VERB offers easy-to-follow use cases for exploring the effects of these debiasing techniques on the geometry of high-dimensional word vectors. To help users understand how various debiasing techniques change the underlying geometry, VERB decomposes each technique into an interpretable sequence of primitive transformations and highlights their effect on the word vectors using dimensionality reduction and interactive visual exploration. VERB is designed for natural language processing (NLP) practitioners who are designing decision-making systems on top of word embeddings, as well as for researchers working on fairness and ethics of machine learning systems in NLP. It can also serve as a visual medium for education, helping an NLP novice understand and mitigate biases in word embeddings.
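To illustrate the kind of "primitive transformation" such debiasing techniques are built from, here is a minimal sketch of linear projection removal (the core step of hard debiasing): a bias direction is estimated from a defining word pair, and its component is subtracted from other word vectors. The toy 4-dimensional vectors and word choices below are hypothetical, purely for illustration; real embeddings have hundreds of dimensions.

```python
import numpy as np

# Toy word vectors (hypothetical; real embeddings use 100-300 dimensions).
emb = {
    "he":       np.array([0.8, 0.1, 0.0, 0.2]),
    "she":      np.array([-0.8, 0.1, 0.0, 0.2]),
    "engineer": np.array([0.4, 0.5, 0.3, 0.1]),
}

# Primitive 1: estimate a bias direction from a defining word pair.
bias_dir = emb["he"] - emb["she"]
bias_dir = bias_dir / np.linalg.norm(bias_dir)

# Primitive 2: project a vector onto the bias direction.
def project(v, d):
    return np.dot(v, d) * d

# Primitive 3: subtract the projection (linear "hard" debiasing).
debiased = emb["engineer"] - project(emb["engineer"], bias_dir)

# The debiased vector is orthogonal to the bias direction.
print(np.dot(debiased, bias_dir))
```

Visualizing each of these primitives separately, rather than only the end result, is what makes the geometric effect of a technique easy to follow.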

Related research

10/08/2018 · Understanding the Origins of Bias in Word Embeddings
The power of machine learning systems not only promises great technical ...

07/14/2022 · A tool to overcome technical barriers for bias assessment in human language technologies
Automatic processing of language is becoming pervasive in our lives, oft...

11/04/2019 · Assessing Social and Intersectional Biases in Contextualized Word Representations
Social bias in machine learning has drawn significant attention, with wo...

11/16/2016 · Embedding Projector: Interactive Visualization and Interpretation of Embeddings
Embeddings are ubiquitous in machine learning, appearing in recommender ...

03/24/2022 · Gender and Racial Stereotype Detection in Legal Opinion Word Embeddings
Studies have shown that some Natural Language Processing (NLP) systems e...

09/04/2020 · Going Beyond T-SNE: Exposing whatlies in Text Embeddings
We introduce whatlies, an open source toolkit for visually inspecting wo...

10/12/2022 · BiaScope: Visual Unfairness Diagnosis for Graph Embeddings
The issue of bias (i.e., systematic unfairness) in machine learning mode...
