Seeing the Unseen: Errors and Bias in Visual Datasets

11/03/2022
by Hongrui Jin, et al.

From face recognition in smartphones to automatic routing in self-driving cars, machine vision algorithms lie at the core of these features. These systems solve image-based tasks by identifying and understanding objects, then making decisions from this information. However, errors in datasets are often reproduced or even magnified by the algorithms trained on them, at times resulting in failures such as recognising Black people as gorillas or misrepresenting ethnicities in search results. This paper tracks errors in datasets and their impacts, revealing that a flawed dataset can result from limited categories, incomplete sourcing, and poor classification.
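The kind of imbalance the abstract attributes to limited categories and incomplete sourcing can be made concrete with a minimal sketch. The function below measures how skewed a dataset's category distribution is; the labels and group names are hypothetical illustrations, not data from the paper:

```python
from collections import Counter

def category_skew(labels):
    """Return the ratio between the most and least frequent categories.

    A ratio near 1 indicates balanced coverage; a large ratio signals
    the limited or skewed category coverage that can bias a trained model.
    """
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values())

# Hypothetical face-dataset labels, heavily skewed toward one group.
labels = ["group_a"] * 900 + ["group_b"] * 80 + ["group_c"] * 20
print(category_skew(labels))  # → 45.0
```

A balanced dataset would score 1.0 here; in practice, audits of this kind are usually run per attribute (e.g. per demographic group) before training, so that under-represented categories can be re-sourced or re-weighted.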

Related research:

- Dataset Cleaning – A Cross Validation Methodology for Large Facial Datasets using Face Recognition (03/24/2020)
- Human-Machine Comparison for Cross-Race Face Verification: Race Bias at the Upper Limits of Performance? (05/25/2023)
- The Impact of Racial Distribution in Training Data on Face Recognition Bias: A Closer Look (11/26/2022)
- Visual Psychophysics for Making Face Recognition Algorithms More Explainable (03/19/2018)
- How to Boost Face Recognition with StyleGAN? (10/18/2022)
- Two-Face: Adversarial Audit of Commercial Face Recognition Systems (11/17/2021)
