Amazon SageMaker Clarify: Machine Learning Bias Detection and Explainability in the Cloud

09/07/2021
by Michaela Hardt, et al.

Understanding the predictions made by machine learning (ML) models and their potential biases remains a challenging and labor-intensive task that depends on the application, the dataset, and the specific model. We present Amazon SageMaker Clarify, an explainability feature for Amazon SageMaker that launched in December 2020, providing insights into data and ML models by identifying biases and explaining predictions. It is deeply integrated into Amazon SageMaker, a fully managed service that enables data scientists and developers to build, train, and deploy ML models at any scale. Clarify supports bias detection and feature importance computation across the ML lifecycle: during data preparation, model evaluation, and post-deployment monitoring. We outline the desiderata derived from customer input, the modular architecture, and the methodology for bias and explanation computations. Further, we describe the technical challenges encountered and the tradeoffs we had to make. For illustration, we discuss two customer use cases. We present our deployment results, including qualitative customer feedback and a quantitative evaluation. Finally, we summarize lessons learned and discuss best practices for the successful adoption of fairness and explanation tools in practice.
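As a concrete illustration of the workflow the abstract describes, the sketch below configures a pre-training bias analysis and a SHAP-based explainability job through the SageMaker Python SDK's sagemaker.clarify module. The S3 paths, column names, the "gender" facet, and the model name are hypothetical placeholders, and exact keyword arguments may vary across SDK versions; this is a minimal sketch, not the paper's own code.

```python
# Minimal sketch of launching SageMaker Clarify jobs via the SageMaker
# Python SDK. All bucket paths, column names, the "gender" facet, and
# the model name below are illustrative placeholders.
import sagemaker
from sagemaker import clarify

session = sagemaker.Session()
role = sagemaker.get_execution_role()

# Processor that runs Clarify analyses as managed processing jobs.
processor = clarify.SageMakerClarifyProcessor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

# Where the labeled tabular dataset lives and where reports are written.
data_config = clarify.DataConfig(
    s3_data_input_path="s3://my-bucket/train.csv",    # placeholder
    s3_output_path="s3://my-bucket/clarify-output/",  # placeholder
    label="approved",                                 # target column
    headers=["approved", "age", "income", "gender"],  # placeholder schema
    dataset_type="text/csv",
)

# Which label value is favorable, and which facet to audit for bias.
bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],  # favorable outcome
    facet_name="gender",            # sensitive attribute (placeholder)
    facet_values_or_threshold=[0],  # group of interest
)

# Pre-training bias metrics (e.g., class imbalance) on the raw data,
# which maps to the data-preparation stage mentioned in the abstract.
processor.run_pre_training_bias(
    data_config=data_config,
    data_bias_config=bias_config,
    methods="all",
)

# For explanations, Clarify queries the model and computes SHAP
# feature attributions relative to a baseline record.
model_config = clarify.ModelConfig(
    model_name="my-sagemaker-model",  # placeholder; must already exist
    instance_type="ml.m5.xlarge",
    instance_count=1,
    accept_type="text/csv",
)
shap_config = clarify.SHAPConfig(
    baseline=[[35, 50000, 0]],  # placeholder baseline (features only)
    num_samples=100,
    agg_method="mean_abs",
)
processor.run_explainability(
    data_config=data_config,
    model_config=model_config,
    explainability_config=shap_config,
)
```

Each run writes its analysis report to the configured S3 output path, where the bias metrics and feature attributions can then be inspected, e.g., from SageMaker Studio.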

