PiML Toolbox for Interpretable Machine Learning Model Development and Validation

05/07/2023
by Agus Sudjianto, et al.

PiML (read π-ML, /ˈpaɪ.ˈem.ˈel/) is an integrated and open-access Python toolbox for interpretable machine learning model development and model diagnostics. It supports machine learning workflows in both low-code and high-code modes, covering the data pipeline, model training, model interpretation and explanation, and model diagnostics and comparison. The toolbox supports a growing list of inherently interpretable models (e.g., GAM, GAMI-Net, XGB2) with local and/or global interpretability. It also provides model-agnostic explainability tools (e.g., PFI, PDP, LIME, SHAP) and a powerful suite of model-agnostic diagnostics (e.g., weakness, uncertainty, robustness, fairness). Flexible high-code APIs enable integration of PiML models and tests into existing MLOps platforms for quality assurance. Furthermore, the PiML toolbox comes with a comprehensive user guide and hands-on examples, including applications to model development and validation in banking. The project is available at https://github.com/SelfExplainML/PiML-Toolbox.
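To make the workflow above concrete, here is a minimal sketch following the quick-start pattern in the project's README. The `Experiment` object and its `data_loader`, `data_prepare`, `model_train`, `model_interpret`, and `model_diagnose` methods follow PiML's documented API; the dataset name, target column, and the specific `show` options are taken from the project's examples and should be treated as illustrative, since they may differ across PiML versions.

```python
# Minimal sketch of a PiML workflow, based on the quick-start examples
# in the project README; dataset name, target column, and "show" options
# are illustrative and may vary across PiML versions.
from piml import Experiment
from piml.models import XGB2Regressor  # inherently interpretable depth-2 XGBoost

exp = Experiment()

# Data pipeline: load a built-in demo dataset and set up the train/test split.
exp.data_loader(data="BikeSharing")
exp.data_prepare(target="cnt", task_type="regression", test_ratio=0.2)

# Model training: fit an interpretable model and register it under a name.
exp.model_train(model=XGB2Regressor(), name="XGB2")

# Inherent interpretability: e.g., a global feature-importance view.
exp.model_interpret(model="XGB2", show="global_fi")

# Model-agnostic diagnostics: e.g., an accuracy summary table.
exp.model_diagnose(model="XGB2", show="accuracy_table")
```

The same interpretation and diagnostic calls run against any model registered with the experiment, which is how PiML tests can be attached to models trained in an existing MLOps pipeline through the high-code API.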


