Reconnoitering the class distinguishing abilities of the features, to know them better

by Payel Sadhukhan, et al.

The relevance of machine learning (ML) in our daily lives is closely intertwined with its explainability. Explainability allows end-users to form a transparent and human-centred understanding of an ML scheme's capability and utility, and fosters their confidence in a system's automated decisions. Explaining a model's decision in terms of its variables, or features, is a need of the present times. However, we could not find any work that explains the features on the basis of their class-distinguishing abilities, even though real-world data are mostly multi-class in nature. In any given dataset, a feature is not equally good at distinguishing between the different possible categorizations (or classes) of the data points. In this work, we explain the features on the basis of their class- or category-distinguishing capabilities. In particular, we estimate the class-distinguishing capabilities (scores) of the variables for each pairwise class combination. We validate the explainability given by our scheme empirically on several real-world, multi-class datasets. We further utilize the class-distinguishing scores in a latent-feature context and propose a novel decision-making protocol. Another novelty of this work lies in its refuse-to-render-decision option, invoked when the latent variable of a test point has a high class-distinguishing potential for the likely classes.
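The abstract does not specify the exact scoring function, so the following is only a minimal sketch of the general idea: for every pair of classes, score each feature by how well its raw values separate the two classes. Here the score is the two-class ROC AUC of the feature (computed via the Mann-Whitney U statistic, ignoring ties), folded so that 0.5 means no separation and 1.0 means perfect separation; the paper's own estimator may differ.

```python
import numpy as np

def pairwise_feature_scores(X, y):
    """For each feature and each pair of classes, estimate a
    class-distinguishing score in [0.5, 1.0].

    Illustrative proxy only: the score is the direction-agnostic
    two-class ROC AUC of the raw feature values (ties not handled).
    Returns a dict mapping (class_a, class_b) -> per-feature scores.
    """
    classes = np.unique(y)
    scores = {}
    for i, a in enumerate(classes):
        for b in classes[i + 1:]:
            mask_a, mask_b = (y == a), (y == b)
            n_a, n_b = mask_a.sum(), mask_b.sum()
            per_feature = []
            for f in range(X.shape[1]):
                values = np.concatenate([X[mask_a, f], X[mask_b, f]])
                # 0-based ranks of the pooled values.
                ranks = np.argsort(np.argsort(values))
                # Mann-Whitney U for class a, then AUC = U / (n_a * n_b).
                u = ranks[:n_a].sum() - n_a * (n_a - 1) / 2
                auc = u / (n_a * n_b)
                # Fold: a feature that ranks class b above class a
                # distinguishes them just as well.
                per_feature.append(max(auc, 1.0 - auc))
            scores[(a, b)] = np.array(per_feature)
    return scores
```

For example, on a toy dataset where feature 0 perfectly separates two classes, `scores[(0, 1)][0]` would be 1.0, while an uninformative feature stays near 0.5.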


