Human Interpretable AI: Enhancing Tsetlin Machine Stochasticity with Drop Clause

05/30/2021
by Jivitesh Sharma, et al.

In this article, we introduce a novel variant of the Tsetlin machine (TM) that randomly drops clauses, the key learning elements of a TM. In effect, a TM with drop clause ignores a random selection of the clauses in each epoch, selected according to a predefined probability. In this way, additional stochasticity is introduced into the learning phase of the TM. Along with producing more distinct and well-structured patterns that improve performance, we also show that dropping clauses increases learning robustness. To explore the effects clause dropping has on accuracy, training time, and interpretability, we conduct extensive experiments on various benchmark datasets in natural language processing (NLP) (IMDb and SST2) as well as in computer vision (MNIST and CIFAR10). In brief, we observe accuracy increases of +2% or more, along with up to 4x faster learning. We further employ the Convolutional TM to document interpretable results on the CIFAR10 dataset. To the best of our knowledge, this is the first time an interpretable machine learning algorithm has been used to produce pixel-level, human-interpretable results on CIFAR10. Also, unlike previous interpretable methods that focus on attention visualisation or gradient interpretability, we show that the TM is a more general interpretable method. That is, by producing rule-based propositional logic expressions that are human-interpretable, the TM can explain how it classifies a particular instance at the pixel level for computer vision and at the word level for NLP.
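As a rough sketch of the mechanism described above (not the authors' actual implementation): at the start of each epoch, a Boolean mask is drawn with a predefined drop probability, and only the surviving clauses take part in voting and feedback for that epoch. The names `sample_clause_mask`, `masked_class_sum`, `clause_outputs`, and `polarities` below are illustrative stand-ins, assuming the standard TM setup in which clauses of positive and negative polarity vote for and against a class.

```python
import numpy as np

def sample_clause_mask(num_clauses: int, drop_p: float, rng: np.random.Generator) -> np.ndarray:
    """Keep each clause with probability 1 - drop_p (True = clause active this epoch)."""
    return rng.random(num_clauses) >= drop_p

def masked_class_sum(clause_outputs: np.ndarray, polarities: np.ndarray, mask: np.ndarray) -> int:
    """Vote with active clauses only: dropped clauses contribute no votes (and get no feedback)."""
    return int(np.sum(clause_outputs[mask] * polarities[mask]))

rng = np.random.default_rng(0)
num_clauses, drop_p = 10, 0.25

# Toy stand-in values; a real TM computes clause outputs from its learnt literals.
clause_outputs = rng.integers(0, 2, num_clauses)           # 0/1 clause activations
polarities = np.where(np.arange(num_clauses) % 2, -1, 1)   # alternating +/- polarity

for epoch in range(3):
    mask = sample_clause_mask(num_clauses, drop_p, rng)    # fresh mask every epoch
    print(f"epoch {epoch}: active={mask.sum()}/{num_clauses}, "
          f"class sum={masked_class_sum(clause_outputs, polarities, mask)}")
```

Note that, per the abstract, the random selection is redrawn once per epoch rather than per sample, and all clauses would be re-enabled at inference time.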
