Explaining Hate Speech Classification with Model Agnostic Methods

by Durgesh Nandini, et al.

There have been remarkable breakthroughs in Machine Learning and Artificial Intelligence, notably in Natural Language Processing and Deep Learning. With the increased use of social media, hate speech detection in dialogues has also gained traction among Natural Language Processing researchers. At the same time, recent trends have underscored the need for explainability and interpretability in AI models. Motivated by these factors, the goal of this paper is to bridge the gap between hate speech prediction and the explanations a system generates to support its decisions. We achieve this by first classifying a text and then applying a post-hoc, model-agnostic, surrogate interpretability approach to explain the decision and guard against model bias. The bidirectional transformer model BERT is used for prediction because of its state-of-the-art performance over other Machine Learning models. The model-agnostic algorithm LIME then generates explanations for the trained classifier's output, identifying the features that most influence the model's decision. We evaluated the generated predictions and explanations manually and observed that the model performs well at both predicting and explaining its predictions. Lastly, we suggest directions for extending this work.
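The pipeline the abstract describes pairs a BERT classifier with LIME's local surrogate explanations. As a rough illustration of that pattern (not the authors' exact code), the following minimal Python sketch assumes a BERT checkpoint fine-tuned for binary hate speech classification; the model name, class labels, and sample text are placeholders.

```python
# Minimal sketch of a BERT + LIME explanation pipeline.
# Assumption: "bert-base-uncased" stands in for a fine-tuned
# checkpoint; class names and the sample text are invented.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from lime.lime_text import LimeTextExplainer

MODEL_NAME = "bert-base-uncased"  # placeholder: point at a fine-tuned model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def predict_proba(texts):
    # LIME requires a function mapping a list of raw strings to an
    # (n_samples, n_classes) array of class probabilities.
    enc = tokenizer(list(texts), padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    return torch.softmax(logits, dim=-1).numpy()

explainer = LimeTextExplainer(class_names=["not hate", "hate"])
explanation = explainer.explain_instance(
    "an example input utterance",  # placeholder text
    predict_proba,
    num_features=6,    # top weighted tokens to report
    num_samples=500,   # perturbations used to fit the local surrogate
)
# Each (token, weight) pair shows how strongly a word pushed the
# prediction toward or away from the "hate" class.
print(explanation.as_list())
```

LIME perturbs the input by masking words, queries the classifier on each variant, and fits a weighted linear surrogate around the instance. This is what makes the approach model-agnostic: it only ever calls predict_proba and never inspects BERT's internals.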




