Using Explainable Boosting Machine to Compare Idiographic and Nomothetic Approaches for Ecological Momentary Assessment Data

04/04/2022
by Mandani Ntekouli, et al.

Previous research on EMA data of mental disorders has mainly focused on multivariate regression-based approaches that model each individual separately. This paper goes a step further by exploring the use of non-linear interpretable machine learning (ML) models in classification problems. ML models can enhance the ability to accurately predict the occurrence of different behaviors by recognizing complicated patterns between variables in the data. To evaluate this, the performance of various tree ensembles is compared to that of linear models on imbalanced synthetic and real-world datasets. After examining the distributions of AUC scores in all cases, the non-linear models appear superior to the baseline linear models. Moreover, apart from personalized approaches, group-level prediction models are also likely to offer enhanced performance. Accordingly, two different nomothetic approaches to integrating data from more than one individual are examined: one that uses all data directly during training and one based on knowledge distillation. Interestingly, on one of the two real-world datasets, the knowledge distillation method achieves improved AUC scores (a mean relative improvement of 17% over the personalized models), showing how it can benefit EMA data classification and performance.
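The setup described above can be illustrated with a minimal sketch, under stated assumptions: this is not the paper's code, the data are synthetic imbalanced per-"individual" samples, scikit-learn's `GradientBoostingClassifier` stands in for the Explainable Boosting Machine, and distillation is approximated by fitting a per-individual ridge student to the pooled teacher's logits (soft labels).

```python
# Hypothetical sketch: idiographic vs. nomothetic vs. distilled models,
# evaluated by AUC on synthetic imbalanced data (assumptions, not the paper's code).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def make_individual(n=300, p=8, pos_frac=0.15, seed=0):
    """Imbalanced binary outcome driven by a non-linear interaction."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, p))
    score = 1.5 * X[:, 0] * X[:, 1] - X[:, 2] ** 2
    y = (score > np.quantile(score, 1 - pos_frac)).astype(int)
    return X, y

splits = []  # one (X_train, X_test, y_train, y_test) per "individual"
for s in range(5):
    X, y = make_individual(seed=s)
    splits.append(train_test_split(X, y, test_size=0.3, stratify=y, random_state=0))

# 1) Idiographic (personalized): one non-linear model per individual.
personal_auc = []
for Xtr, Xte, ytr, yte in splits:
    m = GradientBoostingClassifier(random_state=0).fit(Xtr, ytr)
    personal_auc.append(roc_auc_score(yte, m.predict_proba(Xte)[:, 1]))

# 2) Nomothetic: one group-level "teacher" trained on everyone's pooled data.
Xtr_all = np.vstack([s[0] for s in splits])
ytr_all = np.concatenate([s[2] for s in splits])
teacher = GradientBoostingClassifier(random_state=0).fit(Xtr_all, ytr_all)
pooled_auc = [roc_auc_score(yte, teacher.predict_proba(Xte)[:, 1])
              for _, Xte, _, yte in splits]

# 3) Distillation: a per-individual linear student regresses on the teacher's
#    logits; AUC only needs a ranking, so a regressor's raw output suffices.
distilled_auc = []
for Xtr, Xte, ytr, yte in splits:
    p = np.clip(teacher.predict_proba(Xtr)[:, 1], 1e-6, 1 - 1e-6)
    student = Ridge().fit(Xtr, np.log(p / (1 - p)))
    distilled_auc.append(roc_auc_score(yte, student.predict(Xte)))

for name, scores in [("personalized", personal_auc),
                     ("pooled", pooled_auc),
                     ("distilled", distilled_auc)]:
    print(f"{name:12s} mean AUC = {np.mean(scores):.3f}")
```

As in the paper, the comparison looks at distributions of AUC scores across individuals rather than a single pooled metric; the distilled student keeps a simple, interpretable linear form while inheriting knowledge from the group-level teacher.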


