Evaluating Predictive Models of Student Success: Closing the Methodological Gap

01/19/2018
by Josh Gardner, et al.

Model evaluation -- the process of making inferences about the performance of predictive models -- is a critical component of predictive modeling research in learning analytics. In this work, we present an overview of the state of the practice of model evaluation in learning analytics, which overwhelmingly relies on naive methods for model evaluation or, less commonly, on statistical tests that are not appropriate for predictive model evaluation. We then provide an overview of more appropriate methods, presenting both a frequentist approach and a preferred Bayesian method. Finally, we apply three methods -- the naive average commonly used in learning analytics, a frequentist null hypothesis significance test (NHST), and hierarchical Bayesian model evaluation -- to a large set of MOOC data. We compare 96 different predictive modeling techniques, spanning different feature sets, statistical modeling algorithms, and hyperparameter settings for each, and use this case study to demonstrate the different experimental conclusions these evaluation techniques yield.
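To make the contrast between these evaluation approaches concrete, the sketch below compares two hypothetical models' per-course accuracies in three ways: a naive average, a frequentist paired t-test, and a simplified Bayesian posterior over the mean difference under a noninformative prior. This is an illustrative toy example, not the paper's pipeline or data; in particular, the Bayesian step is a plain (non-hierarchical) comparison, a much simpler stand-in for the hierarchical Bayesian evaluation discussed in the paper.

```python
# Minimal sketch: three ways to compare two models evaluated on the same
# set of courses. The accuracy values are made up for illustration; the
# Bayesian step is a simplified, non-hierarchical comparison.
import numpy as np
from scipy import stats

# Hypothetical per-course accuracies for two models, paired by course.
acc_a = np.array([0.71, 0.68, 0.74, 0.70, 0.69, 0.73, 0.72, 0.67])
acc_b = np.array([0.69, 0.66, 0.75, 0.68, 0.70, 0.71, 0.70, 0.66])
diff = acc_a - acc_b

# 1) Naive average: pick the model with the higher mean accuracy,
#    with no accompanying measure of uncertainty.
print("mean accuracy A: %.3f, B: %.3f" % (acc_a.mean(), acc_b.mean()))

# 2) Frequentist NHST: paired t-test on the per-course differences.
t_stat, p_val = stats.ttest_rel(acc_a, acc_b)
print("paired t-test: t = %.3f, p = %.3f" % (t_stat, p_val))

# 3) Simplified Bayesian comparison: with a noninformative prior, the
#    posterior of the mean difference is Student-t with n-1 degrees of
#    freedom, centered at the sample mean with scale s / sqrt(n).
n = len(diff)
posterior = stats.t(df=n - 1, loc=diff.mean(),
                    scale=diff.std(ddof=1) / np.sqrt(n))
print("P(model A better than B) = %.3f" % posterior.sf(0.0))
```

Unlike the single yes/no answer of the NHST, the Bayesian posterior directly expresses the probability that one model outperforms the other, which is the kind of inference the paper argues is more useful for comparing predictive models.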
