Should College Dropout Prediction Models Include Protected Attributes?
Early identification of students at risk of dropping out of college can provide tremendous value for improving student success and institutional effectiveness, and predictive analytics are increasingly used for this purpose. However, ethical concerns have emerged about whether including protected attributes in the prediction models discriminates against underrepresented student groups and exacerbates existing inequities. We examine this issue in the context of a large U.S. research university with both residential and fully online degree-seeking students. Based on comprehensive institutional records for this entire student population across multiple years, we build machine learning models to predict student dropout after one academic year of study, and compare the overall performance and fairness of model predictions with and without four protected attributes (gender, URM status, first-generation student status, and high financial need). We find that including protected attributes does not affect overall prediction performance and only marginally improves the algorithmic fairness of predictions. While these findings suggest that including protected attributes is preferred, our analysis also offers guidance on how to evaluate this impact in a local context, where institutional stakeholders seek to leverage predictive analytics to support student success.
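The core comparison the abstract describes can be illustrated with a toy sketch. The data, the GPA-threshold "models", and the fairness metric (false-negative-rate gap, i.e., the difference across groups in how often actual dropouts are missed) below are all illustrative assumptions; the paper's actual models are machine-learned from comprehensive institutional records, not simple cutoffs.

```python
# Hypothetical sketch: evaluating a dropout predictor with and without
# a protected attribute. All data and thresholds are made up for
# illustration and are NOT from the paper.

# Each record: (first_year_gpa, urm_flag, dropped_out)
students = [
    (3.6, 0, 0), (2.1, 0, 1), (3.0, 0, 0), (1.8, 0, 1),
    (3.2, 1, 0), (2.4, 1, 1), (2.55, 1, 1), (3.5, 1, 0),
]

def predict_blind(gpa, urm):
    # Attribute-blind model: one GPA threshold for everyone.
    return 1 if gpa < 2.5 else 0

def predict_aware(gpa, urm):
    # Attribute-aware model: group-specific thresholds
    # (hypothetical values chosen for illustration).
    threshold = 2.6 if urm else 2.5
    return 1 if gpa < threshold else 0

def evaluate(predict):
    """Return overall accuracy and the false-negative-rate gap
    between the two groups (one simple group-fairness metric)."""
    correct = 0
    false_neg = {0: 0, 1: 0}  # missed dropouts per group
    dropouts = {0: 0, 1: 0}   # actual dropouts per group
    for gpa, urm, y in students:
        pred = predict(gpa, urm)
        correct += (pred == y)
        if y == 1:
            dropouts[urm] += 1
            false_neg[urm] += (pred == 0)
    accuracy = correct / len(students)
    fnr_gap = abs(false_neg[0] / dropouts[0] - false_neg[1] / dropouts[1])
    return accuracy, fnr_gap

acc_blind, gap_blind = evaluate(predict_blind)
acc_aware, gap_aware = evaluate(predict_aware)
print(f"blind: accuracy={acc_blind:.3f}, FNR gap={gap_blind:.3f}")
print(f"aware: accuracy={acc_aware:.3f}, FNR gap={gap_aware:.3f}")
```

On this contrived data the attribute-aware model closes the false-negative-rate gap; the paper's empirical finding is that, at scale, including protected attributes leaves overall performance unchanged and improves fairness only marginally.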