Technical report: Impact of evaluation metrics and sampling on the comparison of machine learning methods for biodiversity indicator prediction
Machine learning (ML) approaches are increasingly used in biodiversity monitoring. An important application is the prediction of biodiversity indicators such as species abundance, species occurrence, or species richness from predictor sets containing, e.g., climatic and anthropogenic factors. Given the impressive number of ML methods available in the literature and the pace at which new ones are published, it is crucial to develop uniform evaluation procedures that allow sound and fair empirical comparisons. However, defining fair evaluation procedures is challenging: because of well-documented, intrinsic properties of biodiversity indicators such as their zero-inflation and over-dispersion, it is not trivial to design good sampling schemes for cross-validation, nor good evaluation metrics. Indeed, the classical Mean Squared Error (MSE) fails to capture subtle differences in the performance of different methods, particularly in the prediction of very small or very large values (e.g., zero counts or large counts). In this report, we illustrate this phenomenon by comparing ten statistical and machine learning models on the task of predicting waterbird abundance in North Africa from geographical, meteorological, and spatio-temporal factors. Our results highlight that different off-the-shelf evaluation metrics and cross-validation sampling approaches yield drastically different rankings of the methods and fail to support interpretable conclusions.
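To illustrate why MSE can be misleading on such data, the following minimal sketch (the synthetic data, both predictors, and the choice of Poisson deviance as the alternative metric are assumptions for illustration, not the models or data used in the report) simulates zero-inflated, over-dispersed counts and scores two predictors with MSE and with mean Poisson deviance; with these parameters, the two metrics rank the predictors in opposite order.

```python
# Hypothetical sketch: MSE vs. Poisson deviance on zero-inflated, over-dispersed counts.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Zero-inflated negative binomial counts: 60% structural zeros, heavy-tailed rest.
is_zero = rng.random(n) < 0.6
positive = rng.negative_binomial(2, 0.05, size=n)   # mean 38, variance 760
y = np.where(is_zero, 0, positive)

# Predictor A: the global mean everywhere -- it never predicts a zero.
pred_a = np.full(n, y.mean())

# Predictor B: recovers the zeros exactly, but is noisy (multiplicatively)
# on the positive counts. The noise level is an arbitrary assumption.
noise = np.exp(rng.normal(0.0, 0.8, size=n) - 0.8**2 / 2)   # mean-1 lognormal
pred_b = np.where(is_zero, 0.0, positive.mean() * noise)

def mse(y, yhat):
    return np.mean((y - yhat) ** 2)

def poisson_deviance(y, yhat, eps=1e-12):
    """Mean Poisson deviance; the y = 0 branch reduces to 2 * yhat."""
    yhat = np.maximum(yhat, eps)
    y_safe = np.maximum(y, eps)   # avoids 0 * log(0) in the masked branch
    term = np.where(y > 0, y * np.log(y_safe / yhat) - (y - yhat), yhat)
    return 2.0 * np.mean(term)

for name, pred in [("A: mean-only ", pred_a), ("B: zero-aware", pred_b)]:
    print(f"{name}  MSE = {mse(y, pred):7.1f}   "
          f"Poisson deviance = {poisson_deviance(y, pred):6.2f}")

# With these parameters, MSE favours A (it is dominated by the squared error on
# the large counts), while Poisson deviance favours B (it heavily penalises A's
# inability to predict any zeros).
```

Poisson deviance is only one of several alternatives one might consider for count data; the point of the sketch is simply that metric choice can reverse a ranking when zeros and large counts coexist.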
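The sampling scheme used for cross-validation matters for a related reason: monitoring data are typically spatially autocorrelated. The sketch below (purely illustrative; the coordinates, the 5-degree grid, and the use of scikit-learn's KFold/GroupKFold are assumptions, not the report's protocol) contrasts a random K-fold split with a simple spatial-block split that keeps whole grid cells together, and measures how often a test site shares a cell with a training site.

```python
# Illustrative contrast between random K-fold and spatial-block cross-validation.
import numpy as np
from sklearn.model_selection import GroupKFold, KFold

rng = np.random.default_rng(0)
n_sites = 1_000
lon = rng.uniform(-10.0, 35.0, n_sites)   # arbitrary study-area longitudes
lat = rng.uniform(20.0, 38.0, n_sites)    # arbitrary study-area latitudes
X = np.column_stack([lon, lat])           # stand-in for the real predictor set

# Assign each site to a 5-degree grid cell; blocked CV keeps cells intact so
# that test sites are spatially separated from training sites.
cell_id = np.floor(lon / 5.0).astype(int) * 1_000 + np.floor(lat / 5.0).astype(int)

splitters = [
    ("random K-fold ", KFold(n_splits=5, shuffle=True, random_state=0)),
    ("spatial blocks", GroupKFold(n_splits=5)),
]
for name, splitter in splitters:
    # KFold ignores the groups argument; GroupKFold uses it for blocking.
    for fold, (tr, te) in enumerate(splitter.split(X, groups=cell_id)):
        # Fraction of test sites whose grid cell also appears in the training set:
        leakage = np.isin(cell_id[te], cell_id[tr]).mean()
        print(f"{name}, fold {fold}: {leakage:.0%} of test sites share a cell "
              f"with training sites")
```

Under random K-fold, essentially every test site shares a grid cell with training sites, which tends to inflate apparent performance when observations are spatially correlated; the blocked split removes this overlap entirely.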