Data, Depth, and Design: Learning Reliable Models for Melanoma Screening

11/01/2017
by Eduardo Valle, et al.

The state of the art in melanoma screening has evolved rapidly in the last two years with the adoption of deep learning. Those models, however, pose challenges of their own, as they are expensive to train and difficult to parameterize. Objective: We investigate the methodological issues of designing and evaluating deep learning models for melanoma screening by exploring nine choices often faced when designing deep networks: model architecture, training dataset, image resolution, type of data augmentation, input normalization, use of segmentation, duration of training, additional use of an SVM, and test data augmentation. Methods: We perform a two-level full factorial experiment for five different test datasets, resulting in 2560 exhaustive trials, which we analyze using a multi-way ANOVA. Results: The main finding is that the size of the training data has a disproportionate influence, explaining almost half the variation in performance. Of the other factors, test data augmentation and input resolution are the most helpful. Deeper models, when combined with extra data, also help. We show that the costly full factorial design and the unreliable sequential optimization are not the only options: ensembles of models provide reliable results with limited resources. Conclusions and Significance: To move research forward on automated melanoma screening, we need to curate larger shared datasets. Optimizing hyperparameters and measuring performance on the same dataset is common but leads to overoptimistic results. Ensembles of models are a cost-effective alternative to the expensive full factorial design and to the unstable sequential design.
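
As a rough sketch of the experimental protocol summarized above, the snippet below enumerates the 2^9 = 512 combinations of nine binary design choices, evaluates each on five test datasets (2560 trials in total), and analyzes the resulting scores with a multi-way ANOVA via statsmodels. The factor names, the placeholder test-set names, and the train_and_evaluate stub (which simply returns a random score here) are illustrative assumptions, not the authors' actual pipeline.

```python
# Sketch of a two-level full factorial protocol over nine binary design choices,
# evaluated on five test datasets (2^9 * 5 = 2560 trials) and analyzed with a
# multi-way ANOVA. Factor names follow the abstract; train_and_evaluate is a
# hypothetical stand-in for the real training/evaluation pipeline.
import itertools
import random

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

FACTORS = [
    "architecture",          # e.g., shallower vs. deeper model
    "train_dataset",         # smaller vs. larger training data
    "resolution",            # lower vs. higher input resolution
    "train_augmentation",    # data augmentation at training time
    "input_normalization",
    "segmentation",
    "training_duration",
    "svm_layer",             # additional SVM on top of the network
    "test_augmentation",     # data augmentation at test time
]
TEST_DATASETS = [f"test_set_{i}" for i in range(5)]  # placeholder names


def train_and_evaluate(config: dict, test_set: str) -> float:
    """Hypothetical stand-in: train a model under `config` and return its
    AUC on `test_set`. Simulated here so the sketch runs end to end."""
    return random.random()


rows = []
for levels in itertools.product([0, 1], repeat=len(FACTORS)):   # 512 configs
    config = dict(zip(FACTORS, levels))
    for test_set in TEST_DATASETS:                               # x5 = 2560 trials
        rows.append({**config, "test_set": test_set,
                     "auc": train_and_evaluate(config, test_set)})

df = pd.DataFrame(rows)

# Multi-way ANOVA over the main effects of the nine factors (plus test set),
# quantifying how much of the performance variation each choice explains.
formula = "auc ~ " + " + ".join(f"C({f})" for f in FACTORS + ["test_set"])
anova_table = sm.stats.anova_lm(ols(formula, data=df).fit(), typ=2)
print(anova_table)
```

With real AUC measurements in place of the random stub, the sum-of-squares column of the ANOVA table indicates which design choices account for the largest share of performance variation, which is how a finding such as "training data size explains almost half the variation" can be read off.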
