Deep learning models for segmentation of mobile-acquired dermatological images
With the ever-increasing incidence of skin cancer, timely and accurate detection has become clinically imperative. A mobile-based deep learning approach is a possible solution to this challenge. Nevertheless, a major impediment stands in the way of developing such a model: the scarcity of labelled data acquired with mobile devices, namely macroscopic images. In this work, we present two experiments to assemble a robust deep learning model for macroscopic skin lesion segmentation and to capitalize on the sizable dermoscopic databases. In the first experiment, two groups of deep learning models, U-Net based and DeepLab based, were created and tested exclusively on the available macroscopic images. In the second experiment, the possibility of transferring knowledge between the two domains was tested. To accomplish this, the selected model was retrained on the dermoscopic images and subsequently fine-tuned on the macroscopic images. The best model from the first experiment was a DeepLab-based model with MobileNetV2 as the feature extractor, with a width multiplier of 0.35, optimized with the soft Dice loss. This model comprised 0.4 million parameters and obtained thresholded Jaccard coefficients of 72.97% and 78.51% on the Dermofit and SMARTSKINS databases, respectively. In the second experiment, with the use of transfer learning, the model's performance improved significantly on the first database, to 75.46%, and decreased slightly on the second, to 78.04%.
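The two quantities named above, the soft Dice loss used for optimization and the thresholded Jaccard coefficient used for evaluation, can be sketched as follows. This is a minimal illustration, not the authors' implementation; in particular, the 0.65 cutoff for the thresholded Jaccard is the value popularized by the ISIC segmentation challenges and is assumed here, as the abstract does not state it.

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - Dice overlap, differentiable w.r.t. `pred`.

    pred:   predicted foreground probabilities in [0, 1].
    target: binary ground-truth mask (same shape).
    """
    intersection = np.sum(pred * target)
    dice = (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)
    return 1.0 - dice

def thresholded_jaccard(pred_mask, target_mask, threshold=0.65):
    """Jaccard (IoU) score, zeroed when it falls below `threshold`.

    The 0.65 cutoff is an assumption borrowed from the ISIC challenges:
    predictions judged too poor score 0 instead of their raw IoU.
    """
    pred_mask = pred_mask.astype(bool)
    target_mask = target_mask.astype(bool)
    union = np.logical_or(pred_mask, target_mask).sum()
    if union == 0:  # both masks empty: count as perfect agreement
        return 1.0
    iou = np.logical_and(pred_mask, target_mask).sum() / union
    return iou if iou >= threshold else 0.0
```

For example, a prediction overlapping half of a lesion yields an IoU of 0.5, which the thresholded variant reports as 0.0, so the database-level averages quoted above penalize badly failed segmentations more heavily than the plain Jaccard would.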