Bibliographic record

Title: Multimodal skin lesion classification using deep learning
Authors: Tschandl, Philipp; Yap, Jordan; Yolland, William
Published in: Experimental Dermatology, 2018, Vol. 27, Issue 11, pp. 1261-1267
Publisher: Wiley-Blackwell, 2018
Language: English
Document type: Journal article
Keywords (EN): deep learning / dermatology / dermatoscopy / feature fusion / multimodal
URN (Persistent Identifier): urn:nbn:at:at-ubmuw:3-489
DOI: 10.1111/exd.13777
Access: The work is freely available
Abstract (English)

While convolutional neural networks (CNNs) have successfully been applied to skin lesion classification, previous studies have generally considered only a single clinical/macroscopic image and output a binary decision. In this work, we have presented a method which combines multiple imaging modalities together with patient metadata to improve the performance of automated skin lesion diagnosis. We evaluated our method on a binary classification task for comparison with previous studies, as well as on a five-class classification task representative of a real-world clinical scenario. We showed that our multimodal classifier outperforms a baseline classifier that uses only a single macroscopic image, both in binary melanoma detection (AUC 0.866 vs 0.784) and in multiclass classification (mAP 0.729 vs 0.598). In addition, we have quantitatively shown that automated diagnosis of skin lesions using dermatoscopic images achieves higher performance than using macroscopic images. We performed experiments on a new data set of 2917 cases, where each case contains a dermatoscopic image, a macroscopic image and patient metadata.
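To illustrate the general idea of combining imaging modalities with patient metadata, the following is a minimal late-fusion sketch in PyTorch, not the authors' exact architecture: two CNN branches encode the dermatoscopic and macroscopic images, a small MLP encodes the metadata, and the concatenated features feed a shared classification head. The backbone choice (ResNet-18), feature sizes and metadata dimension are illustrative assumptions.

```python
# Hedged sketch of multimodal feature fusion; details (backbones, dimensions)
# are assumptions, not the published model.
import torch
import torch.nn as nn
from torchvision import models


class MultimodalLesionClassifier(nn.Module):
    def __init__(self, num_classes: int = 5, meta_dim: int = 10):
        super().__init__()
        # Separate image encoders; drop their final FC layers to get feature vectors
        self.derm_cnn = models.resnet18(weights=None)
        self.macro_cnn = models.resnet18(weights=None)
        img_feat = self.derm_cnn.fc.in_features  # 512 for ResNet-18
        self.derm_cnn.fc = nn.Identity()
        self.macro_cnn.fc = nn.Identity()
        # Small MLP for patient metadata (e.g. age, sex, lesion site)
        self.meta_mlp = nn.Sequential(nn.Linear(meta_dim, 64), nn.ReLU())
        # Classification head over the concatenated (fused) features
        self.head = nn.Linear(2 * img_feat + 64, num_classes)

    def forward(self, derm_img, macro_img, meta):
        fused = torch.cat(
            [self.derm_cnn(derm_img), self.macro_cnn(macro_img), self.meta_mlp(meta)],
            dim=1,
        )
        return self.head(fused)


# Example forward pass with dummy inputs
model = MultimodalLesionClassifier()
logits = model(
    torch.randn(2, 3, 224, 224),  # dermatoscopic images
    torch.randn(2, 3, 224, 224),  # macroscopic images
    torch.randn(2, 10),           # metadata vectors
)
print(logits.shape)  # torch.Size([2, 5])
```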

License: CC BY 4.0 (Creative Commons Attribution 4.0 International License)