Automated identification of malignancy in whole-slide pathological images: identification of eyelid malignant melanoma in gigapixel pathological slides using deep learning. Wang Linyan,Ding Longqian,Liu Zhifang,Sun Lingling,Chen Lirong,Jia Renbing,Dai Xizhe,Cao Jing,Ye Juan The British journal of ophthalmology BACKGROUND/AIMS:To develop a deep learning system (DLS) that can automatically detect malignant melanoma (MM) in the eyelid from histopathological sections with colossal information density. METHODS:Setting: Double institutional study. STUDY POPULATION:We retrospectively reviewed 225,230 pathological patches (small sections cut from pathologist-labelled areas of an H&E image), cut from 155 H&E-stained whole-slide images (WSI). OBSERVATION PROCEDURES:Labelled gigapixel pathological WSIs were used to train and test a model designed to assign patch-level classifications. Using the malignant probability from a convolutional neural network, the patches were embedded back into each WSI to generate a visualisation heatmap, and a random forest model was leveraged to establish a WSI-level diagnosis. MAIN OUTCOME MEASURE(S):For classification, the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity and specificity were used to evaluate the efficacy of the DLS in detecting MM. RESULTS:For patch diagnosis, the model achieved an AUC of 0.989 (95% CI 0.989 to 0.991), with an accuracy, sensitivity and specificity of 94.9%, 94.7% and 95.3%, respectively. We displayed the lesion area on the WSIs as graded by malignant potential. For WSIs, the obtained sensitivity, specificity and accuracy were 100%, 96.5% and 98.2%, respectively, with an AUC of 0.998 (95% CI 0.994 to 1.000). CONCLUSION:Our DLS, which uses artificial intelligence, can automatically detect MM in histopathological slides and highlight the lesion area on WSIs using a probabilistic heatmap. 
In addition, our approach has the potential to be applied to the histopathological sections of other tumour types. 10.1136/bjophthalmol-2018-313706
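The two-stage design in this abstract — a CNN scoring patches for malignancy, with the per-patch probabilities reassembled into a slide-level heatmap that is summarised for a random forest — can be sketched roughly as follows. The toy data, the three summary features and all thresholds here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def heatmap_features(patch_probs):
    """Summarise a slide's per-patch malignancy probabilities
    (the CNN outputs) into a few slide-level features."""
    p = np.asarray(patch_probs, dtype=float)
    return [p.max(), p.mean(), float((p > 0.5).mean())]

# Toy stand-in data: per-patch probabilities for 10 benign and
# 10 malignant training slides, drawn from separated ranges.
rng = np.random.default_rng(0)
benign    = [rng.uniform(0.0, 0.3, size=20) for _ in range(10)]
malignant = [rng.uniform(0.5, 1.0, size=20) for _ in range(10)]

X_train = [heatmap_features(s) for s in benign + malignant]
y_train = [0] * 10 + [1] * 10   # 0 = benign WSI, 1 = malignant WSI

rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_train, y_train)

# An unseen slide whose patches mostly score high -> slide-level call.
test_slide = rng.uniform(0.5, 1.0, size=20)
print(rf.predict([heatmap_features(test_slide)])[0])
```

In practice the probabilities would come from a trained CNN swept across the WSI grid, and the heatmap summary would be richer than three statistics.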
    Development and validation of a plasma-based melanoma biomarker suitable for clinical use. Van Laar Ryan,Lincoln Mitchel,Van Laar Barton British journal of cancer This corrects the article DOI: 10.1038/bjc.2017.85. 10.1038/bjc.2017.477
    Support vector machine learning model for the prediction of sentinel node status in patients with cutaneous melanoma. Mocellin Simone,Ambrosi Alessandro,Montesco Maria Cristina,Foletto Mirto,Zavagno Giorgio,Nitti Donato,Lise Mario,Rossi Carlo Riccardo Annals of surgical oncology BACKGROUND:Currently, approximately 80% of melanoma patients undergoing sentinel node biopsy (SNB) have negative sentinel lymph nodes (SLNs), and no prediction system is reliable enough to be implemented in the clinical setting to reduce the number of SNB procedures. In this study, the predictive power of support vector machine (SVM)-based statistical analysis was tested. METHODS:The clinical records of 246 patients who underwent SNB at our institution were used for this analysis. The following clinicopathologic variables were considered: the patient's age and sex and the tumor's histological subtype, Breslow thickness, Clark level, ulceration, mitotic index, lymphocyte infiltration, regression, angiolymphatic invasion, microsatellitosis, and growth phase. The results of SVM-based prediction of SLN status were compared with those achieved with logistic regression. RESULTS:The SLN positivity rate was 22% (52 of 234). When the accuracy was ≥80%, the negative predictive value, positive predictive value, specificity, and sensitivity were 98%, 54%, 94%, and 77% and 82%, 41%, 69%, and 93% by using SVM and logistic regression, respectively. Moreover, SVM and logistic regression were associated with a diagnostic error and an SNB percentage reduction of (1) 1% and 60% and (2) 15% and 73%, respectively. CONCLUSIONS:The results from this pilot study suggest that SVM-based prediction of SLN status might be evaluated as a prognostic method to avoid the SNB procedure in 60% of patients currently eligible, with a very low error rate. If validated in larger series, this strategy would lead to obvious advantages in terms of both patient quality of life and costs for the health care system. 
10.1245/ASO.2006.03.019
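A minimal sketch of this kind of SVM-versus-logistic-regression comparison, computing the four diagnostic measures the study reports. The data are synthetic stand-ins for the clinicopathologic variables, and the model settings are hypothetical, not the study's:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

def diagnostic_metrics(y_true, y_pred):
    """Sensitivity, specificity, PPV and NPV for binary labels
    (1 = positive sentinel node, 0 = negative)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp), "npv": tn / (tn + fn)}

# Synthetic stand-ins for clinicopathologic features (Breslow thickness,
# Clark level, ...): two noisy Gaussian clusters, imbalanced as in SNB.
rng = np.random.default_rng(1)
X0 = rng.normal(0.0, 1.0, size=(150, 5))   # node-negative patients
X1 = rng.normal(1.2, 1.0, size=(50, 5))    # node-positive patients
X, y = np.vstack([X0, X1]), np.array([0] * 150 + [1] * 50)

for model in (SVC(kernel="rbf"), LogisticRegression(max_iter=1000)):
    model.fit(X, y)
    print(type(model).__name__, diagnostic_metrics(y, model.predict(X)))
```

A high negative predictive value is the quantity of clinical interest here, since the goal is to safely spare node-negative patients the biopsy.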
    Computer-Aided Diagnosis of Micro-Malignant Melanoma Lesions Applying Support Vector Machines. Jaworek-Korjakowska Joanna BioMed research international Background. Malignant melanoma, the deadliest form of skin cancer, is among the most fatal disorders. The aim of modern dermatology is the early detection of skin cancer, which usually results in a reduced mortality rate and less extensive treatment. This paper presents a study on the classification of melanoma in the early stage of development using SVMs as a useful technique for data classification. Method. In this paper an automatic algorithm for the classification of melanomas in their early stage, with a diameter under 5 mm, has been presented. The system contains the following steps: image enhancement, lesion segmentation, feature calculation and selection, and a classification stage using SVMs. Results. The algorithm has been tested on 200 images including 70 melanomas and 130 benign lesions. The SVM classifier achieved a sensitivity of 90% and a specificity of 96%. The results indicate that the proposed approach captured most of the malignant cases and could provide reliable information for effective skin mole examination. Conclusions. Micro-melanomas, owing to their small size and early stage of development, create enormous difficulties during diagnosis, even for experts. The use of advanced equipment and sophisticated computer systems can help in the early diagnosis of skin lesions. 10.1155/2016/4381972
    Joint reconstruction and classification of tumor cells and cell interactions in melanoma tissue sections with synthesized training data. Effland Alexander,Kobler Erich,Brandenburg Anne,Klatzer Teresa,Neuhäuser Leonie,Hölzel Michael,Landsberg Jennifer,Pock Thomas,Rumpf Martin International journal of computer assisted radiology and surgery PURPOSE:Cancers are almost always diagnosed by morphologic features in tissue sections. In this context, machine learning tools provide new opportunities to describe tumor immune cell interactions within the tumor microenvironment and thus provide phenotypic information that might be predictive for the response to immunotherapy. METHODS:We develop a machine learning approach using variational networks for joint image denoising and classification of tissue sections for melanoma, which is an established model tumor for immuno-oncology research. The manual annotation of real training data would require substantial user interaction of experienced pathologists for each single training image, and the training of larger networks would rely on a very large number of such data sets with ground truth annotation. To overcome this bottleneck, we synthesize training data together with a proper tissue structure classification. To this end, a stochastic data generation process is used to mimic cell morphology, cell distribution and tissue architecture in the tumor microenvironment. Particular components of this tool are random placement and rotation of a large number of patches for presegmented cell nuclei, a stochastic fast marching approach to mimic the geometry of cells and texture generation based on a color covariance analysis of real data. Here, the generated training data reflect a large range of interaction patterns. RESULTS:In several applications to histological tissue sections, we analyze the efficiency and accuracy of the proposed approach. 
As a result, depending on the scenario considered, almost all cells and nuclei that ought to be detected are indeed detected and classified, and hardly any misclassifications occur. CONCLUSIONS:The proposed method allows for a computer-aided screening of histological tissue sections utilizing variational networks, with a particular emphasis on tumor immune cell interactions and on robust cell nuclei classification. 10.1007/s11548-019-01919-z
    Deep Learning in Medicine. Are We Ready? Ting Daniel Sw,Rim Tyler H,Choi Yoon Seong,Ledsam Joseph R Annals of the Academy of Medicine, Singapore
    An Image Processing and Genetic Algorithm-based Approach for the Detection of Melanoma in Patients. Salem Christian,Azar Danielle,Tokajian Sima Methods of information in medicine Melanoma skin cancer is the most aggressive type of skin cancer. It is most commonly caused by excessive exposure to ultraviolet radiation, which triggers uncontrollable proliferation of melanocytes. Early detection makes melanoma relatively easily curable. Diagnosis is usually done using traditional methods such as dermoscopy, which consists of a manual examination performed by the physician. However, these methods are not always well founded because they depend heavily on the physician's experience. Hence, there is a great need for a new automated approach in order to make diagnosis more reliable. In this paper, we present a two-phase technique to classify images of lesions into benign or malignant. The first phase consists of an image processing-based method that extracts the Asymmetry, Border Irregularity, Color Variation and Diameter of a given mole. The second phase classifies lesions using a Genetic Algorithm. Our technique shows a significant improvement over other well-known algorithms and proves to be more stable on both training and testing data. 10.3412/ME17-01-0061
    Modelling survival after treatment of intraocular melanoma using artificial neural networks and Bayes theorem. Taktak Azzam F G,Fisher Anthony C,Damato Bertil E Physics in medicine and biology This paper describes the development of an artificial intelligence (AI) system for survival prediction from intraocular melanoma. The system used artificial neural networks (ANNs) with five input parameters: coronal and sagittal tumour location, anterior tumour margin, largest basal tumour diameter and the cell type. After excluding records with missing data, 2331 patients were included in the study. These were split randomly into training and test sets. Date censorship was applied to the records to deal with patients who were lost to follow-up and patients who died from general causes. Bayes theorem was then applied to the ANN output to construct survival probability curves. A validation set with 34 patients unseen to both training and test sets was used to compare the AI system with Cox's regression (CR) and Kaplan-Meier (KM) analyses. Results showed large differences in the mean 5 year survival probability figures when the number of records with matching characteristics was small. However, as the number of matches increased to > 100 the system tended to agree with CR and KM. The validation set was also used to compare the system with a clinical expert in predicting time to metastatic death. The rms error was 3.7 years for the system and 4.3 years for the clinical expert for 15 years survival. For < 10 years survival, these figures were 2.7 and 4.2, respectively. We concluded that the AI system can match, if not surpass, the clinical expert's prediction. There were significant differences with CR and KM analyses when the number of records was small, but it was not known which model is more accurate.
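The general mechanics of combining a classifier's output with Bayes' theorem — revising a prior event probability once a predictor of known sensitivity and specificity flags a patient — can be illustrated with hypothetical numbers. This shows the idea of the update only, not the paper's exact construction of survival probability curves:

```python
def posterior_prob(prior, sensitivity, specificity):
    """P(event | positive prediction) by Bayes' theorem:
    P(E|+) = sens * P(E) / [sens * P(E) + (1 - spec) * (1 - P(E))]."""
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

# Hypothetical numbers: a prior 5-year mortality of 30%, and a predictor
# with 85% sensitivity and 80% specificity flagging the patient as high risk.
print(round(posterior_prob(0.30, 0.85, 0.80), 3))  # 0.646
```

The same formula with `1 - sensitivity` in the numerator gives the posterior after a negative prediction, so a full survival curve can be assembled by updating the prior at each time point.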
    A comprehensive genome-wide analysis of melanoma Breslow thickness identifies interaction between CDC42 and SCIN genetic variants. Vaysse Amaury,Fang Shenying,Brossard Myriam,Wei Qingyi,Chen Wei V,Mohamdi Hamida,Vincent-Fetita Lynda,Margaritte-Jeannin Patricia,Lavielle Nolwenn,Maubec Eve,Lathrop Mark,Avril Marie-Françoise,Amos Christopher I,Lee Jeffrey E,Demenais Florence International journal of cancer Breslow thickness (BT) is a major prognostic factor of cutaneous melanoma (CM), the most fatal skin cancer. The genetic component of BT has only been explored by candidate gene studies with inconsistent results. Our objective was to uncover the genetic factors underlying BT using a hypothesis-free genome-wide approach. Our analysis strategy integrated a genome-wide association study (GWAS) of single nucleotide polymorphisms (SNPs) for BT followed by pathway analysis of GWAS outcomes using the gene-set enrichment analysis (GSEA) method and epistasis analysis within BT-associated pathways. This strategy was applied to two large CM datasets with Hapmap3-imputed SNP data: the French MELARISK study for discovery (966 cases) and the MD Anderson Cancer Center study (1,546 cases) for replication. While no marginal effect of individual SNPs was revealed through GWAS, three pathways, defined by gene ontology (GO) categories, were significantly enriched in genes associated with BT (false discovery rate ≤5% in both studies): hormone activity, cytokine activity and myeloid cell differentiation. Epistasis analysis, within each significant GO, identified a statistically significant interaction between CDC42 and SCIN SNPs (pmeta-int = 2.2 × 10⁻⁶, which met the overall multiple-testing corrected threshold of 2.5 × 10⁻⁶). These two SNPs (and proxies) are strongly associated with CDC42 and SCIN gene expression levels and map to regulatory elements in skin cells. 
This interaction has important biological relevance since CDC42 and SCIN proteins have opposite effects in actin cytoskeleton organization and dynamics, a key mechanism underlying melanoma cell migration and invasion. 10.1002/ijc.30245
    Toward predicting metastatic progression of melanoma based on gene expression data. Li Yuanyuan,Krahn Juno M,Flake Gordon P,Umbach David M,Li Leping Pigment cell & melanoma research Primary and metastatic melanoma tumors share the same cell origin, making it challenging to identify genomic biomarkers that can differentiate them. Primary tumors themselves can be heterogeneous, reflecting ongoing genomic changes as they progress toward metastasizing. We developed a computational method to explore this heterogeneity and to predict metastatic progression of the primary tumors. We applied our method separately to gene expression and to microRNA (miRNA) expression data from ~450 primary and metastatic skin cutaneous melanoma (SKCM) samples from the Cancer Genome Atlas (TCGA). Metastatic progression scores from RNA-seq data were significantly associated with clinical staging of patients' lymph nodes, whereas scores from miRNA-seq data were significantly associated with Clark's level. The loss of expression of many characteristic epithelial lineage genes in primary SKCM tumor samples was highly correlated with predicted progression scores. We suggest that those genes/miRNAs might serve as putative biomarkers for SKCM metastatic progression. 10.1111/pcmr.12374
    The Long Non-Coding RNA RHPN1-AS1 Promotes Uveal Melanoma Progression. Lu Linna,Yu Xiaoyu,Zhang Leilei,Ding Xia,Pan Hui,Wen Xuyang,Xu Shiqiong,Xing Yue,Fan Jiayan,Ge Shengfang,Zhang He,Jia Renbing,Fan Xianqun International journal of molecular sciences Increasing evidence suggests that aberrant long non-coding RNAs (lncRNAs) are significantly correlated with the pathogenesis, development and metastasis of cancers. RHPN1-AS1 is a 2030-bp transcript originating from human chromosome 8q24. However, the role of RHPN1-AS1 in uveal melanoma (UM) remains to be clarified. In this study, we aimed to elucidate the molecular function of RHPN1-AS1 in UM. The RNA levels of RHPN1-AS1 in UM cell lines were examined using the quantitative real-time polymerase chain reaction (qRT-PCR). Short interfering RNAs (siRNAs) were designed to quench RHPN1-AS1 expression, and UM cells stably expressing short hairpin (sh) RHPN1-AS1 were established. Next, the cell proliferation and migration abilities were determined using a colony formation assay and a transwell migration/invasion assay. A tumor xenograft model in nude mice was established to confirm the function of RHPN1-AS1 in vivo. RHPN1-AS1 was significantly upregulated in a number of UM cell lines compared with the normal human retinal pigment epithelium (RPE) cell line. RHPN1-AS1 knockdown significantly inhibited UM cell proliferation and migration in vitro and in vivo. Our data suggest that RHPN1-AS1 could be an oncoRNA in UM, which may serve as a candidate prognostic biomarker and target for new therapies in malignant UM. 10.3390/ijms18010226
    Computer-assisted diagnosis techniques (dermoscopy and spectroscopy-based) for diagnosing skin cancer in adults. Ferrante di Ruffano Lavinia,Takwoingi Yemisi,Dinnes Jacqueline,Chuchu Naomi,Bayliss Susan E,Davenport Clare,Matin Rubeta N,Godfrey Kathie,O'Sullivan Colette,Gulati Abha,Chan Sue Ann,Durack Alana,O'Connell Susan,Gardiner Matthew D,Bamber Jeffrey,Deeks Jonathan J,Williams Hywel C, The Cochrane database of systematic reviews BACKGROUND:Early accurate detection of all skin cancer types is essential to guide appropriate management and to improve morbidity and survival. Melanoma and cutaneous squamous cell carcinoma (cSCC) are high-risk skin cancers which have the potential to metastasise and ultimately lead to death, whereas basal cell carcinoma (BCC) is usually localised with potential to infiltrate and damage surrounding tissue. Anxiety around missing early curable cases needs to be balanced against inappropriate referral and unnecessary excision of benign lesions. Computer-assisted diagnosis (CAD) systems use artificial intelligence to analyse lesion data and arrive at a diagnosis of skin cancer. When used in unreferred settings ('primary care'), CAD may assist general practitioners (GPs) or other clinicians to more appropriately triage high-risk lesions to secondary care. Used alongside clinical and dermoscopic suspicion of malignancy, CAD may reduce unnecessary excisions without missing melanoma cases. OBJECTIVES:To determine the accuracy of CAD systems for diagnosing cutaneous invasive melanoma and atypical intraepidermal melanocytic variants, BCC or cSCC in adults, and to compare its accuracy with that of dermoscopy. 
SEARCH METHODS:We undertook a comprehensive search of the following databases from inception up to August 2016: Cochrane Central Register of Controlled Trials (CENTRAL); MEDLINE; Embase; CINAHL; CPCI; Zetoc; Science Citation Index; US National Institutes of Health Ongoing Trials Register; NIHR Clinical Research Network Portfolio Database; and the World Health Organization International Clinical Trials Registry Platform. We studied reference lists and published systematic review articles. SELECTION CRITERIA:Studies of any design that evaluated CAD alone, or in comparison with dermoscopy, in adults with lesions suspicious for melanoma or BCC or cSCC, and compared with a reference standard of either histological confirmation or clinical follow-up. DATA COLLECTION AND ANALYSIS:Two review authors independently extracted all data using a standardised data extraction and quality assessment form (based on QUADAS-2). We contacted authors of included studies where information related to the target condition or diagnostic threshold was missing. We estimated summary sensitivities and specificities separately by type of CAD system, using the bivariate hierarchical model. We compared CAD with dermoscopy using (a) all available CAD data (indirect comparisons), and (b) studies providing paired data for both tests (direct comparisons). We tested the contribution of human decision-making to the accuracy of CAD diagnoses in a sensitivity analysis by removing studies that gave CAD results to clinicians to guide diagnostic decision-making. MAIN RESULTS:We included 42 studies, 24 evaluating digital dermoscopy-based CAD systems (Derm-CAD) in 23 study cohorts with 9602 lesions (1220 melanomas, at least 83 BCCs, 9 cSCCs), providing 32 datasets for Derm-CAD and seven for dermoscopy. Eighteen studies evaluated spectroscopy-based CAD (Spectro-CAD) in 16 study cohorts with 6336 lesions (934 melanomas, 163 BCC, 49 cSCCs), providing 32 datasets for Spectro-CAD and six for dermoscopy. 
These consisted of 15 studies using multispectral imaging (MSI), two studies using electrical impedance spectroscopy (EIS) and one study using diffuse-reflectance spectroscopy. Studies were incompletely reported and at unclear to high risk of bias across all domains. Included studies inadequately address the review question, due to an abundance of low-quality studies, poor reporting, and recruitment of highly selected groups of participants. Across all CAD systems, we found considerable variation in the hardware and software technologies used, the types of classification algorithm employed, the methods used to train the algorithms, and the lesion morphological features extracted and analysed, even between studies evaluating the same type of CAD system. Meta-analysis found CAD systems had high sensitivity for correct identification of cutaneous invasive melanoma and atypical intraepidermal melanocytic variants in highly selected populations, but with low and very variable specificity, particularly for Spectro-CAD systems. Pooled data from 22 studies estimated the sensitivity of Derm-CAD for the detection of melanoma as 90.1% (95% confidence interval (CI) 84.0% to 94.0%) and specificity as 74.3% (95% CI 63.6% to 82.7%). Pooled data from eight studies estimated the sensitivity of multispectral imaging CAD (MSI-CAD) as 92.9% (95% CI 83.7% to 97.1%) and specificity as 43.6% (95% CI 24.8% to 64.5%). When applied to a hypothetical population of 1000 lesions at the mean observed melanoma prevalence of 20%, Derm-CAD would miss 20 melanomas and would lead to 206 false-positive results for melanoma. MSI-CAD would miss 14 melanomas and would lead to 451 false diagnoses for melanoma. Preliminary findings suggest CAD systems are at least as sensitive as assessment of dermoscopic images for the diagnosis of invasive melanoma and atypical intraepidermal melanocytic variants. 
We are unable to make summary statements about the use of CAD in unreferred populations, or its accuracy in detecting keratinocyte cancers, or its use in any setting as a diagnostic aid, because of the paucity of studies. AUTHORS' CONCLUSIONS:In highly selected patient populations all CAD types demonstrate high sensitivity, and could prove useful as a back-up for specialist diagnosis to assist in minimising the risk of missing melanomas. However, the evidence base is currently too poor to understand whether CAD system outputs translate to different clinical decision-making in practice. Insufficient data are available on the use of CAD in community settings, or for the detection of keratinocyte cancers. The evidence base for individual systems is too limited to draw conclusions on which might be preferred for practice. Prospective comparative studies are required that evaluate the use of already evaluated CAD systems as diagnostic aids, by comparison to face-to-face dermoscopy, and in participant populations that are representative of those in which the test would be used in practice. 10.1002/14651858.CD013186
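The hypothetical-population figures quoted in the review's main results follow directly from prevalence, sensitivity and specificity; the arithmetic can be reproduced as:

```python
def expected_outcomes(n, prevalence, sensitivity, specificity):
    """Expected missed cancers and false-positive calls when a test with
    the given sensitivity/specificity is applied to n lesions."""
    positives = n * prevalence          # true melanomas in the population
    negatives = n - positives           # benign lesions
    missed = positives * (1 - sensitivity)
    false_pos = negatives * (1 - specificity)
    return round(missed), round(false_pos)

# 1000 lesions at 20% melanoma prevalence, using the pooled estimates.
print(expected_outcomes(1000, 0.20, 0.901, 0.743))  # Derm-CAD -> (20, 206)
print(expected_outcomes(1000, 0.20, 0.929, 0.436))  # MSI-CAD  -> (14, 451)
```

This makes the trade-off concrete: MSI-CAD's higher sensitivity misses six fewer melanomas, but its much lower specificity more than doubles the false-positive count.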
    Acral melanoma detection using a convolutional neural network for dermoscopy images. Yu Chanki,Yang Sejung,Kim Wonoh,Jung Jinwoong,Chung Kee-Yang,Lee Sang Wook,Oh Byungho PloS one BACKGROUND/PURPOSE:Acral melanoma is the most common type of melanoma in Asians, and usually results in a poor prognosis due to late diagnosis. We applied a convolutional neural network to dermoscopy images of acral melanoma and benign nevi on the hands and feet and evaluated its usefulness for the early diagnosis of these conditions. METHODS:A total of 724 dermoscopy images comprising acral melanoma (350 images from 81 patients) and benign nevi (374 images from 194 patients), and confirmed by histopathological examination, were analyzed in this study. To perform the 2-fold cross validation, we split them into two mutually exclusive subsets: half of the total image dataset was selected for training and the rest for testing, and we calculated the accuracy of diagnosis comparing it with the dermatologist's and non-expert's evaluation. RESULTS:The accuracy (percentage of true positives and true negatives among all images) of the convolutional neural network was 83.51% and 80.23%, figures higher than the non-expert's evaluation (67.84%, 62.71%) and close to the expert's (81.08%, 81.64%). Moreover, the convolutional neural network showed area-under-the-curve values of 0.80 and 0.84 and Youden's index values of 0.6795 and 0.6073, scores similar to those of the expert. CONCLUSION:Although further data analysis is necessary to improve their accuracy, convolutional neural networks would be helpful to detect acral melanoma from dermoscopy images of the hands and feet. 10.1371/journal.pone.0193321
    Automated detection of malignant features in confocal microscopy on superficial spreading melanoma versus nevi. Gareau Dan,Hennessy Ricky,Wan Eric,Pellacani Giovanni,Jacques Steven L Journal of biomedical optics In-vivo reflectance confocal microscopy (RCM) shows promise for the early detection of superficial spreading melanoma (SSM). RCM of SSM shows pagetoid melanocytes (PMs) in the epidermis and disarray at the dermal-epidermal junction (DEJ), which are automatically quantified with a computer algorithm that locates depth of the most superficial pigmented surface [D(SPS)(x,y)] containing PMs in the epidermis and pigmented basal cells near the DEJ. The algorithm uses 200 noninvasive confocal optical sections that image the superficial 200 μm of ten skin sites: five unequivocal SSMs and five nevi. The pattern recognition algorithm automatically identifies PMs in all five SSMs and finds none in the nevi. A large mean gradient ψ (roughness) between laterally adjacent points on D(SPS)(x,y) identifies DEJ disruption in SSM ψ = 11.7 ± 3.7 [-] for n = 5 SSMs versus a small ψ = 5.5 ± 1.0 [-] for n = 5 nevi (significance, p = 0.0035). Quantitative endpoint metrics for malignant characteristics make digital RCM data an attractive diagnostic asset for pathologists, augmenting studies thus far, which have relied largely on visual assessment. 10.1117/1.3524301
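A simplified numerical stand-in for the roughness metric ψ in this study — the mean gradient between laterally adjacent points on the depth surface D(SPS)(x,y) — might look like the following; it illustrates why a disrupted dermal-epidermal junction scores higher than a smooth one, and is not the authors' algorithm:

```python
import numpy as np

def surface_roughness(depth):
    """Mean absolute difference between laterally adjacent points of a
    depth map D(x, y) -- a simplified stand-in for the gradient metric psi."""
    d = np.asarray(depth, dtype=float)
    gx = np.abs(np.diff(d, axis=0))   # differences between row neighbours
    gy = np.abs(np.diff(d, axis=1))   # differences between column neighbours
    return (gx.mean() + gy.mean()) / 2

flat   = np.zeros((8, 8))                       # smooth junction (nevus-like)
jagged = np.indices((8, 8)).sum(0) % 2 * 10.0   # checkerboard disruption (SSM-like)
print(surface_roughness(flat), surface_roughness(jagged))  # 0.0 10.0
```

On real confocal stacks, D(SPS)(x,y) would first have to be extracted from the pigmented-surface detection step before a roughness statistic of this kind could be computed.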
    Accuracy of Computer-Aided Diagnosis of Melanoma: A Meta-analysis. Dick Vincent,Sinz Christoph,Mittlböck Martina,Kittler Harald,Tschandl Philipp JAMA dermatology Importance:The recent advances in the field of machine learning have raised expectations that computer-aided diagnosis will become the standard for the diagnosis of melanoma. Objective:To critically review the current literature and compare the diagnostic accuracy of computer-aided diagnosis with that of human experts. Data Sources:The MEDLINE, arXiv, and PubMed Central databases were searched to identify eligible studies published between January 1, 2002, and December 31, 2018. Study Selection:Studies that reported on the accuracy of automated systems for melanoma were selected. Search terms included melanoma, diagnosis, detection, computer aided, and artificial intelligence. Data Extraction and Synthesis:Evaluation of the risk of bias was performed using the QUADAS-2 tool, and quality assessment was based on predefined criteria. Data were analyzed from February 1 to March 10, 2019. Main Outcomes and Measures:Summary estimates of sensitivity and specificity and summary receiver operating characteristic curves were the primary outcomes. Results:The literature search yielded 1694 potentially eligible studies, of which 132 were included and 70 offered sufficient information for a quantitative analysis. Most studies came from the field of computer science. Prospective clinical studies were rare. Combining the results for automated systems gave a melanoma sensitivity of 0.74 (95% CI, 0.66-0.80) and a specificity of 0.84 (95% CI, 0.79-0.88). Sensitivity was lower in studies that used independent test sets than in those that did not (0.51; 95% CI, 0.34-0.69 vs 0.82; 95% CI, 0.77-0.86; P < .001); however, the specificity was similar (0.83; 95% CI, 0.71-0.91 vs 0.85; 95% CI, 0.80-0.88; P = .67). 
In comparison with dermatologists' diagnosis, computer-aided diagnosis showed similar sensitivities and a specificity that was 10 percentage points lower, but the difference was not statistically significant. Studies were heterogeneous and substantial risk of bias was found in all but 4 of the 70 studies included in the quantitative analysis. Conclusions and Relevance:Although the accuracy of computer-aided diagnosis for melanoma detection is comparable to that of experts, the real-world applicability of these systems is unknown and potentially limited owing to overfitting and the risk of bias of the studies at hand. 10.1001/jamadermatol.2019.1375
    A convolutional neural network trained with dermoscopic images performed on par with 145 dermatologists in a clinical melanoma image classification task. Brinker Titus J,Hekler Achim,Enk Alexander H,Klode Joachim,Hauschild Axel,Berking Carola,Schilling Bastian,Haferkamp Sebastian,Schadendorf Dirk,Fröhling Stefan,Utikal Jochen S,von Kalle Christof, European journal of cancer (Oxford, England : 1990) BACKGROUND:Recent studies have demonstrated the use of convolutional neural networks (CNNs) to classify images of melanoma with accuracies comparable to those achieved by board-certified dermatologists. However, the performance of a CNN exclusively trained with dermoscopic images in a clinical image classification task in direct competition with a large number of dermatologists has not been measured to date. This study compares the performance of a convolutional neural network trained with dermoscopic images exclusively for identifying melanoma in clinical photographs with the manual grading of the same images by dermatologists. METHODS:We compared automatic digital melanoma classification with the performance of 145 dermatologists of 12 German university hospitals. We used methods from enhanced deep learning to train a CNN with 12,378 open-source dermoscopic images. We used 100 clinical images to compare the performance of the CNN to that of the dermatologists. Dermatologists were compared with the deep neural network in terms of sensitivity, specificity and receiver operating characteristics. FINDINGS:The mean sensitivity and specificity achieved by the dermatologists with clinical images was 89.4% (range: 55.0%-100%) and 64.4% (range: 22.5%-92.5%). At the same sensitivity, the CNN exhibited a mean specificity of 68.2% (range 47.5%-86.25%). Among the dermatologists, the attendings showed the highest mean sensitivity of 92.8% at a mean specificity of 57.7%. With the same high sensitivity of 92.8%, the CNN had a mean specificity of 61.1%. 
INTERPRETATION:For the first time, dermatologist-level image classification was achieved on a clinical image classification task without training on clinical images. The CNN had a smaller variance of results indicating a higher robustness of computer vision compared with human assessment for dermatologic image classification tasks. 10.1016/j.ejca.2019.02.005
    Prediction of BAP1 Expression in Uveal Melanoma Using Densely-Connected Deep Classification Networks. Sun Muyi,Zhou Wei,Qi Xingqun,Zhang Guanhong,Girnita Leonard,Seregard Stefan,Grossniklaus Hans E,Yao Zeyi,Zhou Xiaoguang,Stålhammar Gustav Cancers Uveal melanoma is the most common primary intraocular malignancy in adults, with nearly half of all patients eventually developing metastases, which are invariably fatal. Manual assessment of the level of expression of the tumor suppressor BRCA1-associated protein 1 (BAP1) in tumor cell nuclei can identify patients with a high risk of developing metastases, but may suffer from poor reproducibility. In this study, we verified whether artificial intelligence could predict manual assessments of BAP1 expression in 47 enucleated eyes with uveal melanoma, collected from one European and one American referral center. Digitally scanned pathology slides were divided into 8176 patches, each with a size of 256 × 256 pixels. These were in turn divided into a training cohort of 6800 patches and a validation cohort of 1376 patches. A densely-connected classification network based on deep learning was then applied to each patch. This achieved a sensitivity of 97.1%, a specificity of 98.1%, an overall diagnostic accuracy of 97.1%, and an F1-score of 97.8% for the prediction of BAP1 expression in individual high-resolution patches, with slightly lower performance at lower resolutions. The area under the receiver operating characteristic (ROC) curves of the deep learning model achieved an average of 0.99. On a full tumor level, our network classified all 47 tumors identically to an ophthalmic pathologist. We conclude that this deep learning model provides an accurate and reproducible method for the prediction of BAP1 expression in uveal melanoma. 10.3390/cancers11101579
    Pathologist-level classification of histopathological melanoma images with deep neural networks. Hekler Achim,Utikal Jochen Sven,Enk Alexander H,Berking Carola,Klode Joachim,Schadendorf Dirk,Jansen Philipp,Franklin Cindy,Holland-Letz Tim,Krahl Dieter,von Kalle Christof,Fröhling Stefan,Brinker Titus Josef European journal of cancer (Oxford, England : 1990) BACKGROUND:The diagnosis of most cancers is made by a board-certified pathologist based on a tissue biopsy under the microscope. Recent research reveals a high discordance between individual pathologists. For melanoma, the literature reports a 25-26% discordance in classifying a benign nevus versus malignant melanoma. Deep learning was successfully implemented to enhance the precision of lung and breast cancer diagnoses. The aim of this study is to illustrate the potential of deep learning to assist human assessment for a histopathologic melanoma diagnosis. METHODS:Six hundred ninety-five lesions were classified by an expert histopathologist in accordance with current guidelines (350 nevi and 345 melanomas). Only the haematoxylin and eosin stained (H&E) slides of these lesions were digitalised using a slide scanner and then randomly cropped. Five hundred ninety-five of the resulting images were used for the training of a convolutional neural network (CNN). The additional 100 H&E image sections were used to test the results of the CNN in comparison with the original class labels. FINDINGS:The total discordance with the histopathologist was 18% for melanoma (95% confidence interval [CI]: 7.4-28.6%), 20% for nevi (95% CI: 8.9-31.1%) and 19% for the full set of images (95% CI: 11.3-26.7%). INTERPRETATION:Even in the worst case, the discordance of the CNN was about the same as the discordance between human pathologists as reported in the literature. Despite requiring vastly less data, diagnosis time and cost than the pathologist, our CNN achieved on-par performance. 
In conclusion, CNNs appear to be a valuable tool for assisting human melanoma diagnosis. 10.1016/j.ejca.2019.04.021
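The confidence intervals reported above are consistent with a normal-approximation (Wald) interval for a binomial proportion; assuming the 100-image test set contained 50 melanomas and 50 nevi (an inference from the reported bounds, not stated in the abstract), the 18% melanoma discordance reproduces the 7.4-28.6% interval:

```python
import math

def wald_ci(p_hat, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a binomial proportion."""
    half = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half, p_hat + half

# 18% discordance on an assumed 50 melanoma test images:
lo, hi = wald_ci(0.18, 50)
print(f"{lo:.1%} to {hi:.1%}")  # reproduces the reported 7.4% to 28.6%
```

The 19% full-set figure with n = 100 likewise reproduces the reported 11.3-26.7% bounds under the same assumption.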
    Artificial Intelligence Estimates the Importance of Baseline Factors in Predicting Response to Anti-PD1 in Metastatic Melanoma. Indini Alice,Di Guardo Lorenza,Cimminiello Carolina,De Braud Filippo,Del Vecchio Michele American journal of clinical oncology OBJECTIVE:Prognosis of patients with metastatic melanoma has dramatically improved over recent years because of the advent of antibodies targeting programmed cell death protein-1 (PD1). However, the response rate is ~40% and baseline biomarkers for the outcome are yet to be identified. Here, we aimed to determine whether artificial intelligence might be useful in weighting the importance of baseline variables in predicting response to anti-PD1. METHODS:This is a retrospective study evaluating 173 patients receiving anti-PD1 for melanoma. Using an artificial neural network analysis, the importance of different variables was estimated and used in predicting response rate and overall survival. RESULTS:After a mean follow-up of 12.8 (±11.9) months, disease control rate was 51%. Using the artificial neural network, we observed that 3 factors predicted response to anti-PD1: neutrophil-to-lymphocyte ratio (NLR) (importance: 0.195), presence of ≥3 metastatic sites (importance: 0.156), and baseline lactate dehydrogenase (LDH) > upper limit of normal (importance: 0.154). Looking at connections between different covariates and overall survival, the most important variables influencing survival were: presence of ≥3 metastatic sites (importance: 0.202), age (importance: 0.189), NLR (importance: 0.164), site of primary melanoma (cutaneous vs. noncutaneous) (importance: 0.112), and LDH > upper limit of normal (importance: 0.108). CONCLUSIONS:NLR, presence of ≥3 metastatic sites, LDH levels, age, and site of primary melanoma are important baseline factors influencing response and survival. Further studies are warranted to develop a model to guide the decision to administer anti-PD1 treatment in patients with melanoma. 10.1097/COC.0000000000000566
    Artificial intelligence for melanoma diagnosis: how can we deliver on the promise? Mar V J,Soyer H P Annals of oncology : official journal of the European Society for Medical Oncology 10.1093/annonc/mdy193
    Deep learning outperformed 11 pathologists in the classification of histopathological melanoma images. Hekler Achim,Utikal Jochen S,Enk Alexander H,Solass Wiebke,Schmitt Max,Klode Joachim,Schadendorf Dirk,Sondermann Wiebke,Franklin Cindy,Bestvater Felix,Flaig Michael J,Krahl Dieter,von Kalle Christof,Fröhling Stefan,Brinker Titus J European journal of cancer (Oxford, England : 1990) BACKGROUND:The diagnosis of most cancers is made by a board-certified pathologist based on a tissue biopsy under the microscope. Recent research reveals a high discordance between individual pathologists. For melanoma, the literature reports a 25-26% discordance in classifying a benign nevus versus malignant melanoma. A recent study indicated the potential of deep learning to lower these discordances. However, the performance of deep learning in classifying histopathologic melanoma images was never compared directly to human experts. The aim of this study is to perform the first such direct comparison. METHODS:A total of 695 lesions were classified by an expert histopathologist in accordance with current guidelines (350 nevi/345 melanoma). Only the haematoxylin & eosin (H&E) slides of these lesions were digitalised via a slide scanner and then randomly cropped. A total of 595 of the resulting images were used to train a convolutional neural network (CNN). The additional 100 H&E image sections were used to test the results of the CNN in comparison to 11 histopathologists. Three combined McNemar tests comparing the results of the CNN's test runs in terms of sensitivity, specificity and accuracy were predefined to test for significance (p < 0.05). FINDINGS:The CNN achieved a mean sensitivity/specificity/accuracy of 76%/60%/68% over 11 test runs. In comparison, the 11 pathologists achieved a mean sensitivity/specificity/accuracy of 51.8%/66.5%/59.2%. Thus, the CNN was significantly (p = 0.016) superior in classifying the cropped images. 
INTERPRETATION:With limited image information available, a CNN was able to outperform 11 histopathologists in the classification of histopathological melanoma images and thus shows promise to assist human melanoma diagnoses. 10.1016/j.ejca.2019.06.012
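The McNemar tests predefined in the study above compare two classifiers on the same test set using only the discordant pairs (cases where exactly one classifier is correct). An exact two-sided version can be sketched as follows, with illustrative counts rather than the study's contingency table:

```python
import math

def mcnemar_exact(b, c):
    """Exact two-sided McNemar test on the discordant pair counts.
    b: cases classifier A got right and B got wrong; c: the reverse."""
    n = b + c
    k = min(b, c)
    # Under H0 the discordant pairs split 50/50, so the p-value is a
    # two-sided exact binomial tail probability.
    p = 2 * sum(math.comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)

# Hypothetical discordant counts: A right/B wrong 18 times, the reverse 5 times.
print(round(mcnemar_exact(b=18, c=5), 4))  # → 0.0106
```

The concordant pairs (both right or both wrong) do not enter the statistic, which is what makes the test appropriate for paired comparisons on a shared test set.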
    [Melanoma early detection and automatic diagnosis of pigmented lesions]. Stolz Wilhelm Journal der Deutschen Dermatologischen Gesellschaft = Journal of the German Society of Dermatology : JDDG 10.1111/ddg.12399
    Improving the early diagnosis of early nodular melanoma: can we do better? Corneli Paola,Zalaudek Iris,Magaton Rizzi Giovanni,di Meo Nicola Expert review of anticancer therapy INTRODUCTION:Cutaneous melanoma is the sixth most common malignant cancer in the USA. Among different subtypes of melanoma, nodular melanoma (NM) accounts for about 14% of all cases but is responsible for more than 40% of melanoma deaths. Early diagnosis is the best method to improve melanoma prognosis. Unfortunately, early diagnosis of NM is particularly challenging given that patients often lack identifiable risk factors such as many moles or freckles. Moreover, early NM may mimic a range of benign skin lesions that are not routinely excised or biopsied in everyday practice. For this reason, specific clinical and skin imaging clues have been proposed to improve early detection of NM. Areas covered: The review discusses the noninvasive tools to diagnose thin melanoma, particularly NM. Expert commentary: Currently, dermatologists have a wide range of diagnostic tools at their disposal. Current data suggest that the early diagnosis of NM is a major challenge as the majority of early NM are symmetric, roundish, and lack a specific pattern. Another promising strategy is based on recent data suggesting that artificial intelligence based on deep convolutional neural networks is able to outperform the average dermatologist. Further research is necessary to validate the performance of this method in the real world and in the clinical setting. 10.1080/14737140.2018.1507822
    Automatic diagnosis of melanoma using machine learning methods on a spectroscopic system. Li Lin,Zhang Qizhi,Ding Yihua,Jiang Huabei,Thiers Bruce H,Wang James Z BMC medical imaging BACKGROUND:Early and accurate diagnosis of melanoma, the deadliest type of skin cancer, has the potential to reduce morbidity and mortality rate. However, early diagnosis of melanoma is not trivial even for experienced dermatologists, as it needs sampling and laboratory tests which can be extremely complex and subjective. The accuracy of clinical diagnosis of melanoma is also an issue especially in distinguishing between melanoma and mole. To solve these problems, this paper presents an approach that makes non-subjective judgements based on quantitative measures for automatic diagnosis of melanoma. METHODS:Our approach involves image acquisition, image processing, feature extraction, and classification. 187 images (19 malignant melanomas and 168 benign lesions) were collected in a clinic by a spectroscopic device that combines single-scattered, polarized light spectroscopy with multiple-scattered, un-polarized light spectroscopy. After noise reduction and image normalization, features were extracted based on statistical measurements (i.e. mean, standard deviation, mean absolute deviation, L1 norm, and L2 norm) of image pixel intensities to characterize the pattern of melanoma. Finally, these features were fed into certain classifiers to train learning models for classification. RESULTS:We adopted three classifiers (artificial neural network, naïve Bayes, and k-nearest neighbour) to evaluate our approach separately. The naïve Bayes classifier achieved the best performance (89% accuracy, 89% sensitivity and 89% specificity) and was integrated with our approach in a desktop application running on the spectroscopic system for diagnosis of melanoma. CONCLUSIONS:Our work has two strengths. 
(1) We have used single scattered polarized light spectroscopy and multiple scattered unpolarized light spectroscopy to decipher the multilayered characteristics of human skin. (2) Our approach does not need image segmentation, as we directly probe tiny spots in the lesion skin and the image scans do not involve background skin. The desktop application for automatic diagnosis of melanoma can help dermatologists get a non-subjective second opinion for their diagnosis decision. 10.1186/1471-2342-14-36
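The statistical intensity features named above (mean, standard deviation, mean absolute deviation, L1 and L2 norms of the pixel intensities) are simple to compute; a sketch with NumPy on a hypothetical patch, not the authors' spectroscopic data:

```python
import numpy as np

def intensity_features(img):
    """Summary statistics of the pixel intensities of one image:
    mean, standard deviation, mean absolute deviation, L1 and L2 norms."""
    x = np.asarray(img, dtype=float).reshape(-1)
    return {
        "mean": x.mean(),
        "std": x.std(),
        "mad": np.abs(x - x.mean()).mean(),
        "l1": np.abs(x).sum(),
        "l2": np.sqrt((x ** 2).sum()),
    }

# Hypothetical 8x8 single-band patch with random intensities:
rng = np.random.default_rng(0)
feats = intensity_features(rng.uniform(0, 255, size=(8, 8)))
print(sorted(feats))
```

Because each image collapses to a fixed-length feature vector, no lesion segmentation is needed, which matches the second strength the authors claim.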
    Hyperspectral imaging in automated digital dermoscopy screening for melanoma. Hosking Anna-Marie,Coakley Brandon J,Chang Dorothy,Talebi-Liasi Faezeh,Lish Samantha,Lee Sung Won,Zong Amanda M,Moore Ian,Browning James,Jacques Steven L,Krueger James G,Kelly Kristen M,Linden Kenneth G,Gareau Daniel S Lasers in surgery and medicine OBJECTIVES:Early melanoma detection decreases morbidity and mortality. Early detection classically involves dermoscopy to identify suspicious lesions for which biopsy is indicated. Biopsy and histological examination then diagnose benign nevi, atypical nevi, or cancerous growths. With current methods, a considerable number of unnecessary biopsies are performed as only 11% of all biopsied, suspicious lesions are actually melanomas. Thus, there is a need for more advanced noninvasive diagnostics to guide the decision of whether or not to biopsy. Artificial intelligence can generate screening algorithms that transform a set of imaging biomarkers into a risk score that can be used to classify a lesion as a melanoma or a nevus by comparing the score to a classification threshold. Melanoma imaging biomarkers have been shown to be spectrally dependent in Red, Green, Blue (RGB) color channels, and hyperspectral imaging may further enhance diagnostic power. The purpose of this study was to use the same melanoma imaging biomarkers previously described, but over a wider range of wavelengths to determine if, in combination with machine learning algorithms, this could result in enhanced melanoma detection. METHODS:We used the melanoma advanced imaging dermatoscope (mAID) to image pigmented lesions assessed by dermatologists as requiring a biopsy. The mAID is a 21-wavelength imaging device in the 350-950 nm range. We then generated imaging biomarkers from these hyperspectral dermoscopy images, and, with the help of artificial intelligence algorithms, generated a melanoma Q-score for each lesion (0 = nevus, 1 = melanoma). 
The Q-score was then compared to the histopathologic diagnosis. RESULTS:The overall sensitivity and specificity of hyperspectral dermoscopy in detecting melanoma when evaluated in a set of lesions selected by dermatologists as requiring biopsy was 100% and 36%, respectively. CONCLUSION:With widespread application, and if validated in larger clinical trials, this non-invasive methodology could decrease unnecessary biopsies and potentially increase life-saving early detection events. Lasers Surg. Med. 51:214-222, 2019. © 2019 The Authors. Lasers in Surgery and Medicine Published by Wiley Periodicals, Inc. 10.1002/lsm.23055
    Deep Tissue Sequencing Using Hypodermoscopy and Augmented Intelligence to Analyze Atypical Pigmented Lesions. Khodadad Iman,Shafiee Javad,Wong Alexander,Kazemzadeh Farnoud,Arlette John Journal of cutaneous medicine and surgery BACKGROUND:Over the past decade, new technologies, devices, and methods have been developed to assist in the diagnosis of cutaneous melanocytic lesions. OBJECTIVE:Our objective was to evaluate the performance of an augmented intelligence system in the assessment of atypical pigmented lesions. METHODS:Nine atypical pigmented lesions on 8 patients were evaluated prior to surgical removal. No lesions had received previous treatment other than a diagnostic biopsy. Prior to surgical removal, each lesion was evaluated by an Augmented Intelligence Dermal Imager (AID) and the assessment parameters reviewed in light of the final histopathological diagnosis. RESULTS:The AID was used to evaluate a limited set of atypical pigmented lesions and showed sensitivity and specificity of 82% and 61%, respectively, based on its internal risk assessment algorithms. LIMITATIONS:These cases represent early assessments of the AID in a clinical setting, all prior assessments having been carried out on digital images. The information received from these evaluations requires further validation and analysis to be able to extrapolate its clinical usefulness. CONCLUSION:The AID combines dermoscopy, hypodermoscopy, and a trained augmented algorithm to produce a diffusion map representing the features of each lesion compared to the learned characteristics from a database of known dermoscopy images of lesions with definitive prior diagnosis. The information gathered from the diffusion map might be used to calculate a malignancy risk factor for the lesion compared to known melanoma features. This malignancy risk factor could be helpful in providing information to justify the biopsy of an atypical pigmented lesion. 10.1177/1203475418792000
    Artificial intelligence and melanoma diagnosis: ignoring human nature may lead to false predictions. Lallas Aimilios,Argenziano Giuseppe Dermatology practical & conceptual 10.5826/dpc.0804a01
    Automatic Classification of Specific Melanocytic Lesions Using Artificial Intelligence. Jaworek-Korjakowska Joanna,Kłeczek Paweł BioMed research international BACKGROUND:Given its propensity to metastasize and the lack of effective therapies for most patients with advanced disease, early detection of melanoma is a clinical imperative. Different computer-aided diagnosis (CAD) systems have been proposed to increase the specificity and sensitivity of melanoma detection. Although such computer programs are developed for different diagnostic algorithms, to the best of our knowledge, a system to classify different melanocytic lesions has not been proposed yet. METHOD:In this research we present a new approach to the classification of melanocytic lesions. This work is focused not only on categorization of skin lesions as benign or malignant but also on specifying the exact type of a skin lesion including melanoma, Clark nevus, Spitz/Reed nevus, and blue nevus. The proposed automatic algorithm contains the following steps: image enhancement, lesion segmentation, feature extraction, and selection as well as classification. RESULTS:The algorithm has been tested on 300 dermoscopic images and achieved an accuracy of 92%, indicating that the proposed approach classified most of the melanocytic lesions correctly. CONCLUSIONS:The proposed system can not only help to precisely diagnose the type of skin mole but also decrease the number of biopsies and reduce the morbidity related to skin lesion excision. 10.1155/2016/8934242
    Superior skin cancer classification by the combination of human and artificial intelligence. Hekler Achim,Utikal Jochen S,Enk Alexander H,Hauschild Axel,Weichenthal Michael,Maron Roman C,Berking Carola,Haferkamp Sebastian,Klode Joachim,Schadendorf Dirk,Schilling Bastian,Holland-Letz Tim,Izar Benjamin,von Kalle Christof,Fröhling Stefan,Brinker Titus J, European journal of cancer (Oxford, England : 1990) BACKGROUND:In recent studies, convolutional neural networks (CNNs) outperformed dermatologists in distinguishing dermoscopic images of melanoma and nevi. In these studies, dermatologists and artificial intelligence were considered as opponents. However, the combination of classifiers frequently yields superior results, both in machine learning and among humans. In this study, we investigated the potential benefit of combining human and artificial intelligence for skin cancer classification. METHODS:Using 11,444 dermoscopic images, which were divided into five diagnostic categories, novel deep learning techniques were used to train a single CNN. Then, both 112 dermatologists of 13 German university hospitals and the trained CNN independently classified a set of 300 biopsy-verified skin lesions into those five classes. Taking into account the certainty of the decisions, the two independently determined diagnoses were combined to a new classifier with the help of a gradient boosting method. The primary end-point of the study was the correct classification of the images into five designated categories, whereas the secondary end-point was the correct classification of lesions as either benign or malignant (binary classification). FINDINGS:Regarding the multiclass task, the combination of man and machine achieved an accuracy of 82.95%. This was 1.36% higher than the best of the two individual classifiers (81.59% achieved by the CNN). 
Owing to the class imbalance in the binary problem, sensitivity, but not accuracy, was examined and demonstrated to be superior (89%) to the best individual classifier (CNN with 86.1%). The specificity of the combined classifier decreased from 89.2% to 84%. However, at an equal sensitivity of 89%, the CNN achieved a specificity of only 81.5%. INTERPRETATION:Our findings indicate that the combination of human and artificial intelligence achieves results superior to the independent results of either system alone. 10.1016/j.ejca.2019.07.019
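The fusion step above stacks the two classifiers' outputs (each with its certainty) as features for a meta-classifier. The study used gradient boosting; as a minimal stand-in, the same stacking idea can be sketched with a plain logistic meta-classifier on synthetic data (all numbers below are made up for illustration):

```python
import numpy as np

# Synthetic "CNN" and "human" probability outputs for 400 lesions.
rng = np.random.default_rng(42)
n = 400
y = rng.integers(0, 2, n)                            # 0 = benign, 1 = malignant
cnn_p = np.clip(y + rng.normal(0, 0.35, n), 0, 1)    # noisy CNN probability
human_p = np.clip(y + rng.normal(0, 0.45, n), 0, 1)  # noisier human certainty
X = np.column_stack([cnn_p, human_p, np.ones(n)])    # stacked features + bias

# Fit the logistic meta-classifier by plain gradient descent.
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

fused = (1 / (1 + np.exp(-X @ w)) > 0.5).astype(int)
acc = {"cnn": ((cnn_p > 0.5) == y).mean(),
       "human": ((human_p > 0.5) == y).mean(),
       "fused": (fused == y).mean()}
print(acc)
```

Because the two synthetic error sources are independent, the fused classifier can exploit agreement between them, which mirrors the mechanism the study proposes (though gradient boosting, not logistic regression, was its actual combiner).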
    Prediction of melanoma evolution in melanocytic nevi via artificial intelligence: A call for prospective data. Sondermann Wiebke,Utikal Jochen Sven,Enk Alexander H,Schadendorf Dirk,Klode Joachim,Hauschild Axel,Weichenthal Michael,French Lars E,Berking Carola,Schilling Bastian,Haferkamp Sebastian,Fröhling Stefan,von Kalle Christof,Brinker Titus J European journal of cancer (Oxford, England : 1990) Recent research revealed the superiority of artificial intelligence over dermatologists to diagnose melanoma from images. However, 30-50% of all melanomas and more than half of those in young patients evolve from initially benign lesions. Despite its high relevance for melanoma screening, neither clinicians nor computers are yet able to reliably predict a nevus' oncologic transformation. The cause of this lies in the static nature of lesion presentation in the current standard of care, both for clinicians and algorithms. The status quo makes it difficult to train algorithms (and clinicians) to precisely assess the likelihood of a benign skin lesion to transform into melanoma. In addition, it inhibits the precision of current algorithms since 'evolution' image features may not be part of their decision. The current literature reveals certain types of melanocytic nevi (i.e. 'spitzoid' or 'dysplastic' nevi) and criteria (i.e. visible vasculature) that, in general, appear to have a higher chance to transform into melanoma. However, owing to the cumulative nature of oncogenic mutations in melanoma, a more fine-grained early morphologic footprint is likely to be detectable by an algorithm. In this perspective article, the concept of melanoma prediction is further explored by the discussion of the evolution of melanoma, the concept for training of such a nevi classifier and the implications of early melanoma prediction for clinical practice. 
In conclusion, the authors believe that artificial intelligence trained on prospective image data could be transformative for skin cancer diagnostics by (a) predicting melanoma before it occurs (i.e. pre-in situ) and (b) further enhancing the accuracy of current melanoma classifiers. Necessary prospective images for this research are obtained via free mole-monitoring mobile apps. 10.1016/j.ejca.2019.07.009