A Computer Vision Platform to Automatically Locate Critical Events in Surgical Videos: Documenting Safety in Laparoscopic Cholecystectomy. Annals of surgery OBJECTIVE:The aim of this study was to develop a computer vision platform to automatically locate critical events in surgical videos and provide short video clips documenting the critical view of safety (CVS) in laparoscopic cholecystectomy (LC). BACKGROUND:Intraoperative events are typically documented through operator-dictated reports that do not always faithfully reflect the operative reality. Surgical videos provide complete information on surgical procedures, but the burden associated with storing and manually analyzing full-length videos has so far limited their effective use. METHODS:A computer vision platform named EndoDigest was developed and used to analyze LC videos. The mean absolute error (MAE) of the platform in automatically locating the manually annotated time of the cystic duct division in full-length videos was assessed. The relevance of the automatically extracted short video clips was evaluated by calculating the percentage of video clips in which the CVS was assessable by surgeons. RESULTS:A total of 155 LC videos were analyzed: 55 of these videos were used to develop EndoDigest, whereas the remaining 100 were used to test it. The time of the cystic duct division was automatically located with an MAE of 62.8 ± 130.4 seconds (1.95% of full-length video duration). CVS was assessable in 91% of the 2.5-minute-long video clips automatically extracted from the considered test procedures. CONCLUSIONS:Deep learning models for workflow analysis can be used to reliably locate critical events in surgical videos and document CVS in LC. Further studies are needed to assess the clinical impact of surgical data science solutions for safer laparoscopic cholecystectomy. 10.1097/SLA.0000000000004736
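The MAE figure above is simply the average gap between predicted and annotated event times, and the clip extraction is a fixed window anchored to the predicted event. The sketch below illustrates both ideas in Python; the window length and margin are illustrative assumptions, not EndoDigest's actual parameters.

```python
import numpy as np

def localization_mae(predicted_s, annotated_s):
    """Mean absolute error (seconds) between predicted and annotated event times."""
    predicted_s = np.asarray(predicted_s, dtype=float)
    annotated_s = np.asarray(annotated_s, dtype=float)
    return float(np.mean(np.abs(predicted_s - annotated_s)))

def clip_window(event_s, clip_len_s=150.0, post_margin_s=30.0, video_len_s=None):
    """Return (start, end) of a fixed-length clip that ends shortly after the event."""
    end = event_s + post_margin_s
    if video_len_s is not None:
        end = min(end, video_len_s)
    start = max(0.0, end - clip_len_s)
    return start, end

# Toy example: three videos with predicted vs. manually annotated division times.
print(localization_mae([1800, 2400, 3100], [1750, 2500, 3080]))  # ~60 s
print(clip_window(event_s=2400))  # a 2.5-minute window around the predicted event
```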
RSDNet: Learning to Predict Remaining Surgery Duration from Laparoscopic Videos Without Manual Annotations. Twinanda Andru Putra,Yengera Gaurav,Mutter Didier,Marescaux Jacques,Padoy Nicolas IEEE transactions on medical imaging Accurate surgery duration estimation is necessary for optimal OR planning, which plays an important role in patient comfort and safety as well as resource optimization. It is, however, challenging to preoperatively predict surgery duration since it varies significantly depending on the patient condition, surgeon skills, and intraoperative situation. In this paper, we propose a deep learning pipeline, referred to as RSDNet, which automatically estimates the remaining surgery duration (RSD) intraoperatively by using only visual information from laparoscopic videos. The previous state-of-the-art approaches for RSD prediction are dependent on manual annotation, whose generation requires expensive expert knowledge and is time-consuming, especially considering the numerous types of surgeries performed in a hospital and the large number of laparoscopic videos available. A crucial feature of RSDNet is that it does not depend on any manual annotation during training, making it easily scalable to many kinds of surgeries. The generalizability of our approach is demonstrated by testing the pipeline on two large datasets containing different types of surgeries: 120 cholecystectomy and 170 gastric bypass videos. The experimental results also show that the proposed network significantly outperforms a traditional method of estimating RSD without utilizing manual annotation. Further, this paper provides a deeper insight into the deep learning network through visualization and interpretation of the features that are automatically learned. 10.1109/TMI.2018.2878055
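The annotation-free aspect of RSDNet rests on the fact that remaining-duration targets can be derived from the video itself: for every frame, the label is just the time left until the recording ends. A minimal sketch of that label construction (not the authors' pipeline) follows.

```python
import numpy as np

def rsd_targets(num_frames, fps=1.0):
    """Per-frame remaining surgery duration (minutes), derived from video length alone.

    No manual annotation is needed: for frame t of a T-frame video the target is
    (T - 1 - t) / fps, which is the kind of signal an RSD-style model regresses against.
    """
    t = np.arange(num_frames)
    remaining_s = (num_frames - 1 - t) / fps
    return remaining_s / 60.0

targets = rsd_targets(num_frames=5400, fps=1.0)  # a 90-minute video sampled at 1 fps
print(targets[:3], targets[-3:])
```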
Deep neural networks are superior to dermatologists in melanoma image classification. Brinker Titus J,Hekler Achim,Enk Alexander H,Berking Carola,Haferkamp Sebastian,Hauschild Axel,Weichenthal Michael,Klode Joachim,Schadendorf Dirk,Holland-Letz Tim,von Kalle Christof,Fröhling Stefan,Schilling Bastian,Utikal Jochen S European journal of cancer (Oxford, England : 1990) BACKGROUND:Melanoma is the most dangerous type of skin cancer but is curable if detected early. Recent publications demonstrated that artificial intelligence is capable in classifying images of benign nevi and melanoma with dermatologist-level precision. However, a statistically significant improvement compared with dermatologist classification has not been reported to date. METHODS:For this comparative study, 4204 biopsy-proven images of melanoma and nevi (1:1) were used for the training of a convolutional neural network (CNN). New techniques of deep learning were integrated. For the experiment, an additional 804 biopsy-proven dermoscopic images of melanoma and nevi (1:1) were randomly presented to dermatologists of nine German university hospitals, who evaluated the quality of each image and stated their recommended treatment (19,296 recommendations in total). Three McNemar's tests comparing the results of the CNN's test runs in terms of sensitivity, specificity and overall correctness were predefined as the main outcomes. FINDINGS:The respective sensitivity and specificity of lesion classification by the dermatologists were 67.2% (95% confidence interval [CI]: 62.6%-71.7%) and 62.2% (95% CI: 57.6%-66.9%). In comparison, the trained CNN achieved a higher sensitivity of 82.3% (95% CI: 78.3%-85.7%) and a higher specificity of 77.9% (95% CI: 73.8%-81.8%). The three McNemar's tests in 2 × 2 tables all reached a significance level of p < 0.001. This significance level was sustained for both subgroups. INTERPRETATION:For the first time, automated dermoscopic melanoma image classification was shown to be significantly superior to both junior and board-certified dermatologists (p < 0.001). 10.1016/j.ejca.2019.05.023
Dermatologist-level classification of skin cancer with deep neural networks. Esteva Andre,Kuprel Brett,Novoa Roberto A,Ko Justin,Swetter Susan M,Blau Helen M,Thrun Sebastian Nature Skin cancer, the most common human malignancy, is primarily diagnosed visually, beginning with an initial clinical screening and followed potentially by dermoscopic analysis, a biopsy and histopathological examination. Automated classification of skin lesions using images is a challenging task owing to the fine-grained variability in the appearance of skin lesions. Deep convolutional neural networks (CNNs) show potential for general and highly variable tasks across many fine-grained object categories. Here we demonstrate classification of skin lesions using a single CNN, trained end-to-end from images directly, using only pixels and disease labels as inputs. We train a CNN using a dataset of 129,450 clinical images-two orders of magnitude larger than previous datasets-consisting of 2,032 different diseases. We test its performance against 21 board-certified dermatologists on biopsy-proven clinical images with two critical binary classification use cases: keratinocyte carcinomas versus benign seborrheic keratoses; and malignant melanomas versus benign nevi. The first case represents the identification of the most common cancers, the second represents the identification of the deadliest skin cancer. The CNN achieves performance on par with all tested experts across both tasks, demonstrating an artificial intelligence capable of classifying skin cancer with a level of competence comparable to dermatologists. Outfitted with deep neural networks, mobile devices can potentially extend the reach of dermatologists outside of the clinic. It is projected that 6.3 billion smartphone subscriptions will exist by the year 2021 (ref. 13) and can therefore potentially provide low-cost universal access to vital diagnostic care. 10.1038/nature21056
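Both dermatology studies above follow the standard transfer-learning recipe: start from an ImageNet-pretrained CNN and replace the final layer with a task-specific classifier. The sketch below shows that recipe in PyTorch with a ResNet-18 backbone for brevity (Esteva et al. used Inception-v3); it assumes a recent torchvision with downloadable pretrained weights and is illustrative rather than a reproduction of either study's training setup.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical binary lesion classifier: reuse an ImageNet-pretrained backbone and
# swap its final layer (generic transfer learning, not the authors' exact setup).
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # melanoma vs. benign nevus

optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 RGB images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
logits = backbone(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```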
Use of Augmented Reality in Gynecologic Surgery to Visualize Adenomyomas. Bourdel Nicolas,Chauvet Pauline,Calvet Lilian,Magnin Benoit,Bartoli Adrien,Canis Michel Journal of minimally invasive gynecology Augmented reality (AR) is a surgical guidance technology that allows key hidden subsurface structures to be visualized by endoscopic imaging. We report here 2 cases of patients with adenomyoma selected for the AR technique. The adenomyomas were localized using AR during laparoscopy. Three-dimensional models of the uterus, uterine cavity, and adenomyoma were constructed before surgery from T2-weighted magnetic resonance imaging, allowing an intraoperative 3-dimensional shape of the uterus to be obtained. These models were automatically aligned and "fused" with the laparoscopic video in real time, giving the uterus a semitransparent appearance and allowing the surgeon in real time to both locate the position of the adenomyoma and uterine cavity and rapidly decide how best to access the adenomyoma. In conclusion, the use of our AR system designed for gynecologic surgery leads to improvements in laparoscopic adenomyomectomy and surgical safety. 10.1016/j.jmig.2019.04.003
Machine Learning and Deep Neural Networks Applications in Coronary Flow Assessment: The Case of Computed Tomography Fractional Flow Reserve. Tesche Christian,Gray Hunter N Journal of thoracic imaging Coronary computed tomography angiography (cCTA) is a reliable and clinically proven method for the evaluation of coronary artery disease. cCTA data sets can be used to derive fractional flow reserve (FFR) as CT-FFR. This method has respectable results when compared in previous trials to invasive FFR, with the aim of detecting lesion-specific ischemia. Results from previous studies have shown many benefits, including improved therapeutic guidance to efficiently justify the management of patients with suspected coronary artery disease and enhanced outcomes and reduced health care costs. More recently, a technical approach to the calculation of CT-FFR using an artificial intelligence deep machine learning (ML) algorithm has been introduced. ML algorithms provide information in a more objective, reproducible, and rational manner and with improved diagnostic accuracy in comparison to cCTA. This review gives an overview of the technical background, clinical validation, and implementation of ML applications in CT-FFR. 10.1097/RTI.0000000000000483
Machine Learning and Deep Neural Networks in Thoracic and Cardiovascular Imaging. Journal of thoracic imaging Advances in technology have always had the potential and opportunity to shape the practice of medicine, and in no medical specialty has technology been more rapidly embraced and adopted than radiology. Machine learning and deep neural networks promise to transform the practice of medicine, and, in particular, the practice of diagnostic radiology. These technologies are evolving at a rapid pace due to innovations in computational hardware and novel neural network architectures. Several cutting-edge postprocessing analysis applications are actively being developed in the fields of thoracic and cardiovascular imaging, including applications for lesion detection and characterization, lung parenchymal characterization, coronary artery assessment, cardiac volumetry and function, and anatomic localization. Cardiothoracic and cardiovascular imaging lies at the technological forefront of radiology due to a confluence of technical advances. Enhanced equipment has enabled computed tomography and magnetic resonance imaging scanners that can safely capture images that freeze the motion of the heart to exquisitely delineate fine anatomic structures. Computing hardware developments have enabled an explosion in computational capabilities and in data storage. Progress in software and fluid mechanical models is enabling complex 3D and 4D reconstructions to not only visualize and assess the dynamic motion of the heart, but also quantify its blood flow and hemodynamics. And now, innovations in machine learning, particularly in the form of deep neural networks, are enabling us to leverage the increasingly massive data repositories that are prevalent in the field. Here, we discuss developments in machine learning techniques and deep neural networks to highlight their likely role in future radiologic practice, both in and outside of image interpretation and analysis. We discuss the concepts of validation, generalizability, and clinical utility, as they pertain to this and other new technologies, and we reflect upon the opportunities and challenges of bringing these into daily use. 10.1097/RTI.0000000000000385
Machine Learning and Deep Neural Networks Applications in Computed Tomography for Coronary Artery Disease and Myocardial Perfusion. Monti Caterina B,Codari Marina,van Assen Marly,De Cecco Carlo N,Vliegenthart Rozemarijn Journal of thoracic imaging During the latest years, artificial intelligence, and especially machine learning (ML), have experienced a growth in popularity due to their versatility and potential in solving complex problems. In fact, ML allows the efficient handling of big volumes of data, allowing to tackle issues that were unfeasible before, especially with deep learning, which utilizes multilayered neural networks. Cardiac computed tomography (CT) is also experiencing a rise in examination numbers, and ML might help handle the increasing derived information. Moreover, cardiac CT presents some fields wherein ML may be pivotal, such as coronary calcium scoring, CT angiography, and perfusion. In particular, the main applications of ML involve image preprocessing and postprocessing, and the development of risk assessment models based on imaging findings. Concerning image preprocessing, ML can help improve image quality by optimizing acquisition protocols or removing artifacts that may hinder image analysis and interpretation. ML in image postprocessing might help perform automatic segmentations and shorten examination processing times, also providing tools for tissue characterization, especially concerning plaques. The development of risk assessment models from ML using data from cardiac CT could aid in the stratification of patients who undergo cardiac CT in different risk classes and better tailor their treatment to individual conditions. While ML is a powerful tool with great potential, applications in the field of cardiac CT are still expanding, and not yet routinely available in clinical practice due to the need for extensive validation. Nevertheless, ML is expected to have a big impact on cardiac CT in the near future. 10.1097/RTI.0000000000000490
Artificial intelligence and algorithmic computational pathology: an introduction with renal allograft examples. Histopathology Whole slide imaging, which is an important technique in the field of digital pathology, has recently been the subject of increased interest and avenues for utilisation, and with more widespread whole slide image (WSI) utilisation, there will also be increased interest in and implementation of image analysis (IA) techniques. IA includes artificial intelligence (AI) and targeted or hypothesis-driven algorithms. In the overall pathology field, the number of citations related to these topics has increased in recent years. Renal pathology is one anatomical pathology subspecialty that has utilised WSIs and IA algorithms; it can be argued that renal transplant pathology could be particularly suited for whole slide imaging and IA, as renal transplant pathology is frequently classified by use of the semiquantitative Banff classification of renal allograft pathology. Hypothesis-driven/targeted algorithms have been used in the past for the assessment of a variety of features in the kidney (e.g. interstitial fibrosis, tubular atrophy, inflammation); in recent years, the amount of research has particularly increased in the area of AI/machine learning for the identification of glomeruli, for histological segmentation, and for other applications. Deep learning is the form of machine learning that is most often used for such AI approaches to the 'big data' of pathology WSIs, and deep learning methods such as artificial neural networks (ANNs)/convolutional neural networks (CNNs) are utilised. Unsupervised and supervised AI algorithms can be employed to accomplish image or semantic classification. In this review, AI and other IA algorithms applied to WSIs are discussed, and examples from renal pathology are covered, with an emphasis on renal transplant pathology. 10.1111/his.14304
Sparse Data-Driven Learning for Effective and Efficient Biomedical Image Segmentation. Annual review of biomedical engineering Sparsity is a powerful concept to exploit for high-dimensional machine learning and associated representational and computational efficiency. Sparsity is well suited for medical image segmentation. We present a selection of techniques that incorporate sparsity, including strategies based on dictionary learning and deep learning, that are aimed at medical image segmentation and related quantification. 10.1146/annurev-bioeng-060418-052147
Artificial intelligence in multiparametric prostate cancer imaging with focus on deep-learning methods. Wildeboer Rogier R,van Sloun Ruud J G,Wijkstra Hessel,Mischi Massimo Computer methods and programs in biomedicine Prostate cancer represents today the most typical example of a pathology whose diagnosis requires multiparametric imaging, a strategy where multiple imaging techniques are combined to reach an acceptable diagnostic performance. However, the reviewing, weighing and coupling of multiple images not only places additional burden on the radiologist, it also complicates the reviewing process. Prostate cancer imaging has therefore been an important target for the development of computer-aided diagnostic (CAD) tools. In this survey, we discuss the advances in CAD for prostate cancer over the last decades with special attention to the deep-learning techniques that have been designed in the last few years. Moreover, we elaborate and compare the methods employed to deliver the CAD output to the operator for further medical decision making. 10.1016/j.cmpb.2020.105316
Progress in diagnosis of bone metastasis of prostate cancer. Zhong nan da xue xue bao. Yi xue ban = Journal of Central South University. Medical sciences The diagnosis of bone metastasis of prostate cancer (PC) is of great significance to the treatment and prognosis of patients with PC. Bone scan is the modality most commonly used in the early diagnosis of bone metastasis, but its specificity is low and its false-positive rate is high. In recent years, with in-depth study of the application of CT, MRI, emission computed tomography (ECT), positron emission computed tomography/computed tomography (PET/CT) and the deep learning algorithm of convolutional neural networks (CNN) in the diagnosis of bone metastasis, the combined application of various auxiliary parameters has significantly improved the diagnosis of bone metastasis. The therapeutic effect in PC patients with bone metastasis can also be evaluated, which is expected to support the treatment of bone metastasis as well as its diagnosis. By systematically reviewing the research progress of the above-mentioned techniques in the diagnosis of bone metastasis, this article provides clinicians with new methods for the diagnosis of bone metastasis and improves diagnostic efficiency. 10.11817/j.issn.1672-7347.2021.200999
A review of current advancements and limitations of artificial intelligence in genitourinary cancers. Pai Raghav K,Van Booven Derek J,Parmar Madhumita,Lokeshwar Soum D,Shah Khushi,Ramasamy Ranjith,Arora Himanshu American journal of clinical and experimental urology Advances in deep learning and neural networking have allowed clinicians to understand the impact that artificial intelligence (AI) could have on improving clinical outcomes and resources expenditures. In the realm of genitourinary (GU) cancers, AI has had particular success in improving the diagnosis and treatment of prostate, renal, and bladder cancers. Numerous studies have developed methods to utilize neural networks to automate prognosis prediction, treatment plan optimization, and patient education. Furthermore, many groups have explored other techniques, including digital pathology and expert 3D modeling systems. Compared to established methods, nearly all the studies showed some level of improvement and there is evidence that AI pipelines can reduce the subjectivity in the diagnosis and management of GU malignancies. However, despite the many potential benefits of utilizing AI in urologic oncology, there are some notable limitations of AI when combating real-world data sets. Thus, it is vital that more prospective studies be conducted that will allow for a better understanding of the benefits of AI to both cancer patients and urologists.
Development of an artificial intelligence diagnostic system for lower urinary tract dysfunction in men. Matsukawa Yoshihisa,Kameya Yoshitaka,Takahashi Tomoichi,Shimazu Atsuki,Ishida Shohei,Yamada Muneo,Sassa Naoto,Yamamoto Tokunori International journal of urology : official journal of the Japanese Urological Association OBJECTIVES:To establish an artificial intelligence diagnostic system for lower urinary tract function in men with lower urinary tract symptoms using only uroflowmetry data and to evaluate its usefulness. METHODS:Uroflowmetry data of 256 treatment-naive men with detrusor underactivity, bladder outlet obstruction, or detrusor underactivity + bladder outlet obstruction were used for artificial intelligence learning and validation using neural networks. An optimal artificial intelligence diagnostic model was established using 10-fold stratified cross-validation and data augmentation. Correlations of bladder contractility index and bladder outlet obstruction index values for the artificial intelligence system and pressure flow study values were examined using Spearman's correlation coefficients. Additionally, diagnostic accuracy was compared between the established artificial intelligence system and trained urologists with uroflowmetry data of 25 additional patients by χ²-tests. Detrusor underactivity was defined as bladder contractility index ≤100 and bladder outlet obstruction index ≤40, bladder outlet obstruction was defined as bladder contractility index >100 and bladder outlet obstruction index >40, and detrusor underactivity + bladder outlet obstruction was defined as bladder contractility index ≤100 and bladder outlet obstruction index >40. RESULTS:The artificial intelligence system's estimated bladder contractility index and bladder outlet obstruction index values showed significant positive correlations with pressure flow study values (bladder contractility index: r = 0.60, P < 0.001; bladder outlet obstruction index: r = 0.46, P < 0.001). The artificial intelligence system's detrusor underactivity diagnosis had a sensitivity and specificity of 79.7% and 88.7%, respectively, and those for bladder outlet obstruction diagnosis were 76.8% and 84.7%, respectively. The artificial intelligence system's average diagnostic accuracy was 84%, which was significantly higher than that of urologists (56%). CONCLUSIONS:Our artificial intelligence diagnostic system developed using the uroflowmetry waveform distinguished between detrusor underactivity and bladder outlet obstruction with high sensitivity and specificity in men with lower urinary tract symptoms. 10.1111/iju.14661
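The evaluation protocol described above, 10-fold stratified cross-validation of a neural network on uroflowmetry-derived inputs, can be expressed compactly with scikit-learn. The sketch below uses synthetic feature vectors as stand-ins for the uroflowmetry curves, and the MLP size is an assumption rather than the authors' architecture.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: 256 patients, 50 features per uroflowmetry curve,
# three classes (DU, BOO, DU + BOO). The feature extraction is hypothetical.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 50))
y = rng.integers(0, 3, size=256)

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0),
)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
print(f"10-fold accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
```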
Freely available artificial intelligence for pelvic lymph node metastases in PSMA PET-CT that performs on par with nuclear medicine physicians. European journal of nuclear medicine and molecular imaging PURPOSE:The aim of this study was to develop and validate an artificial intelligence (AI)-based method using convolutional neural networks (CNNs) for the detection of pelvic lymph node metastases in scans obtained using [18F]PSMA-1007 positron emission tomography-computed tomography (PET-CT) from patients with high-risk prostate cancer. The second goal was to make the AI-based method available to other researchers. METHODS:[18F]PSMA PET-CT scans were collected from 211 patients. Suspected pelvic lymph node metastases were marked by three independent readers. A CNN was developed and trained on a training and validation group of 161 of the patients. The performance of the AI method and the inter-observer agreement between the three readers were assessed in a separate test group of 50 patients. RESULTS:The sensitivity of the AI method for detecting pelvic lymph node metastases was 82%, and the corresponding sensitivity for the human readers was 77% on average. The average number of false positives was 1.8 per patient. A total of 5-17 false negative lesions in the whole cohort were found, depending on which reader was used as a reference. The method is available for researchers at www.recomia.org. CONCLUSION:This study shows that AI can obtain a sensitivity on par with that of physicians with a reasonable number of false positives. The difficulty in achieving high inter-observer sensitivity emphasizes the need for automated methods. On the road to qualifying AI tools for clinical use, independent validation is critical and allows performance to be assessed in studies from different hospitals. Therefore, we have made our AI tool freely available to other researchers. 10.1007/s00259-022-05806-9
Artificial intelligence in bladder cancer prognosis: a pathway for personalized medicine. Current opinion in urology PURPOSE OF REVIEW:This review aims to provide an update of the results of studies published in the last 2 years involving the use of artificial intelligence in bladder cancer (BCa) prognosis. RECENT FINDINGS:Recently, many studies evaluated various artificial intelligence models to predict BCa evolution using either deep learning or machine learning. Many trials evidenced a better prediction of recurrence-free survival and overall survival for muscle invasive BCa (MIBC) for deep learning-based models compared with clinical stages. Improvements in imaging associated with the development of deep learning neural networks and radiomics seem to improve post neo-adjuvant chemotherapy response. One study showed that digitalized histology could predict nonmuscle invasive BCa recurrence. SUMMARY:BCa prognosis could be better assessed using artificial intelligence models not only in the case of MIBC but also NMIBC. Many studies evaluated its role for the prediction of overall survival and recurrence-free survival but there is still little data in the case of NMIBC. Recent findings showed that artificial intelligence could lead to a better assessment of BCa prognosis before treatment and to personalized medicine. 10.1097/MOU.0000000000000882
Kidney edge detection in laparoscopic image data for computer-assisted surgery : Kidney edge detection. Hattab Georges,Arnold Marvin,Strenger Leon,Allan Max,Arsentjeva Darja,Gold Oliver,Simpfendörfer Tobias,Maier-Hein Lena,Speidel Stefanie International journal of computer assisted radiology and surgery PURPOSE:In robotic-assisted kidney surgery, computational methods make it possible to augment the surgical scene and potentially improve patient outcome. Most often, soft-tissue registration is a prerequisite for the visualization of tumors and vascular structures hidden beneath the surface. State-of-the-art volume-to-surface registration methods, however, are computationally demanding and require a sufficiently large target surface. To overcome this limitation, the first step toward registration is the extraction of the outer edge of the kidney. METHODS:To tackle this task, we propose a deep learning-based solution. Rather than working only on the raw laparoscopic images, the network is given depth information and distance fields to predict whether a pixel of the image belongs to an edge. We evaluate our method on expert-labeled in vivo data from the EndoVis sub-challenge 2017 Kidney Boundary Detection and define the current state of the art. RESULTS:By using a leave-one-out cross-validation, we report results for the most suitable network with a median precision-like, recall-like, and intersection over union (IOU) of 39.5 px, 143.3 px, and 0.3, respectively. CONCLUSION:We conclude that our approach succeeds in predicting the edges of the kidney, except in instances where high occlusion occurs, which explains the average decrease in the IOU score. All source code, reference data, models, and evaluation results are openly available for download: https://github.com/ghattab/kidney-edge-detection/. 10.1007/s11548-019-02102-0
Automated Detection and Grading of Non-Muscle-Invasive Urothelial Cell Carcinoma of the Bladder. Jansen Ilaria,Lucas Marit,Bosschieter Judith,de Boer Onno J,Meijer Sybren L,van Leeuwen Ton G,Marquering Henk A,Nieuwenhuijzen Jakko A,de Bruin Daniel M,Savci-Heijink C Dilara The American journal of pathology Accurate grading of non-muscle-invasive urothelial cell carcinoma is of major importance; however, high interobserver variability exists. A fully automated detection and grading network based on deep learning is proposed to enhance reproducibility. A total of 328 transurethral resection specimens from 232 patients were included, and a consensus reading by three specialized pathologists was used. The slides were digitized, and the urothelium was annotated by expert observers. The U-Net-based segmentation network was trained to automatically detect urothelium. This detection was used as input for the classification network. The classification network aimed to grade the tumors according to the World Health Organization grading system adopted in 2004. The automated grading was compared with the consensus and individual grading. The segmentation network resulted in an accurate detection of urothelium. The automated grading shows moderate agreement (κ = 0.48 ± 0.14 SEM) with the consensus reading. The agreement among pathologists ranges between fair (κ = 0.35 ± 0.13 SEM and κ = 0.38 ± 0.11 SEM) and moderate (κ = 0.52 ± 0.13 SEM). The automated classification correctly graded 76% of the low-grade cancers and 71% of the high-grade cancers according to the consensus reading. These results indicate that deep learning can be used for the fully automated detection and grading of urothelial cell carcinoma. 10.1016/j.ajpath.2020.03.013
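Agreement between the automated grading and the consensus reading in the study above is reported as Cohen's kappa. A toy computation (with made-up labels, not the study data) is shown below.

```python
from sklearn.metrics import cohen_kappa_score

# Toy agreement check between automated grades and the consensus reading
# (0 = low grade, 1 = high grade); the abstract reports kappa ≈ 0.48 on real data.
consensus = [0, 0, 1, 1, 1, 0, 1, 0, 0, 1]
automated = [0, 1, 1, 1, 0, 0, 1, 0, 1, 1]
print(cohen_kappa_score(consensus, automated))
```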
Predicting intra-operative and postoperative consequential events using machine-learning techniques in patients undergoing robot-assisted partial nephrectomy: a Vattikuti Collective Quality Initiative database study. Bhandari Mahendra,Nallabasannagari Anubhav Reddy,Reddiboina Madhu,Porter James R,Jeong Wooju,Mottrie Alexandre,Dasgupta Prokar,Challacombe Ben,Abaza Ronney,Rha Koon Ho,Parekh Dipen J,Ahlawat Rajesh,Capitanio Umberto,Yuvaraja Thyavihally B,Rawal Sudhir,Moon Daniel A,Buffi Nicolò M,Sivaraman Ananthakrishnan,Maes Kris K,Porpiglia Francesco,Gautam Gagan,Turkeri Levent,Meyyazhgan Kohul Raj,Patil Preethi,Menon Mani,Rogers Craig BJU international OBJECTIVE:To predict intra-operative (IOEs) and postoperative events (POEs) consequential to the derailment of the ideal clinical course of patient recovery. MATERIALS AND METHODS:The Vattikuti Collective Quality Initiative is a multi-institutional dataset of patients who underwent robot-assisted partial nephrectomy for kidney tumours. Machine-learning (ML) models were constructed to predict IOEs and POEs using logistic regression, random forest and neural networks. The models to predict IOEs used patient demographics and preoperative data. In addition to these, intra-operative data were used to predict POEs. Performance on the test dataset was assessed using area under the receiver-operating characteristic curve (AUC-ROC) and area under the precision-recall curve (PR-AUC). RESULTS:The rates of IOEs and POEs were 5.62% and 20.98%, respectively. Models for predicting IOEs were constructed using data from 1690 patients and 38 variables; the best model had an AUC-ROC of 0.858 (95% confidence interval [CI] 0.762, 0.936) and a PR-AUC of 0.590 (95% CI 0.400, 0.759). Models for predicting POEs were trained using data from 1406 patients and 59 variables; the best model had an AUC-ROC of 0.875 (95% CI 0.834, 0.913) and a PR-AUC of 0.706 (95% CI 0.610, 0.790). CONCLUSIONS:The performance of the ML models in the present study was encouraging. Further validation in a multi-institutional clinical setting with larger datasets would be necessary to establish their clinical value. ML models can be used to predict significant events during and after surgery with good accuracy, paving the way for application in clinical practice to predict and intervene at an opportune time to avert complications and improve patient outcomes. 10.1111/bju.15087
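The modelling setup above, logistic regression, random forest, and a neural network compared by AUC-ROC and PR-AUC on a held-out set, maps directly onto scikit-learn. The following sketch uses a synthetic, imbalanced dataset with an event rate similar to the reported POE rate; the real VCQI variables and hyperparameters are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic, imbalanced stand-in for the pre-/intra-operative feature tables
# (roughly a 21% event rate, as for the reported POEs).
X, y = make_classification(n_samples=1700, n_features=38, weights=[0.79], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=2000),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "neural network": MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0),
}
for name, model in models.items():
    prob = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    print(f"{name}: AUC-ROC={roc_auc_score(y_te, prob):.3f}  "
          f"PR-AUC={average_precision_score(y_te, prob):.3f}")
```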
Toward Automated Bladder Tumor Stratification Using Confocal Laser Endomicroscopy. Lucas Marit,Liem Esmee I M L,Savci-Heijink C Dilara,Freund Jan Erik,Marquering Henk A,van Leeuwen Ton G,de Bruin Daniel M Journal of endourology Urothelial carcinoma of the bladder (UCB) is the most common urinary cancer. White-light cystoscopy (WLC) forms the cornerstone for the diagnosis of UCB. However, histopathological assessment is required for adjuvant treatment selection. Probe-based confocal laser endomicroscopy (pCLE) enables visualization of the microarchitecture of bladder lesions during WLC, which allows for real-time tissue differentiation and grading of UCB. To improve the diagnostic process of UCB, computer-aided classification of pCLE videos of bladder lesions was evaluated in this study. We implemented preprocessing methods to optimize contrast and to reduce striping artifacts in each individual pCLE frame. Subsequently, a semiautomatic frame selection was performed. The selected frames were used to train a feature extractor based on pretrained ImageNet networks. A recurrent neural network, specifically a long short-term memory (LSTM) network, was used to predict the grade of bladder lesions. Differentiation of lesions was performed at two levels, namely (i) healthy and benign versus malignant tissue and (ii) low-grade versus high-grade papillary UCB. A total of 53 patients with 72 lesions were included in this study, resulting in ∼140,000 pCLE frames. The semiautomated frame selection reduced the number of frames to ∼66,500 informative frames. The accuracy for differentiation of (i) healthy and benign versus malignant urothelium was 79% and (ii) high-grade versus low-grade papillary UCB was 82%. A feature extractor in combination with an LSTM results in proper stratification of pCLE videos of bladder lesions. 10.1089/end.2019.0354
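The architecture described, a pretrained CNN as per-frame feature extractor feeding an LSTM that emits a video-level grade, can be sketched in PyTorch as below. The backbone, hidden size, and sequence length are illustrative assumptions, not the study's exact configuration, and pretrained weights would be loaded in practice.

```python
import torch
import torch.nn as nn
from torchvision import models

class FrameSequenceClassifier(nn.Module):
    """Hypothetical sketch: per-frame CNN features fed to an LSTM for video-level grading."""

    def __init__(self, num_classes=2, hidden=128):
        super().__init__()
        cnn = models.resnet18(weights=None)   # ImageNet weights would be loaded in practice
        cnn.fc = nn.Identity()                # keep the 512-d penultimate features
        self.encoder = cnn
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, frames):                # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])             # one logit vector per video

clip = torch.randn(2, 16, 3, 224, 224)        # 2 clips of 16 frames each
print(FrameSequenceClassifier()(clip).shape)  # torch.Size([2, 2])
```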
Towards realistic laparoscopic image generation using image-domain translation. Marzullo Aldo,Moccia Sara,Catellani Michele,Calimeri Francesco,Momi Elena De Computer methods and programs in biomedicine Background and Objectives: Over the last decade, Deep Learning (DL) has revolutionized data analysis in many areas, including medical imaging. However, there is a bottleneck in the advancement of DL in the surgery field, which can be seen in a shortage of large-scale data, which in turn may be attributed to the lack of a structured and standardized methodology for storing and analyzing surgical images in clinical centres. Furthermore, accurate manual annotations are expensive and time-consuming. A great help can come from the synthesis of artificial images; in this context, in the latest years, the use of Generative Adversarial Neural Networks (GANs) has achieved promising results in obtaining photo-realistic images. Methods: In this study, a method for Minimally Invasive Surgery (MIS) image synthesis is proposed. To this aim, the generative adversarial network pix2pix is trained to generate paired annotated MIS images by transforming rough segmentations of surgical instruments and tissues into realistic images. An additional regularization term was added to the original optimization problem in order to enhance the realism of surgical tools with respect to the background. Results: Quantitative and qualitative (i.e., human-based) evaluations of generated images were carried out in order to assess the effectiveness of the method. Conclusions: Experimental results show that the proposed method is able to translate MIS segmentations into realistic MIS images, which can in turn be used to augment existing data sets and help to overcome the lack of useful images; this allows physicians and algorithms to take advantage of new annotated instances for their training. 10.1016/j.cmpb.2020.105834
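The added regularization term can be thought of as an extra reconstruction penalty restricted to the instrument region. The sketch below shows one plausible form of such a generator objective on top of the usual pix2pix adversarial-plus-L1 losses; the masking scheme and weights are assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def generator_loss(disc_fake_logits, fake_img, real_img, tool_mask,
                   l1_weight=100.0, tool_weight=10.0):
    """Hypothetical pix2pix-style generator objective with an extra term that
    emphasizes reconstruction inside the instrument region (tool_mask in {0, 1})."""
    adv = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    l1 = F.l1_loss(fake_img, real_img)
    tool_l1 = (tool_mask * (fake_img - real_img).abs()).sum() / tool_mask.sum().clamp(min=1.0)
    return adv + l1_weight * l1 + tool_weight * tool_l1

# Dummy tensors standing in for a PatchGAN output, generated/real frames and a tool mask.
d = torch.randn(4, 1, 30, 30)
fake, real = torch.rand(4, 3, 256, 256), torch.rand(4, 3, 256, 256)
mask = (torch.rand(4, 1, 256, 256) > 0.8).float()
print(generator_loss(d, fake, real, mask))
```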
"Deep-Onto" network for surgical workflow and context recognition. Nakawala Hirenkumar,Bianchi Roberto,Pescatori Laura Erica,De Cobelli Ottavio,Ferrigno Giancarlo,De Momi Elena International journal of computer assisted radiology and surgery PURPOSE:Surgical workflow recognition and context-aware systems could allow better decision making and surgical planning by providing the focused information, which may eventually enhance surgical outcomes. While current developments in computer-assisted surgical systems are mostly focused on recognizing surgical phases, they lack recognition of surgical workflow sequence and other contextual element, e.g., "Instruments." Our study proposes a hybrid approach, i.e., using deep learning and knowledge representation, to facilitate recognition of the surgical workflow. METHODS:We implemented "Deep-Onto" network, which is an ensemble of deep learning models and knowledge management tools, ontology and production rules. As a prototypical scenario, we chose robot-assisted partial nephrectomy (RAPN). We annotated RAPN videos with surgical entities, e.g., "Step" and so forth. We performed different experiments, including the inter-subject variability, to recognize surgical steps. The corresponding subsequent steps along with other surgical contexts, i.e., "Actions," "Phase" and "Instruments," were also recognized. RESULTS:The system was able to recognize 10 RAPN steps with the prevalence-weighted macro-average (PWMA) recall of 0.83, PWMA precision of 0.74, PWMA F1 score of 0.76, and the accuracy of 74.29% on 9 videos of RAPN. CONCLUSION:We found that the combined use of deep learning and knowledge representation techniques is a promising approach for the multi-level recognition of RAPN surgical workflow. 10.1007/s11548-018-1882-8
Artificial intelligence (AI) in urology-Current use and future directions: An iTRUE study. Turkish journal of urology OBJECTIVE:Artificial intelligence (AI) is used in various urological conditions such as urolithiasis, pediatric urology, urogynecology, benign prostate hyperplasia (BPH), renal transplant, and uro-oncology. The various models of AI and their applications in urology subspecialties are reviewed and discussed. MATERIAL AND METHODS:A search strategy was adapted to identify and review the literature pertaining to the application of AI in urology; studies using the keywords "urology," "artificial intelligence," "machine learning," "deep learning," "artificial neural networks," "computer vision," and "natural language processing" were included and categorized. Review articles, editorial comments, and non-urologic studies were excluded. RESULTS:The article reviewed 47 articles that reported characteristics and implementation of AI in urological cancer. In all cases with benign conditions, artificial intelligence was used to predict outcomes of the surgical procedure. In urolithiasis, it was used to predict stone composition, whereas in pediatric urology and BPH, it was applied to predict the severity of the condition. In cases with malignant conditions, it was applied to predict treatment response, survival, prognosis, and recurrence on the basis of genomic and biomarker studies. These results were also found to be statistically better than routine approaches. Application of radiomics in the classification and nuclear grading of renal masses, cystoscopic diagnosis of bladder cancers, prediction of Gleason score, and magnetic resonance imaging with computer-assisted diagnosis for prostate cancers are a few applications of AI that have been studied extensively. CONCLUSIONS:In the near future, we will see a shift in the clinical paradigm as AI applications find their place in the guidelines and revolutionize the decision-making process. 10.5152/tud.2020.20117
Pathomics in urology. Schuettfort Victor M,Pradere Benjamin,Rink Michael,Comperat Eva,Shariat Shahrokh F Current opinion in urology PURPOSE OF REVIEW:Pathomics, the fusion of digitalized pathology and artificial intelligence, is currently changing the landscape of medical pathology and biologic disease classification. In this review, we give an overview of Pathomics and summarize its most relevant applications in urology. RECENT FINDINGS:There is a steady rise in the number of studies employing Pathomics, and especially deep learning, in urology. In prostate cancer, several algorithms have been developed for the automatic differentiation between benign and malignant lesions and to differentiate Gleason scores. Furthermore, several applications have been developed for the automatic cancer cell detection in urine and for tumor assessment in renal cancer. Despite the explosion in research, Pathomics is not fully ready yet for widespread clinical application. SUMMARY:In prostate cancer and other urologic pathologies, Pathomics is avidly being researched with commercial applications on the close horizon. Pathomics is set to improve the accuracy, speed, reliability, cost-effectiveness and generalizability of pathology, especially in uro-oncology. 10.1097/MOU.0000000000000813
Utility of the Simulated Outcomes Following Carotid Artery Laceration Video Data Set for Machine Learning Applications. JAMA network open Importance:Surgical data scientists lack video data sets that depict adverse events, which may affect model generalizability and introduce bias. Hemorrhage may be particularly challenging for computer vision-based models because blood obscures the scene. Objective:To assess the utility of the Simulated Outcomes Following Carotid Artery Laceration (SOCAL)-a publicly available surgical video data set of hemorrhage complication management with instrument annotations and task outcomes-to provide benchmarks for surgical data science techniques, including computer vision instrument detection, instrument use metrics and outcome associations, and validation of a SOCAL-trained neural network using real operative video. Design, Setting, and Participants:For this quality improvement study, a total of 75 surgeons with 1 to 30 years' experience (mean, 7 years) were filmed from January 1, 2017, to December 31, 2020, managing catastrophic surgical hemorrhage in a high-fidelity cadaveric training exercise at nationwide training courses. Videos were annotated from January 1 to June 30, 2021. Interventions:Surgeons received expert coaching between 2 trials. Main Outcomes and Measures:Hemostasis within 5 minutes (task success, dichotomous), time to hemostasis (in seconds), and blood loss (in milliliters) were recorded. Deep neural networks (DNNs) were trained to detect surgical instruments in view. Model performance was measured using mean average precision (mAP), sensitivity, and positive predictive value. Results:SOCAL contains 31 443 frames with 65 071 surgical instrument annotations from 147 trials with associated surgeon demographic characteristics, time to hemostasis, and recorded blood loss for each trial. Computer vision-based instrument detection methods using DNNs trained on SOCAL achieved a mAP of 0.67 overall and 0.91 for the most common surgical instrument (suction). Hemorrhage control challenges standard object detectors: detection of some surgical instruments remained poor (mAP, 0.25). On real intraoperative video, the model achieved a sensitivity of 0.77 and a positive predictive value of 0.96. Instrument use metrics derived from the SOCAL video were significantly associated with performance (blood loss). Conclusions and Relevance:Hemorrhage control is a high-stakes adverse event that poses unique challenges for video analysis, but no data sets of hemorrhage control exist. The use of SOCAL, the first data set to depict hemorrhage control, allows the benchmarking of data science applications, including object detection, performance metric development, and identification of metrics associated with outcomes. In the future, SOCAL may be used to build and validate surgical data science models. 10.1001/jamanetworkopen.2022.3177
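Frame-level sensitivity and positive predictive value for instrument detection follow from matching predicted boxes to annotations at an IoU threshold. A self-contained toy version of that bookkeeping (not the SOCAL evaluation code) is given below.

```python
def iou(box_a, box_b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def frame_metrics(pred_boxes, gt_boxes, thr=0.5):
    """Greedy matching of predictions to ground truth at an IoU threshold;
    returns (true positives, false positives, false negatives) for one frame."""
    unmatched = list(gt_boxes)
    tp = fp = 0
    for p in pred_boxes:
        scores = [iou(p, g) for g in unmatched]
        if scores and max(scores) >= thr:
            unmatched.pop(scores.index(max(scores)))
            tp += 1
        else:
            fp += 1
    return tp, fp, len(unmatched)

tp, fp, fn = frame_metrics([[10, 10, 50, 50], [70, 70, 90, 90]], [[12, 12, 48, 52]])
print(f"sensitivity={tp / (tp + fn):.2f}  PPV={tp / (tp + fp):.2f}")
```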
The Growing Role for Semantic Segmentation in Urology. European urology focus As the quantity and quality of cross-sectional imaging data increase, it is important to be able to make efficient use of the information. Semantic segmentation is an emerging technology that promises to improve the speed, reproducibility, and accuracy of analysis of medical imaging, and to allow visualization methods that were previously impossible. Manual image segmentation often requires expert knowledge and is both time- and cost-prohibitive in many clinical situations. However, automated methods, especially those using deep learning, show promise in alleviating this burden to make segmentation a standard tool for clinical intervention in the future. It is therefore important for clinicians to have a functional understanding of what segmentation is and to be aware of its uses. Here we include a number of examples of ways in which semantic segmentation has been put into practice in urology. PATIENT SUMMARY: This mini-review highlights the growing role of segmentation methods for medical images in urology to inform clinical practice. Segmentation methods show promise in improving the reliability of diagnosis and aiding in visualization, which may become a tool for patient education. 10.1016/j.euf.2021.07.017
Thermal Change Index-Based Diabetic Foot Thermogram Image Classification Using Machine Learning Techniques. Khandakar Amith,Chowdhury Muhammad E H,Reaz Mamun Bin Ibne,Ali Sawal Hamid Md,Abbas Tariq O,Alam Tanvir,Ayari Mohamed Arselene,Mahbub Zaid B,Habib Rumana,Rahman Tawsifur,Tahir Anas M,Bakar Ahmad Ashrif A,Malik Rayaz A Sensors (Basel, Switzerland) Diabetes mellitus (DM) can lead to plantar ulcers, amputation and death. Plantar foot thermogram images acquired using an infrared camera have been shown to detect changes in temperature distribution associated with a higher risk of foot ulceration. Machine learning approaches applied to such infrared images may have utility in the early diagnosis of diabetic foot complications. In this work, a publicly available dataset was categorized into different classes, which were corroborated by domain experts, based on a temperature distribution parameter-the thermal change index (TCI). We then explored different machine-learning approaches for classifying thermograms of the TCI-labeled dataset. Classical machine learning algorithms with feature engineering and the convolutional neural network (CNN) with image enhancement techniques were extensively investigated to identify the best performing network for classifying thermograms. The multilayer perceptron (MLP) classifier along with the features extracted from thermogram images showed an accuracy of 90.1% in multi-class classification, which outperformed the literature-reported performance metrics on this dataset. 10.3390/s22051793
Deep learning for automatic Gleason pattern classification for grade group determination of prostate biopsies. Lucas Marit,Jansen Ilaria,Savci-Heijink C Dilara,Meijer Sybren L,de Boer Onno J,van Leeuwen Ton G,de Bruin Daniel M,Marquering Henk A Virchows Archiv : an international journal of pathology Histopathologic grading of prostate cancer using Gleason patterns (GPs) is subject to a large inter-observer variability, which may result in suboptimal treatment of patients. With the introduction of digitization and whole-slide images of prostate biopsies, computer-aided grading becomes feasible. Computer-aided grading has the potential to improve histopathological grading and treatment selection for prostate cancer. Here, automated detection of GPs and determination of the grade group (GG) were performed using a convolutional neural network. In total, 96 prostate biopsies from 38 patients were annotated at the pixel level. Automated detection of GP 3 and GP ≥ 4 in digitized prostate biopsies was performed by re-training the Inception-v3 convolutional neural network (CNN). The outcome of the CNN was subsequently converted into probability maps of GP ≥ 3 and GP ≥ 4, and the GG of the whole biopsy was obtained according to these probability maps. Differentiation between non-atypical and malignant (GP ≥ 3) areas resulted in an accuracy of 92% with a sensitivity and specificity of 90% and 93%, respectively. The differentiation between GP ≥ 4 and GP ≤ 3 had an accuracy of 90%, with a sensitivity and specificity of 77% and 94%, respectively. Concordance of our automated GG determination method with a genitourinary pathologist was obtained in 65% of cases (κ = 0.70), indicating substantial agreement. A CNN allows for accurate differentiation between non-atypical and malignant areas as defined by GPs, leading to substantial agreement with the pathologist in defining the GG. 10.1007/s00428-019-02577-x
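The final step described above, turning per-pixel probability maps for GP ≥ 3 and GP ≥ 4 into a biopsy-level grade group, amounts to thresholding the maps and applying a decision rule on the resulting tissue fractions. The sketch below illustrates that idea with invented thresholds and an invented mapping; it is not the rule used in the study.

```python
import numpy as np

def grade_group_from_maps(p_gp3, p_gp4, area_thr=0.05):
    """Toy rule: derive a biopsy-level call from per-pixel probability maps.

    p_gp3 / p_gp4 are probability maps for GP >= 3 and GP >= 4; the thresholds and
    the mapping below are illustrative only, not those of the cited study.
    """
    frac_gp3 = float((p_gp3 > 0.5).mean())   # fraction of tissue called malignant
    frac_gp4 = float((p_gp4 > 0.5).mean())   # fraction called GP >= 4
    if frac_gp3 < area_thr:
        return "benign"
    if frac_gp4 < area_thr:
        return "GG1 (Gleason 3+3)"
    return "GG2+" if frac_gp4 < frac_gp3 / 2 else "GG3+"

rng = np.random.default_rng(1)
print(grade_group_from_maps(rng.random((512, 512)), rng.random((512, 512)) * 0.3))
```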
Assessing kidney stone composition using deep learning. Stone Louise Nature reviews. Urology 10.1038/s41585-020-0301-4
Classification of Bladder Emptying Patterns by LSTM Neural Network Trained Using Acoustic Signatures. Jin Jie,Chung Youngbeen,Kim Wanseung,Heo Yonggi,Jeon Jinyong,Hoh Jeongkyu,Park Junhong,Jo Jungki Sensors (Basel, Switzerland) (1) Background: Non-invasive uroflowmetry is used in clinical practice for diagnosing lower urinary tract symptoms (LUTS) and the health status of a patient. To establish a smart system for measuring the flowrate during urination without any temporospatial constraints for patients with a urinary disorder, the acoustic signatures from the uroflow of patients being treated for LUTS at a tertiary hospital were utilized. (2) Methods: Uroflowmetry data were collected for construction and verification of a long short-term memory (LSTM) deep-learning algorithm. The initial sample size comprised 34 patients; 27 patients were included in the final analysis. Uroflow sounds generated from flow impacts on a structure were analyzed by loudness and roughness parameters. (3) Results: A similar signal pattern to the clinical urological measurements was observed and applied for health diagnosis. (4) Conclusions: Consistent flowrate values were obtained by applying the uroflow sound samples from the randomly selected patients to the constructed model for validation. The flowrate predicted using the acoustic signature accurately demonstrated actual physical characteristics. This could be used for developing a new smart flowmetry device applicable in everyday life with minimal constraints from settings and enable remote diagnosis of urinary system diseases by objective continuous measurements of bladder emptying function. 10.3390/s21165328
Deep Learning to Automate Technical Skills Assessment in Robotic Surgery. Hung Andrew J,Liu Yan,Anandkumar Animashree JAMA surgery 10.1001/jamasurg.2021.3651
Biologically informed deep neural network for prostate cancer discovery. Nature The determination of molecular features that mediate clinically aggressive phenotypes in prostate cancer remains a major biological and clinical challenge. Recent advances in interpretability of machine learning models as applied to biomedical problems may enable discovery and prediction in clinical cancer genomics. Here we developed P-NET-a biologically informed deep learning model-to stratify patients with prostate cancer by treatment-resistance state and evaluate molecular drivers of treatment resistance for therapeutic targeting through complete model interpretability. We demonstrate that P-NET can predict cancer state using molecular data with a performance that is superior to other modelling approaches. Moreover, the biological interpretability within P-NET revealed established and novel molecularly altered candidates, such as MDM4 and FGFR1, which were implicated in predicting advanced disease and validated in vitro. Broadly, biologically informed fully interpretable neural networks enable preclinical discovery and clinical prediction in prostate cancer and may have general applicability across cancer types. 10.1038/s41586-021-03922-4
Bladder Cancer Treatment Response Assessment in CT using Radiomics with Deep-Learning. Cha Kenny H,Hadjiiski Lubomir,Chan Heang-Ping,Weizer Alon Z,Alva Ajjai,Cohan Richard H,Caoili Elaine M,Paramagul Chintana,Samala Ravi K Scientific reports Cross-sectional X-ray imaging has become the standard for staging most solid organ malignancies. However, for some malignancies such as urinary bladder cancer, the ability to accurately assess local extent of the disease and understand response to systemic chemotherapy is limited with current imaging approaches. In this study, we explored the feasibility that radiomics-based predictive models using pre- and post-treatment computed tomography (CT) images might be able to distinguish between bladder cancers with and without complete chemotherapy responses. We assessed three unique radiomics-based predictive models, each of which employed different fundamental design principles ranging from a pattern recognition method via deep-learning convolution neural network (DL-CNN), to a more deterministic radiomics feature-based approach and then a bridging method between the two, utilizing a system which extracts radiomics features from the image patterns. Our study indicates that the computerized assessment using radiomics information from the pre- and post-treatment CT of bladder cancer patients has the potential to assist in assessment of treatment response. 10.1038/s41598-017-09315-w
Deep learning based prediction of prognosis in nonmetastatic clear cell renal cell carcinoma. Byun Seok-Soo,Heo Tak Sung,Choi Jeong Myeong,Jeong Yeong Seok,Kim Yu Seop,Lee Won Ki,Kim Chulho Scientific reports Survival analyses for malignancies, including renal cell carcinoma (RCC), have primarily been conducted using the Cox proportional hazards (CPH) model. We compared the random survival forest (RSF) and DeepSurv models with the CPH model to predict recurrence-free survival (RFS) and cancer-specific survival (CSS) in non-metastatic clear cell RCC (nm-cRCC) patients. Our cohort included 2139 nm-cRCC patients who underwent curative-intent surgery at six Korean institutions between 2000 and 2014. The data of the two largest hospitals' patients were assigned to the training and validation dataset, and the data of the remaining hospitals were assigned to the external validation dataset. The performance of the RSF and DeepSurv models was compared with that of CPH using Harrell's C-index. During follow-up, recurrence and cancer-specific deaths were recorded in 190 (12.7%) and 108 (7.0%) patients, respectively, in the training dataset. Harrell's C-indices for RFS in the test dataset were 0.794, 0.789, and 0.802 for CPH, RSF, and DeepSurv, respectively. Harrell's C-indices for CSS in the test dataset were 0.831, 0.790, and 0.834 for CPH, RSF, and DeepSurv, respectively. In predicting RFS and CSS in nm-cRCC patients, the performance of DeepSurv was superior to that of CPH and RSF. In the near future, deep learning-based survival predictions may prove useful for RCC patients. 10.1038/s41598-020-80262-9
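Harrell's C-index, the metric used above to compare CPH, RSF, and DeepSurv, checks whether predicted and observed survival orderings agree across comparable patient pairs. A toy computation with the lifelines package (assumed to be installed) is shown below; real inputs would be the models' RFS or CSS predictions on the test set.

```python
import numpy as np
from lifelines.utils import concordance_index

# Toy check of Harrell's C-index: higher predicted survival should pair with longer
# observed survival. The numbers below are synthetic, not study data.
rng = np.random.default_rng(0)
observed_months = rng.exponential(60, size=200)
event_observed = rng.integers(0, 2, size=200)                   # 1 = event, 0 = censored
predicted_survival = observed_months + rng.normal(0, 20, 200)   # an imperfect prediction

print(concordance_index(observed_months, predicted_survival, event_observed))
# For a risk score (higher = worse prognosis), pass its negative so the ordering matches.
```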
A deep-learning model using automated performance metrics and clinical features to predict urinary continence recovery after robot-assisted radical prostatectomy. BJU international OBJECTIVES:To predict urinary continence recovery after robot-assisted radical prostatectomy (RARP) using a deep learning (DL) model, which was then used to evaluate surgeon's historical patient outcomes. SUBJECTS AND METHODS:Robotic surgical automated performance metrics (APMs) during RARP, and patient clinicopathological and continence data were captured prospectively from 100 contemporary RARPs. We used a DL model (DeepSurv) to predict postoperative urinary continence. Model features were ranked based on their importance in prediction. We stratified eight surgeons based on the five top-ranked features. The top four surgeons were categorized in 'Group 1/APMs', while the remaining four were categorized in 'Group 2/APMs'. A separate historical cohort of RARPs (January 2015 to August 2016) performed by these two surgeon groups was then used for comparison. Concordance index (C-index) and mean absolute error (MAE) were used to measure the model's prediction performance. Outcomes of historical cases were compared using the Kruskal-Wallis, chi-squared and Fisher's exact tests. RESULTS:Continence was attained in 79 patients (79%) after a median of 126 days. The DL model achieved a C-index of 0.6 and an MAE of 85.9 in predicting continence. APMs were ranked higher by the model than clinicopathological features. In the historical cohort, patients in Group 1/APMs had superior rates of urinary continence at 3 and 6 months postoperatively (47.5 vs 36.7%, P = 0.034, and 68.3 vs 59.2%, P = 0.047, respectively). CONCLUSION:Using APMs and clinicopathological data, the DeepSurv DL model was able to predict continence after RARP. In this feasibility study, surgeons with more efficient APMs achieved higher continence rates at 3 and 6 months after RARP. 10.1111/bju.14735
The future of CT: deep learning reconstruction. McLeavy C M,Chunara M H,Gravell R J,Rauf A,Cushnie A,Staley Talbot C,Hawkins R M Clinical radiology There have been substantial advances in computed tomography (CT) technology since its introduction in the 1970s. More recently, these advances have focused on image reconstruction. Deep learning reconstruction (DLR) is the latest complex reconstruction algorithm to be introduced, which harnesses advances in artificial intelligence (AI) and affordable supercomputer technology to achieve the previously elusive triad of high image quality, low radiation dose, and fast reconstruction speeds. The dose reductions achieved with DLR are redefining ultra-low-dose into the realm of plain radiographs whilst maintaining image quality. This review aims to demonstrate the advantages of DLR over other reconstruction methods in terms of dose reduction and image quality in addition to being able to tailor protocols to specific clinical situations. DLR is the future of CT technology and should be considered when procuring new scanners. 10.1016/j.crad.2021.01.010
Differentiation of Small (≤ 4 cm) Renal Masses on Multiphase Contrast-Enhanced CT by Deep Learning. Tanaka Takashi,Huang Yong,Marukawa Yohei,Tsuboi Yuka,Masaoka Yoshihisa,Kojima Katsuhide,Iguchi Toshihiro,Hiraki Takao,Gobara Hideo,Yanai Hiroyuki,Nasu Yasutomo,Kanazawa Susumu AJR. American journal of roentgenology This study evaluated the utility of a deep learning method for determining whether a small (≤ 4 cm) solid renal mass was benign or malignant on multiphase contrast-enhanced CT. This retrospective study included 1807 image sets from 168 pathologically diagnosed small (≤ 4 cm) solid renal masses with four CT phases (unenhanced, corticomedullary, nephrogenic, and excretory) in 159 patients between 2012 and 2016. Masses were classified as malignant (n = 136) or benign (n = 32). The dataset was randomly divided into five subsets: four were used for augmentation and supervised training (48,832 images), and one was used for testing (281 images). A convolutional neural network (CNN) model with the Inception-v3 architecture was used. The AUC for malignancy and the accuracy at optimal cutoff values of the output data were evaluated in six different CNN models. Multivariate logistic regression analysis was also performed. Malignant and benign lesions showed no significant difference in size. The AUC value of the corticomedullary phase was higher than that of the other phases (corticomedullary vs excretory, p = 0.022). The highest accuracy (88%) was achieved in the corticomedullary phase images. Multivariate analysis revealed that the CNN model of the corticomedullary phase was a significant predictor of malignancy compared with the other CNN models, age, sex, and lesion size. A deep learning method with a CNN allowed acceptable differentiation of small (≤ 4 cm) solid renal masses in dynamic CT images, especially in the corticomedullary image model. 10.2214/AJR.19.22074
Expert surgeons and deep learning models can predict the outcome of surgical hemorrhage from 1 min of video. Scientific reports Major vascular injury resulting in uncontrolled bleeding is a catastrophic and often fatal complication of minimally invasive surgery. At the outset of these events, surgeons do not know how much blood will be lost or whether they will successfully control the hemorrhage (achieve hemostasis). We evaluate the ability of a deep learning neural network (DNN) to predict hemostasis control using the first minute of surgical video and compare model performance with human experts viewing the same video. The publicly available SOCAL dataset contains 147 videos of attending and resident surgeons managing hemorrhage in a validated, high-fidelity cadaveric simulator. Videos are labeled with outcome and blood loss (mL). The first minute of 20 videos was shown to four blinded, fellowship-trained skull-base neurosurgery instructors and to SOCALNet (a DNN trained on SOCAL videos). The SOCALNet architecture included a convolutional network (ResNet) identifying spatial features and a recurrent network identifying temporal features (LSTM). Experts independently assessed surgeon skill and predicted outcome and blood loss (mL). Outcome and blood loss predictions were compared with those of SOCALNet. Expert inter-rater reliability was 0.95. Experts correctly predicted 14/20 trials (Sensitivity: 82%, Specificity: 55%, Positive Predictive Value (PPV): 69%, Negative Predictive Value (NPV): 71%). SOCALNet correctly predicted 17/20 trials (Sensitivity 100%, Specificity 66%, PPV 79%, NPV 100%) and correctly identified all successful attempts. Expert predictions of the highest and lowest skill surgeons and expert predictions reported with maximum confidence were more accurate. Experts systematically underestimated blood loss (mean error -131 mL, RMSE 350 mL, R 0.70) and fewer than half of expert predictions identified blood loss > 500 mL (47.5%, 19/40). SOCALNet had superior performance (mean error -57 mL, RMSE 295 mL, R 0.74) and detected most episodes of blood loss > 500 mL (80%, 8/10). In validation experiments, SOCALNet evaluations of a critical on-screen surgical maneuver and of high/low-skill composite videos were concordant with expert evaluation. Using only the first minute of video, experts and SOCALNet can predict outcome and blood loss during surgical hemorrhage. Experts systematically underestimated blood loss, and SOCALNet had no false negatives. DNNs can provide accurate, meaningful assessments of surgical video. We call for the creation of datasets of surgical adverse events for quality improvement research. 10.1038/s41598-022-11549-2
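The abstract describes SOCALNet as a convolutional backbone (ResNet) feeding a recurrent temporal model (LSTM). The published layer sizes and training details are not given here, so the PyTorch sketch below only illustrates that generic CNN+LSTM pattern for clip-level prediction; the backbone choice, hidden size, frame count, and two-class head are assumptions rather than the published configuration.

```python
# Generic CNN+LSTM video classifier in the spirit of the ResNet+LSTM design
# described above. All sizes are illustrative, not the SOCALNet settings.
import torch
import torch.nn as nn
import torchvision.models as models

class CnnLstmVideoClassifier(nn.Module):
    def __init__(self, hidden_size=256, num_classes=2):
        super().__init__()
        resnet = models.resnet18(weights=None)                        # spatial feature extractor
        self.backbone = nn.Sequential(*list(resnet.children())[:-1])  # drop the final fc layer
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden_size,
                            batch_first=True)                         # temporal model
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, clips):                                 # clips: (B, T, 3, H, W)
        b, t, c, h, w = clips.shape
        feats = self.backbone(clips.view(b * t, c, h, w))     # (B*T, 512, 1, 1)
        feats = feats.view(b, t, -1)                          # (B, T, 512)
        _, (h_n, _) = self.lstm(feats)                        # last hidden state
        return self.head(h_n[-1])                             # (B, num_classes)

# Two synthetic "first minute" clips: 16 frames each at 112x112 resolution.
model = CnnLstmVideoClassifier()
logits = model(torch.randn(2, 16, 3, 112, 112))
print(logits.shape)  # torch.Size([2, 2])
```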
Deep learning-based computer vision to recognize and classify suturing gestures in robot-assisted surgery. Surgery BACKGROUND:Our previous work classified a taxonomy of suturing gestures during a vesicourethral anastomosis of robotic radical prostatectomy in association with tissue tears and patient outcomes. Herein, we train deep learning-based computer vision to automate the identification and classification of suturing gestures for needle driving attempts. METHODS:Using two independent raters, we manually annotated live suturing video clips to label timepoints and gestures. Identification (2,395 videos) and classification (511 videos) datasets were compiled to train computer vision models to produce 2- and 5-class label predictions, respectively. Networks were trained on inputs of raw red/blue/green pixels as well as optical flow for each frame. Each model was trained on 80/20 train/test splits. RESULTS:In this study, all models were able to reliably predict either the presence of a gesture (identification, area under the curve: 0.88) as well as the type of gesture (classification, area under the curve: 0.87) at significantly above chance levels. For both gesture identification and classification datasets, we observed no effect of recurrent classification model choice (long short-term memory unit versus convolutional long short-term memory unit) on performance. CONCLUSION:Our results demonstrate computer vision's ability to recognize features that not only can identify the action of suturing but also distinguish between different classifications of suturing gestures. This demonstrates the potential to utilize deep learning computer vision toward future automation of surgical skill assessment. 10.1016/j.surg.2020.08.016
Annotation-efficient deep learning for automatic medical image segmentation. Nature communications Automatic medical image segmentation plays a critical role in scientific research and medical care. Existing high-performance deep learning methods typically rely on large training datasets with high-quality manual annotations, which are difficult to obtain in many clinical applications. Here, we introduce Annotation-effIcient Deep lEarning (AIDE), an open-source framework to handle imperfect training datasets. Methodological analyses and empirical evaluations are conducted, and we demonstrate that AIDE surpasses conventional fully-supervised models by presenting better performance on open datasets possessing scarce or noisy annotations. We further test AIDE in a real-life case study for breast tumor segmentation. Three datasets containing 11,852 breast images from three medical centers are employed, and AIDE, utilizing 10% training annotations, consistently produces segmentation maps comparable to those generated by fully-supervised counterparts or provided by independent radiologists. The 10-fold enhanced efficiency in utilizing expert labels has the potential to promote a wide range of biomedical applications. 10.1038/s41467-021-26216-9
Spatio-temporal deep learning models for tip force estimation during needle insertion. International journal of computer assisted radiology and surgery PURPOSE:Precise placement of needles is a challenge in a number of clinical applications such as brachytherapy or biopsy. Forces acting at the needle cause tissue deformation and needle deflection which in turn may lead to misplacement or injury. Hence, a number of approaches to estimate the forces at the needle have been proposed. Yet, integrating sensors into the needle tip is challenging and a careful calibration is required to obtain good force estimates. METHODS:We describe a fiber-optic needle tip force sensor design using a single OCT fiber for measurement. The fiber images the deformation of an epoxy layer placed below the needle tip which results in a stream of 1D depth profiles. We study different deep learning approaches to facilitate calibration between this spatio-temporal image data and the related forces. In particular, we propose a novel convGRU-CNN architecture for simultaneous spatial and temporal data processing. RESULTS:The needle can be adapted to different operating ranges by changing the stiffness of the epoxy layer. Likewise, calibration can be adapted by training the deep learning models. Our novel convGRU-CNN architecture results in the lowest mean absolute error of [Formula: see text] and a cross-correlation coefficient of 0.9997 and clearly outperforms the other methods. Ex vivo experiments in human prostate tissue demonstrate the needle's application. CONCLUSIONS:Our OCT-based fiber-optic sensor presents a viable alternative for needle tip force estimation. The results indicate that the rich spatio-temporal information included in the stream of images showing the deformation throughout the epoxy layer can be effectively used by deep learning models. Particularly, we demonstrate that the convGRU-CNN architecture performs favorably, making it a promising approach for other spatio-temporal learning problems. 10.1007/s11548-019-02006-z
Feasibility of a deep learning-based diagnostic platform to evaluate lower urinary tract disorders in men using simple uroflowmetry. Investigative and clinical urology PURPOSE:To diagnose lower urinary tract symptoms (LUTS) in a noninvasive manner, we created a prediction model for bladder outlet obstruction (BOO) and detrusor underactivity (DUA) using simple uroflowmetry. In this study, we used deep learning to analyze simple uroflowmetry. MATERIALS AND METHODS:We performed a retrospective review of 4,835 male patients aged ≥40 years who underwent a urodynamic study at a single center. We excluded patients with a disease or a history of surgery that could affect LUTS. A total of 1,792 patients were included in the study. We extracted a simple uroflowmetry graph automatically using the ABBYY Flexicapture image capture program (ABBYY, Moscow, Russia). We applied a convolutional neural network (CNN), a deep learning method to predict DUA and BOO. A 5-fold cross-validation average value of the area under the receiver operating characteristic (AUROC) curve was chosen as an evaluation metric. When it comes to binary classification, this metric provides a richer measure of classification performance. Additionally, we provided the corresponding average precision-recall (PR) curves. RESULTS:Among the 1,792 patients, 482 (26.90%) had BOO, and 893 (49.83%) had DUA. The average AUROC scores of DUA and BOO, which were measured using 5-fold cross-validation, were 73.30% (mean average precision [mAP]=0.70) and 72.23% (mAP=0.45), respectively. CONCLUSIONS:Our study suggests that it is possible to differentiate DUA from non-DUA and BOO from non-BOO using a simple uroflowmetry graph with a fine-tuned VGG16, which is a well-known CNN model. 10.4111/icu.20210434
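The evaluation above averages AUROC and average precision over 5-fold cross-validation of a fine-tuned VGG16. The sketch below outlines that general recipe with a single-logit VGG16 head and scikit-learn metrics; the data, image size, and the omitted fine-tuning loop are placeholders, not the study's pipeline.

```python
# Sketch: adapt VGG16 for binary classification of graph images and report
# 5-fold mean AUROC / average precision. Training is omitted for brevity.
import numpy as np
import torch
import torch.nn as nn
import torchvision.models as models
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score, average_precision_score

def build_vgg16_binary():
    vgg = models.vgg16(weights=None)        # pretrained weights would be used in practice
    vgg.classifier[6] = nn.Linear(4096, 1)  # single-logit binary head (e.g. DUA vs non-DUA)
    return vgg

# Placeholder data: 50 uroflowmetry "images" and balanced binary labels.
X = torch.randn(50, 3, 224, 224)
y = np.array([0, 1] * 25)

aurocs, aps = [], []
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    model = build_vgg16_binary().eval()     # fine-tuning on train_idx omitted here
    with torch.no_grad():
        scores = torch.sigmoid(model(X[test_idx])).squeeze(1).numpy()
    aurocs.append(roc_auc_score(y[test_idx], scores))
    aps.append(average_precision_score(y[test_idx], scores))

print(f"mean AUROC = {np.mean(aurocs):.3f}, mean AP = {np.mean(aps):.3f}")
```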
Deep Learning Algorithm for Fully Automated Detection of Small (≤4 cm) Renal Cell Carcinoma in Contrast-Enhanced Computed Tomography Using a Multicenter Database. Investigative radiology OBJECTIVES:Renal cell carcinoma (RCC) is often found incidentally in asymptomatic individuals undergoing abdominal computed tomography (CT) examinations. The purpose of our study is to develop a deep learning-based algorithm for fully automated detection of small (≤4 cm) RCCs in contrast-enhanced CT images using a multicenter database and to evaluate its performance. MATERIALS AND METHODS:For the algorithmic detection of RCC, we retrospectively selected contrast-enhanced CT images of patients with histologically confirmed single RCC with a tumor diameter of 4 cm or less between January 2005 and May 2020 from 7 centers in the Japan Medical Image Database. A total of 453 patients from 6 centers were selected as dataset A, and 132 patients from 1 center were selected as dataset B. Dataset A was used for training and internal validation. Dataset B was used only for external validation. Nephrogenic phase images of multiphase CT or single-phase postcontrast CT images were used. Our algorithm consisted of 2-step segmentation models, kidney segmentation and tumor segmentation. For internal validation with dataset A, 10-fold cross-validation was applied. For external validation, the models trained with dataset A were tested on dataset B. The detection performance of the models was evaluated using accuracy, sensitivity, specificity, and the area under the curve (AUC). RESULTS:The mean ± SD diameters of RCCs in dataset A and dataset B were 2.67 ± 0.77 cm and 2.64 ± 0.78 cm, respectively. Our algorithm yielded an accuracy, sensitivity, and specificity of 88.3%, 84.3%, and 92.3%, respectively, with dataset A and 87.5%, 84.8%, and 90.2%, respectively, with dataset B. The AUC of the algorithm with dataset A and dataset B was 0.930 and 0.933, respectively. CONCLUSIONS:The proposed deep learning-based algorithm achieved high accuracy, sensitivity, specificity, and AUC for the detection of small RCCs with both internal and external validations, suggesting that this algorithm could contribute to the early detection of small RCCs. 10.1097/RLI.0000000000000842
Deep learning computer vision algorithm for detecting kidney stone composition. Black Kristian M,Law Hei,Aldoukhi Ali,Deng Jia,Ghani Khurshid R BJU international OBJECTIVES:To assess the recall of a deep learning (DL) method to automatically detect kidney stone composition from digital photographs of stones. MATERIALS AND METHODS:A total of 63 human kidney stones of varied compositions were obtained from a stone laboratory, including calcium oxalate monohydrate (COM), uric acid (UA), magnesium ammonium phosphate hexahydrate (MAPH/struvite), calcium hydrogen phosphate dihydrate (CHPD/brushite), and cystine stones. At least two images of the stones, both surface and inner core, were captured on a digital camera for all stones. A deep convolutional neural network (CNN), ResNet-101 (ResNet, Microsoft), was applied as a multi-class classification model to each image. This model was assessed using leave-one-out cross-validation, with the primary outcome being network prediction recall. RESULTS:The composition prediction recall for each composition was as follows: UA 94% (n = 17), COM 90% (n = 21), MAPH/struvite 86% (n = 7), cystine 75% (n = 4), CHPD/brushite 71% (n = 14). The overall weighted recall of the CNN's composition analysis was 85% for the entire cohort. Specificity and precision for each stone type were as follows: UA (97.83%, 94.12%), COM (97.62%, 95%), struvite (91.84%, 71.43%), cystine (98.31%, 75%), and brushite (96.43%, 75%). CONCLUSION:Deep CNNs can be used to identify kidney stone composition from digital photographs with good recall. Future work is needed to see if DL can be used for detecting stone composition during digital endoscopy. This technology may enable integrated endoscopic and laser systems that automatically provide laser settings based on stone composition recognition, with the goal of improving surgical efficiency. 10.1111/bju.15035
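The per-class recall, precision, and specificity quoted above can all be read off a multi-class confusion matrix. A short sketch with toy labels follows; the class list mirrors the stone types in the abstract, but the numbers are synthetic.

```python
# Sketch of per-class recall, precision, and specificity derived from a
# multi-class confusion matrix. Labels and predictions are toy data.
import numpy as np
from sklearn.metrics import confusion_matrix

classes = ["UA", "COM", "struvite", "cystine", "brushite"]
y_true = np.random.choice(len(classes), size=60)
y_pred = np.random.choice(len(classes), size=60)

cm = confusion_matrix(y_true, y_pred, labels=range(len(classes)))
for i, name in enumerate(classes):
    tp = cm[i, i]
    fn = cm[i, :].sum() - tp
    fp = cm[:, i].sum() - tp
    tn = cm.sum() - tp - fn - fp
    recall = tp / (tp + fn) if tp + fn else float("nan")
    precision = tp / (tp + fp) if tp + fp else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    print(f"{name}: recall={recall:.2f} precision={precision:.2f} specificity={specificity:.2f}")
```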
Deep-learning-aided forward optical coherence tomography endoscope for percutaneous nephrostomy guidance. Biomedical optics express Percutaneous renal access is the critical initial step in many medical settings. In order to obtain the best surgical outcome with minimum patient morbidity, an improved method for access to the renal calyx is needed. In our study, we built a forward-view optical coherence tomography (OCT) endoscopic system for percutaneous nephrostomy (PCN) guidance. Porcine kidneys were imaged in our experiment to demonstrate the feasibility of the imaging system. Three tissue types of porcine kidneys (renal cortex, medulla, and calyx) can be clearly distinguished due to the morphological and tissue differences from the OCT endoscopic images. To further improve the guidance efficacy and reduce the learning burden of the clinical doctors, a deep-learning-based computer aided diagnosis platform was developed to automatically classify the OCT images by the renal tissue types. Convolutional neural networks (CNN) were developed with labeled OCT images based on the ResNet34, MobileNetv2 and ResNet50 architectures. Nested cross-validation and testing was used to benchmark the classification performance with uncertainty quantification over 10 kidneys, which demonstrated robust performance over substantial biological variability among kidneys. ResNet50-based CNN models achieved an average classification accuracy of 82.6%±3.0%. The classification precisions were 79%±4% for cortex, 85%±6% for medulla, and 91%±5% for calyx and the classification recalls were 68%±11% for cortex, 91%±4% for medulla, and 89%±3% for calyx. Interpretation of the CNN predictions showed the discriminative characteristics in the OCT images of the three renal tissue types. The results validated the technical feasibility of using this novel imaging platform to automatically recognize the images of renal tissue structures ahead of the PCN needle in PCN surgery. 10.1364/BOE.421299
Deep Learning Approach for Assessment of Bladder Cancer Treatment Response. Tomography (Ann Arbor, Mich.) We compared the performance of different deep learning convolutional neural network (DL-CNN) models for bladder cancer treatment response assessment based on transfer learning, by freezing different DL-CNN layers and varying the DL-CNN structure. Pre- and posttreatment computed tomography scans of 123 patients (cancers, 129; pre- and posttreatment cancer pairs, 158) undergoing chemotherapy were collected. After chemotherapy, 33% of patients had T0 stage cancer (complete response). Regions of interest in pre- and posttreatment scans were extracted from the segmented lesions and combined into hybrid pre-post image pairs (h-ROIs). Training (pairs, 94; h-ROIs, 6209), validation (10 pairs) and test sets (54 pairs) were obtained. The DL-CNN consisted of 2 convolution (C1-C2), 2 locally connected (L3-L4), and 1 fully connected layers. The DL-CNN was trained with h-ROIs to classify cancers as fully responding (stage T0) or not fully responding to chemotherapy. Two radiologists provided the likelihood of each lesion being stage T0 posttreatment. The test area under the ROC curve (AUC) was 0.73 for T0 prediction by the base DL-CNN structure with randomly initialized weights. The base DL-CNN structure with pretrained weights and transfer learning (no frozen layers) achieved a test AUC of 0.79. The test AUCs for 3 modified DL-CNN structures (different C1-C2 max pooling filter sizes, strides, and padding, with transfer learning) were 0.72, 0.86, and 0.69. For the base DL-CNN with (C1) frozen, (C1-C2) frozen, and (C1-C2-L3) frozen, the test AUCs were 0.81, 0.78, and 0.71, respectively. The radiologists' AUCs were 0.76 and 0.77. The DL-CNN performed better with pretrained than with randomly initialized weights. 10.18383/j.tom.2018.00036
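The experiment above varies how many early layers are frozen during transfer learning. The study's DL-CNN (two convolutional, two locally connected, one fully connected layer) is custom, so the sketch below substitutes a ResNet backbone purely to show the freezing mechanism; the block grouping and class count are assumptions.

```python
# Minimal sketch of the layer-freezing idea: freeze the earliest blocks of a
# backbone and fine-tune the rest. A ResNet stands in for the study's DL-CNN.
import torch.nn as nn
import torchvision.models as models

def build_partially_frozen(num_frozen_blocks=1, num_classes=2):
    model = models.resnet18(weights=None)   # pretrained weights would be used in practice
    blocks = [model.conv1, model.layer1, model.layer2, model.layer3, model.layer4]
    for block in blocks[:num_frozen_blocks]:
        for p in block.parameters():
            p.requires_grad = False          # frozen: keeps pretrained filters fixed
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new task head
    return model

model = build_partially_frozen(num_frozen_blocks=2)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")
```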
Using Deep Learning Algorithms to Grade Hydronephrosis Severity: Toward a Clinical Adjunct. Frontiers in pediatrics Grading hydronephrosis severity relies on subjective interpretation of renal ultrasound images. Deep learning is a data-driven algorithmic approach to classifying data, including images, presenting a promising option for grading hydronephrosis. The current study explored the potential of deep convolutional neural networks (CNN), a type of deep learning algorithm, to grade hydronephrosis ultrasound images according to the 5-point Society for Fetal Urology (SFU) classification system, and discusses its potential applications in developing decision and teaching aids for clinical practice. We developed a five-layer CNN to grade 2,420 sagittal hydronephrosis ultrasound images [191 SFU 0 (8%), 407 SFU I (17%), 666 SFU II (28%), 833 SFU III (34%), and 323 SFU IV (13%)], from 673 patients ranging from 0 to 116.29 months old (mean = 16.53, SD = 17.80). Five-way (all grades) and two-way classification problems [i.e., II vs. III, and low (0-II) vs. high (III-IV)] were explored. The CNN classified 94% (95% CI, 93-95%) of the images correctly or within one grade of the provided label in the five-way classification problem. Fifty-one percent of these images (95% CI, 49-53%) were correctly predicted, with an average weighted F1 score of 0.49 (95% CI, 0.47-0.51). The CNN achieved an average accuracy of 78% (95% CI, 75-82%) with an average weighted F1 of 0.78 (95% CI, 0.74-0.82) when classifying low vs. high grades, and an average accuracy of 71% (95% CI, 68-74%) with an average weighted F1 score of 0.71 (95% CI, 0.68-0.75) when discriminating between grades II vs. III. Our model performs well above chance level, and classifies almost all images either correctly or within one grade of the provided label. We have demonstrated the applicability of a CNN approach to hydronephrosis ultrasound image classification. Further investigation into a deep learning-based clinical adjunct for hydronephrosis is warranted. 10.3389/fped.2020.00001
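The headline figure above is accuracy "correct or within one grade" on the 5-point SFU scale, reported alongside exact accuracy. A few lines suffice to compute both; the grades and predictions below are synthetic.

```python
# Sketch of exact and "within one grade" accuracy on an ordinal 5-point scale.
import numpy as np

y_true = np.random.randint(0, 5, size=200)   # SFU grades 0-4 (toy labels)
y_pred = np.random.randint(0, 5, size=200)   # model predictions (toy)

exact = np.mean(y_pred == y_true)
within_one = np.mean(np.abs(y_pred - y_true) <= 1)
print(f"exact accuracy: {exact:.2%}, within one grade: {within_one:.2%}")
```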
Robust Prediction of Prognosis and Immunotherapeutic Response for Clear Cell Renal Cell Carcinoma Through Deep Learning Algorithm. Chen Siteng,Zhang Encheng,Jiang Liren,Wang Tao,Guo Tuanjie,Gao Feng,Zhang Ning,Wang Xiang,Zheng Junhua Frontiers in immunology It is of great urgency to explore useful prognostic markers and develop a robust prognostic model for patients with clear-cell renal cell carcinoma (ccRCC). Three independent patient cohorts were included in this study. We applied a high-level neural network based on TensorFlow to construct the robust model by using the deep learning algorithm. The deep learning-based model (FB-risk) could perform well in predicting the survival status in the 5-year follow-up, which could also significantly distinguish the patients with high overall survival risk in three independent patient cohorts of ccRCC and a pan-cancer cohort. High FB-risk was found to be partially associated with negative regulation of the immune system. In addition, the novel phenotyping of ccRCC based on the F-box gene family could robustly stratify patients with different survival risks. The different mutation landscapes and immune characteristics were also found among different clusters. Furthermore, the novel phenotyping of ccRCC based on the F-box gene family could perform well in the robust stratification of survival and immune response in ccRCC, which might have potential for application in clinical practices. 10.3389/fimmu.2022.798471
Real-Time Detection of Ureteral Orifice in Urinary Endoscopy Videos Based on Deep Learning. Peng Xin,Liu Dingyi,Li Yiming,Xue Wei,Qian Dahong Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference In urologic endoscopic procedures, finding the ureteral orifice (UO) is crucial but may be challenging for inexperienced doctors. Generally, it is difficult to identify UOs intraoperatively due to the presence of a large median lobe, obstructing tumor, previous surgery, etc. To automatically identify various types of UOs in the video, we propose a real-time deep learning system for UO identification and localization in urinary endoscopy videos that can be applied to different types of urinary endoscopes. Our UO detection system is mainly based on the Single Shot MultiBox Detector (SSD), one of the state-of-the-art deep learning-based detection networks in the natural image domain. For preprocessing, we apply both general and specific data augmentation strategies, which significantly improved all evaluation metrics. For training, we only utilize resectoscopy images, which have more complex background information, and then use ureteroscopy images for testing. Simultaneously, we demonstrate that the model trained with resectoscopy images can be successfully applied to the other type of urinary endoscopy images, with four evaluation metrics (precision, recall, F1 and F2 scores) greater than 0.8. We further evaluate our model on four independent video datasets comprising both resectoscopy and ureteroscopy videos. Extensive experiments on the four video datasets demonstrate that our deep learning-based UO detection system can identify and locate UOs from two different urinary endoscopes in real time, with an average processing time of 25 ms per frame, while achieving satisfactory recall and specificity. 10.1109/EMBC.2019.8856484
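The detector above builds on the Single Shot MultiBox Detector (SSD). As a rough illustration of running such a detector on one frame, the sketch below uses torchvision's SSD300-VGG16 builder in place of the authors' adapted network; the weights, the two-class setup (background plus UO), and the input frame are placeholders.

```python
# Sketch: run an SSD-style detector on a single endoscopy frame. torchvision's
# SSD300-VGG16 stands in for the authors' custom SSD; no trained weights loaded.
import torch
from torchvision.models.detection import ssd300_vgg16

model = ssd300_vgg16(weights=None, weights_backbone=None, num_classes=2).eval()
frame = torch.rand(3, 300, 300)                 # one frame, values in [0, 1]
with torch.no_grad():
    detections = model([frame])[0]              # dict with boxes, labels, scores
print(detections["boxes"].shape, detections["scores"].shape)
```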
An integrated nomogram combining deep learning, Prostate Imaging-Reporting and Data System (PI-RADS) scoring, and clinical variables for identification of clinically significant prostate cancer on biparametric MRI: a retrospective multicentre study. Hiremath Amogh,Shiradkar Rakesh,Fu Pingfu,Mahran Amr,Rastinehad Ardeshir R,Tewari Ashutosh,Tirumani Sree Harsha,Purysko Andrei,Ponsky Lee,Madabhushi Anant The Lancet. Digital health BACKGROUND:Biparametric MRI (comprising T2-weighted MRI and apparent diffusion coefficient maps) is increasingly being used to characterise prostate cancer. Although previous studies have combined Prostate Imaging-Reporting & Data System (PI-RADS)-based MRI findings with routinely available clinical variables and with deep learning-based imaging predictors, respectively, for prostate cancer risk stratification, none have combined all three. We aimed to construct an integrated nomogram (referred to as ClaD) combining deep learning-based imaging predictions, PI-RADS scoring, and clinical variables to identify clinically significant prostate cancer on biparametric MRI. METHODS:In this retrospective multicentre study, we included patients with prostate cancer, with histopathology or biopsy reports and a screening or diagnostic MRI scan in the axial view, from four cohorts in the USA (from University Hospitals Cleveland Medical Center, Icahn School of Medicine at Mount Sinai, Cleveland Clinic, and Long Island Jewish Medical Center) and from the PROSTATEx Challenge dataset in the Netherlands. We constructed an integrated nomogram combining deep learning, PI-RADS score, and clinical variables (prostate-specific antigen, prostate volume, and lesion volume) using multivariable logistic regression to identify clinically significant prostate cancer on biparametric MRI. We used data from the first three cohorts to train the nomogram and data from the remaining two cohorts for independent validation. We compared the performance of our ClaD integrated nomogram with that of integrated nomograms combining clinical variables with either the deep learning-based imaging predictor (referred to as DIN) or PI-RADS score (referred to as PIN) using area under the receiver operating characteristic curves (AUCs). We also compared the ability of the nomograms to predict biochemical recurrence on a subset of patients who had undergone radical prostatectomy. We report cross-validation AUCs as means for the training set and used AUCs with 95% CIs to assess the performance on the test set. The difference in AUCs between the models were tested for statistical significance using DeLong's test. We used log-rank tests and Kaplan-Meier curves to analyse survival. FINDINGS:We investigated 592 patients (823 lesions) with prostate cancer who underwent 3T multiparametric MRI at five hospitals in the USA between Jan 8, 2009, and June 3, 2017. The training data set consisted of 368 patients from three sites (the PROSTATEx Challenge cohort [n=204], University Hospitals Cleveland Medical Center [n=126], and Icahn School of Medicine at Mount Sinai [n=38]), and the independent validation data set consisted of 224 patients from two sites (Cleveland Clinic [n=151] and Long Island Jewish Medical Center [n=73]). The ClaD clinical nomogram yielded an AUC of 0·81 (95% CI 0·76-0·85) for identification of clinically significant prostate cancer in the validation data set, significantly improving performance over the DIN (0·74 [95% CI 0·69-0·80], p=0·0005) and PIN (0·76 [0·71-0·81], p<0·0001) nomograms. 
In the subset of patients who had undergone radical prostatectomy (n=81), the ClaD clinical nomogram resulted in a significant separation in Kaplan-Meier survival curves between patients with and without biochemical recurrence (HR 5·92 [2·34-15·00], p=0·044), whereas the DIN (1·22 [0·54-2·79], p=0·65) and PIN nomograms did not (1·30 [0·62-2·71], p=0·51). INTERPRETATION:Risk stratification of patients with prostate cancer using the integrated ClaD nomogram could help to identify patients with prostate cancer who are at low risk, very low risk, and favourable intermediate risk, who might be candidates for active surveillance, and could also help to identify patients with lethal prostate cancer who might benefit from adjuvant therapy. FUNDING:National Cancer Institute of the US National Institutes of Health, National Institute for Biomedical Imaging and Bioengineering, National Center for Research Resources, US Department of Veterans Affairs Biomedical Laboratory Research and Development Service, US Department of Defense, US National Institute of Diabetes and Digestive and Kidney Diseases, The Ohio Third Frontier Technology Validation Fund, Case Western Reserve University, Dana Foundation, and Clinical and Translational Science Collaborative. 10.1016/S2589-7500(21)00082-0
Development of a Deep Learning Algorithm for the Histopathologic Diagnosis and Gleason Grading of Prostate Cancer Biopsies: A Pilot Study. Kott Ohad,Linsley Drew,Amin Ali,Karagounis Andreas,Jeffers Carleen,Golijanin Dragan,Serre Thomas,Gershman Boris European urology focus BACKGROUND:The pathologic diagnosis and Gleason grading of prostate cancer are time-consuming, error-prone, and subject to interobserver variability. Machine learning offers opportunities to improve the diagnosis, risk stratification, and prognostication of prostate cancer. OBJECTIVE:To develop a state-of-the-art deep learning algorithm for the histopathologic diagnosis and Gleason grading of prostate biopsy specimens. DESIGN, SETTING, AND PARTICIPANTS:A total of 85 prostate core biopsy specimens from 25 patients were digitized at 20× magnification and annotated for Gleason 3, 4, and 5 prostate adenocarcinoma by a urologic pathologist. From these virtual slides, we sampled 14803 image patches of 256×256 pixels, approximately balanced for malignancy. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS:We trained and tested a deep residual convolutional neural network to classify each patch at two levels: (1) coarse (benign vs malignant) and (2) fine (benign vs Gleason 3 vs 4 vs 5). Model performance was evaluated using fivefold cross-validation. Randomization tests were used for hypothesis testing of model performance versus chance. RESULTS AND LIMITATIONS:The model demonstrated 91.5% accuracy (p<0.001) at coarse-level classification of image patches as benign versus malignant (0.93 sensitivity, 0.90 specificity, and 0.95 average precision). The model demonstrated 85.4% accuracy (p<0.001) at fine-level classification of image patches as benign versus Gleason 3 versus Gleason 4 versus Gleason 5 (0.83 sensitivity, 0.94 specificity, and 0.83 average precision), with the greatest number of confusions in distinguishing between Gleason 3 and 4, and between Gleason 4 and 5. Limitations include the small sample size and the need for external validation. CONCLUSIONS:In this study, a deep learning-based computer vision algorithm demonstrated excellent performance for the histopathologic diagnosis and Gleason grading of prostate cancer. PATIENT SUMMARY:We developed a deep learning algorithm that demonstrated excellent performance for the diagnosis and grading of prostate cancer. 10.1016/j.euf.2019.11.003
A Deep Learning Approach to Diagnostic Classification of Prostate Cancer Using Pathology-Radiology Fusion. Journal of magnetic resonance imaging : JMRI BACKGROUND:A definitive diagnosis of prostate cancer requires a biopsy to obtain tissue for pathologic analysis, but this is an invasive procedure and is associated with complications. PURPOSE:To develop an artificial intelligence (AI)-based model (named AI-biopsy) for the early diagnosis of prostate cancer using magnetic resonance (MR) images labeled with histopathology information. STUDY TYPE:Retrospective. POPULATION:Magnetic resonance imaging (MRI) data sets from 400 patients with suspected prostate cancer and with histological data (228 acquired in-house and 172 from external publicly available databases). FIELD STRENGTH/SEQUENCE:1.5 to 3.0 Tesla, T2-weighted image pulse sequences. ASSESSMENT:MR images reviewed and selected by two radiologists (with 6 and 17 years of experience). The patient images were labeled with prostate biopsy including Gleason Score (6 to 10) or Grade Group (1 to 5) and reviewed by one pathologist (with 15 years of experience). Deep learning models were developed to distinguish 1) benign from cancerous tumor and 2) high-risk tumor from low-risk tumor. STATISTICAL TESTS:To evaluate our models, we calculated negative predictive value, positive predictive value, specificity, sensitivity, and accuracy. We also calculated areas under the receiver operating characteristic (ROC) curves (AUCs) and Cohen's kappa. RESULTS:Our computational method (https://github.com/ih-lab/AI-biopsy) achieved AUCs of 0.89 (95% confidence interval [CI]: [0.86-0.92]) and 0.78 (95% CI: [0.74-0.82]) to classify cancer vs. benign and high- vs. low-risk of prostate disease, respectively. DATA CONCLUSION:AI-biopsy provided a data-driven and reproducible way to assess cancer risk from MR images and a personalized strategy to potentially reduce the number of unnecessary biopsies. AI-biopsy highlighted the regions of MR images that contained the predictive features the algorithm used for diagnosis using the class activation map method. It is a fully automatic method with a drag-and-drop web interface (https://ai-biopsy.eipm-research.org) that allows radiologists to review AI-assessed MR images in real time. LEVEL OF EVIDENCE:1 TECHNICAL EFFICACY STAGE: 2. 10.1002/jmri.27599
Illuminating Clues of Cancer Buried in Prostate MR Image: Deep Learning and Expert Approaches. Biomolecules Deep learning algorithms have achieved great success in cancer image classification. However, it is imperative to understand the differences between the deep learning and human approaches. Using an explainable model, we aimed to compare the deep learning-focused regions of magnetic resonance (MR) images with cancerous locations identified by radiologists and pathologists. First, 307 prostate MR images were classified using a well-established deep neural network without locational information of cancers. Subsequently, we assessed whether the deep learning-focused regions overlapped the radiologist-identified targets. Furthermore, pathologists provided histopathological diagnoses on 896 pathological images, and we compared the deep learning-focused regions with the genuine cancer locations through 3D reconstruction of pathological images. The area under the curve (AUC) for MR images classification was sufficiently high (AUC = 0.90, 95% confidence interval 0.87-0.94). Deep learning-focused regions overlapped radiologist-identified targets by 70.5% and pathologist-identified cancer locations by 72.1%. Lymphocyte aggregation and dilated prostatic ducts were observed in non-cancerous regions focused by deep learning. Deep learning algorithms can achieve highly accurate image classification without necessarily identifying radiological targets or cancer locations. Deep learning may find clues that can help a clinical diagnosis even if the cancer is not visible. 10.3390/biom9110673
Deep Learning Improves Speed and Accuracy of Prostate Gland Segmentations on Magnetic Resonance Imaging for Targeted Biopsy. Soerensen Simon John Christoph,Fan Richard E,Seetharaman Arun,Chen Leo,Shao Wei,Bhattacharya Indrani,Kim Yong-Hun,Sood Rewa,Borre Michael,Chung Benjamin I,To'o Katherine J,Rusu Mirabela,Sonn Geoffrey A The Journal of urology PURPOSE:Targeted biopsy improves prostate cancer diagnosis. Accurate prostate segmentation on magnetic resonance imaging (MRI) is critical for accurate biopsy. Manual gland segmentation is tedious and time-consuming. We sought to develop a deep learning model to rapidly and accurately segment the prostate on MRI and to implement it as part of routine magnetic resonance-ultrasound fusion biopsy in the clinic. MATERIALS AND METHODS:A total of 905 subjects underwent multiparametric MRI at 29 institutions, followed by magnetic resonance-ultrasound fusion biopsy at 1 institution. A urologic oncology expert segmented the prostate on axial T2-weighted MRI scans. We trained a deep learning model, ProGNet, on 805 cases. We retrospectively tested ProGNet on 100 independent internal and 56 external cases. We prospectively implemented ProGNet as part of the fusion biopsy procedure for 11 patients. We compared ProGNet performance to 2 deep learning networks (U-Net and holistically-nested edge detector) and radiology technicians. The Dice similarity coefficient (DSC) was used to measure overlap with expert segmentations. DSCs were compared using paired t-tests. RESULTS:ProGNet (DSC=0.92) outperformed U-Net (DSC=0.85, p <0.0001), holistically-nested edge detector (DSC=0.80, p <0.0001), and radiology technicians (DSC=0.89, p <0.0001) in the retrospective internal test set. In the prospective cohort, ProGNet (DSC=0.93) outperformed radiology technicians (DSC=0.90, p <0.0001). ProGNet took just 35 seconds per case (vs 10 minutes for radiology technicians) to yield a clinically utilizable segmentation file. CONCLUSIONS:This is the first study to employ a deep learning model for prostate gland segmentation for targeted biopsy in routine urological clinical practice, while reporting results and releasing the code online. Prospective and retrospective evaluations revealed increased speed and accuracy. 10.1097/JU.0000000000001783
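Segmentation quality above is reported as the Dice similarity coefficient (DSC) against expert contours. A minimal implementation on binary masks is shown below; the masks are synthetic rectangles rather than prostate segmentations.

```python
# Sketch of the Dice similarity coefficient (DSC) between two binary masks.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float(2.0 * intersection / (pred.sum() + truth.sum() + eps))

pred_mask = np.zeros((128, 128), dtype=bool)
true_mask = np.zeros((128, 128), dtype=bool)
pred_mask[30:90, 30:90] = True     # model segmentation (toy)
true_mask[40:100, 40:100] = True   # expert segmentation (toy)
print(f"DSC = {dice(pred_mask, true_mask):.3f}")
```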
Deep Learning for Real-time, Automatic, and Scanner-adapted Prostate (Zone) Segmentation of Transrectal Ultrasound, for Example, Magnetic Resonance Imaging-transrectal Ultrasound Fusion Prostate Biopsy. van Sloun Ruud J G,Wildeboer Rogier R,Mannaerts Christophe K,Postema Arnoud W,Gayet Maudy,Beerlage Harrie P,Salomon Georg,Wijkstra Hessel,Mischi Massimo European urology focus BACKGROUND:Although recent advances in multiparametric magnetic resonance imaging (MRI) led to an increase in MRI-transrectal ultrasound (TRUS) fusion prostate biopsies, these are time consuming, laborious, and costly. Introduction of deep-learning approach would improve prostate segmentation. OBJECTIVE:To exploit deep learning to perform automatic, real-time prostate (zone) segmentation on TRUS images from different scanners. DESIGN, SETTING, AND PARTICIPANTS:Three datasets with TRUS images were collected at different institutions, using an iU22 (Philips Healthcare, Bothell, WA, USA), a Pro Focus 2202a (BK Medical), and an Aixplorer (SuperSonic Imagine, Aix-en-Provence, France) ultrasound scanner. The datasets contained 436 images from 181 men. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS:Manual delineations from an expert panel were used as ground truth. The (zonal) segmentation performance was evaluated in terms of the pixel-wise accuracy, Jaccard index, and Hausdorff distance. RESULTS AND LIMITATIONS:The developed deep-learning approach was demonstrated to significantly improve prostate segmentation compared with a conventional automated technique, reaching median accuracy of 98% (95% confidence interval 95-99%), a Jaccard index of 0.93 (0.80-0.96), and a Hausdorff distance of 3.0 (1.3-8.7) mm. Zonal segmentation yielded pixel-wise accuracy of 97% (95-99%) and 98% (96-99%) for the peripheral and transition zones, respectively. Supervised domain adaptation resulted in retainment of high performance when applied to images from different ultrasound scanners (p > 0.05). Moreover, the algorithm's assessment of its own segmentation performance showed a strong correlation with the actual segmentation performance (Pearson's correlation 0.72, p < 0.001), indicating that possible incorrect segmentations can be identified swiftly. CONCLUSIONS:Fusion-guided prostate biopsies, targeting suspicious lesions on MRI using TRUS are increasingly performed. The requirement for (semi)manual prostate delineation places a substantial burden on clinicians. Deep learning provides a means for fast and accurate (zonal) prostate segmentation of TRUS images that translates to different scanners. PATIENT SUMMARY:Artificial intelligence for automatic delineation of the prostate on ultrasound was shown to be reliable and applicable to different scanners. This method can, for example, be applied to speed up, and possibly improve, guided prostate biopsies using magnetic resonance imaging-transrectal ultrasound fusion. 10.1016/j.euf.2019.04.009
Deep learning-based classification of blue light cystoscopy imaging during transurethral resection of bladder tumors. Ali Nairveen,Bolenz Christian,Todenhöfer Tilman,Stenzel Arnulf,Deetmar Peer,Kriegmair Martin,Knoll Thomas,Porubsky Stefan,Hartmann Arndt,Popp Jürgen,Kriegmair Maximilian C,Bocklitz Thomas Scientific reports Bladder cancer is one of the top 10 frequently occurring cancers and leads to most cancer deaths worldwide. Recently, blue light (BL) cystoscopy-based photodynamic diagnosis was introduced as a unique technology to enhance the detection of bladder cancer, particularly for the detection of flat and small lesions. Here, we aim to demonstrate a BL image-based artificial intelligence (AI) diagnostic platform using 216 BL images, that were acquired in four different urological departments and pathologically identified with respect to cancer malignancy, invasiveness, and grading. Thereafter, four pre-trained convolution neural networks were utilized to predict image malignancy, invasiveness, and grading. The results indicated that the classification sensitivity and specificity of malignant lesions are 95.77% and 87.84%, while the mean sensitivity and mean specificity of tumor invasiveness are 88% and 96.56%, respectively. This small multicenter clinical study clearly shows the potential of AI based classification of BL images allowing for better treatment decisions and potentially higher detection rates. 10.1038/s41598-021-91081-x
ProsRegNet: A deep learning framework for registration of MRI and histopathology images of the prostate. Medical image analysis Magnetic resonance imaging (MRI) is an increasingly important tool for the diagnosis and treatment of prostate cancer. However, interpretation of MRI suffers from high inter-observer variability across radiologists, thereby contributing to missed clinically significant cancers, overdiagnosed low-risk cancers, and frequent false positives. Interpretation of MRI could be greatly improved by providing radiologists with an answer key that clearly shows cancer locations on MRI. Registration of histopathology images from patients who had radical prostatectomy to pre-operative MRI allows such mapping of ground truth cancer labels onto MRI. However, traditional MRI-histopathology registration approaches are computationally expensive and require careful choices of the cost function and registration hyperparameters. This paper presents ProsRegNet, a deep learning-based pipeline to accelerate and simplify MRI-histopathology image registration in prostate cancer. Our pipeline consists of image preprocessing, estimation of affine and deformable transformations by deep neural networks, and mapping cancer labels from histopathology images onto MRI using estimated transformations. We trained our neural network using MR and histopathology images of 99 patients from our internal cohort (Cohort 1) and evaluated its performance using 53 patients from three different cohorts (an additional 12 from Cohort 1 and 41 from two public cohorts). Results show that our deep learning pipeline has achieved more accurate registration results and is at least 20 times faster than a state-of-the-art registration algorithm. This important advance will provide radiologists with highly accurate prostate MRI answer keys, thereby facilitating improvements in the detection of prostate cancer on MRI. Our code is freely available at https://github.com/pimed//ProsRegNet. 10.1016/j.media.2020.101919
Deep Learning-based Recalibration of the CUETO and EORTC Prediction Tools for Recurrence and Progression of Non-muscle-invasive Bladder Cancer. European urology oncology Despite being standard tools for decision-making, the European Organisation for Research and Treatment of Cancer (EORTC), European Association of Urology (EAU), and Club Urologico Espanol de Tratamiento Oncologico (CUETO) risk groups provide moderate performance in predicting recurrence-free survival (RFS) and progression-free survival (PFS) in non-muscle-invasive bladder cancer (NMIBC). In this retrospective combined-cohort data-mining study, the training group consisted of 3570 patients with de novo diagnosed NMIBC. Predictors included gender, age, T stage, histopathological grading, tumor burden and diameter, EORTC and CUETO scores, and type of intravesical treatment. The models developed were externally validated using an independent cohort of 322 patients. Models were trained using Cox proportional-hazards deep neural networks (deep learning; DeepSurv) with a proprietary grid search of hyperparameters. For patients treated with surgery and bacillus Calmette-Guérin-treated patients, the models achieved a c index of 0.650 (95% confidence interval [CI] 0.649-0.650) for RFS and 0.878 (95% CI 0.873-0.874) for PFS in the training group. In the validation group, the c index was 0.651 (95% CI 0.648-0.654) for RFS and 0.881 (95% CI 0.878-0.885) for PFS. After inclusion of patients treated with mitomycin C, the c index for RFS models was 0.6415 (95% CI 0.6412-0.6417) for the training group and 0.660 (95% CI 0.657-0.664) for the validation group. Models for PFS achieved a c index of 0.885 (95% CI 0.885-0.885) for the training set and 0.876 (95% CI 0.873-0.880) for the validation set. Our tool outperformed standard-of-care risk stratification tools and showed no evidence of overfitting. The application is open source and available at https://biostat.umed.pl/deepNMIBC/. PATIENT SUMMARY: We created and validated a new tool to predict recurrence and progression of early-stage bladder cancer. The application uses advanced artificial intelligence to combine state-of-the-art scales, outperforms these scales for prediction, and is freely available online. 10.1016/j.euo.2021.05.006
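The recalibration above trains Cox proportional-hazards deep neural networks (DeepSurv-style models). A compact sketch of that idea, a small MLP producing a log-risk score optimized with the negative Cox partial log-likelihood (Breslow approximation for ties), follows; the network size and synthetic predictors are assumptions, not the published tool.

```python
# Minimal DeepSurv-style sketch: MLP log-risk score trained with the negative
# Cox partial log-likelihood. Data and architecture are illustrative only.
import torch
import torch.nn as nn

def cox_ph_loss(log_risk, time, event):
    """Negative Cox partial log-likelihood (Breslow approximation for ties)."""
    order = torch.argsort(time, descending=True)           # sort by time, descending
    log_risk, event = log_risk[order], event[order]
    log_cum_hazard = torch.logcumsumexp(log_risk, dim=0)   # log-sum over each risk set
    return -((log_risk - log_cum_hazard) * event).sum() / event.sum()

net = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.randn(256, 8)                  # clinical predictors (synthetic)
time = torch.rand(256) * 60              # months to recurrence / censoring
event = (torch.rand(256) < 0.4).float()  # 1 = event observed, 0 = censored

for _ in range(100):
    opt.zero_grad()
    loss = cox_ph_loss(net(x).squeeze(1), time, event)
    loss.backward()
    opt.step()
print(f"final loss: {loss.item():.4f}")
```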
Deep learning for segmentation of 49 selected bones in CT scans: First step in automated PET/CT-based 3D quantification of skeletal metastases. Lindgren Belal Sarah,Sadik May,Kaboteh Reza,Enqvist Olof,Ulén Johannes,Poulsen Mads H,Simonsen Jane,Høilund-Carlsen Poul F,Edenbrandt Lars,Trägårdh Elin European journal of radiology PURPOSE:The aim of this study was to develop a deep learning-based method for segmentation of bones in CT scans and test its accuracy compared to manual delineation, as a first step in the creation of an automated PET/CT-based method for quantifying skeletal tumour burden. METHODS:Convolutional neural networks (CNNs) were trained to segment 49 bones using manual segmentations from 100 CT scans. After training, the CNN-based segmentation method was tested on 46 patients with prostate cancer, who had undergone 18F-choline PET/CT and 18F-NaF PET/CT less than three weeks apart. Bone volumes were calculated from the segmentations. The network's performance was compared with manual segmentations of five bones made by an experienced physician. Accuracy of the spatial overlap between automated CNN-based and manual segmentations of these five bones was assessed using the Sørensen-Dice index (SDI). Reproducibility was evaluated applying the Bland-Altman method. RESULTS:The median (SD) volumes of the five selected bones by CNN and manual segmentation were: Th7 41 (3.8) and 36 (5.1), L3 76 (13) and 75 (9.2), sacrum 284 (40) and 283 (26), 7th rib 33 (3.9) and 31 (4.8), and sternum 80 (11) and 72 (9.2), respectively. Median SDIs were 0.86 (Th7), 0.85 (L3), 0.88 (sacrum), 0.84 (7th rib) and 0.83 (sternum). The intraobserver volume difference was less with the CNN-based than with the manual approach: Th7 2% and 14%, L3 7% and 8%, sacrum 1% and 3%, 7th rib 1% and 6%, sternum 3% and 5%, respectively. The average volume difference, measured as the ratio of the volume difference to the mean volume between the two CNN-based segmentations, was 5-6% for the vertebral column and ribs and ≤3% for other bones. CONCLUSION:The new deep learning-based method for automated segmentation of bones in CT scans provided highly accurate bone volumes in a fast and automated way and, thus, appears to be a valuable first step in the development of a clinically useful processing procedure providing reliable skeletal segmentation as a key part of quantification of skeletal metastases. 10.1016/j.ejrad.2019.01.028
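Reproducibility above is assessed with the Bland-Altman method. The sketch below computes the bias and 95% limits of agreement for paired volume measurements; the values are synthetic placeholders, not the study's bone volumes.

```python
# Sketch of a Bland-Altman reproducibility analysis: bias (mean difference)
# and 95% limits of agreement between two repeated volume measurements.
import numpy as np

vol_a = np.random.normal(80, 10, size=46)          # first segmentation of a bone (toy)
vol_b = vol_a + np.random.normal(0, 3, size=46)    # repeated segmentation (toy)

diff = vol_a - vol_b
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"bias = {bias:.2f}, 95% limits of agreement = [{bias - loa:.2f}, {bias + loa:.2f}]")
```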
Augmented Bladder Tumor Detection Using Deep Learning. Shkolyar Eugene,Jia Xiao,Chang Timothy C,Trivedi Dharati,Mach Kathleen E,Meng Max Q-H,Xing Lei,Liao Joseph C European urology Adequate tumor detection is critical in complete transurethral resection of bladder tumor (TURBT) to reduce cancer recurrence, but up to 20% of bladder tumors are missed by standard white light cystoscopy. Deep learning augmented cystoscopy may improve tumor localization, intraoperative navigation, and surgical resection of bladder cancer. We aimed to develop a deep learning algorithm for augmented cystoscopic detection of bladder cancer. Patients undergoing cystoscopy/TURBT were recruited and white light videos were recorded. Video frames containing histologically confirmed papillary urothelial carcinoma were selected and manually annotated. We constructed CystoNet, an image analysis platform based on convolutional neural networks, for automated bladder tumor detection using a development dataset of 95 patients for algorithm training and five patients for testing. Diagnostic performance of CystoNet was validated prospectively in an additional 54 patients. In the validation dataset, per-frame sensitivity and specificity were 90.9% (95% confidence interval [CI], 90.3-91.6%) and 98.6% (95% CI, 98.5-98.8%), respectively. Per-tumor sensitivity was 90.9% (95% CI, 90.3-91.6%). CystoNet detected 39 of 41 papillary and three of three flat bladder cancers. With high sensitivity and specificity, CystoNet may improve the diagnostic yield of cystoscopy and efficacy of TURBT. PATIENT SUMMARY: Conventional cystoscopy has recognized shortcomings in bladder cancer detection, with implications for recurrence. Cystoscopy augmented with artificial intelligence may improve cancer detection and resection. 10.1016/j.eururo.2019.08.032
Automated differentiation of benign renal oncocytoma and chromophobe renal cell carcinoma on computed tomography using deep learning. Baghdadi Amir,Aldhaam Naif A,Elsayed Ahmed S,Hussein Ahmed A,Cavuoto Lora A,Kauffman Eric,Guru Khurshid A BJU international OBJECTIVES:To develop and evaluate the feasibility of an objective method using artificial intelligence (AI) and image processing in a semi-automated fashion for tumour-to-cortex peak early-phase enhancement ratio (PEER) in order to differentiate CD117(+) oncocytoma from the chromophobe subtype of renal cell carcinoma (ChRCC) using convolutional neural networks (CNNs) on computed tomography imaging. METHODS:The CNN was trained and validated to identify the kidney + tumour areas in images from 192 patients. The tumour type was differentiated through automated measurement of PEER after manual segmentation of tumours. The performance of this diagnostic model was compared with that of manual expert identification and tumour pathology with regard to accuracy, sensitivity and specificity, along with the root-mean-square error (RMSE), for the remaining 20 patients with CD117(+) oncocytoma or ChRCC. RESULTS:The mean ± sd Dice similarity score for segmentation was 0.66 ± 0.14 for the CNN model to identify the kidney + tumour areas. PEER evaluation achieved accuracy of 95% in tumour type classification (100% sensitivity and 89% specificity) compared with the final pathology results (RMSE of 0.15 for PEER ratio). CONCLUSIONS:We have shown that deep learning could help to produce reliable discrimination of CD117(+) benign oncocytoma and malignant ChRCC through PEER measurements obtained by computer vision. 10.1111/bju.14985
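The classifier above rests on a single handcrafted feature, the tumour-to-cortex peak early-phase enhancement ratio (PEER), measured after segmentation. A minimal sketch of computing that ratio from an early-phase CT array and two masks follows; the arrays here are synthetic, and the actual ROI definitions follow the study's own segmentation protocol.

```python
# Sketch of the PEER feature: mean enhancement inside the tumour mask divided
# by mean enhancement inside the renal-cortex mask. Arrays are synthetic.
import numpy as np

def peer(ct_early_phase: np.ndarray, tumor_mask: np.ndarray, cortex_mask: np.ndarray) -> float:
    tumor_hu = ct_early_phase[tumor_mask].mean()
    cortex_hu = ct_early_phase[cortex_mask].mean()
    return float(tumor_hu / cortex_hu)

ct = np.random.normal(120, 30, size=(64, 64)).astype(np.float32)  # early-phase HU values (toy)
tumor = np.zeros_like(ct, dtype=bool); tumor[20:30, 20:30] = True
cortex = np.zeros_like(ct, dtype=bool); cortex[40:55, 10:25] = True
print(f"PEER = {peer(ct, tumor, cortex):.2f}")
```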
Deep Learning for Natural Language Processing in Urology: State-of-the-Art Automated Extraction of Detailed Pathologic Prostate Cancer Data From Narratively Written Electronic Health Records. Leyh-Bannurah Sami-Ramzi,Tian Zhe,Karakiewicz Pierre I,Wolffgang Ulrich,Sauter Guido,Fisch Margit,Pehrke Dirk,Huland Hartwig,Graefen Markus,Budäus Lars JCO clinical cancer informatics PURPOSE:Entering all information from narrative documentation for clinical research into databases is time consuming, costly, and nearly impossible. Even high-volume databases do not cover all patient characteristics and drawn results may be limited. A new viable automated solution is machine learning based on deep neural networks applied to natural language processing (NLP), extracting detailed information from narratively written (eg, pathologic radical prostatectomy [RP]) electronic health records (EHRs). METHODS:Within an RP pathologic database, 3,679 RP EHRs were randomly split into 70% training and 30% test data sets. Training EHRs were automatically annotated, providing a semiautomatically annotated corpus of narratively written pathologic reports with initially context-free gold standard encodings. Primary and secondary Gleason pattern, corresponding percentages, tumor stage, nodal stage, total volume, tumor volume and diameter, and surgical margin were variables of interest. Second, state-of-the-art NLP techniques were used to train an industry-standard language model for pathologic EHRs by transfer learning. Finally, accuracy of the named entity extractors was compared with the gold standard encodings. RESULTS:Agreement rates (95% confidence interval) for primary and secondary Gleason patterns each were 91.3% (89.4 to 93.0), corresponding to the following: Gleason percentages, 70.5% (67.6 to 73.3) and 80.9% (78.4 to 83.3); tumor stage, 99.3% (98.6 to 99.7); nodal stage, 98.7% (97.8 to 99.3); total volume, 98.3% (97.3 to 99.0); tumor volume, 93.3% (91.6 to 94.8); maximum diameter, 96.3% (94.9 to 97.3); and surgical margin, 98.7% (97.8 to 99.3). Cumulative agreement was 91.3%. CONCLUSION:Our proposed NLP pipeline offers new abilities for precise and efficient data management from narrative documentation for clinical research. The scalable approach potentially allows the NLP pipeline to be generalized to other genitourinary EHRs, tumor entities, and other medical disciplines. 10.1200/CCI.18.00080
A deep learning-based approach for the diagnosis of adrenal adenoma: a new trial using CT. The British journal of radiology OBJECTIVE:To develop and validate deep convolutional neural network (DCNN) models for the diagnosis of adrenal adenoma (AA) using CT. METHODS:This retrospective study enrolled 112 patients who underwent abdominal CT (non-contrast, early, and delayed phases) with 107 adrenal lesions (83 AAs and 24 non-AAs) confirmed pathologically and with 8 lesions confirmed by follow-up as metastatic carcinomas. Three patients had adrenal lesions on both sides. We constructed six DCNN models from six types of input images for comparison: non-contrast images only (Model A), delayed phase images only (Model B), three phasic images merged into a 3-channel (Model C), relative washout rate (RWR) image maps only (Model D), non-contrast and RWR maps merged into a 2-channel (Model E), and delayed phase and RWR maps merged into a 2-channel (Model F). These input images were prepared manually with cropping and registration of CT images. Each DCNN model with six convolutional layers was trained with data augmentation and hyperparameter tuning. The optimal threshold values for binary classification were determined from the receiver-operating characteristic curve analyses. We adopted the nested cross-validation method, in which the outer fivefold cross-validation was used to assess the diagnostic performance of the models and the inner fivefold cross-validation was used to tune hyperparameters of the models. RESULTS:The areas under the curve with 95% confidence intervals of Models A-F were 0.94 [0.90, 0.98], 0.80 [0.69, 0.89], 0.97 [0.94, 1.00], 0.92 [0.85, 0.97], 0.99 [0.97, 1.00] and 0.94 [0.86, 0.99], respectively. Model E showed high area under the curve greater than 0.95. CONCLUSION:DCNN models may be a useful tool for the diagnosis of AA using CT. ADVANCES IN KNOWLEDGE:The current study demonstrates a deep learning-based approach could differentiate adrenal adenoma from non-adenoma using multiphasic CT. 10.1259/bjr.20211066
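The abstract notes that optimal binary-classification thresholds were derived from ROC curve analysis, without specifying the criterion. The sketch below uses Youden's J statistic as one common choice; the scores and labels are synthetic.

```python
# Sketch: pick a classification cutoff from the ROC curve via Youden's J.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.random.randint(0, 2, size=300)
y_score = np.clip(y_true * 0.3 + np.random.rand(300) * 0.7, 0, 1)  # weakly informative scores

fpr, tpr, thresholds = roc_curve(y_true, y_score)
best = np.argmax(tpr - fpr)                       # maximize sensitivity + specificity - 1
print(f"AUC = {roc_auc_score(y_true, y_score):.3f}, "
      f"optimal threshold = {thresholds[best]:.3f} "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```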
Current and future applications of machine and deep learning in urology: a review of the literature on urolithiasis, renal cell carcinoma, and bladder and prostate cancer. Suarez-Ibarrola Rodrigo,Hein Simon,Reis Gerd,Gratzke Christian,Miernik Arkadiusz World journal of urology PURPOSE:The purpose of the study was to provide a comprehensive review of recent machine learning (ML) and deep learning (DL) applications in urological practice. Numerous studies have reported their use in the medical care of various urological disorders; however, no critical analysis has been made to date. METHODS:A detailed search of original articles was performed using the PubMed MEDLINE database to identify recent English literature relevant to ML and DL applications in the fields of urolithiasis, renal cell carcinoma (RCC), bladder cancer (BCa), and prostate cancer (PCa). RESULTS:In total, 43 articles were included addressing these four subfields. The most common ML and DL application in urolithiasis is in the prediction of endourologic surgical outcomes. The main area of research involving ML and DL in RCC concerns the differentiation between benign and malignant small renal masses, Fuhrman nuclear grade prediction, and gene expression-based molecular signatures. BCa studies employ radiomics and texture feature analysis for the distinction between low- and high-grade tumors, address accurate image-based cytology, and use algorithms to predict treatment response, tumor recurrence, and patient survival. PCa studies aim at developing algorithms for Gleason score prediction, MRI computer-aided diagnosis, and surgical outcomes and biochemical recurrence prediction. Studies consistently found the superiority of these methods over traditional statistical methods. CONCLUSIONS:The continuous incorporation of clinical data, further ML and DL algorithm retraining, and generalizability of models will augment the prediction accuracy and enhance individualized medicine. 10.1007/s00345-019-03000-5
Image Enhancement Model Based on Deep Learning Applied to the Ureteroscopic Diagnosis of Ureteral Stones during Pregnancy. Computational and mathematical methods in medicine OBJECTIVE:To explore a deep learning-based image enhancement model and the effect of ureteroscopy with double J tube placement and drainage on ureteral stones during pregnancy. We compare the clinical effect of ureteroscopy with double J tube placement for pregnancy complicated by ureteral stones and use medical imaging to diagnose the patient's condition and design a treatment plan. METHODS:The image enhancement model is constructed using deep learning and implemented to improve image clarity. In this way, the relationship between the media transmittance and the image with blurring artifacts was established, and the model can estimate a predicted ureteral stone map for each region. First, we proposed an evolution-based detail enhancement method. Then, a feature extraction network is used to capture blurring artifact-related features. Finally, a regression subnetwork is used to predict the media transmittance in the local area. Eighty pregnant patients with ureteral calculi treated in our hospital were selected as the research subjects and were divided into a test group and a control group according to the random number table method, with 40 cases in each group. The test group underwent ureteroscopic double J tube placement, and the control group underwent ureteroscopic lithotripsy. Combined with the ultrasound scan results of the patients before and after the operation, the operation time, time to get out of bed, and hospitalization time of the two groups of patients were compared. The operative success rate and the incidence of complications within 1 month after surgery were recorded for the two groups. RESULTS:We were able to improve the quality of the images prior to medical diagnosis. The total effective rate of the observation group was 100.0%, which was higher than that of the control group (90.0%). The difference between the two groups was statistically significant (P < 0.05). The adverse reaction rate in the observation group was 5.0%, which was lower than the 17.5% in the control group. The difference between the two groups was statistically significant (P < 0.05). The comparison results were then compiled. CONCLUSIONS:The image enhancement model based on deep learning improves medical image quality and can assist radiologists in locating ureteral stones. Based on our method, double J tube placement under ureteroscopy has a significant effect in the treatment of ureteral stones during pregnancy, with good safety, and is worthy of widespread application. 10.1155/2021/9548312
Deep learning approach to predict lymph node metastasis directly from primary tumour histology in prostate cancer. Wessels Frederik,Schmitt Max,Krieghoff-Henning Eva,Jutzi Tanja,Worst Thomas S,Waldbillig Frank,Neuberger Manuel,Maron Roman C,Steeg Matthias,Gaiser Timo,Hekler Achim,Utikal Jochen S,von Kalle Christof,Fröhling Stefan,Michel Maurice S,Nuhn Philipp,Brinker Titus J BJU international OBJECTIVE:To develop a new digital biomarker based on the analysis of primary tumour tissue by a convolutional neural network (CNN) to predict lymph node metastasis (LNM) in a cohort matched for already established risk factors. PATIENTS AND METHODS:Haematoxylin and eosin (H&E) stained primary tumour slides from 218 patients (102 N+; 116 N0), matched for Gleason score, tumour size, venous invasion, perineural invasion and age, who underwent radical prostatectomy were selected to train a CNN and evaluate its ability to predict LN status. RESULTS:With 10 models trained with the same data, a mean area under the receiver operating characteristic curve (AUROC) of 0.68 (95% confidence interval [CI] 0.678-0.682) and a mean balanced accuracy of 61.37% (95% CI 60.05-62.69%) was achieved. The mean sensitivity and specificity was 53.09% (95% CI 49.77-56.41%) and 69.65% (95% CI 68.21-71.1%), respectively. These results were confirmed via cross-validation. The probability score for LNM prediction was significantly higher on image sections from N+ samples (mean [SD] N+ probability score 0.58 [0.17] vs 0.47 [0.15] N0 probability score, P = 0.002). In multivariable analysis, the probability score of the CNN (odds ratio [OR] 1.04 per percentage probability, 95% CI 1.02-1.08; P = 0.04) and lymphovascular invasion (OR 11.73, 95% CI 3.96-35.7; P < 0.001) proved to be independent predictors for LNM. CONCLUSION:In our present study, CNN-based image analyses showed promising results as a potential novel low-cost method to extract relevant prognostic information directly from H&E histology to predict the LN status of patients with prostate cancer. Our ubiquitously available technique might contribute to an improved LN status prediction. 10.1111/bju.15386
A real-time system using deep learning to detect and track ureteral orifices during urinary endoscopy. Liu Dingyi,Peng Xin,Liu Xiaoqing,Li Yiming,Bao Yiming,Xu Jianwei,Bian Xianzhang,Xue Wei,Qian Dahong Computers in biology and medicine BACKGROUND AND OBJECTIVE:To automatically identify and locate various types and states of the ureteral orifice (UO) in real endoscopy scenarios, we developed and verified a real-time computer-aided UO detection and tracking system using an improved real-time deep convolutional neural network and a robust tracking algorithm. METHODS:The single-shot multibox detector (SSD) was refined to perform the detection task. We trained both the SSD and Refined-SSD using 447 resectoscopy images with UO and tested them on 818 ureteroscopy images. We also evaluated the detection performance on endoscopy video frames, which comprised 892 resectoscopy frames and 1366 ureteroscopy frames. UOs could not be identified with certainty because sometimes they appeared on the screen in a closed state of peristaltic contraction. To mitigate this problem and mimic the inspection behavior of urologists, we integrated the SSD and Refined-SSD with five different tracking algorithms. RESULTS:When tested on 818 ureteroscopy images, our proposed UO detection network, Refined-SSD, achieved an accuracy of 0.902. In the video sequence analysis, our detection model yielded test sensitivities of 0.840 and 0.922 on resectoscopy and ureteroscopy video frames, respectively. In addition, by testing Refined-SSD on 1366 ureteroscopy video frames, the sensitivity achieved a value of 0.922, and a lowest false positive per image of 0.049 was obtained. For UO tracking performance, our proposed UO detection and tracking system (Refined-SSD integrated with CSRT) performed the best overall. At an overlap threshold of 0.5, the success rate of our proposed UO detection and tracking system was greater than 0.95 on 17 resectoscopy video clips and achieved nearly 0.95 on 40 ureteroscopy video clips. CONCLUSIONS:We developed a deep learning system that could be used for detecting and tracking UOs in endoscopy scenarios in real time. This system can simultaneously maintain high accuracy. This approach has great potential to serve as an excellent learning and feedback system for trainees and new urologists in clinical settings. 10.1016/j.compbiomed.2020.104104
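Coupling a per-frame detector with the CSRT tracker used above can be sketched with OpenCV. The detect_uo function and the video path are hypothetical placeholders standing in for whatever detector (such as an SSD variant) supplies the initial bounding box; this is not the authors' implementation.

```python
# Sketch: initialise a CSRT tracker from a detector's bounding box and
# follow the ureteral orifice across subsequent frames.
# detect_uo() and the video path are assumptions, not the paper's code.
import cv2

def detect_uo(frame):
    """Placeholder for an SSD-style detector returning (x, y, w, h) or None."""
    return None

cap = cv2.VideoCapture("ureteroscopy_clip.mp4")  # hypothetical file
tracker = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if tracker is None:
        box = detect_uo(frame)
        if box is not None:
            # cv2.legacy.TrackerCSRT_create in some OpenCV builds
            tracker = cv2.TrackerCSRT_create()
            tracker.init(frame, box)
    else:
        ok, box = tracker.update(frame)
        if not ok:
            tracker = None  # fall back to detection on the next frame
cap.release()
```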
Assessing kidney stone composition using smartphone microscopy and deep neural networks. BJUI compass Objectives:To propose a point-of-care image recognition system for kidney stone composition classification using smartphone microscopy and deep convolutional neural networks. Materials and methods:A total of 37 surgically extracted human kidney stones consisting of calcium oxalate (CaOx), cystine, uric acid (UA) and struvite stones were included in the study. All of the stones were fragmented from percutaneous nephrolithotomy (PCNL). The stones were classified using Fourier transform infrared spectroscopy (FTIR) analysis before obtaining smartphone microscope images. The size of the stones ranged from 5 to 10 mm in diameter. Nurugo 400× smartphone microscope (Nurugo, Seoul, Republic of Korea) was functionalized to acquire microscopic images (magnification = 25×) of dry kidney stones using iPhone 6s+ (Apple, Cupertino, CA, USA). Each kidney stone was imaged in six different locations. In total, 222 images were captured from 37 stones. A novel convolutional neural network architecture was built for classification, and the model was assessed using accuracy, positive predictive value, sensitivity and F1 scores. Results:We achieved an overall and weighted accuracy of 88% and 87%, respectively, with an average F1 score of 0.84. The positive predictive value, sensitivity and F1 score for each stone type were respectively reported as follows: CaOx (0.82, 0.83, 0.82), cystine (0.80, 0.88, 0.84), UA (0.92, 0.77, 0.85) and struvite (0.86, 0.84, 0.85). Conclusion:We demonstrate a rapid and accurate point of care diagnostics method for classifying the four types of kidney stones. In the future, diagnostic tools that combine smartphone microscopy with artificial intelligence (AI) can provide accessible health care that can support physicians in their decision-making process. 10.1002/bco2.137
A deep learning system to diagnose the malignant potential of urothelial carcinoma cells in cytology specimens. Cancer cytopathology BACKGROUND:Although deep learning algorithms for clinical cytology have recently been developed, their application to practical assistance systems has not been achieved. In addition, whether deep learning systems (DLSs) can perform diagnoses that cannot be performed by pathologists has not been fully evaluated. METHODS:The authors initially obtained low-power field cytology images from archived Papanicolaou-stained urinary cytology glass slides from 232 patients. To aid in the development of a diagnosis support system that could identify suspicious atypical cells, the images were divided into high-power field panel image sets for training and testing of the 16-layer Visual Geometry Group convolutional neural network. The DLS was trained using linked information pertaining to whether urothelial carcinoma (UC) in the corresponding histology specimen was invasive or noninvasive, or high-grade or low-grade, followed by an evaluation of whether the DLS could diagnose these characteristics. RESULTS:The DLS achieved excellent performance (eg, an area under the curve [AUC] of 0.9890; F1 score, 0.9002) when trained on high-power field images of malignant and benign cases. The DLS could diagnose whether the lesions were invasive UC (AUC, 0.8628; F1 score, 0.8239) or high-grade UC (AUC, 0.8661; F1 score, 0.8218). Gradient-weighted class activation mapping of these images indicated that the diagnoses were based on the color of tumor cell nuclei. CONCLUSIONS:The DLS could accurately screen UC cells and determine the malignant potential of tumors more accurately than classical cytology. The use of a DLS during cytopathology screening could help urologists plan therapeutic strategies, which, in turn, may be beneficial for patients. 10.1002/cncy.22443
Urine cell image recognition using a deep-learning model for an automated slide evaluation system. BJU international OBJECTIVES:To develop a classification system for urine cytology with artificial intelligence (AI) using a convolutional neural network algorithm that classifies urine cell images as negative (benign) or positive (atypical or malignant). PATIENTS AND METHODS:We collected 195 urine cytology slides from consecutive patients with a histologically confirmed diagnosis of urothelial cancer (between January 2016 and December 2017). Two certified cytotechnologists independently evaluated and labelled each slide; 4637 cell images with concordant diagnoses were selected, including 3128 benign cells (negative), 398 atypical cells, and 1111 cells that were malignant or suspicious for malignancy (positive). This pathologically confirmed labelled dataset was used to represent the ground truth for AI training/validation/testing. Customized CutMix (CircleCut) and Refined Data Augmentation were used for image processing. The model architecture included EfficientNet B6 and Arcface. We used 80% of the data for training and validation (4:1 ratio) and 20% for testing. Model performance was evaluated with fivefold cross-validation. A receiver-operating characteristic (ROC) analysis was used to evaluate the binary classification model. Bayesian posterior probabilities for the AI performance measure (Y) and cytotechnologist performance measure (X) were compared. RESULTS:The area under the ROC curve was 0.99 (95% confidence interval [CI] 0.98-0.99), the highest accuracy was 95% (95% CI 94-97), sensitivity was 97% (95% CI 95-99), and specificity was 95% (95% CI 93-97). The accuracy of AI surpassed the highest level of cytotechnologists for the binary classification [Pr(Y > X) = 0.95]. AI achieved >90% accuracy for all cell subtypes. In the subgroup analysis based on the clinicopathological characteristics of patients who provided the test cells, the accuracy of AI ranged between 89% and 97%. CONCLUSION:Our novel AI classification system for urine cytology successfully classified all cell subtypes with an accuracy of higher than 90%, and achieved diagnostic accuracy of malignancy superior to the highest level achieved by cytotechnologists. 10.1111/bju.15518
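The CircleCut augmentation mentioned above is a customized variant of CutMix. For orientation, here is a minimal sketch of standard rectangular CutMix on an image batch; the array shapes and Beta parameter are assumptions, and the paper's circular variant is not reproduced.

```python
# Standard CutMix on a batch of images: paste a random rectangle from a
# shuffled copy of the batch and mix the labels by the covered area.
# This is the generic technique, not the paper's customized "CircleCut".
import numpy as np

def cutmix(images, labels, alpha=1.0, rng=np.random.default_rng(0)):
    """images: (N, H, W, C) float array; labels: (N, num_classes) one-hot."""
    n, h, w, _ = images.shape
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(n)

    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = rng.integers(h), rng.integers(w)
    y1, y2 = np.clip(cy - cut_h // 2, 0, h), np.clip(cy + cut_h // 2, 0, h)
    x1, x2 = np.clip(cx - cut_w // 2, 0, w), np.clip(cx + cut_w // 2, 0, w)

    mixed = images.copy()
    mixed[:, y1:y2, x1:x2, :] = images[perm, y1:y2, x1:x2, :]
    area = (y2 - y1) * (x2 - x1) / (h * w)              # fraction replaced
    mixed_labels = (1 - area) * labels + area * labels[perm]
    return mixed, mixed_labels
```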
A pyramidal deep learning pipeline for kidney whole-slide histology images classification. Abdeltawab Hisham,Khalifa Fahmi,Ghazal Mohammed,Cheng Liang,Gondim Dibson,El-Baz Ayman Scientific reports Renal cell carcinoma is the most common type of kidney cancer. There are several subtypes of renal cell carcinoma with distinct clinicopathologic features. Among the subtypes, clear cell renal cell carcinoma is the most common and tends to portend poor prognosis. In contrast, clear cell papillary renal cell carcinoma has an excellent prognosis. These two subtypes are primarily classified based on the histopathologic features. However, a subset of cases can have a significant degree of histopathologic overlap. In cases with ambiguous histologic features, the correct diagnosis is dependent on the pathologist's experience and usage of immunohistochemistry. We propose a new method to address this diagnostic task based on a deep learning pipeline for automated classification. The model can detect tumor and non-tumoral portions of kidney and classify the tumor as either clear cell renal cell carcinoma or clear cell papillary renal cell carcinoma. Our framework consists of three convolutional neural networks and the whole slide images of the kidney, which were divided into patches of three different sizes for input into the networks. Our approach can provide patchwise and pixelwise classification. The kidney histology images consist of 64 whole slide images. Our framework results in an image map that classifies the slide image at the pixel level. Furthermore, we applied generalized Gauss-Markov random field smoothing to maintain consistency in the map. Our approach classified the four classes accurately and surpassed other state-of-the-art methods, such as ResNet (pixel accuracy: 0.89 Resnet18, 0.92 proposed). We conclude that deep learning has the potential to augment the pathologist's capabilities by providing automated classification for histopathological images. 10.1038/s41598-021-99735-6
Deep Learning in Urological Images Using Convolutional Neural Networks: An Artificial Intelligence Study. Turkish journal of urology OBJECTIVE:To determine whether artificial intelligence with a deep learning algorithm can reliably differentiate vesicoureteral reflux from hydronephrosis. MATERIAL AND METHODS:An online dataset of vesicoureteral reflux and hydronephrosis images was abstracted. We developed an image analysis and deep learning workflow. The model was trained on the images to distinguish between vesicoureteral reflux and hydronephrosis. The discriminative capability was quantified using receiver-operating characteristic curve analysis. We used scikit-learn to interpret the model. RESULTS:Thirty-nine hydronephrosis images and 42 vesicoureteral reflux images were abstracted from the online dataset. The images were randomly divided into training and validation sets: 68 cases for training and 13 for validation. Inference on 2 test cases returned predicted probabilities of [[0.00006]] for hydronephrosis and [[0.99874]] for vesicoureteral reflux. CONCLUSION:This study provides a high-level overview of building a deep neural network for urological image classification. It is concluded that artificial intelligence with deep learning methods can be applied to differentiate urological images. 10.5152/tud.2022.22030
Real-time deep learning semantic segmentation during intra-operative surgery for 3D augmented reality assistance. Tanzi Leonardo,Piazzolla Pietro,Porpiglia Francesco,Vezzetti Enrico International journal of computer assisted radiology and surgery PURPOSE:The current study aimed to propose a Deep Learning (DL) and Augmented Reality (AR) based solution for in-vivo robot-assisted radical prostatectomy (RARP), to improve the precision of a published work from our group. We implemented a two-step automatic system to align a 3D virtual ad-hoc model of a patient's organ with its 2D endoscopic image, to assist surgeons during the procedure. METHODS:This approach was carried out using a Convolutional Neural Network (CNN) based structure for semantic segmentation and a subsequent elaboration of the obtained output, which produced the parameters needed for attaching the 3D model. We used a dataset obtained from 5 endoscopic videos (A, B, C, D, E), selected and tagged by our team's specialists. We then evaluated the best-performing combination of segmentation architecture and neural network and tested the overlay performance. RESULTS:U-Net stood out as the most effective architecture for segmentation. ResNet and MobileNet obtained similar Intersection over Union (IoU) results, but MobileNet was able to perform almost twice as many operations per second. This segmentation technique outperformed the results from the former work, obtaining an average IoU for the catheter of 0.894 (σ = 0.076) compared to 0.339 (σ = 0.195). These modifications also led to an improvement in the 3D overlay performance, in particular in the Euclidean distance between the predicted and actual model's anchor point, from 12.569 (σ = 4.456) to 4.160 (σ = 1.448), and in the geodesic distance between the predicted and actual model's rotations, from 0.266 (σ = 0.131) to 0.169 (σ = 0.073). CONCLUSION:This work is a further step towards the adoption of DL and AR in the surgical domain. In future works, we will overcome the limits of this approach and further improve every step of the surgical procedure. 10.1007/s11548-021-02432-y
A deep learning framework for real-time 3D model registration in robot-assisted laparoscopic surgery. The international journal of medical robotics + computer assisted surgery : MRCAS INTRODUCTION:The current study presents a deep learning framework to determine, in real time, the position and rotation of a target organ from an endoscopic video. These inferred data are used to overlay the 3D model of the patient's organ over its real counterpart. The resulting augmented video flow is streamed back to the surgeon as support during laparoscopic robot-assisted procedures. METHODS:This framework exploits semantic segmentation; thereafter, two techniques, based on Convolutional Neural Networks and motion analysis, were used to infer the rotation. RESULTS:The segmentation shows optimal accuracies, with a mean IoU score greater than 80% in all tests. Different performance levels are obtained for rotation, depending on the surgical procedure. DISCUSSION:Although the presented methodology has varying degrees of precision depending on the testing scenario, this work represents a first step towards the adoption of deep learning and augmented reality to generalise the automatic registration process. 10.1002/rcs.2387
Computer-aided diagnosis with a convolutional neural network algorithm for automated detection of urinary tract stones on plain X-ray. Kobayashi Masaki,Ishioka Junichiro,Matsuoka Yoh,Fukuda Yuichi,Kohno Yusuke,Kawano Keizo,Morimoto Shinji,Muta Rie,Fujiwara Motohiro,Kawamura Naoko,Okuno Tetsuo,Yoshida Soichiro,Yokoyama Minato,Suda Rumi,Saiki Ryota,Suzuki Kenji,Kumazawa Itsuo,Fujii Yasuhisa BMC urology BACKGROUND:Recent increased use of medical images induces further burden of their interpretation for physicians. A plain X-ray is a low-cost examination that has low-dose radiation exposure and high availability, although diagnosing urolithiasis using this method is not always easy. Since the advent of a convolutional neural network via deep learning in the 2000s, computer-aided diagnosis (CAD) has had a great impact on automatic image analysis in the urological field. The objective of our study was to develop a CAD system with deep learning architecture to detect urinary tract stones on a plain X-ray and to evaluate the model's accuracy. METHODS:We collected plain X-ray images of 1017 patients with a radio-opaque upper urinary tract stone. X-ray images (n = 827 and 190) were used as the training and test data, respectively. We used a 17-layer Residual Network as a convolutional neural network architecture for patch-wise training. The training data were repeatedly used until the best model accuracy was achieved within 300 runs. The F score, which is a harmonic mean of the sensitivity and positive predictive value (PPV) and represents the balance of the accuracy, was measured to evaluate the model's accuracy. RESULTS:Using deep learning, we developed a CAD model that needed 110 ms to provide an answer for each X-ray image. The best F score was 0.752, and the sensitivity and PPV were 0.872 and 0.662, respectively. When limited to a proximal ureter stone, the sensitivity and PPV were 0.925 and 0.876, respectively, and they were the lowest at mid-ureter. CONCLUSION:CAD of a plain X-ray may be a promising method to detect radio-opaque urinary tract stones with satisfactory sensitivity although the PPV could still be improved. The CAD model detects urinary tract stones quickly and automatically and has the potential to become a helpful screening modality especially for primary care physicians for diagnosing urolithiasis. Further study using a higher volume of data would improve the diagnostic performance of CAD models to detect urinary tract stones on a plain X-ray. 10.1186/s12894-021-00874-9
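The F score reported above is the harmonic mean of sensitivity and positive predictive value; a short sketch of that calculation follows, using the values quoted in the abstract.

```python
# F score as the harmonic mean of sensitivity (recall) and PPV (precision).
def f_score(sensitivity, ppv):
    if sensitivity + ppv == 0:
        return 0.0
    return 2 * sensitivity * ppv / (sensitivity + ppv)

# Values reported in the abstract: sensitivity 0.872, PPV 0.662.
print(round(f_score(0.872, 0.662), 3))  # ~0.75, consistent with the reported 0.752
```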
A Deep Learning Pipeline for Grade Groups Classification Using Digitized Prostate Biopsy Specimens. Sensors (Basel, Switzerland) Prostate cancer is a significant cause of morbidity and mortality in the USA. In this paper, we develop a computer-aided diagnostic (CAD) system for automated grade groups (GG) classification using digitized prostate biopsy specimens (PBSs). Our CAD system first classifies the Gleason pattern (GP) and then identifies the Gleason score (GS) and GG. The GP classification pipeline is based on a pyramidal deep learning system that utilizes three convolutional neural networks (CNNs) to produce both patch- and pixel-wise classifications. The analysis starts with sequential preprocessing steps that include a histogram equalization step to adjust intensity values, followed by edge enhancement of the PBSs. The digitized PBSs are then divided into overlapping patches of three sizes, 100 × 100 (CNNS), 150 × 150 (CNNM), and 200 × 200 (CNNL) pixels, with 75% overlap. These three patch sizes represent the three pyramidal levels; the larger patches capture more global information, while the smaller patches provide local detail. The patch-wise stage assigns each overlapping patch a GP category label (1 to 5), and majority voting over the overlapping patches then yields the pixel-wise classification, giving a single label to each pixel. Applying these steps at the three pyramidal levels produces three labelled images of the same size as the original, and majority voting is applied once more across these three images to obtain a single map. The proposed framework is trained, validated, and tested on 608 whole slide images (WSIs) of digitized PBSs. The overall diagnostic accuracy is evaluated using several metrics: precision, recall, F1-score, accuracy, macro-averaged, and weighted-averaged. CNNL has the best patch classification accuracy among the three CNNs, at 0.76. The macro-averaged and weighted-averaged metrics are around 0.70-0.77. For GG, our CAD results are about 80% for precision, between 60% and 80% for recall and F1-score, and around 94% for accuracy and NPV. To contextualize our CAD system's results, we compared its patch-wise classification with the standard ResNet50 and VGG-16, and compared the GG results with those of previous work. 10.3390/s21206708
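As a rough illustration of the pyramidal patching and voting described above, the sketch below tiles an image with overlapping patches at three sizes (75% overlap), classifies each patch, and fuses the labels into a pixel-wise map. It simplifies the paper's two-stage voting into a single accumulation, and classify_patch is a hypothetical placeholder for the trained per-level CNN.

```python
# Sketch of the pyramidal patch idea: tile an image with overlapping patches
# at several sizes, classify each patch, and fuse the patch labels into a
# pixel-wise map by majority voting.
import numpy as np

def classify_patch(patch, level):
    """Placeholder returning a Gleason pattern label in {1..5}."""
    return 1

def pixelwise_vote(image, patch_sizes=(100, 150, 200), n_classes=5):
    h, w = image.shape[:2]
    votes = np.zeros((h, w, n_classes), dtype=np.int32)
    for level, size in enumerate(patch_sizes):
        stride = max(1, size // 4)              # 75% overlap between patches
        for y in range(0, h - size + 1, stride):
            for x in range(0, w - size + 1, stride):
                label = classify_patch(image[y:y + size, x:x + size], level)
                votes[y:y + size, x:x + size, label - 1] += 1
    return votes.argmax(axis=-1) + 1            # majority-voted label per pixel
```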
Classification of renal tumour using convolutional neural networks to detect oncocytoma. Pedersen Mikkel,Andersen Michael Brun,Christiansen Henning,Azawi Nessn H European journal of radiology PURPOSE:To investigate the ability of convolutional neural networks (CNNs) to facilitate differentiation of oncocytoma from renal cell carcinoma (RCC) using non-invasive imaging technology. METHODS:Data were collected from 369 patients between January 2015 and September 2018. True labelling of scans as benign or malignant was determined by subsequent histological findings post-surgery or ultrasound-guided percutaneous biopsy. The data included 20,000 2D CT images. Data were randomly divided into sets for training (70 %), validation (10 %) and independent testing (20 %, DataTest_1). A small dataset (DataTest_2) was used for additional validation of the training model. Data were divided into sets at the patient level, rather than by individual image. A modified version of the ResNet50V2 was used. Accuracy of detecting benign or malignant renal mass was evaluated by a 51 % majority vote of individual image classifications to determine the classification for each patient. RESULTS:Test results from DataTest_1 indicate an area under the curve (AUC) of 0.973 with 93.3 % accuracy and 93.5 % specificity. Results from DataTest_2 indicate an AUC of 0.946 with 90.0 % accuracy and 98.0 % specificity when evaluation is performed image by image. There is no case in which multiple false negative images originate from the same patient. When evaluated with 51 % majority of scans for each patient, the accuracy rises to 100 % and the incidence of false negatives falls to zero. CONCLUSION:CNNs and deep learning technology can classify renal tumour masses as oncocytoma with high accuracy. This diagnostic method could prevent overtreatment for patients with renal masses. 10.1016/j.ejrad.2020.109343
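The 51% patient-level majority vote described above is straightforward to express directly; the sketch below assumes per-image malignancy predictions grouped by a patient identifier, with illustrative names and values.

```python
# Aggregate per-image binary predictions (1 = malignant) into one label per
# patient using a strict >50% ("51%") majority vote.
from collections import defaultdict

def patient_majority_vote(image_preds):
    """image_preds: iterable of (patient_id, prediction) with prediction in {0, 1}."""
    per_patient = defaultdict(list)
    for pid, pred in image_preds:
        per_patient[pid].append(pred)
    return {pid: int(sum(p) / len(p) > 0.5) for pid, p in per_patient.items()}

# Illustrative example: patient "A" has 3 of 4 images called malignant.
print(patient_majority_vote([("A", 1), ("A", 1), ("A", 0), ("A", 1), ("B", 0)]))
```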
Detection of Pathogenic Variants With Germline Genetic Testing Using Deep Learning vs Standard Methods in Patients With Prostate Cancer and Melanoma. JAMA Importance:Less than 10% of patients with cancer have detectable pathogenic germline alterations, which may be partially due to incomplete pathogenic variant detection. Objective:To evaluate if deep learning approaches identify more germline pathogenic variants in patients with cancer. Design, Setting, and Participants:A cross-sectional study of a standard germline detection method and a deep learning method in 2 convenience cohorts with prostate cancer and melanoma enrolled in the US and Europe between 2010 and 2017. The final date of clinical data collection was December 2017. Exposures:Germline variant detection using standard or deep learning methods. Main Outcomes and Measures:The primary outcomes included pathogenic variant detection performance in 118 cancer-predisposition genes estimated as sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). The secondary outcomes were pathogenic variant detection performance in 59 genes deemed actionable by the American College of Medical Genetics and Genomics (ACMG) and 5197 clinically relevant mendelian genes. True sensitivity and true specificity could not be calculated due to lack of a criterion reference standard, but were estimated as the proportion of true-positive variants and true-negative variants, respectively, identified by each method in a reference variant set that consisted of all variants judged to be valid from either approach. Results:The prostate cancer cohort included 1072 men (mean [SD] age at diagnosis, 63.7 [7.9] years; 857 [79.9%] with European ancestry) and the melanoma cohort included 1295 patients (mean [SD] age at diagnosis, 59.8 [15.6] years; 488 [37.7%] women; 1060 [81.9%] with European ancestry). The deep learning method identified more patients with pathogenic variants in cancer-predisposition genes than the standard method (prostate cancer: 198 vs 182; melanoma: 93 vs 74); sensitivity (prostate cancer: 94.7% vs 87.1% [difference, 7.6%; 95% CI, 2.2% to 13.1%]; melanoma: 74.4% vs 59.2% [difference, 15.2%; 95% CI, 3.7% to 26.7%]), specificity (prostate cancer: 64.0% vs 36.0% [difference, 28.0%; 95% CI, 1.4% to 54.6%]; melanoma: 63.4% vs 36.6% [difference, 26.8%; 95% CI, 17.6% to 35.9%]), PPV (prostate cancer: 95.7% vs 91.9% [difference, 3.8%; 95% CI, -1.0% to 8.4%]; melanoma: 54.4% vs 35.4% [difference, 19.0%; 95% CI, 9.1% to 28.9%]), and NPV (prostate cancer: 59.3% vs 25.0% [difference, 34.3%; 95% CI, 10.9% to 57.6%]; melanoma: 80.8% vs 60.5% [difference, 20.3%; 95% CI, 10.0% to 30.7%]). For the ACMG genes, the sensitivity of the 2 methods was not significantly different in the prostate cancer cohort (94.9% vs 90.6% [difference, 4.3%; 95% CI, -2.3% to 10.9%]), but the deep learning method had a higher sensitivity in the melanoma cohort (71.6% vs 53.7% [difference, 17.9%; 95% CI, 1.82% to 34.0%]). The deep learning method had higher sensitivity in the mendelian genes (prostate cancer: 99.7% vs 95.1% [difference, 4.6%; 95% CI, 3.0% to 6.3%]; melanoma: 91.7% vs 86.2% [difference, 5.5%; 95% CI, 2.2% to 8.8%]). 
Conclusions and Relevance:Among a convenience sample of 2 independent cohorts of patients with prostate cancer and melanoma, germline genetic testing using deep learning, compared with the current standard genetic testing method, was associated with higher sensitivity and specificity for detection of pathogenic variants. Further research is needed to understand the relevance of these findings with regard to clinical outcomes. 10.1001/jama.2020.20457
Automatic stenosis recognition from coronary angiography using convolutional neural networks. Moon Jong Hak,Lee Da Young,Cha Won Chul,Chung Myung Jin,Lee Kyu-Sung,Cho Baek Hwan,Choi Jin Ho Computer methods and programs in biomedicine BACKGROUND AND OBJECTIVE:Coronary artery disease, which is mostly caused by atherosclerotic narrowing of the coronary artery lumen, is a leading cause of death. Coronary angiography is the standard method to estimate the severity of coronary artery stenosis, but is frequently limited by intra- and inter-observer variations. We propose a deep-learning algorithm that automatically recognizes stenosis in coronary angiographic images. METHODS:The proposed method consists of key frame detection, deep learning model training for classification of stenosis on each key frame, and visualization of the possible location of the stenosis. Firstly, we propose an algorithm that automatically extracts key frames essential for diagnosis from 452 right coronary artery angiography movie clips. Our deep learning model is then trained with image-level annotations to classify the areas narrowed by over 50 %. To make the model focus on the salient features, we apply a self-attention mechanism. The stenotic locations are visualized using the activated area of feature maps with gradient-weighted class activation mapping. RESULTS:The automatically detected key frame was very close to the manually selected key frame (average distance (1.70 ± 0.12) frame per clip). The model was trained with key frames on internal datasets, and validated with internal and external datasets. Our training method achieved high frame-wise area-under-the-curve of 0.971, frame-wise accuracy of 0.934, and clip-wise accuracy of 0.965 in the average values of cross-validation evaluations. The external validation results showed high performances with the mean frame-wise area-under-the-curve of (0.925 and 0.956) in the single and ensemble model, respectively. Heat map visualization shows the location for different types of stenosis in both internal and external data sets. With the self-attention mechanism, the stenosis could be precisely localized, which helps to accurately classify the stenosis by type. CONCLUSIONS:Our automated classification algorithm could recognize and localize coronary artery stenosis highly accurately. Our approach might provide the basis for a screening and assistant tool for the interpretation of coronary angiography. 10.1016/j.cmpb.2020.105819
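Gradient-weighted class activation mapping, used above to visualize stenotic locations, weights a convolutional layer's activations by the spatially averaged gradients of the target class score. The PyTorch sketch below is generic Grad-CAM on an off-the-shelf ResNet (torchvision >= 0.13 API), not the authors' model or layer choice.

```python
# Minimal Grad-CAM sketch: capture a conv layer's activations and gradients
# with hooks, weight the activations by the spatially averaged gradients of
# the target class score, then upsample the result to the input size.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()
target_layer = model.layer4[-1]

activations, gradients = {}, {}
target_layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

def grad_cam(x, class_idx):
    """x: (1, 3, H, W) input tensor; returns an (H, W) heat map scaled to [0, 1]."""
    score = model(x)[0, class_idx]
    model.zero_grad()
    score.backward()
    weights = gradients["g"].mean(dim=(2, 3), keepdim=True)   # GAP over space
    cam = F.relu((weights * activations["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
    cam = cam - cam.min()
    return (cam / cam.max().clamp(min=1e-8)).squeeze().detach()

heat = grad_cam(torch.randn(1, 3, 224, 224), class_idx=1)  # random input, demo only
```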
Medical image diagnosis of prostate tumor based on PSP-Net+VGG16 deep learning network. Computer methods and programs in biomedicine BACKGROUND AND OBJECTIVE:Prostate cancer is the most common cancer of the male reproductive system. With the development of medical imaging technology, magnetic resonance imaging (MRI) has been used in the diagnosis and treatment of prostate cancer because of its clarity and non-invasiveness. Prostate MRI segmentation and diagnosis are hampered by problems such as low tissue boundary contrast. The traditional segmentation method of manually drawing the contour boundary of the tissue cannot meet clinical real-time requirements. How to quickly and accurately segment the prostate tumor has therefore become an important research topic. METHODS:This paper proposes a prostate tumor diagnosis method based on the deep learning network PSP-Net+VGG16. The deep convolutional neural network segmentation method based on PSP-Net constructs an atrous convolution residual structure extraction network. First, the three-dimensional prostate MRI is converted into two-dimensional image slices, which are used to train the PSP-Net neural network; the VGG16 network is then used to analyze the region of interest and classify prostate cancer versus normal prostate. RESULTS:In the experiments, the segmentation method based on the deep learning network PSP-Net was used to identify the dataset samples. Measured by the Dice similarity coefficient and Hausdorff distance, the segmentation accuracy approaches, and even exceeds, that of traditional prostate image segmentation methods. The Dice index reached 91.3%, and the technique is superior in processing speed. The predicted tumor markers are very close to the markers drawn manually by clinicians; the classification accuracy and recognition rates of prostate MRI based on VGG16 are as high as 87.95% and 87.33%, and the accuracy and recall of the network model are relatively balanced. The area under the curve is also higher than that of other models, with good generalization ability. CONCLUSION:Experiments show that prostate cancer diagnosis based on the deep learning network PSP-Net+VGG16 is superior in accuracy and processing time compared to other algorithms, and can be well applied to clinical prostate tumor diagnosis. 10.1016/j.cmpb.2022.106770
Application of artificial neural networks for automated analysis of cystoscopic images: a review of the current status and future prospects. World journal of urology BACKGROUND:Optimal detection and surveillance of bladder cancer (BCa) rely primarily on the cystoscopic visualization of bladder lesions. AI-assisted cystoscopy may improve image recognition and accelerate data acquisition. OBJECTIVE:To provide a comprehensive review of machine learning (ML), deep learning (DL) and convolutional neural network (CNN) applications in cystoscopic image recognition. EVIDENCE ACQUISITION:A detailed search of original articles was performed using the PubMed-MEDLINE database to identify recent English literature relevant to ML, DL and CNN applications in cystoscopic image recognition. EVIDENCE SYNTHESIS:In total, two articles and one conference abstract were identified addressing the application of AI methods in cystoscopic image recognition. These investigations showed accuracies exceeding 90% for tumor detection; however, future work is necessary to incorporate these methods into AI-aided cystoscopy and to compare them with other tumor visualization tools. Furthermore, we present results from the RaVeNNA-4pi consortium initiative, which has extracted 4200 frames from 62 videos, analyzed them with the U-Net network and achieved an average Dice score of 0.67. Improvements in its precision can be achieved by augmenting the video/frame database. CONCLUSION:AI-aided cystoscopy has the potential to outperform urologists at recognizing and classifying bladder lesions. To ensure real-life implementation, however, these algorithms require external validation to generalize their results across other data sets. 10.1007/s00345-019-03059-0
Weakly-supervised convolutional neural networks of renal tumor segmentation in abdominal CTA images. Yang Guanyu,Wang Chuanxia,Yang Jian,Chen Yang,Tang Lijun,Shao Pengfei,Dillenseger Jean-Louis,Shu Huazhong,Luo Limin BMC medical imaging BACKGROUND:Renal cancer is one of the 10 most common cancers in human beings. The laparoscopic partial nephrectomy (LPN) is an effective way to treat renal cancer. Localization and delineation of the renal tumor from pre-operative CT Angiography (CTA) is an important step for LPN surgery planning. Recently, with the development of the technique of deep learning, deep neural networks can be trained to provide accurate pixel-wise renal tumor segmentation in CTA images. However, constructing the training dataset with a large amount of pixel-wise annotations is a time-consuming task for the radiologists. Therefore, weakly-supervised approaches attract more interest in research. METHODS:In this paper, we proposed a novel weakly-supervised convolutional neural network (CNN) for renal tumor segmentation. A three-stage framework was introduced to train the CNN with the weak annotations of renal tumors, i.e. the bounding boxes of renal tumors. The framework includes pseudo masks generation, group and weighted training phases. Clinical abdominal CT angiographic images of 200 patients were applied to perform the evaluation. RESULTS:Extensive experimental results show that the proposed method achieves a higher dice coefficient (DSC) of 0.826 than the other two existing weakly-supervised deep neural networks. Furthermore, the segmentation performance is close to the fully supervised deep CNN. CONCLUSIONS:The proposed strategy improves not only the efficiency of network training but also the precision of the segmentation. 10.1186/s12880-020-00435-w
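The first stage of the weakly supervised framework above generates pseudo masks from bounding-box annotations. The sketch below shows only the simplest possible version of that step, filling each box; the authors' pseudo-mask generation and the subsequent group and weighted training phases are more elaborate.

```python
# Simplest possible pseudo-mask generation from weak (bounding-box) labels:
# every pixel inside a tumour box is marked as foreground. Real pipelines
# typically refine these masks (e.g., with GrabCut or CRFs) before training.
import numpy as np

def boxes_to_pseudo_mask(shape, boxes):
    """shape: (H, W); boxes: list of (x1, y1, x2, y2) in pixel coordinates."""
    mask = np.zeros(shape, dtype=np.uint8)
    for x1, y1, x2, y2 in boxes:
        mask[y1:y2, x1:x2] = 1
    return mask

mask = boxes_to_pseudo_mask((512, 512), [(120, 80, 260, 210)])  # illustrative box
```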
Evaluating robotic-assisted surgery training videos with multi-task convolutional neural networks. Journal of robotic surgery We seek to understand if an automated algorithm can replace human scoring of surgical trainees performing the urethrovesical anastomosis in radical prostatectomy with synthetic tissue. Specifically, we investigate neural networks for predicting the surgical proficiency score (GEARS score) from video clips. We evaluate videos of surgeons performing the urethral anastomosis using synthetic tissue. The algorithm tracks surgical instrument locations from video, saving the positions of key points on the instruments over time. These positional features are used to train a multi-task convolutional network to infer each sub-category of the GEARS score to determine the proficiency level of trainees. Experimental results demonstrate that the proposed method achieves good performance with scores matching manual inspection in 86.1% of all GEARS sub-categories. Furthermore, the model can detect the difference between proficiency (novice to expert) in 83.3% of videos. Evaluation of GEARS sub-categories with artificial neural networks is possible for novice and intermediate surgeons, but additional research is needed to understand if expert surgeons can be evaluated with a similar automated system. 10.1007/s11701-021-01316-2
MRI and CT bladder segmentation from classical to deep learning based approaches: Current limitations and lessons. Bandyk Mark G,Gopireddy Dheeraj R,Lall Chandana,Balaji K C,Dolz Jose Computers in biology and medicine Precise determination and assessment of bladder cancer (BC) extent of muscle invasion involvement guides proper risk stratification and personalized therapy selection. In this context, segmentation of both bladder walls and cancer are of pivotal importance, as it provides invaluable information to stage the primary tumor. Hence, multiregion segmentation on patients presenting with symptoms of bladder tumors using deep learning heralds a new level of staging accuracy and prediction of the biologic behavior of the tumor. Nevertheless, despite the success of these models in other medical problems, progress in multiregion bladder segmentation, particularly in MRI and CT modalities, is still at a nascent stage, with just a handful of works tackling a multiregion scenario. Furthermore, most existing approaches systematically follow prior literature in other clinical problems, without casting a doubt on the validity of these methods on bladder segmentation, which may present different challenges. Inspired by this, we provide an in-depth look at bladder cancer segmentation using deep learning models. The critical determinants for accurate differentiation of muscle invasive disease, current status of deep learning based bladder segmentation, lessons and limitations of prior work are highlighted. 10.1016/j.compbiomed.2021.104472
Deep Neural Networks Can Accurately Detect Blood Loss and Hemorrhage Control Task Success From Video. Neurosurgery BACKGROUND:Deep neural networks (DNNs) have not been proven to detect blood loss (BL) or predict surgeon performance from video. OBJECTIVE:To train a DNN using video from cadaveric training exercises of surgeons controlling simulated internal carotid hemorrhage to predict clinically relevant outcomes. METHODS:Video was input as a series of images; deep learning networks were developed, which predicted BL and task success from images alone (automated model) and images plus human-labeled instrument annotations (semiautomated model). These models were compared against 2 reference models, which used average BL across all trials as its prediction (control 1) and a linear regression with time to hemostasis (a metric with known association with BL) as input (control 2). The root-mean-square error (RMSE) and correlation coefficients were used to compare the models; lower RMSE indicates superior performance. RESULTS:One hundred forty-three trials were used (123 for training and 20 for testing). Deep learning models outperformed controls (control 1: RMSE 489 mL, control 2: RMSE 431 mL, R2 = 0.35) at BL prediction. The automated model predicted BL with an RMSE of 358 mL (R2 = 0.4) and correctly classified outcome in 85% of trials. The RMSE and classification performance of the semiautomated model improved to 260 mL and 90%, respectively. CONCLUSION:BL and task outcome classification are important components of an automated assessment of surgical performance. DNNs can predict BL and outcome of hemorrhage control from video alone; their performance is improved with surgical instrument presence data. The generalizability of DNNs trained on hemorrhage control tasks should be investigated. 10.1227/neu.0000000000001906
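The reference models above are simple to reproduce in outline: control 1 predicts the training-set mean blood loss for every trial, and models are compared by RMSE. The arrays below are illustrative values, not study data.

```python
# RMSE scoring of a mean-predictor baseline (control 1), mirroring the
# comparison described in the abstract; lower RMSE indicates better prediction.
import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

y_train = np.array([300.0, 450.0, 700.0, 520.0])      # illustrative blood loss (mL)
y_test = np.array([410.0, 640.0, 380.0])
baseline_pred = np.full_like(y_test, y_train.mean())  # control 1: always the mean
print(rmse(y_test, baseline_pred))
```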
Utilizing deep neural networks and electroencephalogram for objective evaluation of surgeon's distraction during robot-assisted surgery. Shafiei Somayeh B,Iqbal Umar,Hussein Ahmed A,Guru Khurshid A Brain research OBJECTIVE:To develop an algorithm for objective evaluation of distraction of surgeons during robot-assisted surgery (RAS). MATERIALS AND METHODS:Electroencephalogram (EEG) of 22 medical students was recorded while performing five key tasks on the robotic surgical simulator: Instrument Control, Ball Placement, Spatial Control II, Fourth Arm Tissue Retraction, and Hands-on Surgical Training Tasks. All students completed the Surgery Task Load Index (SURG-TLX), which includes one domain for subjective assessment of distraction (scale: 1-20). Scores were divided into low (score 1-6, subjective label: 1), intermediate (score 7-12, subjective label: 2), and high distraction (score 13-20, subjective label: 3). These cut-off values were arbitrarily considered based on a verbal assessment of participants and experienced surgeons. A Deep Convolutional Neural Network (CNN) algorithm was trained utilizing EEG recordings from the medical students and used to classify their distraction levels. The accuracy of our method was determined by comparing the subjective distraction scores on SURG-TLX and the results from the proposed classification algorithm. Also, Pearson correlation was utilized to assess the relationship between performance scores (generated by the simulator) and distraction (Subjective assessment scores). RESULTS:The proposed end-to-end model classified distraction into low, intermediate, and high with 94%, 89%, and 95% accuracy, respectively. We found a significant negative correlation (r = -0.21; p = 0.003) between performance and SURG-TLX distraction scores. CONCLUSIONS:Herein we report, to our knowledge, the first objective method to assess and quantify distraction while performing robotic surgical tasks on the robotic simulator, which may improve patient safety. Validation in the clinical setting is required. 10.1016/j.brainres.2021.147607
Automatic Evaluation of Histological Prognostic Factors Using Two Consecutive Convolutional Neural Networks on Kidney Samples. Clinical journal of the American Society of Nephrology : CJASN BACKGROUND AND OBJECTIVES:The prognosis of patients undergoing kidney tumor resection or kidney donation is linked to many histologic criteria. These criteria notably include glomerular density, glomerular volume, vascular luminal stenosis, and severity of interstitial fibrosis/tubular atrophy. Automated measurements through a deep-learning approach could save time and provide more precise data. This work aimed to develop a free tool to automatically obtain kidney histologic prognostic features. DESIGN, SETTING, PARTICIPANTS, & MEASUREMENTS:In total, 241 samples of healthy kidney tissue were split into three independent cohorts. The "Training" cohort (n = 65) was used to train two convolutional neural networks: one to detect the cortex and a second to segment the kidney structures. The "Test" cohort (n = 50) assessed their performance by comparing manually outlined regions of interest to predicted ones. The "Application" cohort (n = 126) compared prognostic histologic data obtained manually or through the algorithm on the basis of the combination of the two convolutional neural networks. RESULTS:In the Test cohort, the networks isolated the cortex and segmented the elements of interest with good performance (>90% of the cortex, healthy tubules, glomeruli, and even globally sclerotic glomeruli were detected). In the Application cohort, the expected and predicted prognostic data were significantly correlated. The correlation coefficients were 0.85 for glomerular volume, 0.51 for glomerular density, 0.75 for interstitial fibrosis, 0.71 for tubular atrophy, and 0.73 for vascular intimal thickness. The algorithm had a good ability to predict significant (>25%) tubular atrophy and interstitial fibrosis (receiver operating characteristic curves with areas under the curve of 0.92 and 0.91, respectively) and significant (>50%) vascular luminal stenosis (area under the curve, 0.85). CONCLUSION:This freely available tool enables the automated segmentation of kidney tissue to obtain prognostic histologic data in a fast, objective, reliable, and reproducible way. 10.2215/CJN.07830621
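Agreement between predicted and manually outlined regions, as assessed in the Test cohort above, is commonly quantified with an overlap metric such as the Dice coefficient; a minimal sketch of that metric on binary masks follows. The abstract itself reports detection percentages and correlations rather than Dice, so this is illustrative only.

```python
# Dice similarity coefficient between a predicted and a reference binary mask,
# a common way to score the region-overlap comparison described above.
import numpy as np

def dice(pred, ref, eps=1e-8):
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    return (2.0 * inter + eps) / (pred.sum() + ref.sum() + eps)

a = np.zeros((64, 64), dtype=np.uint8); a[10:40, 10:40] = 1   # illustrative masks
b = np.zeros((64, 64), dtype=np.uint8); b[15:45, 15:45] = 1
print(round(float(dice(a, b)), 3))
```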
A Novel Radial Basis Neural Network-Leveraged Fast Training Method for Identifying Organs in MR Images. Xu Min,Qian Pengjiang,Zheng Jiamin,Ge Hongwei,Muzic Raymond F Computational and mathematical methods in medicine We propose a new method for fast organ classification and segmentation of abdominal magnetic resonance (MR) images. Magnetic resonance imaging (MRI) is a high-tech imaging examination modality that has become widespread in recent years. Recognition of specific target areas (organs) in MR images is one of the key issues in computer-aided diagnosis of medical images. Artificial neural network technology has made significant progress in image processing based on the multimodal MR attributes of each pixel in MR images. However, with the generation of large-scale data, there are few studies on the rapid processing of large-scale MRI data. To address this deficiency, we present a fast radial basis function artificial neural network (Fast-RBF) algorithm. The importance of our efforts is as follows: (1) The proposed algorithm achieves fast processing of large-scale image data by introducing the ε-insensitive loss function, the structural risk term, and the core-set principle. We apply this algorithm to the identification of specific target areas in MR images. (2) For each abdominal MRI case, we use four MR sequences (fat, water, in-phase (IP), and opposed-phase (OP)) and the position coordinates (x, y) of each pixel as the input of the algorithm. We use three classifiers to identify the liver and kidneys in the MR images. Experiments show that the proposed method achieves higher precision in the recognition of specific regions of medical images and has better adaptability in the case of large-scale datasets than the traditional RBF algorithm. 10.1155/2020/4519483
Automatic recognition of bladder tumours using deep learning technology and its clinical application. Yang Rui,Du Yang,Weng Xiaodong,Chen Zhiyuan,Wang Shanshan,Liu Xiuheng The international journal of medical robotics + computer assisted surgery : MRCAS BACKGROUND:Bladder cancer is a tumour with a high recurrence rate. Improving the cure rate and prognosis of bladder tumours depends on accurate recognition of the tumour under the cystoscope. AIMS:To verify that deep learning technology can identify bladder cancer images. MATERIALS AND METHODS:In this study, 1200 cystoscopic cancer images from 224 patients with bladder cancer and 1150 cystoscopic images from 221 patients without bladder cancer were collected. Three convolutional neural networks (LeNet, AlexNet and GoogLeNet) and the EasyDL deep learning platform were used to train deep learning models to distinguish images of bladder cancer. The diagnostic efficiency of the deep learning models was compared with that of urology experts. RESULTS:The efficiency of EasyDL was the highest, with an accuracy of 96.9%. The efficiency of GoogLeNet was the second highest, with an accuracy of 92.54%. Among the 33 bladder cancer nodes and 11 non-bladder-cancer nodes, the accuracy of the neural network was 83.36% and that of medical experts was 84.09% (p > 0.05). DISCUSSION:This study used convolutional neural networks to recognize bladder tumours in the clinical setting. Although these three networks (LeNet, AlexNet and GoogLeNet) have relatively basic architectures, they achieved good results in the classification task of cystoscopic images. The deep learning system had a recognition efficiency no lower than that of experienced clinical experts. CONCLUSION:This study demonstrated the validity of convolutional neural networks for cystoscopy-based bladder tumour diagnosis. 10.1002/rcs.2194
Methods and tools for objective assessment of psychomotor skills in laparoscopic surgery. Oropesa Ignacio,Sánchez-González Patricia,Lamata Pablo,Chmarra Magdalena K,Pagador José B,Sánchez-Margallo Juan A,Sánchez-Margallo Francisco M,Gómez Enrique J The Journal of surgical research Training and assessment paradigms for laparoscopic surgical skills are evolving from traditional mentor-trainee tutorship towards structured, more objective and safer programs. Accreditation of surgeons requires reaching a consensus on the metrics and tasks used to assess surgeons' psychomotor skills. Ongoing development of tracking systems and software solutions has allowed for the expansion of novel training and assessment means in laparoscopy. The current challenge is to adapt and include these systems within training programs, and to exploit their possibilities for evaluation purposes. This paper describes the state of the art in research on measuring and assessing psychomotor laparoscopic skills. It gives an overview of tracking systems as well as of the metrics and advanced statistical and machine learning techniques employed for evaluation purposes. The latter have the potential to be used as an aid in deciding on the surgical competence level, an important aspect for accreditation of surgeons in particular and for patient safety in general. The prospects of these methods and tools make them complementary means for surgical assessment of motor skills, especially in the early stages of training. Successful examples such as the Fundamentals of Laparoscopic Surgery should help drive a paradigm change towards structured curricula based on objective parameters. These may improve the accreditation of new surgeons, as well as optimize their already overloaded training schedules. 10.1016/j.jss.2011.06.034
New robotic surgical systems in urology: an update. Cisu Theodore,Crocerossa Fabio,Carbonara Umberto,Porpiglia Francesco,Autorino Riccardo Current opinion in urology PURPOSE OF REVIEW:The landscape of robotic surgical systems in urology is changing. Several new instruments have been introduced internationally into clinical practice, and others are in development. In this review, we provide an update and summary of recent surgical systems and their clinical applications in urology. RECENT FINDINGS:Robotic-assisted laparoscopic surgery is increasingly becoming a standard skillset in the urologist's technical armamentarium. The current state of the robotic surgery market is monopolized because of a number of regulatory and technical factors but there are several robotic surgical systems approved for clinical use across the world and numerous others in development. Next-generation surgical systems commonly include a modular design, open access consoles, haptic feedback, smaller instruments, and machine learning. SUMMARY:Numerous robotic surgical systems are in development, and several have recently been introduced into clinical practice. These new technologies are changing the landscape of robotic surgery in urology and will likely transform the marketplace of robotic surgery across surgical subspecialties within the next 10--20 years. 10.1097/MOU.0000000000000833
Deep learning visual analysis in laparoscopic surgery: a systematic review and diagnostic test accuracy meta-analysis. Anteby Roi,Horesh Nir,Soffer Shelly,Zager Yaniv,Barash Yiftach,Amiel Imri,Rosin Danny,Gutman Mordechai,Klang Eyal Surgical endoscopy BACKGROUND:In the past decade, deep learning has revolutionized medical image processing. This technique may advance laparoscopic surgery. The study objective was to evaluate whether deep learning networks accurately analyze videos of laparoscopic procedures. METHODS:Medline, Embase, IEEE Xplore, and Web of Science were searched from January 2012 to May 5, 2020. Selected studies tested a deep learning model, specifically convolutional neural networks, for video analysis of laparoscopic surgery. Study characteristics including the dataset source, type of operation, number of videos, and prediction application were compared. A random-effects model was used to estimate the pooled sensitivity and specificity of the computer algorithms. Summary receiver operating characteristic curves were calculated with the bivariate model of Reitsma. RESULTS:Thirty-two of 508 identified studies met the inclusion criteria. Applications included instrument recognition and detection (45%), phase recognition (20%), anatomy recognition and detection (15%), action recognition (13%), surgery time prediction (5%), and gauze recognition (3%). The most commonly tested procedures were cholecystectomy (51%) and gynecological procedures, mainly hysterectomy and myomectomy (26%). A total of 3004 videos were analyzed. Publications in clinical journals increased in 2020 compared to bio-computational ones. Four studies provided enough data to construct 8 contingency tables, enabling calculation of test accuracy with a pooled sensitivity of 0.93 (95% CI 0.85-0.97) and specificity of 0.96 (95% CI 0.84-0.99). Yet, the majority of papers had a high risk of bias. CONCLUSIONS:Deep learning research holds potential in laparoscopic surgery, but current studies are methodologically limited. Clinicians may advance AI in surgery, specifically by offering standardized visual databases and reporting. 10.1007/s00464-020-08168-1
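The pooled estimates in this meta-analysis start from per-study 2x2 contingency tables. The Python sketch below only illustrates how sensitivity and specificity are derived from such tables; the counts are invented, and the actual bivariate random-effects pooling (Reitsma model) reported in the review is normally fitted with dedicated meta-analysis software rather than hand-rolled code.

    # Illustrative per-study sensitivity/specificity from 2x2 contingency tables,
    # the raw inputs to a bivariate (Reitsma) meta-analysis. Counts are hypothetical.
    studies = [
        # (true positives, false negatives, false positives, true negatives)
        (90, 10, 5, 95),
        (45,  5, 8, 42),
        (70,  4, 3, 60),
    ]

    for i, (tp, fn, fp, tn) in enumerate(studies, 1):
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        print(f"study {i}: sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")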
A systematic review on artificial intelligence in robot-assisted surgery. International journal of surgery (London, England) BACKGROUND:Despite the extensive published literature on the significant potential of artificial intelligence (AI), there are no reports on its efficacy in improving patient safety in robot-assisted surgery (RAS). The purposes of this work are to systematically review the published literature on AI in RAS, and to identify and discuss current limitations and challenges. MATERIALS AND METHODS:A literature search was conducted on PubMed, Web of Science, Scopus, and IEEE Xplore according to the PRISMA 2020 statement. Eligible articles were peer-reviewed studies published in English from January 1, 2016 to December 31, 2020. AMSTAR 2 was used for quality assessment. Risk of bias was evaluated with the Newcastle-Ottawa quality assessment tool. Data from the studies were presented in tables using the SPIDER tool. RESULTS:Thirty-five publications, representing 3436 patients, met the search criteria and were included in the analysis. The selected reports concern: motion analysis (n = 17), urology (n = 12), gynecology (n = 1), other specialties (n = 1), training (n = 3), and tissue retraction (n = 1). Precision for surgical tool detection varied from 76.0% to 90.6%. Mean absolute error on prediction of urinary continence after robot-assisted radical prostatectomy (RARP) ranged from 85.9 to 134.7 days. Accuracy on prediction of length of stay after RARP was 88.5%. Accuracy on recognition of the next surgical task during robot-assisted partial nephrectomy (RAPN) reached 75.7%. CONCLUSION:The reviewed studies were of low quality. The findings are limited by the small size of the datasets. Comparison between studies on the same topic was restricted by heterogeneity of algorithms and datasets. There is currently no proof that AI can identify the critical tasks of RAS operations, which determine patient outcome. There is an urgent need for studies on large datasets and for external validation of the AI algorithms used. Furthermore, the results should be transparent and meaningful to surgeons, enabling them to inform patients in layman's terms. REGISTRATION:Review Registry Unique Identifying Number: reviewregistry1225. 10.1016/j.ijsu.2021.106151
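The headline performance figures in this review mix a regression metric (mean absolute error, in days) with a classification metric (accuracy). The brief Python sketch below shows how each is computed; the numbers are hypothetical and are not taken from the reviewed studies.

    # Hypothetical sketch of the two metric types quoted in the review:
    # mean absolute error for a regression target and accuracy for classification.
    import numpy as np

    predicted_days = np.array([120.0, 95.0, 200.0])   # e.g., days to continence recovery
    observed_days = np.array([100.0, 130.0, 180.0])
    mae = np.mean(np.abs(predicted_days - observed_days))

    predicted_class = np.array([1, 0, 1, 1])          # e.g., prolonged stay yes/no
    observed_class = np.array([1, 0, 0, 1])
    accuracy = np.mean(predicted_class == observed_class)

    print(f"MAE = {mae:.1f} days, accuracy = {accuracy:.1%}")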
A systematic review of robotic surgery: From supervised paradigms to fully autonomous robotic approaches. The international journal of medical robotics + computer assisted surgery : MRCAS BACKGROUND:From traditional open surgery to laparoscopic surgery and robot-assisted surgery, advances in robotics, machine learning, and imaging are pushing the surgical approach towards better clinical outcomes. Pre-clinical and clinical evidence suggests that automation may standardise techniques, increase efficiency, and reduce clinical complications. METHODS:A PRISMA-guided search was conducted across PubMed and OVID. RESULTS:Of the 89 screened articles, 51 met the inclusion criteria, with 10 included in the final review. Automatic data segmentation, trajectory planning, intra-operative registration, trajectory drilling, and soft tissue robotic surgery were discussed. CONCLUSION:Although automated surgical systems remain conceptual, several research groups have developed supervised autonomous robotic surgical systems with increasing consideration for ethico-legal issues for automation. Automation paves the way for precision surgery and improved safety and opens new possibilities for deploying more robust artificial intelligence models, better imaging modalities and robotics to improve clinical outcomes. 10.1002/rcs.2358