Deep Learning in Different Ultrasound Methods for Breast Cancer, from Diagnosis to Prognosis: Current Trends, Challenges, and an Analysis.
Cancers
Breast cancer is the second-leading cause of mortality among women worldwide. Ultrasound (US) is a noninvasive imaging modality used to diagnose breast lesions and to monitor the prognosis of cancer patients. It has the highest sensitivity for diagnosing breast masses, but its strong operator dependency leads to an elevated false-negative rate. Underserved areas often lack sufficient US expertise to diagnose breast lesions, delaying their management. Deep learning neural networks may facilitate early decision-making by physicians through rapid yet accurate diagnosis and prognostic monitoring. This article reviews recent research trends on neural networks for breast mass ultrasound, for diagnosis and beyond. We discuss recently conducted original research to analyze which ultrasound modes and which models have been used for which purposes, and where they perform best. Our analysis reveals that models for lesion classification achieved the highest performance among all applications, and that fewer studies have addressed prognosis than diagnosis. We also discuss the limitations and future directions of ongoing research on neural networks for breast ultrasound.
10.3390/cancers15123139
Deep learning-based classification of breast lesions using dynamic ultrasound video.
European journal of radiology
PURPOSE:We intended to develop a deep-learning-based classification model based on breast ultrasound dynamic video, then evaluate its diagnostic performance in comparison with a classic model based on static ultrasound images and with radiologists of different seniority. METHOD:We collected 1000 breast lesions from 888 patients from May 2020 to December 2021. Each lesion contained two static images and two dynamic videos. We divided these lesions randomly into training, validation, and test sets in a 7:2:1 ratio. Two deep learning (DL) models, namely DL-video and DL-image, were developed based on 3D Resnet-50 and 2D Resnet-50 using 2000 dynamic videos and 2000 static images, respectively. Lesions in the test set were evaluated to compare the diagnostic performance of the two models and six radiologists of different seniority. RESULTS:The area under the curve of the DL-video model was significantly higher than that of the DL-image model (0.969 vs. 0.925, P = 0.0172) and those of the six radiologists (0.969 vs. 0.779-0.912, P < 0.05). All radiologists performed better when evaluating the dynamic videos than the static images. Furthermore, radiologists performed better with increasing seniority both in reading images and videos. CONCLUSIONS:The DL-video model can discern more detailed spatial and temporal information for accurate classification of breast lesions than the conventional DL-image model and radiologists, and its clinical application can further improve the diagnosis of breast cancer.
10.1016/j.ejrad.2023.110885
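The model-versus-radiologist comparison in the abstract above rests on the area under the ROC curve (AUC). As an illustration only (not the study's code, and with made-up toy scores), AUC can be computed with the pair-counting (Mann-Whitney) identity: the fraction of (positive, negative) pairs in which the positive case receives the higher score, with ties counted as half.

```python
def auc(labels, scores):
    """AUC via the pair-counting (Mann-Whitney) identity: the fraction of
    (positive, negative) pairs ranked correctly; ties count as 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example with invented scores: every malignant (1) case outranks
# every benign (0) case, so the ranking is perfect.
print(auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1]))  # -> 1.0
```

This rank-based formulation makes clear why AUC measures discrimination independently of any single decision threshold, which is why it is the standard summary for comparing classifiers such as DL-video (0.969) and DL-image (0.925).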
Artificial intelligence breast ultrasound and handheld ultrasound in the BI-RADS categorization of breast lesions: A pilot head to head comparison study in screening program.
Frontiers in public health
Background:The artificial intelligence breast ultrasound diagnostic system (AIBUS) has been introduced as an alternative to handheld ultrasound (HHUS), but their results in BI-RADS categorization have not been compared. Methods:This pilot study was based on a screening program conducted from May 2020 to October 2020 in southeast China. All participants who received both HHUS and AIBUS were included in the study (n = 344). The ultrasound videos from AIBUS scanning were independently reviewed by a senior radiologist and a junior radiologist. Agreement rate and weighted Kappa value were used to compare their BI-RADS categorizations with those of HHUS. Results:The detection rate of breast nodules by HHUS was 14.83%, while the detection rates for AIBUS videos were 34.01% when reviewed by the senior radiologist and 35.76% when reviewed by the junior radiologist. After AIBUS scanning, the weighted Kappa value for BI-RADS categorization between the videos reviewed by the senior radiologist and HHUS was 0.497 (P < 0.001), with an agreement rate of 78.8%, indicating its potential use in breast cancer screening. However, the Kappa value for the AIBUS videos reviewed by the junior radiologist was 0.39 when compared with HHUS. Conclusion:AIBUS breast scanning can obtain relatively clear images and detect more breast nodules. The results of AIBUS scanning reviewed by senior radiologists are moderately consistent with HHUS and might be used in screening practice, especially in primary health care with limited numbers of radiologists.
10.3389/fpubh.2022.1098639
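The agreement statistic reported above is the weighted Cohen's kappa, which penalizes disagreements between ordinal ratings (such as BI-RADS categories) in proportion to their distance. A minimal sketch of the statistic, assuming two raters and categories coded 0..n_cat-1 (the weighting scheme used in the study is not stated, so both common variants are shown):

```python
def weighted_kappa(rater_a, rater_b, n_cat, weights="quadratic"):
    """Weighted Cohen's kappa for two raters over ordinal categories 0..n_cat-1.
    Disagreements are penalized by |i-j| (linear) or (i-j)^2 (quadratic),
    normalized so the maximum penalty is 1."""
    n = len(rater_a)
    # Observed confusion matrix and its marginal totals
    obs = [[0] * n_cat for _ in range(n_cat)]
    for a, b in zip(rater_a, rater_b):
        obs[a][b] += 1
    row = [sum(obs[i]) for i in range(n_cat)]
    col = [sum(obs[i][j] for i in range(n_cat)) for j in range(n_cat)]

    def w(i, j):
        d = abs(i - j)
        if weights == "quadratic":
            return (d * d) / (n_cat - 1) ** 2
        return d / (n_cat - 1)

    # Weighted observed disagreement vs. chance-expected disagreement
    observed = sum(w(i, j) * obs[i][j] for i in range(n_cat) for j in range(n_cat))
    expected = sum(w(i, j) * row[i] * col[j] / n
                   for i in range(n_cat) for j in range(n_cat))
    return 1.0 - observed / expected

# Identical ratings give perfect agreement.
print(weighted_kappa([0, 1, 2, 0, 1, 2], [0, 1, 2, 0, 1, 2], 3))  # -> 1.0
```

On this scale, 1.0 is perfect agreement and 0 is chance-level agreement, so the reported values of 0.497 (senior radiologist) and 0.39 (junior radiologist) correspond to moderate and fair agreement with HHUS, respectively.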