Convolutional neural network for classifying primary liver cancer based on triple-phase CT and tumor marker information: a pilot study.
Nakai Hirotsugu,Fujimoto Koji,Yamashita Rikiya,Sato Toshiyuki,Someya Yuko,Taura Kojiro,Isoda Hiroyoshi,Nakamoto Yuji
Japanese journal of radiology
PURPOSE:To develop convolutional neural network (CNN) models for differentiating intrahepatic cholangiocarcinoma (ICC) from hepatocellular carcinoma (HCC) and predicting the histopathological grade of HCC. MATERIALS AND METHODS:Preoperative computed tomography and tumor marker information of 617 primary liver cancer patients were retrospectively collected to develop CNN models categorizing tumors into three categories: moderately differentiated HCC (mHCC), poorly differentiated HCC (pHCC), and ICC, where the histopathological diagnoses were considered the ground truth. The models processed manually cropped tumor images with and without tumor marker information (two-input and one-input models, respectively). Overall accuracy was assessed using a held-out dataset (10%). Area under the curve, sensitivity, and specificity for differentiating ICC from HCC (mHCC + pHCC), and pHCC from mHCC, were also evaluated. As a reference, we assessed two radiologists' performance (overall accuracy, sensitivity, and specificity) without tumor marker information. The two-input model was compared with the one-input model and the radiologists using permutation tests. RESULTS:The overall accuracy was 0.61, 0.60, 0.55, and 0.53 for the two-input model, one-input model, radiologist 1, and radiologist 2, respectively. For differentiating pHCC from mHCC, the two-input model showed significantly higher specificity than radiologist 1 (0.68 [95% confidence interval: 0.50-0.83] vs 0.45 [95% confidence interval: 0.27-0.63]; p = 0.04). CONCLUSION:Our CNN model with tumor marker information showed feasibility and potential for three-class classification of primary liver cancer.
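The paired permutation test used above to compare the two-input model against the one-input model and the radiologists can be sketched in plain Python; the per-case outcomes below are illustrative, not the study's data:

```python
import random

def permutation_test_accuracy(correct_a, correct_b, n_perm=10000, seed=0):
    """Two-sided paired permutation test on the accuracy difference between
    two classifiers evaluated on the same cases.
    correct_a / correct_b: 0/1 flags (1 = case classified correctly)."""
    rng = random.Random(seed)
    n = len(correct_a)
    observed = abs(sum(correct_a) - sum(correct_b)) / n
    count = 0
    for _ in range(n_perm):
        diff = 0
        for a, b in zip(correct_a, correct_b):
            if rng.random() < 0.5:  # randomly swap the paired outcomes
                a, b = b, a
            diff += a - b
        if abs(diff) / n >= observed:
            count += 1
    return count / n_perm  # fraction of permutations at least as extreme

# Illustrative: model A correct on 8/10 held-out cases, model B on 4/10
a = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]
b = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
p = permutation_test_accuracy(a, b)
```

Because the swap is done per case, the test respects the pairing of the two models' predictions on the same held-out set.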
Deep Learning for Accurate Diagnosis of Liver Tumor Based on Magnetic Resonance Imaging and Clinical Data.
Zhen Shi-Hui,Cheng Ming,Tao Yu-Bo,Wang Yi-Fan,Juengpanich Sarun,Jiang Zhi-Yu,Jiang Yan-Kai,Yan Yu-Yu,Lu Wei,Lue Jie-Min,Qian Jia-Hong,Wu Zhong-Yu,Sun Ji-Hong,Lin Hai,Cai Xiu-Jun
Frontiers in oncology
Early-stage diagnosis and treatment can improve survival rates of liver cancer patients. Dynamic contrast-enhanced MRI provides the most comprehensive information for the differential diagnosis of liver tumors. However, MRI diagnosis is affected by subjective experience, so deep learning may supply a new diagnostic strategy. We used convolutional neural networks (CNNs) to develop a deep learning system (DLS) that classifies liver tumors based on enhanced MR images, unenhanced MR images, and clinical data including text and laboratory test results. Using data from 1,210 patients with liver tumors (n = 31,608 images), we trained CNNs to obtain seven-way classifiers, binary classifiers, and three-way malignancy classifiers (Model A-Model G). Models were validated in an external independent extended cohort of 201 patients (n = 6,816 images). The areas under the receiver operating characteristic (ROC) curve (AUCs) were compared across models. We also compared the sensitivity and specificity of the models with the performance of three experienced radiologists. Deep learning achieved performance on par with the three experienced radiologists in classifying liver tumors into seven categories. Using only unenhanced images, the CNN performed well in distinguishing malignant from benign liver tumors (AUC, 0.946; 95% CI 0.914-0.979 vs. 0.951; 0.919-0.982; p = 0.664). A new CNN combining unenhanced images with clinical data greatly improved the performance of classifying malignancies as hepatocellular carcinoma (AUC, 0.985; 95% CI 0.960-1.000), metastatic tumors (0.998; 0.989-1.000), and other primary malignancies (0.963; 0.896-1.000), and the agreement with pathology was 91.9%. These models mined diagnostic information in unenhanced images and clinical data by deep neural network, in contrast to previous methods that relied on enhanced images. For almost every category, the sensitivity and specificity of these models reached the same high level as the three experienced radiologists.
Trained with data acquired under various conditions, a DLS integrating these models could serve as an accurate and time-saving assisted-diagnosis strategy for liver tumors in clinical settings, even in the absence of contrast agents. The DLS therefore has the potential to avoid contrast-related side effects and to reduce the economic costs associated with current standard MRI inspection practices for liver tumor patients.
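The AUC figures quoted in this abstract have a rank-statistic reading: the probability that a randomly chosen positive case outscores a randomly chosen negative one (the Mann-Whitney interpretation). A minimal sketch with invented scores:

```python
def auc_from_scores(scores_pos, scores_neg):
    """AUC as the probability that a positive case outscores a negative one
    (Mann-Whitney U interpretation); ties count half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Illustrative malignancy scores for malignant vs. benign tumors
auc = auc_from_scores([0.9, 0.8, 0.7, 0.6], [0.5, 0.4, 0.65, 0.3])
```

The O(n*m) pairwise loop is fine for a sketch; production code would sort once and use rank sums.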
Use of BERT (Bidirectional Encoder Representations from Transformers)-Based Deep Learning Method for Extracting Evidences in Chinese Radiology Reports: Development of a Computer-Aided Liver Cancer Diagnosis Framework.
Liu Honglei,Zhang Zhiqiang,Xu Yan,Wang Ni,Huang Yanqun,Yang Zhenghan,Jiang Rui,Chen Hui
Journal of medical Internet research
BACKGROUND:Liver cancer is a substantial disease burden in China. As one of the primary diagnostic tools for detecting liver cancer, dynamic contrast-enhanced computed tomography provides detailed evidence for diagnosis that is recorded in free-text radiology reports. OBJECTIVE:The aim of our study was to apply a deep learning model and rule-based natural language processing (NLP) methods to automatically identify evidence for liver cancer diagnosis. METHODS:We proposed a pretrained, fine-tuned BERT (Bidirectional Encoder Representations from Transformers)-based BiLSTM-CRF (Bidirectional Long Short-Term Memory-Conditional Random Field) model to recognize the phrases of APHE (hyperintense enhancement in the arterial phase) and PDPH (hypointense in the portal and delayed phases). To identify further essential diagnostic evidence, we used traditional rule-based NLP methods to extract radiological features. APHE, PDPH, and the other extracted radiological features were then used to build a computer-aided liver cancer diagnosis framework with a random forest. RESULTS:The BERT-BiLSTM-CRF predicted the phrases of APHE and PDPH with F1 scores of 98.40% and 90.67%, respectively. The prediction model using combined features achieved higher performance (F1 score, 88.55%) than those using APHE and PDPH alone (84.88%) or the other extracted radiological features alone (83.52%). APHE and PDPH were the top 2 essential features for liver cancer diagnosis. CONCLUSIONS:This work was a comprehensive NLP study in which we identified evidence for the diagnosis of liver cancer from Chinese radiology reports, considering both clinical knowledge and radiology findings. The BERT-based deep learning method for the extraction of diagnostic evidence achieved state-of-the-art performance. This high performance demonstrates the feasibility of the BERT-BiLSTM-CRF model for information extraction from Chinese radiology reports.
The findings of our study suggest that the deep learning-based method for automatically identifying evidence for diagnosis can be extended to other types of Chinese clinical texts.
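The phrase-level F1 scores reported for APHE and PDPH extraction follow the usual exact-match convention for information extraction; a minimal sketch, with invented (report id, phrase) pairs standing in for the real annotations:

```python
def phrase_f1(gold, predicted):
    """Micro precision/recall/F1 for extracted phrases, treating each
    (report_id, phrase) pair as one instance and scoring exact matches."""
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)  # exact matches only
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Illustrative annotations: gold vs. model-predicted phrase mentions
gold = {(1, "APHE"), (1, "PDPH"), (2, "APHE"), (3, "PDPH")}
pred = {(1, "APHE"), (2, "APHE"), (2, "PDPH"), (3, "PDPH")}
p, r, f = phrase_f1(gold, pred)
```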
An imageomics and multi-network based deep learning model for risk assessment of liver transplantation for hepatocellular cancer.
He Tiancheng,Fong Joy Nolte,Moore Linda W,Ezeana Chika F,Victor David,Divatia Mukul,Vasquez Matthew,Ghobrial R Mark,Wong Stephen T C
Computerized medical imaging and graphics : the official journal of the Computerized Medical Imaging Society
INTRODUCTION:Liver transplantation (LT) is an effective treatment for hepatocellular carcinoma (HCC), the most common type of primary liver cancer. Patients with small HCC (<5 cm) are given priority over others for transplantation due to clinical allocation policies based on tumor size. Aiming to shift from the prevalent paradigm that successful transplantation and longer disease-free survival can only be achieved in patients with small HCC, and to expand the transplantation option to patients with HCC of the highest tumor burden (>5 cm), we developed a convergent artificial intelligence (AI) model that combines transient clinical data with quantitative histologic and radiomic features for more objective risk assessment of liver transplantation for HCC patients. METHODS:Patients who received a LT for HCC between 2008-2019 were eligible for inclusion in the analysis. All patients with post-LT recurrence were included, and those without recurrence were randomly selected for inclusion in the deep learning model. Pre- and post-transplant magnetic resonance imaging (MRI) scans and reports were compressed using CapsNet networks and natural language processing, respectively, as input for a multiple feature radial basis function network. We applied a histological image analysis algorithm to detect pathologic areas of interest from explant tissue of patients who recurred. The multilayer perceptron was designed as a feed-forward, supervised neural network topology, producing the final assessment of recurrence risk. We used area under the curve (AUC) and F-1 score to assess the predictability of different network combinations. RESULTS:A total of 109 patients were included (87 in the training group, 22 in the testing group), of which 20 were positive for cancer recurrence.
Seven models (AUC; F-1 score) were generated, including clinical features only (0.55; 0.52), MRI only (0.64; 0.61), pathological images only (0.64; 0.61), MRI plus pathology (0.68; 0.65), MRI plus clinical (0.78; 0.75), pathology plus clinical (0.77; 0.73), and a combination of clinical, MRI, and pathology features (0.87; 0.84). The final combined model showed 80% recall and 89% precision. The total accuracy of the implemented model was 82%. CONCLUSION:We validated that the deep learning model combining clinical features and multi-scale histopathologic and radiomic image features can be used to discover risk factors for recurrence beyond tumor size and biomarker analysis. Such a predictive, convergent AI model has the potential to alter the LT allocation system for HCC patients and expand the transplantation treatment option to patients with HCC of the highest tumor burden.
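As a consistency check on the numbers above, the F-1 score is the harmonic mean of precision and recall, so the reported 89% precision and 80% recall for the combined model do land near its reported F-1 of 0.84:

```python
def f1_score(precision, recall):
    """F-1 as the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reported operating point of the combined model: 89% precision, 80% recall
f1 = f1_score(0.89, 0.80)  # close to the reported combined-model F-1 of 0.84
```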
Extending 2-D Convolutional Neural Networks to 3-D for Advancing Deep Learning Cancer Classification With Application to MRI Liver Tumor Differentiation.
Trivizakis Eleftherios,Manikis Georgios C,Nikiforaki Katerina,Drevelegas Konstantinos,Constantinides Manos,Drevelegas Antonios,Marias Kostas
IEEE journal of biomedical and health informatics
Deep learning (DL) architectures have opened new horizons in medical image analysis, attaining unprecedented performance in tasks such as tissue classification and segmentation as well as prediction of several clinical outcomes. In this paper, we propose and evaluate a novel three-dimensional (3-D) convolutional neural network (CNN) designed for tissue classification in medical imaging and applied for discriminating between primary and metastatic liver tumors from diffusion weighted MRI (DW-MRI) data. The proposed network consists of four consecutive strided 3-D convolutional layers with 3 × 3 × 3 kernel size and rectified linear unit (ReLU) as activation function, followed by a fully connected layer with 2048 neurons and a Softmax layer for binary classification. A dataset comprising 130 DW-MRI scans was used for the training and validation of the network. To the best of our knowledge this is the first DL solution for the specific clinical problem and the first 3-D CNN for cancer classification operating directly on whole 3-D tomographic data without the need of any preprocessing step such as region cropping, annotating, or detecting regions of interest. The classification results, 83% accuracy (3-D) versus 69.6% and 65.2% (2-D), demonstrated a significant improvement over two 2-D CNNs of different architectures designed for the same clinical problem on the same dataset. These results suggest that the proposed 3-D CNN architecture can bring significant benefit in DW-MRI liver discrimination and potentially, in numerous other tissue classification problems based on tomographic data, especially in size-limited, disease-specific clinical datasets.
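The four strided 3-D convolutions described above shrink the input volume layer by layer. A sketch of the output-size arithmetic, with the standard convolution formula; the input size and the stride of 2 are assumptions for illustration, as the abstract does not state them:

```python
def conv3d_out_shape(shape, kernel=3, stride=2, padding=0):
    """Spatial output size of one strided 3-D convolution layer, using the
    standard floor((d + 2p - k) / s) + 1 rule independently per axis."""
    return tuple((d + 2 * padding - kernel) // stride + 1 for d in shape)

# Hypothetical whole-volume input, four strided 3x3x3 layers as in the paper
shape = (32, 128, 128)
for _ in range(4):
    shape = conv3d_out_shape(shape, kernel=3, stride=2)
```

Strided convolutions thus stand in for pooling layers, halving (roughly) each spatial dimension per layer.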
Deep learning for liver tumor diagnosis part I: development of a convolutional neural network classifier for multi-phasic MRI.
Hamm Charlie A,Wang Clinton J,Savic Lynn J,Ferrante Marc,Schobert Isabel,Schlachter Todd,Lin MingDe,Duncan James S,Weinreb Jeffrey C,Chapiro Julius,Letzen Brian
OBJECTIVES:To develop and validate a proof-of-concept convolutional neural network (CNN)-based deep learning system (DLS) that classifies common hepatic lesions on multi-phasic MRI. METHODS:A custom CNN was engineered by iteratively optimizing the network architecture and training cases, finally consisting of three convolutional layers with associated rectified linear units, two maximum pooling layers, and two fully connected layers. Four hundred ninety-four hepatic lesions with typical imaging features from six categories were utilized, divided into training (n = 434) and test (n = 60) sets. Established augmentation techniques were used to generate 43,400 training samples. An Adam optimizer was used for training. Monte Carlo cross-validation was performed. After model engineering was finalized, classification accuracy for the final CNN was compared with two board-certified radiologists on an identical unseen test set. RESULTS:The DLS demonstrated a 92% accuracy, a 92% sensitivity (Sn), and a 98% specificity (Sp). Test set performance in a single run of random unseen cases showed an average 90% Sn and 98% Sp. The average Sn/Sp on these same cases for radiologists was 82.5%/96.5%. Results showed a 90% Sn for classifying hepatocellular carcinoma (HCC) compared to 60%/70% for radiologists. For HCC classification, the true positive and false positive rates were 93.5% and 1.6%, respectively, with a receiver operating characteristic area under the curve of 0.992. Computation time per lesion was 5.6 ms. CONCLUSION:This preliminary deep learning study demonstrated feasibility for classifying lesions with typical imaging features from six common hepatic lesion types, motivating future studies with larger multi-institutional datasets and more complex imaging appearances. KEY POINTS:• Deep learning demonstrates high performance in the classification of liver lesions on volumetric multi-phasic MRI, showing potential as an eventual decision-support tool for radiologists. 
• Demonstrating a classification runtime of a few milliseconds per lesion, a deep learning system could be incorporated into the clinical workflow in a time-efficient manner.
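The Monte Carlo cross-validation mentioned above repeatedly draws random train/test splits instead of fixed folds, so a case can appear in several test sets. A stdlib sketch; the 434/60 split sizes echo the paper's division, while the run count is arbitrary:

```python
import random

def monte_carlo_splits(n_cases, test_size, n_runs, seed=0):
    """Monte Carlo cross-validation: repeated random train/test splits.
    Unlike k-fold CV, test sets across runs may overlap."""
    rng = random.Random(seed)
    indices = list(range(n_cases))
    for _ in range(n_runs):
        rng.shuffle(indices)  # fresh random partition each run
        yield sorted(indices[test_size:]), sorted(indices[:test_size])

# e.g. 494 lesions, 60 held out per run, 20 repeated runs (run count assumed)
splits = list(monte_carlo_splits(494, 60, 20))
```

Averaging the metric over runs then gives the cross-validated estimate, at the cost of test sets that are not mutually exclusive.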