Automated classification of gastric neoplasms in endoscopic images using a convolutional neural network. Cho Bum-Joo,Bang Chang Seok,Park Se Woo,Yang Young Joo,Seo Seung In,Lim Hyun,Shin Woon Geon,Hong Ji Taek,Yoo Yong Tak,Hong Seok Hwan,Choi Jae Ho,Lee Jae Jun,Baik Gwang Ho Endoscopy BACKGROUND:Visual inspection, lesion detection, and differentiation between malignant and benign features are key aspects of an endoscopist's role. The use of machine learning for the recognition and differentiation of images has been increasingly adopted in clinical practice. This study aimed to establish convolutional neural network (CNN) models to automatically classify gastric neoplasms based on endoscopic images. METHODS:Endoscopic white-light images of pathologically confirmed gastric lesions were collected and classified into five categories: advanced gastric cancer, early gastric cancer, high grade dysplasia, low grade dysplasia, and non-neoplasm. Three pretrained CNN models were fine-tuned using a training dataset. The classifying performance of the models was evaluated using a test dataset and a prospective validation dataset. RESULTS:A total of 5017 images were collected from 1269 patients, among which 812 images from 212 patients were used as the test dataset. An additional 200 images from 200 patients were collected and used for prospective validation. For the five-category classification, the weighted average accuracy of the Inception-Resnet-v2 model reached 84.6 %. The mean areas under the curve (AUCs) of the model for differentiating gastric cancer and neoplasm were 0.877 and 0.927, respectively. In prospective validation, the Inception-Resnet-v2 model showed lower performance compared with the endoscopist with the best performance (five-category accuracy 76.4 % vs. 87.6 %; cancer 76.0 % vs. 97.5 %; neoplasm 73.5 % vs. 96.5 %; P < 0.001).
However, there was no statistically significant difference between the Inception-Resnet-v2 model and the endoscopist with the worst performance in the differentiation of gastric cancer (accuracy 76.0 % vs. 82.0 %) and neoplasm (AUC 0.776 vs. 0.865). CONCLUSION:The evaluated deep-learning models have the potential for clinical application in classifying gastric cancer or neoplasm on endoscopic white-light images. 10.1055/a-0981-6133
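The "weighted average accuracy" reported for the five-category task is the per-class accuracy weighted by each class's share of the test images. A minimal sketch of that computation; the class counts and per-class accuracies below are hypothetical illustrations, since the abstract does not give the per-class breakdown:

```python
# Weighted average accuracy: per-class accuracies weighted by class counts.
# Counts and accuracies are HYPOTHETICAL, not figures from the study.
def weighted_average_accuracy(counts, accuracies):
    total = sum(counts)
    return sum(c * a for c, a in zip(counts, accuracies)) / total

# Five categories: advanced cancer, early cancer, high grade dysplasia,
# low grade dysplasia, non-neoplasm (hypothetical split of 812 test images)
counts = [150, 180, 90, 120, 272]
accuracies = [0.90, 0.82, 0.70, 0.78, 0.92]

print(round(weighted_average_accuracy(counts, accuracies), 3))
```

Weighting by class size keeps a large, easy class (such as non-neoplasm) from masking poor performance on rare categories less than a plain macro average would, while still reflecting the test-set composition.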
A deep neural network improves endoscopic detection of early gastric cancer without blind spots. Wu Lianlian,Zhou Wei,Wan Xinyue,Zhang Jun,Shen Lei,Hu Shan,Ding Qianshan,Mu Ganggang,Yin Anning,Huang Xu,Liu Jun,Jiang Xiaoda,Wang Zhengqiang,Deng Yunchao,Liu Mei,Lin Rong,Ling Tingsheng,Li Peng,Wu Qi,Jin Peng,Chen Jie,Yu Honggang Endoscopy BACKGROUND:Gastric cancer is the third most lethal malignancy worldwide. A novel deep convolutional neural network (DCNN) to perform visual tasks has recently been developed. The aim of this study was to build a system using the DCNN to detect early gastric cancer (EGC) without blind spots during esophagogastroduodenoscopy (EGD). METHODS:3170 gastric cancer and 5981 benign images were collected to train the DCNN to detect EGC. A total of 24549 images from different parts of the stomach were collected to train the DCNN to monitor blind spots. Class activation maps were developed to automatically cover suspicious cancerous regions. A grid model for the stomach was used to indicate the existence of blind spots in unprocessed EGD videos. RESULTS:The DCNN identified EGC from non-malignancy with an accuracy of 92.5 %, a sensitivity of 94.0 %, a specificity of 91.0 %, a positive predictive value of 91.3 %, and a negative predictive value of 93.8 %, outperforming all levels of endoscopists. In the task of classifying gastric locations into 10 or 26 parts, the DCNN achieved an accuracy of 90 % or 65.9 %, respectively, on a par with the performance of experts. In unprocessed EGD videos, the DCNN automatically detected EGC and monitored blind spots in real time. CONCLUSIONS:We developed a system based on a DCNN that detects EGC and recognizes gastric locations more accurately than endoscopists, proactively tracks suspicious cancerous lesions, and monitors blind spots during EGD. 10.1055/a-0855-3532
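The class activation maps used above to highlight suspicious regions are, in the standard formulation, a weighted sum of the final convolutional feature maps, with weights taken from the classifier layer for the target class. A minimal NumPy sketch; the array shapes are illustrative, not those of the study's network:

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Standard CAM: weighted sum of the last conv layer's feature maps.

    feature_maps : (K, H, W) activations from the final conv layer
    class_weights: (K,) classifier weights for the target class
    Returns an (H, W) heatmap rescaled to [0, 1].
    """
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # -> (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Illustrative shapes: 512 feature maps of size 7x7 (assumed, not the paper's)
rng = np.random.default_rng(0)
fmaps = rng.random((512, 7, 7))
weights = rng.random(512)
heatmap = class_activation_map(fmaps, weights)
print(heatmap.shape)  # (7, 7)
```

In practice the coarse heatmap is upsampled to the input image resolution and thresholded to outline the suspicious region.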
Artificial intelligence-based diagnostic system classifying gastric cancers and ulcers: comparison between the original and newly developed systems. Endoscopy BACKGROUND:We previously reported for the first time the usefulness of artificial intelligence (AI) systems in detecting gastric cancers. However, the "original convolutional neural network (O-CNN)" employed in the previous study had a relatively low positive predictive value (PPV). Therefore, we aimed to develop an advanced AI-based diagnostic system and evaluate its applicability for the classification of gastric cancers and gastric ulcers. METHODS:We constructed an "advanced CNN" (A-CNN) by adding a new training dataset (4453 gastric ulcer images from 1172 lesions) to the O-CNN, which had been trained using 13 584 gastric cancer and 373 gastric ulcer images. The diagnostic performance of the A-CNN in terms of classifying gastric cancers and ulcers was retrospectively evaluated using an independent validation dataset (739 images from 100 early gastric cancers and 720 images from 120 gastric ulcers) and compared with that of the O-CNN by estimating the overall classification accuracy. RESULTS:The sensitivity, specificity, and PPV of the A-CNN in classifying gastric cancer at the lesion level were 99.0 % (95 % confidence interval [CI] 94.6 %-100 %), 93.3 % (95 %CI 87.3 %-97.1 %), and 92.5 % (95 %CI 85.8 %-96.7 %), respectively, and for classifying gastric ulcers were 93.3 % (95 %CI 87.3 %-97.1 %), 99.0 % (95 %CI 94.6 %-100 %), and 99.1 % (95 %CI 95.2 %-100 %), respectively. At the lesion level, the overall accuracies of the O- and A-CNN for classifying gastric cancers and gastric ulcers were 45.9 % (gastric cancers 100 %, gastric ulcers 0.8 %) and 95.9 % (gastric cancers 99.0 %, gastric ulcers 93.3 %), respectively. CONCLUSION:The newly developed AI-based diagnostic system can effectively classify gastric cancers and gastric ulcers. 10.1055/a-1194-8771
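The lesion-level figures reported for the A-CNN are mutually consistent: with 100 cancers and 120 ulcers, a 99.0 % sensitivity and 93.3 % specificity imply 99 true positives, 1 false negative, 112 true negatives, and 8 false positives, which reproduce the 92.5 % PPV. A quick arithmetic check:

```python
# Confusion counts reconstructed from the reported A-CNN lesion-level
# figures (100 early gastric cancers, 120 gastric ulcers).
tp, fn, tn, fp = 99, 1, 112, 8

sensitivity = tp / (tp + fn)     # 99/100
specificity = tn / (tn + fp)     # 112/120
ppv         = tp / (tp + fp)     # 99/107

print(f"sens={sensitivity:.1%} spec={specificity:.1%} ppv={ppv:.1%}")
# → sens=99.0% spec=93.3% ppv=92.5%
```

The symmetry in the abstract (ulcer sensitivity equals cancer specificity, and vice versa) follows directly from this being a two-class problem: each class's false negatives are the other class's false positives.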
Detecting early gastric cancer: Comparison between the diagnostic ability of convolutional neural networks and endoscopists. Digestive endoscopy : official journal of the Japan Gastroenterological Endoscopy Society OBJECTIVES:Detecting early gastric cancer is difficult, and it may even be overlooked by experienced endoscopists. Recently, artificial intelligence based on deep learning through convolutional neural networks (CNNs) has enabled significant advancements in the field of gastroenterology. However, it remains unclear whether a CNN can outperform endoscopists. In this study, we evaluated whether the performance of a CNN in detecting early gastric cancer is better than that of endoscopists. METHODS:The CNN was constructed using 13,584 endoscopic images from 2639 lesions of gastric cancer. Subsequently, its diagnostic ability was compared to that of 67 endoscopists using an independent test dataset (2940 images from 140 cases). RESULTS:The average diagnostic times for analyzing 2940 test endoscopic images by the CNN and endoscopists were 45.5 ± 1.8 s and 173.0 ± 66.0 min, respectively. The sensitivity, specificity, and positive and negative predictive values for the CNN were 58.4%, 87.3%, 26.0%, and 96.5%, respectively. These values for the 67 endoscopists were 31.9%, 97.2%, 46.2%, and 94.9%, respectively. The CNN had a significantly higher sensitivity than the endoscopists (by 26.5%; 95% confidence interval, 14.9-32.5%). CONCLUSION:The CNN detected more early gastric cancer cases in a shorter time than the endoscopists. The CNN needs further training to achieve higher diagnostic accuracy. However, a diagnostic support tool for gastric cancer using a CNN will be realized in the near future. 10.1111/den.13688
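The CNN's low PPV (26.0 %) alongside its high NPV (96.5 %) is largely a prevalence effect: by Bayes' rule, PPV falls sharply as disease prevalence drops even when sensitivity and specificity are held fixed. A sketch using the CNN's reported sensitivity and specificity; the prevalence values are assumptions for illustration, since the abstract does not state the image-level prevalence:

```python
# PPV and NPV as a function of prevalence (Bayes' rule), holding the CNN's
# reported sensitivity (58.4%) and specificity (87.3%) fixed.
# The prevalence values below are ASSUMED for illustration.
def ppv_npv(sens, spec, prevalence):
    p = prevalence
    ppv = sens * p / (sens * p + (1 - spec) * (1 - p))
    npv = spec * (1 - p) / (spec * (1 - p) + (1 - sens) * p)
    return ppv, npv

for p in (0.05, 0.10, 0.30):
    ppv, npv = ppv_npv(0.584, 0.873, p)
    print(f"prevalence={p:.0%}: PPV={ppv:.1%}, NPV={npv:.1%}")
```

This is why a screening-oriented tool can be clinically useful despite a modest PPV: in a low-prevalence test set most positive calls are false alarms, but a high NPV means negative calls are rarely wrong.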
Convolutional neural network for the diagnosis of early gastric cancer based on magnifying narrow band imaging. Li Lan,Chen Yishu,Shen Zhe,Zhang Xuequn,Sang Jianzhong,Ding Yong,Yang Xiaoyun,Li Jun,Chen Ming,Jin Chaohui,Chen Chunlei,Yu Chaohui Gastric cancer : official journal of the International Gastric Cancer Association and the Japanese Gastric Cancer Association BACKGROUND:Magnifying endoscopy with narrow band imaging (M-NBI) has been applied to examine early gastric cancer by observing the microvascular architecture and microsurface structure of gastric mucosal lesions. However, the diagnostic efficacy of non-experts in differentiating early gastric cancer from non-cancerous lesions by M-NBI remained far from satisfactory. In this study, we developed a new system based on a convolutional neural network (CNN) to analyze gastric mucosal lesions observed by M-NBI. METHODS:A total of 386 images of non-cancerous lesions and 1702 images of early gastric cancer were collected to train and establish a CNN model (Inception-v3). Then a total of 341 endoscopic images (171 non-cancerous lesions and 170 early gastric cancers) were selected to evaluate the diagnostic capabilities of the CNN and endoscopists. Primary outcome measures included diagnostic accuracy, sensitivity, specificity, positive predictive value, and negative predictive value. RESULTS:The sensitivity, specificity, and accuracy of the CNN system in the diagnosis of early gastric cancer were 91.18%, 90.64%, and 90.91%, respectively. No significant difference was observed in the specificity and accuracy of diagnosis between the CNN and the experts. However, the diagnostic sensitivity of the CNN was significantly higher than that of the experts. Furthermore, the diagnostic sensitivity, specificity, and accuracy of the CNN were significantly higher than those of the non-experts. CONCLUSIONS:Our CNN system showed high accuracy, sensitivity, and specificity in the diagnosis of early gastric cancer.
It is anticipated that more progress will be made in optimization of the CNN diagnostic system and further development of artificial intelligence in the medical field. 10.1007/s10120-019-00992-2
Application of artificial intelligence using a convolutional neural network for diagnosis of early gastric cancer based on magnifying endoscopy with narrow-band imaging. Ueyama Hiroya,Kato Yusuke,Akazawa Yoichi,Yatagai Noboru,Komori Hiroyuki,Takeda Tsutomu,Matsumoto Kohei,Ueda Kumiko,Matsumoto Kenshi,Hojo Mariko,Yao Takashi,Nagahara Akihito,Tada Tomohiro Journal of gastroenterology and hepatology BACKGROUND AND AIM:Magnifying endoscopy with narrow-band imaging (ME-NBI) has made a huge contribution to clinical practice. However, acquiring skill at ME-NBI diagnosis of early gastric cancer (EGC) requires considerable expertise and experience. Recently, artificial intelligence (AI), using deep learning and a convolutional neural network (CNN), has made remarkable progress in various medical fields. Here, we constructed an AI-assisted CNN computer-aided diagnosis (CAD) system, based on ME-NBI images, to diagnose EGC and evaluated the diagnostic accuracy of the AI-assisted CNN-CAD system. METHODS:The AI-assisted CNN-CAD system (ResNet50) was trained and validated on a dataset of 5574 ME-NBI images (3797 EGCs, 1777 non-cancerous mucosa and lesions). To evaluate the diagnostic accuracy, a separate test dataset of 2300 ME-NBI images (1430 EGCs, 870 non-cancerous mucosa and lesions) was assessed using the AI-assisted CNN-CAD system. RESULTS:The AI-assisted CNN-CAD system required 60 s to analyze 2300 test images. The overall accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of the CNN were 98.7%, 98%, 100%, 100%, and 96.8%, respectively. All misdiagnosed images of EGCs were of low-quality or of superficially depressed and intestinal-type intramucosal cancers that were difficult to distinguish from gastritis, even by experienced endoscopists. CONCLUSIONS:The AI-assisted CNN-CAD system for ME-NBI diagnosis of EGC could process many stored ME-NBI images in a short period of time and had a high diagnostic ability. 
This system may have great potential for future application to real clinical settings, which could facilitate ME-NBI diagnosis of EGC in practice. 10.1111/jgh.15190
Application of artificial intelligence using a convolutional neural network for detecting gastric cancer in endoscopic images. Hirasawa Toshiaki,Aoyama Kazuharu,Tanimoto Tetsuya,Ishihara Soichiro,Shichijo Satoki,Ozawa Tsuyoshi,Ohnishi Tatsuya,Fujishiro Mitsuhiro,Matsuo Keigo,Fujisaki Junko,Tada Tomohiro Gastric cancer : official journal of the International Gastric Cancer Association and the Japanese Gastric Cancer Association BACKGROUND:Image recognition using artificial intelligence with deep learning through convolutional neural networks (CNNs) has dramatically improved and been increasingly applied to medical fields for diagnostic imaging. We developed a CNN that can automatically detect gastric cancer in endoscopic images. METHODS:A CNN-based diagnostic system was constructed based on Single Shot MultiBox Detector architecture and trained using 13,584 endoscopic images of gastric cancer. To evaluate the diagnostic accuracy, an independent test set of 2296 stomach images collected from 69 consecutive patients with 77 gastric cancer lesions was applied to the constructed CNN. RESULTS:The CNN required 47 s to analyze 2296 test images. The CNN correctly diagnosed 71 of 77 gastric cancer lesions with an overall sensitivity of 92.2%, and 161 non-cancerous lesions were detected as gastric cancer, resulting in a positive predictive value of 30.6%. Seventy of the 71 lesions (98.6%) with a diameter of 6 mm or more as well as all invasive cancers were correctly detected. All missed lesions were superficially depressed and differentiated-type intramucosal cancers that were difficult to distinguish from gastritis even for experienced endoscopists. Nearly half of the false-positive lesions were gastritis with changes in color tone or an irregular mucosal surface. CONCLUSION:The constructed CNN system for detecting gastric cancer could process numerous stored endoscopic images in a very short time with a clinically relevant diagnostic ability. 
It may be well applicable to daily clinical practice to reduce the burden of endoscopists. 10.1007/s10120-018-0793-2
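The lesion-level figures in this last study can be checked directly from the counts given: 71 of 77 cancers detected yields the 92.2 % sensitivity, and with 161 non-cancerous lesions flagged as cancer, the PPV is 71/232 ≈ 30.6 %:

```python
# Lesion-level check of the reported SSD-based CNN figures:
# 71 of 77 gastric cancers detected, 161 false-positive detections.
detected, total_cancers, false_positives = 71, 77, 161

sensitivity = detected / total_cancers           # 71/77
ppv = detected / (detected + false_positives)    # 71/232

print(f"sensitivity={sensitivity:.1%}, PPV={ppv:.1%}")
# → sensitivity=92.2%, PPV=30.6%
```

As with the comparative-trial abstract above, high sensitivity with low PPV fits the screening-assistant role the authors propose: the endoscopist reviews the flagged regions, and nearly half of the false positives were gastritis mimicking cancer even to experts.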