    Automatic detection of different types of small-bowel lesions on capsule endoscopy images using a newly developed deep convolutional neural network. Otani Keita,Nakada Ayako,Kurose Yusuke,Niikura Ryota,Yamada Atsuo,Aoki Tomonori,Nakanishi Hiroyoshi,Doyama Hisashi,Hasatani Kenkei,Sumiyoshi Tetsuya,Kitsuregawa Masaru,Harada Tatsuya,Koike Kazuhiko Endoscopy BACKGROUND : Previous computer-aided detection systems for diagnosing lesions in images from wireless capsule endoscopy (WCE) have been limited to a single type of small-bowel lesion. We developed a new artificial intelligence (AI) system able to diagnose multiple types of lesions, including erosions and ulcers, vascular lesions, and tumors. METHODS : We trained the deep neural network system RetinaNet on a data set of 167 patients, which consisted of images of 398 erosions and ulcers, 538 vascular lesions, 4590 tumors, and 34 437 normal tissues. We calculated the mean area under the receiver operating characteristic curve (AUC) for each lesion type using five-fold stratified cross-validation. RESULTS : The mean age of the patients was 63.6 years; 92 were men. The mean AUCs of the AI system were 0.996 (95 %CI 0.992 - 0.999) for erosions and ulcers, 0.950 (95 %CI 0.923 - 0.978) for vascular lesions, and 0.950 (95 %CI 0.913 - 0.988) for tumors. CONCLUSION : We developed and validated a new computer-aided diagnosis system for multiclass diagnosis of small-bowel lesions in WCE images. 10.1055/a-1167-8157
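    A minimal sketch of the evaluation protocol described above (five-fold stratified cross-validation with a mean AUC per lesion type) is given below. The train_and_score callback is a hypothetical stand-in for fitting the detector (RetinaNet in the paper) on a training fold and returning per-class image-level probability scores; it is an assumption, not part of the original work.

        import numpy as np
        from sklearn.model_selection import StratifiedKFold
        from sklearn.metrics import roc_auc_score

        def mean_auc_per_lesion_type(images, labels, train_and_score, n_splits=5, seed=0):
            """images, labels: NumPy arrays; labels hold the class per image (0 = normal, 1..K = lesion types)."""
            skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
            fold_aucs = {}
            for train_idx, test_idx in skf.split(images, labels):
                # train_and_score fits a detector on the training fold and returns
                # an (n_test, K) array of per-lesion-type probability scores.
                scores = train_and_score(images[train_idx], labels[train_idx], images[test_idx])
                for k in range(scores.shape[1]):
                    y_true = (labels[test_idx] == k + 1).astype(int)  # one-vs-rest per lesion type
                    if 0 < y_true.sum() < len(y_true):                # skip folds without both classes
                        fold_aucs.setdefault(k, []).append(roc_auc_score(y_true, scores[:, k]))
            return {k: float(np.mean(v)) for k, v in fold_aucs.items()}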
    Hookworm Detection in Wireless Capsule Endoscopy Images With Deep Learning. He Jun-Yan,Wu Xiao,Jiang Yu-Gang,Peng Qiang,Jain Ramesh IEEE transactions on image processing : a publication of the IEEE Signal Processing Society As one of the most common human helminths, hookworm is a leading cause of maternal and child morbidity, which seriously threatens human health. Recently, wireless capsule endoscopy (WCE) has been applied to automatic hookworm detection. Unfortunately, it remains a challenging task. In recent years, deep convolutional neural networks (CNNs) have demonstrated impressive performance in various image and video analysis tasks. In this paper, a novel deep hookworm detection framework is proposed for WCE images, which simultaneously models the visual appearance and tubular patterns of hookworms. This is the first deep learning framework specifically designed for hookworm detection in WCE images. Two CNNs, namely an edge extraction network and a hookworm classification network, are seamlessly integrated into the proposed framework, which avoids edge feature caching and speeds up classification. Two edge pooling layers are introduced to integrate the tubular regions induced from the edge extraction network with the feature maps from the hookworm classification network, leading to enhanced feature maps that emphasize the tubular regions. Experiments conducted on one of the largest WCE image datasets demonstrate the effectiveness of the proposed hookworm detection framework, which significantly outperforms state-of-the-art approaches. The high sensitivity and accuracy of the proposed method in detecting hookworms show its potential for clinical application. 10.1109/TIP.2018.2801119
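    The edge-pooling idea can be read, roughly, as re-weighting classification feature maps with a downsampled tubular/edge response so that tubular regions are emphasized. The PyTorch sketch below is an illustrative interpretation of the abstract, not the authors' exact layer definition.

        import torch
        import torch.nn.functional as F

        def edge_pool(features: torch.Tensor, edge_map: torch.Tensor) -> torch.Tensor:
            """features: (N, C, H, W) maps from the hookworm classification network.
            edge_map: (N, 1, H0, W0) tubular-region response from the edge extraction network."""
            # Max-pool the edge map down to the feature-map resolution so that a strong
            # tubular response anywhere in a cell survives the downsampling.
            pooled = F.adaptive_max_pool2d(edge_map, features.shape[-2:])
            # Re-weight the features, keeping the originals as a residual term.
            return features * (1.0 + pooled)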
    A neural network algorithm for detection of GI angiectasia during small-bowel capsule endoscopy. Leenhardt Romain,Vasseur Pauline,Li Cynthia,Saurin Jean Christophe,Rahmi Gabriel,Cholet Franck,Becq Aymeric,Marteau Philippe,Histace Aymeric,Dray Xavier Gastrointestinal endoscopy BACKGROUND AND AIMS:GI angiectasia (GIA) is the most common small-bowel (SB) vascular lesion, with an inherent risk of bleeding. SB capsule endoscopy (SB-CE) is the currently accepted diagnostic procedure. The aim of this study was to develop a computer-assisted diagnosis tool for the detection of GIA. METHODS:Deidentified SB-CE still frames featuring annotated typical GIA and normal control still frames were selected from a database. A semantic segmentation approach associated with a convolutional neural network (CNN) was used for deep-feature extraction and classification. Two datasets of still frames were created and used for machine learning and for algorithm testing. RESULTS:The GIA detection algorithm yielded a sensitivity of 100%, a specificity of 96%, a positive predictive value of 96%, and a negative predictive value of 100%. Reproducibility was optimal. The reading process for an entire SB-CE video would take 39 minutes. CONCLUSIONS:The developed CNN-based algorithm had high diagnostic performance, allowing detection of GIA in SB-CE still frames. This study paves the way for future automated CNN-based SB-CE reading software. 10.1016/j.gie.2018.06.036
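    The per-frame metrics reported above (sensitivity, specificity, positive and negative predictive values) follow directly from a 2x2 confusion matrix; a small helper is sketched below, assuming y_true and y_pred are 0/1 arrays over the test still frames.

        import numpy as np

        def diagnostic_metrics(y_true, y_pred):
            y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
            tp = int(np.sum((y_true == 1) & (y_pred == 1)))
            tn = int(np.sum((y_true == 0) & (y_pred == 0)))
            fp = int(np.sum((y_true == 0) & (y_pred == 1)))
            fn = int(np.sum((y_true == 1) & (y_pred == 0)))
            return {
                "sensitivity": tp / (tp + fn),  # true-positive rate on GIA frames
                "specificity": tn / (tn + fp),  # true-negative rate on normal frames
                "ppv": tp / (tp + fp),
                "npv": tn / (tn + fn),
            }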
    Artificial intelligence using a convolutional neural network for automatic detection of small-bowel angioectasia in capsule endoscopy images. Tsuboi Akiyoshi,Oka Shiro,Aoyama Kazuharu,Saito Hiroaki,Aoki Tomonori,Yamada Atsuo,Matsuda Tomoki,Fujishiro Mitsuhiro,Ishihara Soichiro,Nakahori Masato,Koike Kazuhiko,Tanaka Shinji,Tada Tomohiro Digestive endoscopy : official journal of the Japan Gastroenterological Endoscopy Society BACKGROUND AND AIM:Although small-bowel angioectasia is reported as the most common cause of bleeding in patients with obscure gastrointestinal bleeding and is frequently diagnosed by capsule endoscopy (CE), a computer-aided detection method has not been established. We developed an artificial intelligence system with deep learning that can automatically detect small-bowel angioectasia in CE images. METHODS:We trained a deep convolutional neural network (CNN) system based on the Single Shot Multibox Detector using 2237 CE images of angioectasia. We assessed its diagnostic accuracy by calculating the area under the receiver operating characteristic curve (ROC-AUC), sensitivity, specificity, positive predictive value, and negative predictive value using an independent test set of 10 488 small-bowel images, including 488 images of small-bowel angioectasia. RESULTS:The ROC-AUC for detecting angioectasia was 0.998. The sensitivity, specificity, positive predictive value, and negative predictive value of the CNN were 98.8%, 98.4%, 75.4%, and 99.9%, respectively, at a cut-off value of 0.36 for the probability score. CONCLUSIONS:We developed and validated a new system based on a CNN to automatically detect angioectasia in CE images. This may be readily applicable to daily clinical practice to reduce the burden on physicians as well as to reduce oversights. 10.1111/den.13507
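    The abstract does not say how per-box detector outputs are aggregated into the per-image probability score that the 0.36 cut-off is applied to; taking the maximum box confidence per image is one common convention, assumed here purely for illustration.

        from sklearn.metrics import roc_auc_score

        def image_scores(detections_per_image):
            """detections_per_image: list of per-image lists of box confidences (possibly empty)."""
            return [max(confs) if confs else 0.0 for confs in detections_per_image]

        def evaluate(detections_per_image, y_true, cutoff=0.36):
            scores = image_scores(detections_per_image)
            preds = [int(s >= cutoff) for s in scores]  # positive if the score reaches the cut-off
            return roc_auc_score(y_true, scores), preds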
    Automatic detection of erosions and ulcerations in wireless capsule endoscopy images based on a deep convolutional neural network. Aoki Tomonori,Yamada Atsuo,Aoyama Kazuharu,Saito Hiroaki,Tsuboi Akiyoshi,Nakada Ayako,Niikura Ryota,Fujishiro Mitsuhiro,Oka Shiro,Ishihara Soichiro,Matsuda Tomoki,Tanaka Shinji,Koike Kazuhiko,Tada Tomohiro Gastrointestinal endoscopy BACKGROUND AND AIMS:Although erosions and ulcerations are the most common small-bowel abnormalities found on wireless capsule endoscopy (WCE), a computer-aided detection method has not been established. We aimed to develop an artificial intelligence system with deep learning to automatically detect erosions and ulcerations in WCE images. METHODS:We trained a deep convolutional neural network (CNN) system based on a Single Shot Multibox Detector, using 5360 WCE images of erosions and ulcerations. We assessed its performance by calculating the area under the receiver operating characteristic curve and its sensitivity, specificity, and accuracy using an independent test set of 10,440 small-bowel images including 440 images of erosions and ulcerations. RESULTS:The trained CNN required 233 seconds to evaluate 10,440 test images. The area under the curve for the detection of erosions and ulcerations was 0.958 (95% confidence interval [CI], 0.947-0.968). The sensitivity, specificity, and accuracy of the CNN were 88.2% (95% CI, 84.8%-91.0%), 90.9% (95% CI, 90.3%-91.4%), and 90.8% (95% CI, 90.2%-91.3%), respectively, at a cut-off value of 0.481 for the probability score. CONCLUSIONS:We developed and validated a new system based on CNN to automatically detect erosions and ulcerations in WCE images. This may be a crucial step in the development of daily-use diagnostic software for WCE images to help reduce oversights and the burden on physicians. 10.1016/j.gie.2018.10.027
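    The confidence intervals quoted above for sensitivity, specificity, and accuracy are binomial proportion intervals; the abstract does not name the method used, so the Wilson score interval below is one plausible choice shown for illustration.

        import math

        def wilson_ci(successes: int, n: int, z: float = 1.959964):
            """95% Wilson score interval for a binomial proportion."""
            p = successes / n
            denom = 1 + z ** 2 / n
            centre = (p + z ** 2 / (2 * n)) / denom
            half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
            return centre - half, centre + half

        # e.g. sensitivity of 88.2% on the 440 lesion images:
        # wilson_ci(round(0.882 * 440), 440) -> roughly (0.848, 0.909)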
    Computer-aided detection of small intestinal ulcer and erosion in wireless capsule endoscopy images. Fan Shanhui,Xu Lanmeng,Fan Yihong,Wei Kaihua,Li Lihua Physics in medicine and biology A novel computer-aided detection method based on a deep learning framework was proposed to detect small intestinal ulcers and erosions in wireless capsule endoscopy (WCE) images. To the best of our knowledge, this is the first time that a deep learning framework has been applied to automated ulcer and erosion detection in WCE images. Compared with traditional detection methods, a deep learning framework can produce image features directly from the data and increase recognition accuracy as well as efficiency, especially for big data. The developed method included image cropping and image compression. The AlexNet convolutional neural network was trained on a database of tens of thousands of WCE images to differentiate lesions from normal tissue. Ulcer and erosion detection reached a high accuracy of 95.16% and 95.34%, sensitivity of 96.80% and 93.67%, and specificity of 94.79% and 95.98%, respectively. The area under the receiver operating characteristic curve was over 0.98 for both networks. The promising results indicate that the proposed method has the potential to work in tandem with doctors to efficiently detect intestinal ulcers and erosions. 10.1088/1361-6560/aad51c
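    A hedged sketch of the general setup described above: an AlexNet classifier separating lesion from normal WCE frames, with resizing and cropping as preprocessing. The abstract does not say whether the network was trained from scratch or fine-tuned; ImageNet initialization is assumed here only for illustration.

        import torch.nn as nn
        from torchvision import models, transforms

        preprocess = transforms.Compose([
            transforms.Resize(256),       # stand-in for the paper's cropping/compression step
            transforms.CenterCrop(224),
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
        ])

        model = models.alexnet(weights="IMAGENET1K_V1")   # assumed initialization
        model.classifier[6] = nn.Linear(4096, 2)          # two classes: lesion vs. normal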
    Detection of high-grade small bowel obstruction on conventional radiography with convolutional neural networks. Cheng Phillip M,Tejura Tapas K,Tran Khoa N,Whang Gilbert Abdominal radiology (New York) The purpose of this pilot study is to determine whether a deep convolutional neural network can be trained with limited image data to detect high-grade small bowel obstruction patterns on supine abdominal radiographs. Grayscale images from 3663 clinical supine abdominal radiographs were categorized into obstructive and non-obstructive categories independently by three abdominal radiologists, and the majority classification was used as ground truth; 74 images were found to be consistent with small bowel obstruction. Images were rescaled and randomized, with 2210 images constituting the training set (39 with small bowel obstruction) and 1453 images constituting the test set (35 with small bowel obstruction). Weight parameters for the final classification layer of the Inception v3 convolutional neural network, previously trained on the 2014 Large Scale Visual Recognition Challenge dataset, were retrained on the training set. After training, the neural network achieved an AUC of 0.84 on the test set (95% CI 0.78-0.89). At the maximum Youden index (sensitivity + specificity - 1), the sensitivity of the system for small bowel obstruction was 83.8%, with a specificity of 68.1%. The results demonstrate that transfer learning with convolutional neural networks, even with limited training data, may be used to train a detector for high-grade small bowel obstruction gas patterns on supine radiographs. 10.1007/s00261-017-1294-1
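    The operating point above is the threshold that maximizes the Youden index (sensitivity + specificity - 1); a minimal sketch of that selection from the ROC curve of the network's probability outputs:

        import numpy as np
        from sklearn.metrics import roc_curve

        def youden_threshold(y_true, y_score):
            fpr, tpr, thresholds = roc_curve(y_true, y_score)
            j = tpr - fpr                    # Youden index = sensitivity + specificity - 1
            best = int(np.argmax(j))
            return thresholds[best], tpr[best], 1.0 - fpr[best]  # threshold, sensitivity, specificity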
    Gastroenterologist-Level Identification of Small-Bowel Diseases and Normal Variants by Capsule Endoscopy Using a Deep-Learning Model. Ding Zhen,Shi Huiying,Zhang Hao,Meng Lingjun,Fan Mengke,Han Chaoqun,Zhang Kun,Ming Fanhua,Xie Xiaoping,Liu Hao,Liu Jun,Lin Rong,Hou Xiaohua Gastroenterology BACKGROUND & AIMS:Capsule endoscopy has revolutionized investigation of the small bowel. However, this technique produces a video that is 8-10 hours long, so analysis is time consuming for gastroenterologists. Deep convolutional neural networks (CNNs) can recognize specific images among a large variety. We aimed to develop a CNN-based algorithm to assist in the evaluation of small bowel capsule endoscopy (SB-CE) images. METHODS:We collected 113,426,569 images from 6970 patients who had SB-CE at 77 medical centers from July 2016 through July 2018. A CNN-based auxiliary reading model was trained to differentiate abnormal from normal images using 158,235 SB-CE images from 1970 patients. Images were categorized as normal, inflammation, ulcer, polyps, lymphangiectasia, bleeding, vascular disease, protruding lesion, lymphatic follicular hyperplasia, diverticulum, parasite, and other. The model was further validated in 5000 patients (none of whom overlapped with the 1970 patients in the training set); the same patients were evaluated by conventional analysis and CNN-based auxiliary analysis by 20 gastroenterologists. If there was agreement in image categorization between the conventional analysis and the CNN model, no further evaluation was performed. If there was disagreement between the conventional analysis and the CNN model, the gastroenterologists re-evaluated the image to confirm or reject the CNN categorization. RESULTS:In the SB-CE images from the validation set, 4206 abnormalities in 3280 patients were identified after final consensus evaluation. The CNN-based auxiliary model identified abnormalities with 99.88% sensitivity in the per-patient analysis (95% CI, 99.67-99.96) and 99.90% sensitivity in the per-lesion analysis (95% CI, 99.74-99.97). Conventional reading by the gastroenterologists identified abnormalities with 74.57% sensitivity (95% CI, 73.05-76.03) in the per-patient analysis and 76.89% sensitivity in the per-lesion analysis (95% CI, 75.58-78.15). The mean reading time per patient was 96.6 ± 22.53 minutes by conventional reading and 5.9 ± 2.23 minutes by CNN-based auxiliary reading (P < .001). CONCLUSIONS:We validated the ability of a CNN-based algorithm to identify abnormalities in SB-CE images. The CNN-based auxiliary model identified abnormalities with higher sensitivity and significantly shorter reading times than conventional analysis by gastroenterologists. This algorithm provides an important tool to help gastroenterologists analyze SB-CE images more efficiently and more accurately. 10.1053/j.gastro.2019.06.025
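    A plain-Python sketch of the auxiliary-reading workflow described above: images where the CNN and the conventional read agree keep that label, and only disagreements are returned to the gastroenterologist. The re_evaluate callback is a hypothetical stand-in for the human confirmation step.

        def auxiliary_read(conventional_labels, cnn_labels, re_evaluate):
            final = []
            for conv, cnn in zip(conventional_labels, cnn_labels):
                if conv == cnn:
                    final.append(conv)                    # agreement: no further evaluation
                else:
                    final.append(re_evaluate(conv, cnn))  # disagreement: confirm or reject the CNN call
            return final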