Different cell imaging methods did not significantly improve immune cell image classification performance. Ogawa Taisaku,Ochiai Koji,Iwata Tomoharu,Ikawa Tomokatsu,Tsuzuki Taku,Shiroguchi Katsuyuki,Takahashi Koichi PloS one Developments in high-throughput microscopy have made it possible to collect huge amounts of cell image data that are difficult to analyse manually. Machine learning (e.g., deep learning) is often employed to automate the extraction of information from these data, such as cell counting, cell type classification and image segmentation. However, the effects of different imaging methods on the accuracy of image processing have not been examined systematically. We studied the effects of different imaging methods on the performance of machine learning-based cell type classifiers. We observed lymphoid-primed multipotential progenitor (LMPP) and pro-B cells using three imaging methods: differential interference contrast (DIC), phase contrast (Ph) and bright-field (BF). We examined the classification performance of convolutional neural networks (CNNs) with each of them and their combinations. CNNs achieved an area under the receiver operating characteristic (ROC) curve (AUC) of ~0.9, which was significantly better than when the classifier used only cell size or cell contour shape as input. However, no significant differences were found between imaging methods and focal positions. 10.1371/journal.pone.0262397
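The headline number in this study is the area under the ROC curve (~0.9). As a reminder of what that metric measures, here is a minimal pure-Python sketch (not the authors' code) that computes AUC via the Mann-Whitney U statistic: the probability that a randomly chosen positive example is scored above a randomly chosen negative one.

```python
def roc_auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive is scored above a randomly chosen negative
    (ties count as half a win)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A classifier that perfectly separates the two cell types scores 1.0;
# random guessing hovers around 0.5.
print(roc_auc([0.9, 0.8, 0.7, 0.3, 0.2], [1, 1, 1, 0, 0]))  # 1.0
```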
Quantitative features to assist in the diagnostic assessment of chronic lymphocytic leukemia progression. The Journal of pathology The use of artificial intelligence methods in the image-based diagnostic assessment of hematological diseases has been a growing trend in recent years. In these methods, the selection of quantitative features that describe cytological characteristics plays a key role. They are expected to add objectivity and consistency among observers to the geometric, color, or texture variables that pathologists usually interpret from visual inspection. In a recent paper in The Journal of Pathology, El Hussein, Chen et al proposed an algorithmic procedure to assist pathologists in the diagnostic evaluation of chronic lymphocytic leukemia (CLL) progression using whole-slide image analysis of tissue samples. The core of the procedure was a set of quantitative descriptors (biomarkers) calculated from the segmentation of cell nuclei, which was performed using a convolutional neural network. These biomarkers were based on clinical practice and easily calculated with reproducible tools. They were used as input to a machine learning algorithm that provided classification into one of the stages of CLL progression. Works like this can contribute to integrating automated diagnostic systems based on the morphological analysis of histological slides and blood smears into the workflow of clinical laboratories. © 2021 The Pathological Society of Great Britain and Ireland. 10.1002/path.5858
Applying Faster R-CNN for Object Detection on Malaria Images. Hung Jane,Lopes Stefanie C P,Nery Odailton Amaral,Nosten Francois,Ferreira Marcelo U,Duraisingh Manoj T,Marti Matthias,Ravel Deepali,Rangel Gabriel,Malleret Benoit,Lacerda Marcus V G,Rénia Laurent,Costa Fabio T M,Carpenter Anne E Conference on Computer Vision and Pattern Recognition Workshops. IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Workshops Deep learning based models have had great success in object detection, but the state of the art models have not yet been widely applied to biological image data. We apply for the first time an object detection model previously used on natural images to identify cells and recognize their stages in brightfield microscopy images of malaria-infected blood. Many micro-organisms like malaria parasites are still studied by expert manual inspection and hand counting. This type of object detection task is challenging due to factors like variations in cell shape, density, and color, and uncertainty of some cell classes. In addition, annotated data useful for training is scarce, and the class distribution is inherently highly imbalanced due to the dominance of uninfected red blood cells. We use Faster Region-based Convolutional Neural Network (Faster R-CNN), one of the top performing object detection models in recent years, pre-trained on ImageNet but fine tuned with our data, and compare it to a baseline, which is based on a traditional approach consisting of cell segmentation, extraction of several single-cell features, and classification using random forests. To conduct our initial study, we collect and label a dataset of 1300 fields of view consisting of around 100,000 individual cells. We demonstrate that Faster R-CNN outperforms our baseline and put the results in context of human performance. 10.1109/cvprw.2017.112
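Object-detection models like Faster R-CNN are typically matched against ground-truth cell annotations using intersection-over-union (IoU) between bounding boxes. A minimal sketch of that standard metric (not taken from this paper's code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes
    given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two partially overlapping cell detections:
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```

A detection is usually counted as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5.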
Image analysis and machine learning for detecting malaria. Translational research : the journal of laboratory and clinical medicine Malaria remains a major burden on global health, with roughly 200 million cases worldwide and more than 400,000 deaths per year. Besides biomedical research and political efforts, modern information technology is playing a key role in many attempts at fighting the disease. One of the barriers toward a successful mortality reduction has been inadequate malaria diagnosis in particular. To improve diagnosis, image analysis software and machine learning methods have been used to quantify parasitemia in microscopic blood slides. This article gives an overview of these techniques and discusses the current developments in image analysis and machine learning for microscopic malaria diagnosis. We organize the different approaches published in the literature according to the techniques used for imaging, image preprocessing, parasite detection and cell segmentation, feature computation, and automatic cell classification. Readers will find the different techniques listed in tables, with the relevant articles cited next to them, for both thin and thick blood smear images. We also discuss the latest developments in sections devoted to deep learning and smartphone technology for future malaria diagnosis. 10.1016/j.trsl.2017.12.004
A Computational Tumor-Infiltrating Lymphocyte Assessment Method Comparable with Visual Reporting Guidelines for Triple-Negative Breast Cancer. Sun Peng,He Jiehua,Chao Xue,Chen Keming,Xu Yuanyuan,Huang Qitao,Yun Jingping,Li Mei,Luo Rongzhen,Kuang Jinbo,Wang Huajia,Li Haosen,Hui Hui,Xu Shuoyu EBioMedicine BACKGROUND:Tumor-infiltrating lymphocytes (TILs) are clinically significant in triple-negative breast cancer (TNBC). Although a standardized methodology for visual TILs assessment (VTA) exists, it has several inherent limitations. We established a deep learning-based computational TIL assessment (CTA) method broadly following VTA guideline and compared it with VTA for TNBC to determine the prognostic value of the CTA and a reasonable CTA workflow for clinical practice. METHODS:We trained three deep neural networks for nuclei segmentation, nuclei classification and necrosis classification to establish a CTA workflow. The automatic TIL (aTIL) score generated was compared with manual TIL (mTIL) scores provided by three pathologists in an Asian (n = 184) and a Caucasian (n = 117) TNBC cohort to evaluate scoring concordance and prognostic value. FINDINGS:The intraclass correlations (ICCs) between aTILs and mTILs varied from 0.40 to 0.70 in two cohorts. Multivariate Cox proportional hazards analysis revealed that the aTIL score was associated with disease free survival (DFS) in both cohorts, as either a continuous [hazard ratio (HR)=0.96, 95% CI 0.94-0.99] or dichotomous variable (HR=0.29, 95% CI 0.12-0.72). A higher C-index was observed in a composite mTIL/aTIL three-tier stratification model than in the dichotomous model, using either mTILs or aTILs alone. INTERPRETATION:The current study provides a useful tool for stromal TIL assessment and prognosis evaluation for patients with TNBC. A workflow integrating both VTA and CTA may aid pathologists in performing risk management and decision-making tasks. 
FUNDING:National Natural Science Foundation of China, Guangdong Medical Research Foundation, Guangdong Natural Science Foundation. 10.1016/j.ebiom.2021.103492
Automatic classification of cells in microscopic fecal images using convolutional neural networks. Bioscience reports The analysis of fecal components for clinical diagnosis is important. The main examination involves counting red blood cells (RBCs), white blood cells (WBCs), and molds under the microscope. With the development of machine vision, some vision-based detection schemes have been proposed. However, these methods each detect only a single target type, with low detection efficiency and accuracy. We propose an algorithm to identify visible fecal components based on deep learning. The algorithm mainly includes region proposal and candidate recognition. For segmentation, we propose a morphology extraction algorithm that works against a complex background. For candidate recognition, we propose a new convolutional neural network (CNN) architecture based on Inception-v3 and principal component analysis (PCA). This method achieves a high average accuracy of 90.7%, which is better than other mainstream CNN models. Finally, the images within the rectangle marks were obtained. The total detection time for one image was roughly 1200 ms. The algorithm proposed in the present paper can be integrated into an automatic fecal detection system. 10.1042/BSR20182100
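The PCA step in this kind of architecture compresses high-dimensional CNN features into a few informative components. A minimal numpy sketch of PCA via SVD, assuming (hypothetically) 2048-dimensional Inception-v3 pooling features; this is an illustration, not the paper's implementation:

```python
import numpy as np

def pca_reduce(features, k):
    """Project feature vectors onto their top-k principal components."""
    centered = features - features.mean(axis=0)
    # SVD of the centered data matrix: rows of vt are principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 2048))   # stand-in for Inception-v3 features
reduced = pca_reduce(feats, 64)
print(reduced.shape)  # (100, 64)
```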
TA-Net: Triple attention network for medical image segmentation. Li Yang,Yang Jun,Ni Jiajia,Elazab Ahmed,Wu Jianhuang Computers in biology and medicine The automatic segmentation of medical images has made continuous progress due to the development of convolutional neural networks (CNNs) and attention mechanism. However, previous works usually explore the attention features of a certain dimension in the image, thus may ignore the correlation between feature maps in other dimensions. Therefore, how to capture the global features of various dimensions is still facing challenges. To deal with this problem, we propose a triple attention network (TA-Net) by exploring the ability of the attention mechanism to simultaneously recognize global contextual information in the channel domain, spatial domain, and feature internal domain. Specifically, during the encoder step, we propose a channel with self-attention encoder (CSE) block to learn the long-range dependencies of pixels. The CSE effectively increases the receptive field and enhances the representation of target features. In the decoder step, we propose a spatial attention up-sampling (SU) block that makes the network pay more attention to the position of the useful pixels when fusing the low-level and high-level features. Extensive experiments were tested on four public datasets and one local dataset. The datasets include the following types: retinal blood vessels (DRIVE and STARE), cells (ISBI 2012), cutaneous melanoma (ISIC 2017), and intracranial blood vessels. Experimental results demonstrate that the proposed TA-Net is overall superior to previous state-of-the-art methods in different medical image segmentation tasks with high accuracy, promising robustness, and relatively low redundancy. 10.1016/j.compbiomed.2021.104836
CE-Net: Context Encoder Network for 2D Medical Image Segmentation. Gu Zaiwang,Cheng Jun,Fu Huazhu,Zhou Kang,Hao Huaying,Zhao Yitian,Zhang Tianyang,Gao Shenghua,Liu Jiang IEEE transactions on medical imaging Medical image segmentation is an important step in medical image analysis. With the rapid development of a convolutional neural network in image processing, deep learning has been used for medical image segmentation, such as optic disc segmentation, blood vessel detection, lung segmentation, cell segmentation, and so on. Previously, U-net based approaches have been proposed. However, the consecutive pooling and strided convolutional operations led to the loss of some spatial information. In this paper, we propose a context encoder network (CE-Net) to capture more high-level information and preserve spatial information for 2D medical image segmentation. CE-Net mainly contains three major components: a feature encoder module, a context extractor, and a feature decoder module. We use the pretrained ResNet block as the fixed feature extractor. The context extractor module is formed by a newly proposed dense atrous convolution block and a residual multi-kernel pooling block. We applied the proposed CE-Net to different 2D medical image segmentation tasks. Comprehensive results show that the proposed method outperforms the original U-Net method and other state-of-the-art methods for optic disc segmentation, vessel detection, lung segmentation, cell contour segmentation, and retinal optical coherence tomography layer segmentation. 10.1109/TMI.2019.2903562
Hybrid adversarial-discriminative network for leukocyte classification in leukemia. Zhang Chuanhao,Wu Shangshang,Lu Zhiming,Shen Yajuan,Wang Jing,Huang Pu,Lou Jingjiao,Liu Cong,Xing Lei,Zhang Jian,Xue Jie,Li Dengwang Medical physics PURPOSE:Leukemia is a lethal disease that is harmful to bone marrow and overall blood health. The classification of white blood cell images is crucial for leukemia diagnosis. The purpose of this study is to classify white blood cells by extracting discriminative information from cell segmentation and combining it with the fine-grained features. We propose a hybrid adversarial residual network with support vector machine (SVM), which utilizes the extracted features to improve the classification accuracy for human peripheral white cells. METHODS:Firstly, we segment the cell and nucleus by utilizing an adversarial residual network, which contains a segmentation network and a discriminator network. To extract features that can handle the inter-class consistency problem effectively, we introduce the adversarial residual network. Then, we utilize convolutional neural network (CNN) features and histogram of oriented gradient (HOG) features, which can extract discriminative features from images of segmented cell nuclei. To utilize the representative features fully, a discriminative network is introduced to deal with neighboring information at different scales. Finally, we combine the vectors of HOG features with those of CNN features and feed them into a linear SVM to classify white blood cells into six types. RESULTS:We used three methods to evaluate the effect of leukocyte classification based on 5000 leukocyte images acquired from a local hospital. The first approach is to use the CNN features as the input of SVM to classify leukocytes, which achieved 94.23% specificity, 95.10% sensitivity, and 94.41% accuracy. The use of the HOG features for SVM achieved 83.50% specificity, 87.50% sensitivity, and 85.00% accuracy. 
The use of combined CNN and HOG features achieved 94.57% specificity, 96.11% sensitivity, and 95.93% accuracy. CONCLUSIONS:We propose a novel hybrid adversarial-discriminative network for the classification of microscopic leukocyte images. It improves the accuracy of cell classification and reduces the difficulty and time pressure of doctors' work in daily clinical diagnosis. 10.1002/mp.14144
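The feature-fusion idea above — concatenating a learned CNN embedding with a hand-crafted HOG descriptor before a linear SVM — can be sketched as follows. This is a deliberately simplified, toy HOG (one global orientation histogram; real HOG uses local cells and block normalization), and the CNN embedding is a random stand-in:

```python
import numpy as np

def hog_histogram(img, bins=9):
    """Toy HOG descriptor: a single global histogram of gradient
    orientations weighted by gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, 180), weights=mag)
    return hist / (hist.sum() + 1e-8)

rng = np.random.default_rng(1)
cell_patch = rng.random((64, 64))      # stand-in for a segmented nucleus
cnn_feats = rng.random(256)            # stand-in for a CNN embedding
combined = np.concatenate([cnn_feats, hog_histogram(cell_patch)])
print(combined.shape)  # (265,)
```

The combined vector would then be fed to a linear SVM for six-way cell classification, as the paper describes.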
GC-Net: Global context network for medical image segmentation. Ni Jiajia,Wu Jianhuang,Tong Jing,Chen Zhengming,Zhao Junping Computer methods and programs in biomedicine BACKGROUND AND OBJECTIVE:Medical image segmentation plays an important role in many clinical applications such as disease diagnosis, surgery planning, and computer-assisted therapy. However, it is a very challenging task due to varying image qualities, complex shapes of objects, and the existence of outliers. Recently, researchers have presented deep learning methods to segment medical images. However, these methods often use the high-level features of the convolutional neural network directly or the high-level features combined with the shallow features, thus ignoring the role of the global context features for the segmentation task. Consequently, they have limited capability on extensive medical segmentation tasks. The purpose of this work is to devise a neural network with global context feature information for accomplishing medical image segmentation of different tasks. METHODS:The proposed global context network (GC-Net) consists of two components; feature encoding and decoding modules. We use multiple convolutions and batch normalization layers in the encoding module. On the other hand, the decoding module is formed by a proposed global context attention (GCA) block and squeeze and excitation pyramid pooling (SEPP) block. The GCA module connects low-level and high-level features to produce more representative features, while the SEPP module increases the size of the receptive field and the ability of multi-scale feature fusion. Moreover, a weighted cross entropy loss is designed to better balance the segmented and non-segmented regions. RESULTS:The proposed GC-Net is validated on three publicly available datasets and one local dataset. The tested medical segmentation tasks include segmentation of intracranial blood vessels, retinal vessels, cell contours, and lungs.
Experiments demonstrate that our network outperforms state-of-the-art methods on several commonly used evaluation metrics. CONCLUSION:Different medical segmentation tasks can be accurately and effectively accomplished by devising a deep convolutional neural network with a global context attention mechanism. 10.1016/j.cmpb.2019.105121
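The weighted cross-entropy loss mentioned in the GC-Net abstract addresses the fact that foreground pixels (vessels, cell contours) are usually far rarer than background. A minimal numpy sketch of the idea, with illustrative weights (the paper's actual weighting scheme may differ):

```python
import numpy as np

def weighted_bce(pred, target, w_fg=2.0, w_bg=1.0):
    """Per-pixel binary cross-entropy with a larger weight on
    foreground pixels to counter class imbalance."""
    pred = np.clip(pred, 1e-7, 1 - 1e-7)   # numerical safety
    loss = -(w_fg * target * np.log(pred)
             + w_bg * (1 - target) * np.log(1 - pred))
    return loss.mean()

target = np.array([[1.0, 0.0], [0.0, 0.0]])        # one foreground pixel
confident = np.array([[0.9, 0.1], [0.1, 0.1]])     # good prediction
print(weighted_bce(confident, target))
```

Confident, correct predictions yield a lower loss than an uninformative 0.5 everywhere, and errors on the rare foreground class are penalized more heavily.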
Optimal Deep Transfer Learning-Based Human-Centric Biomedical Diagnosis for Acute Lymphoblastic Leukemia Detection. Computational intelligence and neuroscience Human-centric biomedical diagnosis (HCBD) has become a hot research topic in the healthcare sector, assisting physicians in the disease diagnosis and decision-making process. Leukemia is a pathology that affects both younger people and adults, causing early death and a number of other symptoms. Computer-aided detection models are found to be useful for reducing the probability of recommending unsuitable treatments and helping physicians in the disease detection process. Besides, the rapid development of deep learning (DL) models assists in the detection and classification of medical-imaging-related problems. Since the training of DL models necessitates massive datasets, transfer learning models can be employed for image feature extraction. In this view, this study develops an optimal deep transfer learning-based human-centric biomedical diagnosis model for acute lymphoblastic leukemia detection (ODLHBD-ALLD). The presented ODLHBD-ALLD model mainly intends to detect and classify acute lymphoblastic leukemia using blood smear images. To accomplish this, the ODLHBD-ALLD model involves the Gabor filtering (GF) technique as a noise removal step. In addition, it makes use of a modified fuzzy c-means (MFCM) based segmentation approach for segmenting the images. Besides, the competitive swarm optimization (CSO) algorithm with the EfficientNetB0 model is utilized as a feature extractor. Lastly, the attention-based long-short term memory (ABiLSTM) model is employed for the proper identification of class labels. For investigating the enhanced performance of the ODLHBD-ALLD approach, a wide range of simulations were executed on an open-access dataset. The comparative analysis confirmed the superiority of the ODLHBD-ALLD model over existing approaches. 10.1155/2022/7954111
WBC image classification and generative models based on convolutional neural network. BMC medical imaging BACKGROUND:Computer-aided methods for analyzing white blood cells (WBC) are popular due to the complexity of the manual alternatives. Recent works have shown highly accurate segmentation and detection of white blood cells from microscopic blood images. However, the classification of the observed cells is still a challenge, in part due to the distribution of the five types that affect the condition of the immune system. METHODS:(i) This work proposes W-Net, a CNN-based method for WBC classification. We evaluate W-Net on a real-world large-scale dataset that includes 6562 real images of the five WBC types. (ii) For further benefits, we generate synthetic WBC images using Generative Adversarial Network to be used for education and research purposes through sharing. RESULTS:(i) W-Net achieves an average accuracy of 97%. In comparison to state-of-the-art methods in the field of WBC classification, we show that W-Net outperforms other CNN- and RNN-based model architectures. Moreover, we show the benefits of using pre-trained W-Net in a transfer learning context when fine-tuned to specific task or accommodating another dataset. (ii) The synthetic WBC images are confirmed by experiments and a domain expert to have a high degree of similarity to the original images. The pre-trained W-Net and the generated WBC dataset are available for the community to facilitate reproducibility and follow up research work. CONCLUSION:This work proposed W-Net, a CNN-based architecture with a small number of layers, to accurately classify the five WBC types. We evaluated W-Net on a real-world large-scale dataset and addressed several challenges such as the transfer learning property and the class imbalance. W-Net achieved an average classification accuracy of 97%. 
We synthesized a dataset of new WBC image samples using DCGAN, which we released to the public for education and research purposes. 10.1186/s12880-022-00818-1
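One of the challenges the W-Net paper addresses is class imbalance among the five WBC types. A common remedy (shown here as a generic sketch, not the paper's specific method) is to weight each class inversely to its frequency when computing the training loss:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each class inversely to its frequency so the loss is not
    dominated by the majority WBC type; weights average to 1."""
    counts = Counter(labels)
    total = len(labels)
    return {cls: total / (len(counts) * n) for cls, n in counts.items()}

# Neutrophils dominate a typical differential count (illustrative numbers):
labels = ["neutrophil"] * 60 + ["lymphocyte"] * 30 + ["monocyte"] * 10
print(inverse_frequency_weights(labels))
```

Rare classes such as monocytes receive proportionally larger weights, so misclassifying them costs more during training.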
Automated annotations of epithelial cells and stroma in hematoxylin-eosin-stained whole-slide images using cytokeratin re-staining. Brázdil Tomáš,Gallo Matej,Nenutil Rudolf,Kubanda Andrej,Toufar Martin,Holub Petr The journal of pathology. Clinical research The diagnosis of solid tumors of epithelial origin (carcinomas) represents a major part of the workload in clinical histopathology. Carcinomas consist of malignant epithelial cells arranged in more or less cohesive clusters of variable size and shape, together with stromal cells, extracellular matrix, and blood vessels. Distinguishing stroma from epithelium is a critical component of artificial intelligence (AI) methods developed to detect and analyze carcinomas. In this paper, we propose a novel automated workflow that enables large-scale guidance of AI methods to identify the epithelial component. The workflow is based on re-staining existing hematoxylin and eosin (H&E) formalin-fixed paraffin-embedded sections by immunohistochemistry for cytokeratins, cytoskeletal components specific to epithelial cells. Compared to existing methods, clinically available H&E sections are reused and no additional material, such as consecutive slides, is needed. We developed a simple and reliable method for automatic alignment to generate masks denoting cytokeratin-rich regions, using cell nuclei positions that are visible in both the original and the re-stained slide. The registration method has been compared to state-of-the-art methods for alignment of consecutive slides and shows that, despite being simpler, it provides similar accuracy and is more robust. We also demonstrate how the automatically generated masks can be used to train modern AI image segmentation based on U-Net, resulting in reliable detection of epithelial regions in previously unseen H&E slides. 
Through training on real-world material available in clinical laboratories, this approach therefore has widespread applications toward achieving AI-assisted tumor assessment directly from scanned H&E sections. In addition, the re-staining method will facilitate additional automated quantitative studies of tumor cell and stromal cell phenotypes. 10.1002/cjp2.249
Automated and semi-automated enhancement, segmentation and tracing of cytoskeletal networks in microscopic images: A review. Özdemir Bugra,Reski Ralf Computational and structural biotechnology journal Cytoskeletal filaments are structures of utmost importance to biological cells and organisms due to their versatility and the significant functions they perform. These biopolymers are most often organised into network-like scaffolds with a complex morphology. Understanding the geometrical and topological organisation of these networks provides key insights into their functional roles. However, this non-trivial task requires a combination of high-resolution microscopy and sophisticated image processing/analysis software. The correct analysis of the network structure and connectivity needs precise segmentation of microscopic images. While segmentation of filament-like objects is a well-studied concept in biomedical imaging, where tracing of neurons and blood vessels is routine, there are comparatively fewer studies focusing on the segmentation of cytoskeletal filaments and networks from microscopic images. The developments in the fields of microscopy, computer vision and deep learning, however, began to facilitate the task, as reflected by an increase in the recent literature on the topic. Here, we aim to provide a short summary of the research on the (semi-)automated enhancement, segmentation and tracing methods that are particularly designed and developed for microscopic images of cytoskeletal networks. In addition to providing an overview of the conventional methods, we cover the recently introduced, deep-learning-assisted methods alongside the advantages they offer over classical methods. 10.1016/j.csbj.2021.04.019
The Diagnosis of Chronic Myeloid Leukemia with Deep Adversarial Learning. The American journal of pathology Chronic myeloid leukemia (CML) is a clonal proliferative disorder of granulocytic lineage, with morphologic evaluation as the first step for a definite diagnosis. This study developed a conditional generative adversarial network (cGAN)-based model, CMLcGAN, to segment megakaryocytes from myeloid cells in bone marrow biopsies. After segmentation, the statistical characteristics of two types of cells were extracted and compared between patients and controls. At the segmentation phase, the CMLcGAN was evaluated on 517 images (512 × 512) which achieved a mean pixel accuracy of 95.1%, a mean intersection over union of 71.2%, and a mean Dice coefficient of 81.8%. In addition, the CMLcGAN was compared with seven other available deep learning-based segmentation models and achieved a better segmentation performance. At the clinical validation phase, a series of seven-dimensional statistical features from various cells were extracted. Using the t-test, five-dimensional features were selected as the clinical prediction feature set. Finally, the model iterated 100 times using threefold cross-validation on whole slide images (58 CML cases and 31 healthy cases), and the final best AUC was 84.93%. In conclusion, a CMLcGAN model was established for multiclass segmentation of bone marrow cells that performed better than other deep learning-based segmentation models. 10.1016/j.ajpath.2022.03.016
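The CMLcGAN segmentation results above are reported in three standard metrics: pixel accuracy, intersection over union, and the Dice coefficient. A minimal numpy sketch of all three for binary masks (a generic illustration, not the study's evaluation code):

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Pixel accuracy, IoU, and Dice coefficient for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    acc = (pred == gt).mean()
    iou = tp / union if union else 1.0
    total = pred.sum() + gt.sum()
    dice = 2 * tp / total if total else 1.0
    return acc, iou, dice

gt = np.zeros((4, 4), int); gt[:2, :2] = 1       # 4 true cell pixels
pred = np.zeros((4, 4), int); pred[:2, :3] = 1   # over-segments by 2 pixels
acc, iou, dice = segmentation_metrics(pred, gt)
print(round(acc, 3), round(iou, 3), round(dice, 3))
```

Note that Dice is always at least as large as IoU on the same masks, which is worth remembering when comparing numbers across papers.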
Automated red blood cells extraction from holographic images using fully convolutional neural networks. Biomedical optics express In this paper, we present two models for automatically extracting red blood cells (RBCs) from RBCs holographic images based on a deep learning fully convolutional neural network (FCN) algorithm. The first model, called FCN-1, only uses the FCN algorithm to carry out RBCs prediction, whereas the second model, called FCN-2, combines the FCN approach with the marker-controlled watershed transform segmentation scheme to achieve RBCs extraction. Both models achieve good segmentation accuracy. In addition, the second model has much better performance in terms of cell separation than traditional segmentation methods. In the proposed methods, the RBCs phase images are first numerically reconstructed from RBCs holograms recorded with off-axis digital holographic microscopy. Then, some RBCs phase images are manually segmented and used as training data to fine-tune the FCN. Finally, each pixel in new input RBCs phase images is predicted into either foreground or background using the trained FCN models. The RBCs prediction result from the first model is the final segmentation result, whereas the result from the second model is used as the internal markers of the marker-controlled transform algorithm for further segmentation. Experimental results show that the given schemes can automatically extract RBCs from RBCs phase images and much better RBCs separation results are obtained when the FCN technique is combined with the marker-controlled watershed segmentation algorithm. 10.1364/BOE.8.004466
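The key idea in the FCN-2 model above is to use confidently predicted pixels as internal markers for marker-controlled watershed. A simplified, dependency-free sketch of the marker-generation step (thresholding the FCN probability map and labeling connected components via flood fill); the marker names and threshold here are illustrative assumptions, not the paper's code:

```python
import numpy as np

def internal_markers(prob_map, thresh=0.8):
    """Label 4-connected components of confidently predicted pixels;
    these act as internal markers for marker-controlled watershed."""
    fg = prob_map >= thresh
    labels = np.zeros(fg.shape, int)
    current = 0
    for i in range(fg.shape[0]):
        for j in range(fg.shape[1]):
            if fg[i, j] and labels[i, j] == 0:
                current += 1
                stack = [(i, j)]
                while stack:  # flood-fill one component
                    y, x = stack.pop()
                    if (0 <= y < fg.shape[0] and 0 <= x < fg.shape[1]
                            and fg[y, x] and labels[y, x] == 0):
                        labels[y, x] = current
                        stack += [(y+1, x), (y-1, x), (y, x+1), (y, x-1)]
    return labels, current

prob = np.zeros((5, 5))
prob[0:2, 0:2] = 0.90   # one confidently predicted cell
prob[3:5, 3:5] = 0.95   # another, spatially separate cell
_, n_markers = internal_markers(prob)
print(n_markers)  # 2
```

Each labeled marker then seeds one basin of the watershed transform, which is what separates touching RBCs that a plain FCN prediction would merge.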
Segmentation of Microscope Erythrocyte Images by CNN-Enhanced Algorithms. Sensors (Basel, Switzerland) This paper presents an algorithm for segmentation and shape analysis of erythrocyte images collected using an optical microscope. The main objective of the proposed approach is to compute statistical object values such as the number of erythrocytes in the image, their size, and width to height ratio. A median filter, a mean filter and a bilateral filter were used for initial noise reduction. Background subtraction using a rolling ball filter removes background irregularities. Combining the distance transform with the Otsu and watershed segmentation methods allows for initial image segmentation. Further processing steps, including morphological transforms and the previously mentioned segmentation methods, were applied to each segmented cell, resulting in an accurate segmentation. Finally, the noise standard deviation, sensitivity, specificity, precision, negative predictive value, accuracy and the number of detected objects are calculated. The presented approach shows that the second stage of the two-stage segmentation algorithm applied to individual cells segmented in the first stage allows increasing the precision from 0.857 to 0.968 for the artificial image example tested in this paper. The next step of the algorithm is to categorize segmented erythrocytes to identify poorly segmented and abnormal ones, thus automating this process, previously often done manually by specialists. The presented segmentation technique is also applicable as a probability map processor in the deep learning pipeline. The presented two-stage processing introduces a promising fusion model presented by the authors for the first time. 10.3390/s21051720
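The Otsu step used in the erythrocyte pipeline above picks a global threshold automatically from the image histogram. A self-contained numpy sketch of the classic algorithm (maximizing between-class variance), independent of the paper's implementation:

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of the 8-bit grayscale histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()   # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * p[:t]).sum() / w0          # class means
        m1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2                  # between-class var
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Dark erythrocytes (~50) on a bright background (~200):
img = np.concatenate([np.full(500, 50), np.full(500, 200)])
t = otsu_threshold(img)
print(50 < t <= 200)  # True
```

Combined with the distance transform and watershed, such a threshold gives the initial segmentation described in the first stage of the algorithm.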
High-Efficiency Classification of White Blood Cells Based on Object Detection. Journal of healthcare engineering White blood cells (WBCs) play a significant role in the human immune system, and the content of various subtypes of WBCs is usually maintained within a certain range in the human body, while deviant levels are important warning signs for diseases. Hence, the detection and classification of WBCs is an essential diagnostic technique. However, traditional WBC classification technologies based on image processing usually need to segment the collected target cell images from the background. This preprocessing operation not only increases the workload but also heavily affects the classification quality and efficiency. Therefore, we proposed one high-efficiency object detection technology that combines the segmentation and recognition of targets into one step to realize the detection and classification of WBCs in an image at the same time. Two state-of-the-art object detection models, Faster RCNN and Yolov4, were employed and comparatively studied to classify neutrophils, eosinophils, monocytes, and lymphocytes on a balanced and enhanced Blood Cell Count Dataset (BCCD). Our experimental results showed that the Faster RCNN and Yolov4 based deep transfer learning models achieved classification accuracy rates of 96.25% and 95.75%, respectively. For the one-stage model, Yolov4, while ensuring more than 95% accuracy, its detection speed could reach 60 FPS, which showed better performance compared with the two-stage model, Faster RCNN. The high-efficiency object detection network that does not require cell presegmentation can remove the difficulty of image preprocessing and greatly improve the efficiency of the entire classification task, which provides a potential solution for future real-time point-of-care diagnostic systems. 10.1155/2021/1615192
Deep learning approach to peripheral leukocyte recognition. PloS one Microscopic examination of peripheral blood plays an important role in the diagnosis and control of major diseases. Manual peripheral leukocyte recognition requires medical technicians to observe blood smears through light microscopy, using their experience and expertise to discriminate and analyze different cells, which is time-consuming, labor-intensive and subjective. Traditional systems based on feature engineering often depend on successful segmentation followed by manual extraction of certain quantitative and qualitative features for recognition, and they still suffer from poor robustness. A classification pipeline based on convolutional neural networks offers automatic feature extraction and is free of segmentation, but it struggles with multiple-object recognition. In this paper, we treat leukocyte recognition as an object detection task and apply two notable object detection approaches, the Single Shot Multibox Detector and An Incremental Improvement Version of You Only Look Once. To improve recognition performance, some key factors of these object detection approaches are explored, and the detection models are trained on a set of 14,700 annotated images. Finally, we evaluate the detection models on test sets consisting of 1,120 annotated images and 7,868 labeled single-object images corresponding to 11 categories of peripheral leukocytes, respectively. A best mean average precision of 93.10% and mean accuracy of 90.09% are achieved, while the inference time is 53 ms per image on an NVIDIA GTX1080Ti GPU. 10.1371/journal.pone.0218808
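Detection metrics such as the mean average precision reported in abstracts like the one above rest on matching predicted boxes to ground-truth boxes by intersection over union (IoU). A minimal, hypothetical helper (not the paper's code) might look like:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # overlap area, 0 if disjoint
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

A prediction is typically counted as a true positive when its IoU with an unmatched ground-truth box exceeds a threshold such as 0.5; precision-recall curves built from those matches yield the per-class average precision.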
Automated screening of sickle cells using a smartphone-based microscope and deep learning. NPJ digital medicine Sickle cell disease (SCD) is a major public health priority throughout much of the world, affecting millions of people. In many regions, particularly those in resource-limited settings, SCD is not consistently diagnosed. In Africa, where the majority of SCD patients reside, more than 50% of the 0.2-0.3 million children born with SCD each year will die from it; many of these deaths are in fact preventable with correct diagnosis and treatment. Here, we present a deep learning framework which can perform automatic screening of sickle cells in blood smears using a smartphone microscope. This framework uses two distinct, complementary deep neural networks. The first neural network enhances and standardizes the blood smear images captured by the smartphone microscope, spatially and spectrally matching the image quality of a laboratory-grade benchtop microscope. The second network acts on the output of the first image enhancement neural network and is used to perform the semantic segmentation between healthy and sickle cells within a blood smear. These segmented images are then used to rapidly determine the SCD diagnosis per patient. We blindly tested this mobile sickle cell detection method using blood smears from 96 unique patients (including 32 SCD patients) that were imaged by our smartphone microscope, and achieved ~98% accuracy, with an area-under-the-curve of 0.998. With its high accuracy, this mobile and cost-effective method has the potential to be used as a screening tool for SCD and other blood cell disorders in resource-limited settings. 10.1038/s41746-020-0282-y
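The area under the ROC curve quoted above has a convenient rank-based interpretation: it is the probability that a randomly chosen positive sample scores higher than a randomly chosen negative one. A small illustrative implementation (an expository sketch, not the authors' code):

```python
def roc_auc(scores, labels):
    """AUC via the Mann-Whitney rank-sum formulation (labels are 0/1).

    Ties between a positive and a negative score count as half a win.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

In practice one would use a library routine such as `sklearn.metrics.roc_auc_score`, which handles large inputs efficiently.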
Classification of acute lymphoblastic leukemia using deep learning. Rehman Amjad,Abbas Naveed,Saba Tanzila,Rahman Syed Ijaz Ur,Mehmood Zahid,Kolivand Hoshang Microscopy research and technique Acute leukemia is a life-threatening disease, common in both children and adults, that can lead to death if left untreated. Acute Lymphoblastic Leukemia (ALL) spreads rapidly through a child's body and can be fatal within a few weeks. To diagnose ALL, hematologists perform blood and bone marrow examinations. Manual blood testing techniques, in use for a long time, are often slow and yield less accurate diagnoses. This work improves the diagnosis of ALL with a computer-aided system that yields accurate results by using image processing and deep learning techniques. The research proposes a method for the classification of ALL into its subtypes and reactive bone marrow (normal) in stained bone marrow images. Robust segmentation and deep learning techniques with a convolutional neural network are used to train the model on bone marrow images to achieve accurate classification results. The experimental results were compared with those of other classifiers: Naïve Bayesian, KNN, and SVM. They reveal that the proposed method achieved 97.78% accuracy. The obtained results indicate that the proposed approach could be used as a tool to diagnose Acute Lymphoblastic Leukemia and its subtypes, assisting pathologists. 10.1002/jemt.23139
Antibody Supervised Training of a Deep Learning Based Algorithm for Leukocyte Segmentation in Papillary Thyroid Carcinoma. Stenman Sebastian,Bychkov Dmitrii,Kucukel Hakan,Linder Nina,Haglund Caj,Arola Johanna,Lundin Johan IEEE journal of biomedical and health informatics The quantity of leukocytes in papillary thyroid carcinoma (PTC) potentially has prognostic and treatment-predictive value. Here, we propose a novel method for training a convolutional neural network (CNN) algorithm to segment leukocytes in PTCs. Tissue samples from two retrospective PTC cohorts were obtained, and representative tissue slides from twelve patients were stained with hematoxylin and eosin (HE) and digitized. Then, the HE slides were destained, restained immunohistochemically (IHC) with antibodies to the pan-leukocyte CD45 antigen, and scanned again. The two stain pairs of all representative tissue slides were registered, and image tiles of regions of interest were exported. The image tiles were processed, and the 3,3'-diaminobenzidine (DAB) stained areas representing anti-CD45 expression were turned into binary masks. These binary masks were applied as annotations on the HE image tiles and used to train a CNN algorithm. Ten whole slide images (WSIs) were used for training with five-fold cross-validation, and the remaining two slides were used as an independent test set for the trained model. For visual evaluation, the algorithm was run on all twelve WSIs, and in total 238,144 tiles sized 500 × 500 pixels were analyzed. The trained CNN algorithm had an intersection over union of 0.82 for detection of leukocytes in the HE image tiles when comparing the prediction masks to the ground-truth anti-CD45 masks. We conclude that this method of generating antibody-supervised annotations using destain-restain IHC guidance resulted in highly accurate segmentation of leukocytes in HE tissue images. 10.1109/JBHI.2020.2994970
Recent computational methods for white blood cell nuclei segmentation: A comparative study. Andrade Alan R,Vogado Luis H S,Veras Rodrigo de M S,Silva Romuere R V,Araujo Flávio H D,Medeiros Fátima N S Computer methods and programs in biomedicine BACKGROUND AND OBJECTIVE:Leukaemia is a disease found worldwide; it is a type of cancer that originates in the bone marrow and is characterised by an abnormal proliferation of white blood cells (leukocytes). In order to correctly identify this abnormality, haematologists examine blood smears from patients. A diagnosis obtained by this method may be influenced by factors such as the experience and level of fatigue of the haematologist, resulting in non-standard reports and even errors. In the literature, several methods have been proposed that involve algorithms to diagnose this disease. However, no reviews or surveys have been conducted. This paper therefore presents an empirical investigation of computational methods focusing on the segmentation of leukocytes. METHODS:In our study, 15 segmentation methods were evaluated using five public image databases: ALL-IDB2, BloodSeg, Leukocytes, JTSC Database and CellaVision. Following the standard methodology for literature evaluation, we conducted a pixel-level segmentation evaluation by comparing the segmented image with its corresponding ground truth. In order to identify the strengths and weaknesses of these methods, we performed an evaluation using six evaluation metrics: accuracy, specificity, precision, recall, kappa, Dice, and true positive rate. RESULTS:The segmentation algorithms performed significantly differently for different image databases, and for each database, a different algorithm achieved the best results. Moreover, the two best methods achieved average accuracy values higher than 97%, with an excellent kappa index. 
Also, the average Dice index indicated that the similarity between the segmented leukocyte and its ground truth was higher than 0.85 for these two methods. This result confirms the high level of similarity between these images but does not guarantee that a method has segmented all leukocyte nuclei. We also found that the method that performed best segmented only 58.44% of all leukocytes. CONCLUSIONS:Of the techniques used to segment leukocytes, we note that clustering algorithms, the Otsu threshold, simple arithmetic operations and region growing are the approaches most widely used for this purpose. However, these computational methods have not yet overcome all the challenges posed by this problem. 10.1016/j.cmpb.2019.03.001
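Pixel-level evaluations like the one in this survey reduce to the confusion counts of a binary mask against its ground truth. The metrics it lists can be sketched as follows (a generic illustration, not the survey's evaluation code):

```python
def seg_metrics(tp, fp, tn, fn):
    """Common pixel-level segmentation metrics from confusion counts."""
    n = tp + fp + tn + fn
    acc = (tp + tn) / n
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)              # also the true positive rate
    specificity = tn / (tn + fp)
    dice = 2 * tp / (2 * tp + fp + fn)
    # Cohen's kappa: observed agreement corrected for chance agreement.
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n ** 2
    kappa = (acc - pe) / (1 - pe)
    return acc, precision, recall, specificity, dice, kappa
```

Note that Dice weights the foreground class only, which is why a high Dice on the leukocytes that were found is compatible with many leukocytes being missed entirely, as the survey observes.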
Quantitative analysis of blood cells from microscopic images using convolutional neural network. Tessema Abel Worku,Mohammed Mohammed Aliy,Simegn Gizeaddis Lamesgin,Kwa Timothy Chung Medical & biological engineering & computing Blood cell counts provide relevant clinical information about different kinds of disorders. Any deviation in the number of blood cells implies the presence of infection, inflammation, edema, bleeding, or other blood-related issues. Current microscopic methods used for blood cell counting are very tedious and highly prone to different sources of error. Besides, these techniques do not provide full information about blood cells, such as shape and size, which play important roles in the clinical investigation of serious blood-related diseases. In this paper, deep learning-based automatic classification and quantitative analysis of blood cells are proposed using the YOLOv2 model. The model was trained on 1560 images and 2703 labeled blood cells with different hyper-parameters. It was tested on 26 images containing 1454 red blood cells, 159 platelets, 3 basophils, 12 eosinophils, 24 lymphocytes, 13 monocytes, and 28 neutrophils. The network achieved detection and segmentation of blood cells with an average accuracy of 80.6% and a precision of 88.4%. Quantitative analysis of cells was done following classification, and mean accuracies of 92.96%, 91.96%, 88.736%, and 92.7% were achieved in the measurement of area, aspect ratio, diameter, and counting of cells, respectively. Graphical abstract: the first picture shows the input image of blood cells seen under a compound light microscope. The second image shows the tools, such as OpenCV, used to pre-process the image. The third image shows the convolutional neural network used for training and object detection. The fourth image shows the output of the network in the detection of blood cells.
The last image shows the post-processing applied to the network output, such as counting each blood cell type using the class label of each detection and quantifying morphological parameters like area, aspect ratio, and diameter, so that the final result provides the count of each of the seven blood cell types along with morphological information of clinical value. 10.1007/s11517-020-02291-w
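Morphological parameters like those quantified above follow directly from a per-cell binary mask. A minimal NumPy sketch (hypothetical names, not the paper's pipeline) is:

```python
import numpy as np

def cell_morphology(mask, pixel_size_um=1.0):
    """Area, equivalent circular diameter and bounding-box aspect ratio
    of a single-cell binary mask."""
    ys, xs = np.nonzero(mask)
    area = mask.sum() * pixel_size_um ** 2
    width = (xs.max() - xs.min() + 1) * pixel_size_um
    height = (ys.max() - ys.min() + 1) * pixel_size_um
    diameter = 2.0 * np.sqrt(area / np.pi)  # diameter of a circle of equal area
    return area, diameter, width / height

# A 10 x 12 rectangular "cell" for illustration.
mask = np.zeros((20, 20), dtype=bool)
mask[5:15, 4:16] = True
```

With a calibrated `pixel_size_um`, these quantities convert directly to physical units, which is what makes them clinically interpretable.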
Combining DC-GAN with ResNet for blood cell image classification. Ma Li,Shuai Renjun,Ran Xuming,Liu Wenjia,Ye Chao Medical & biological engineering & computing In medicine, white blood cells (WBCs) play an important role in the human immune system. Different types of WBC abnormalities are related to different diseases, so the total number and classification of WBCs are critical for clinical diagnosis and therapy. However, the traditional method of white blood cell classification is to segment the cells, extract features, and then classify them. Such a method depends on good segmentation, and its accuracy is not high. Moreover, insufficient data or unbalanced samples can lower the classification accuracy of deep learning models in medical diagnosis. To solve these problems, this paper proposes a new blood cell image classification framework based on a deep convolutional generative adversarial network (DC-GAN) and a residual neural network (ResNet). In particular, we introduce a new loss function which improves the discriminative power of the deeply learned features. The experiments show that our model performs well on the classification of WBC images, reaching an accuracy of 91.7%. Graphical Abstract: Overview of the proposed method. We use the DC-GAN to generate new samples that are used as supplementary input to a ResNet; transfer learning is used to initialize the parameters of the network, and the output of the DC-GAN and the parameters are applied to the final classification network. In particular, we introduced a modified loss function for classification to increase inter-class variation and decrease intra-class differences. 10.1007/s11517-020-02163-3
Mutual Information based hybrid model and deep learning for Acute Lymphocytic Leukemia detection in single cell blood smear images. Jha Krishna Kumar,Dutta Himadri Sekhar Computer methods and programs in biomedicine BACKGROUND AND OBJECTIVE:Owing to developments in digital microscopic imaging, image processing and classification have become interesting areas for diagnostic research. Various techniques are available in the literature for the detection of Acute Lymphocytic Leukemia from single-cell blood smear images. The purpose of this work is to develop an effective method for leukemia detection. METHODS:This work develops a deep learning based leukemia detection module for blood smear images. The detection scheme carries out pre-processing, segmentation, feature extraction and classification. The segmentation is done by the proposed Mutual Information (MI) based hybrid model, which combines the segmentation results of the active contour model and the fuzzy C-means algorithm. Then, from the segmented images, statistical and Local Directional Pattern (LDP) features are extracted and provided to the proposed Chronological Sine Cosine Algorithm (SCA) based Deep CNN classifier for classification. RESULTS:For the experiments, blood smear images are taken from the ALL-IDB2 database and evaluated with metrics such as True Positive Rate (TPR), True Negative Rate (TNR), and accuracy. Simulation results reveal that the proposed Chronological SCA based Deep CNN classifier achieves an accuracy of 98.7%. CONCLUSIONS:The performance of the proposed Chronological SCA-based Deep CNN classifier is compared with state-of-the-art methods. The analysis shows that the proposed classifier has comparatively improved performance in detecting leukemia from blood smear images. 10.1016/j.cmpb.2019.104987
LeukocyteMask: An automated localization and segmentation method for leukocyte in blood smear images using deep neural networks. Fan Haoyi,Zhang Fengbin,Xi Liang,Li Zuoyong,Liu Guanghai,Xu Yong Journal of biophotonics Digital pathology and microscope image analysis is widely used in comprehensive studies of cell morphology. Identification and analysis of leukocytes in blood smear images, acquired from a bright-field microscope, are vital for diagnosing many diseases such as hepatitis, leukaemia and acquired immune deficiency syndrome (AIDS). The major challenge for robust and accurate identification and segmentation of leukocytes in blood smear images lies in the large variations in cell appearance, such as the size, colour and shape of cells, the adhesion between leukocytes (white blood cells, WBCs) and erythrocytes (red blood cells, RBCs), and the presence of substantial dyeing impurities in blood smear images. In this paper, an end-to-end leukocyte localization and segmentation method is proposed, named LeukocyteMask, in which pixel-level prior information is utilized for supervised training of a deep convolutional neural network, which is then employed to locate the regions of interest (ROI) of leukocytes; finally, the segmentation mask of the leukocyte is obtained from the extracted ROI by forward propagation of the network. Experimental results validate the effectiveness of the proposed method, and both the quantitative and qualitative comparisons with existing methods indicate that LeukocyteMask achieves state-of-the-art performance for the segmentation of leukocytes in terms of robustness and accuracy. 10.1002/jbio.201800488
Deep-learning-based MRI in the diagnosis of cerebral infarction and its correlation with the neutrophil to lymphocyte ratio. Lan Wei,Ai Peiying,Xu Qian Annals of palliative medicine BACKGROUND:Dizziness is a common symptom in the clinic, but effective treatment methods are lacking. This study sought to examine the efficiency of deep learning (DL)-based magnetic resonance imaging (MRI) in the diagnosis of cerebral infarction mainly manifesting as vertigo, using the neutrophil to lymphocyte ratio (NLR) and other routine blood indexes. METHODS:An improved multiscale U-Net [MS (U-Net)] model, based on the U-Net model, was proposed and applied to the segmentation of brain MRI. One hundred and fifteen vertiginous cerebral infarction (VCI) patients, admitted to the Department of Neurology at Huizhou Central People's Hospital from January 2016 to December 2020, were chosen as the research subjects. Based on the MRI segmentation results for the brain, the patients were allocated to the benign paroxysmal positional vertigo (BPPV) group or the acute cerebral infarction (ACI) group. Additionally, 50 healthy individuals, whose venous blood was collected for routine blood analyses, were allocated to the control group. RESULTS:The MS (U-Net) model accomplished MRI segmentation of the brain, and its segmentation results were much closer to the real results than those of the U-Net model. Compared to the control group, the monocyte count (MC), low-density lipoprotein/high-density lipoprotein (LDL/HDL) ratio, and NLR of patients in the BPPV and ACI groups showed an obvious increase (P<0.05), as did the white blood cell count, triglyceride (TG) level, and other indexes of ACI patients (P<0.05). For diagnosis, the areas under the curve for the TG level, LDL/HDL ratio, and NLR were 0.930 and 0.760, 0.900 and 0.770, and 0.945 and 0.855 for the BPPV and ACI groups, respectively (P<0.05).
CONCLUSIONS:DL can accomplish MRI segmentation in cerebral infarction patients, and the TG level, LDL/HDL ratio and NLR can be used in the diagnosis of VCI. 10.21037/apm-21-1786
A large dataset of white blood cells containing cell locations and types, along with segmented nuclei and cytoplasm. Kouzehkanan Zahra Mousavi,Saghari Sepehr,Tavakoli Sajad,Rostami Peyman,Abaszadeh Mohammadjavad,Mirzadeh Farzaneh,Satlsar Esmaeil Shahabi,Gheidishahran Maryam,Gorgi Fatemeh,Mohammadi Saeed,Hosseini Reshad Scientific reports Accurate and early detection of anomalies in peripheral white blood cells plays a crucial role in the evaluation of well-being in individuals and the diagnosis and prognosis of hematologic diseases. For example, some blood disorders and immune system-related diseases are diagnosed by the differential count of white blood cells, one of the common laboratory tests. Data is one of the most important ingredients in the development and testing of many commercial and successful automatic or semi-automatic systems. To this end, this study introduces a free-access dataset of normal peripheral white blood cells called Raabin-WBC, containing about 40,000 images of white blood cells and color spots. To ensure the validity of the data, a significant number of cells were labeled by two experts, and the ground truths of the nuclei and cytoplasm were extracted for 1145 selected cells. To provide the necessary diversity, various smears were imaged using two different cameras and two different microscopes. We performed some preliminary deep learning experiments on Raabin-WBC to demonstrate how the generalization power of machine learning methods, especially deep neural networks, can be affected by this diversity. As public data in the field of health, Raabin-WBC can be used for model development and testing in different machine learning tasks including classification, detection, segmentation, and localization. 10.1038/s41598-021-04426-x
Recognition of peripheral blood cell images using convolutional neural networks. Acevedo Andrea,Alférez Santiago,Merino Anna,Puigví Laura,Rodellar José Computer methods and programs in biomedicine BACKGROUND AND OBJECTIVES:Morphological analysis is the starting point for the diagnostic approach of more than 80% of hematological diseases. However, the morphological differentiation among different types of normal and abnormal peripheral blood cells is a difficult task that requires experience and skills. Therefore, the paper proposes a system for the automatic classification of eight groups of peripheral blood cells with high accuracy by means of a transfer learning approach using convolutional neural networks. With this new approach, it is not necessary to implement image segmentation, the feature extraction becomes automatic and existing models can be fine-tuned to obtain specific classifiers. METHODS:A dataset of 17,092 images of eight classes of normal peripheral blood cells was acquired using the CellaVision DM96 analyzer. All images were identified by pathologists as the ground truth to train a model to classify different cell types: neutrophils, eosinophils, basophils, lymphocytes, monocytes, immature granulocytes (myelocytes, metamyelocytes and promyelocytes), erythroblasts and platelets. Two designs were performed based on two architectures of convolutional neural networks, Vgg-16 and Inceptionv3. In the first case, the networks were used as feature extractors and these features were used to train a support vector machine classifier. In the second case, the same networks were fine-tuned with our dataset to obtain two end-to-end models for classification of the eight classes of blood cells. RESULTS:In the first case, the experimental test accuracies obtained were 86% and 90% when extracting features with Vgg-16 and Inceptionv3, respectively. 
On the other hand, in the fine-tuning experiment, global accuracy values of 96% and 95% were obtained using Vgg-16 and Inceptionv3, respectively. All the models were trained and tested using Keras and Tensorflow with a Nvidia Titan XP Graphics Processing Unit. CONCLUSIONS:The main contribution of this paper is a classification scheme involving a convolutional neural network trained to discriminate among eight classes of cells circulating in peripheral blood. Starting from a state-of-the-art general architecture, we have established a fine-tuning procedure to develop an end-to-end classifier trained using a dataset with over 17,000 cell images obtained from clinical practice. The performance obtained when testing the system has been truly satisfactory, the values of precision, sensitivity, and specificity being excellent. To summarize, the best overall classification accuracy has been 96.2%. 10.1016/j.cmpb.2019.105020
A deep learning approach to the screening of malaria infection: Automated and rapid cell counting, object detection and instance segmentation using Mask R-CNN. Loh De Rong,Yong Wen Xin,Yapeter Jullian,Subburaj Karupppasamy,Chandramohanadas Rajesh Computerized medical imaging and graphics : the official journal of the Computerized Medical Imaging Society Accurate and early diagnosis is critical to proper malaria treatment and hence death prevention. Several computer vision technologies have emerged in recent years as alternatives to traditional microscopy and rapid diagnostic tests. In this work, we used a deep learning model called Mask R-CNN that is trained on uninfected and Plasmodium falciparum-infected red blood cells. Our predictive model produced reports at a rate 15 times faster than manual counting without compromising on accuracy. Another unique feature of our model is its ability to generate segmentation masks on top of bounding box classifications for immediate visualization, making it superior to existing models. Furthermore, with greater standardization, it holds much potential to reduce errors arising from manual counting and save a significant amount of human resources, time, and cost. 10.1016/j.compmedimag.2020.101845
White blood cells detection and classification based on regional convolutional neural networks. Kutlu Hüseyin,Avci Engin,Özyurt Fatih Medical hypotheses White blood cells (WBCs) are important parts of our immune system: they protect our body against infections by eliminating viruses, bacteria, parasites and fungi. There are five types of WBC, called Lymphocytes, Monocytes, Eosinophils, Basophils and Neutrophils. The counts of the individual WBC types and the total number of WBCs provide important information about our health status. Diseases such as leukemia, AIDS, autoimmune diseases, immune deficiencies and blood diseases can be diagnosed based on the number of WBCs. In this study, a computer-aided automated system that can easily identify and locate WBC types in blood images is proposed. Current blood test devices usually detect WBCs with traditional image processing methods such as preprocessing, segmentation, feature extraction, feature selection and classification. Deep learning methodology is superior to traditional image processing methods in the literature. In addition, traditional methods require the appearance of the whole object to be able to recognize it. In contrast, convolutional neural networks (CNNs), a deep learning architecture, can extract features from a part of an object and perform object recognition, so a CNN-based system shows higher performance in recognizing cells that are only partially visible, for example because of overlap. The motivation of this study is therefore to increase the performance of existing blood test devices with deep learning methods. Blood cells have been identified and classified by region-based convolutional neural networks. The designed architectures have been trained and tested on the combined BCCD and LISC data sets. Region-based Convolutional Neural Networks (R-CNN) have been used as the methodology.
In this way, different cell types within the same image are classified simultaneously with one detector. For training the CNN that forms the basis of the R-CNN architecture, the AlexNet, VGG16, GoogLeNet and ResNet50 architectures were tested with both full training and transfer learning. The system showed 100% success in detecting WBCs, and ResNet50 with transfer learning gave the best performance among the tested CNN architectures. Lymphocytes were determined with a 99.52% accuracy rate, Monocytes with 98.40%, Basophils with 98.48%, Eosinophils with 96.16% and Neutrophils with 95.04%. 10.1016/j.mehy.2019.109472
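As a quick sanity check on per-class figures like the ones above, the unweighted (macro) average of the five reported accuracy rates can be computed directly:

```python
# Per-class accuracy rates (%) as reported in the abstract above.
per_class = {"lymphocyte": 99.52, "monocyte": 98.40, "basophil": 98.48,
             "eosinophil": 96.16, "neutrophil": 95.04}
macro_accuracy = sum(per_class.values()) / len(per_class)  # ≈ 97.52
```

The macro average weights each class equally regardless of how many cells of each type appear in the test set; a count-weighted (micro) average would differ when classes are imbalanced.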
Automated Semantic Segmentation of Red Blood Cells for Sickle Cell Disease. Zhang Mo,Li Xiang,Xu Mengjia,Li Quanzheng IEEE journal of biomedical and health informatics Red blood cell (RBC) segmentation and classification from microscopic images is a crucial step in the diagnosis of sickle cell disease (SCD). In this work, we adopt a deep learning based semantic segmentation framework to solve the RBC classification task. A major challenge for robust segmentation and classification is the large variation in the size, shape and viewpoint of the cells, combined with the low image quality caused by noise and artifacts. To address these challenges, we apply deformable convolution layers to the classic U-Net structure and implement the deformable U-Net (dU-Net). The U-Net architecture has been shown to offer accurate localization for image semantic segmentation. Moreover, deformable convolution enables free-form deformation of the feature learning process, making the network more robust to various cell morphologies and image settings. dU-Net is tested on microscopic red blood cell images from patients with sickle cell disease. Results show that dU-Net achieves the highest accuracy for both binary segmentation and multi-class semantic segmentation tasks, compared with both unsupervised and state-of-the-art deep learning based supervised segmentation methods. Through detailed investigation of the segmentation results, we further conclude that the performance improvement is mainly due to the deformable convolution layer, which is better able to separate touching cells, discriminate background noise and predict correct cell shapes without any shape priors. 10.1109/JBHI.2020.3000484
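Segmentation quality in studies like these is commonly summarized by the Dice coefficient between a predicted mask and its ground truth. A minimal NumPy version, for illustration only:

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient of two boolean masks of equal shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

# Toy example: a 4x4 ground-truth square and a prediction shifted
# right by one column, so they overlap in 12 of 16 pixels each.
truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool)
pred[2:6, 3:7] = True
```

Values near 1 indicate near-perfect overlap; the shifted masks here score 0.75, since the intersection (12 pixels) is three-quarters of the average mask size (16 pixels).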
Deep Learning-Based Phenotypic Assessment of Red Cell Storage Lesions for Safe Transfusions. Kim Eunji,Park Seonghwan,Hwang Seunghyeon,Moon Inkyu,Javidi Bahram IEEE journal of biomedical and health informatics This study presents a novel approach to automatically perform instant phenotypic assessment of red blood cell (RBC) storage lesion in phase images obtained by digital holographic microscopy. The proposed model combines a generative adversarial network (GAN) with marker-controlled watershed segmentation scheme. The GAN model performed RBC segmentations and classifications to develop ageing markers, and the watershed segmentation was used to completely separate overlapping RBCs. Our approach achieved good segmentation and classification accuracy with a Dice's coefficient of 0.94 at a high throughput rate of about 152 cells per second. These results were compared with other deep neural network architectures. Moreover, our image-based deep learning models recognized the morphological changes that occur in RBCs during storage. Our deep learning-based classification results were in good agreement with previous findings on the changes in RBC markers (dominant shapes) affected by storage duration. We believe that our image-based deep learning models can be useful for automated assessment of RBC quality, storage lesions for safe transfusions, and diagnosis of RBC-related diseases. 10.1109/JBHI.2021.3104650
Deep learning for cell image segmentation and ranking. Araújo Flávio H D,Silva Romuere R V,Ushizima Daniela M,Rezende Mariana T,Carneiro Cláudia M,Campos Bianchi Andrea G,Medeiros Fátima N S Computerized medical imaging and graphics : the official journal of the Computerized Medical Imaging Society Ninety years after its invention, the Pap test continues to be the most used method for the early identification of cervical precancerous lesions. In this test, the cytopathologists look for microscopic abnormalities in and around the cells, which is a time-consuming and prone to human error task. This paper introduces computational tools for cytological analysis that incorporate cell segmentation deep learning techniques. These techniques are capable of processing both free-lying and clumps of abnormal cells with a high overlapping rate from digitized images of conventional Pap smears. Our methodology employs a preprocessing step that discards images with a low probability of containing abnormal cells without prior segmentation and, therefore, performs faster when compared with the existing methods. Also, it ranks outputs based on the likelihood of the images to contain abnormal cells. We evaluate our methodology on an image database of conventional Pap smears from real scenarios, with 108 fields-of-view containing at least one abnormal cell and 86 containing only normal cells, corresponding to millions of cells. Our results show that the proposed approach achieves accurate results (MAP = 0.936), runs faster than existing methods, and it is robust to the presence of white blood cells, and other contaminants. 10.1016/j.compmedimag.2019.01.003
Automated recognition of white blood cells using deep learning. Biomedical engineering letters The detection, counting, and precise segmentation of white blood cells in cytological images are vital steps in the effective diagnosis of several cancers. This paper introduces an efficient method for automatic recognition of white blood cells in peripheral blood and bone marrow images based on deep learning to alleviate tedious tasks for hematologists in clinical practice. First, input image pre-processing was proposed before applying a deep neural network model adapted to cell localization and segmentation. Then, model outputs were improved by using combined predictions and corrections. Finally, a new algorithm that uses the cooperation between model results and spatial information was implemented to improve the segmentation quality. To implement our model, the Python language and the TensorFlow and Keras libraries were used. The calculations were executed using an NVIDIA GPU 1080, while the datasets used in our experiments came from patients in the Hemobiology service of Tlemcen Hospital (Algeria). The results were promising and showed the efficiency, power, and speed of the proposed method compared to the state-of-the-art methods. In addition to its accuracy of 95.73%, the proposed approach provided fast predictions (less than 1 s). 10.1007/s13534-020-00168-3
NuClick: A deep learning framework for interactive segmentation of microscopic images. Alemi Koohbanani Navid,Jahanifar Mostafa,Zamani Tajadin Neda,Rajpoot Nasir Medical image analysis Object segmentation is an important step in the workflow of computational pathology. Deep learning based models generally require large amount of labeled data for precise and reliable prediction. However, collecting labeled data is expensive because it often requires expert knowledge, particularly in medical imaging domain where labels are the result of a time-consuming analysis made by one or more human experts. As nuclei, cells and glands are fundamental objects for downstream analysis in computational pathology/cytology, in this paper we propose NuClick, a CNN-based approach to speed up collecting annotations for these objects requiring minimum interaction from the annotator. We show that for nuclei and cells in histology and cytology images, one click inside each object is enough for NuClick to yield a precise annotation. For multicellular structures such as glands, we propose a novel approach to provide the NuClick with a squiggle as a guiding signal, enabling it to segment the glandular boundaries. These supervisory signals are fed to the network as auxiliary inputs along with RGB channels. With detailed experiments, we show that NuClick is applicable to a wide range of object scales, robust against variations in the user input, adaptable to new domains, and delivers reliable annotations. An instance segmentation model trained on masks generated by NuClick achieved the first rank in LYON19 challenge. As exemplar outputs of our framework, we are releasing two datasets: 1) a dataset of lymphocyte annotations within IHC images, and 2) a dataset of segmented WBCs in blood smear images. 10.1016/j.media.2020.101771
Localization and recognition of leukocytes in peripheral blood: A deep learning approach. Reena M Roy,Ameer P M Computers in biology and medicine Automatic recognition and classification of leukocytes helps medical practitioners to diagnose various blood-related diseases by analysing their percentages. Different researchers have come up with different algorithms that use traditional learning for the classification of different types of leukocytes. In contrast to traditional learning, in which no knowledge is retained that can be transferred from one model to another, our proposed algorithm uses a deep learning approach for segmentation and classification. The proposed algorithm has a two-stage pipeline consisting of semantic segmentation and transfer learning-based classification. Here, we have used pre-trained networks that utilize knowledge from previously learned tasks: DeepLabv3+ for segmentation of leukocytes and AlexNet to classify five categories of leukocytes in peripheral blood from whole blood smear microscopic images. For experimentation, a microscopic blood image dataset consisting of 257 cells belonging to five types of leukocytes was used. The results obtained from experiments show that the proposed algorithm attained a mean average precision of 98.42% (@IoU = 0.7) in white blood cell localization and a classification accuracy of 98.87 ± 1% compared to existing methods. 10.1016/j.compbiomed.2020.104034
Robust Method for Semantic Segmentation of Whole-Slide Blood Cell Microscopic Images. Computational and mathematical methods in medicine Previous works on segmentation of SEM (scanning electron microscope) blood cell images ignore the semantic segmentation approach to whole-slide blood cell segmentation. In the proposed work, we address the problem of whole-slide blood cell segmentation using the semantic segmentation approach. We design a novel convolutional encoder-decoder framework along with VGG-16 as the pixel-level feature extraction model. The proposed framework comprises three main steps. First, all the original images, along with manually generated ground truth masks of each blood cell type, are passed through the preprocessing stage. In the preprocessing stage, pixel-level labeling, RGB to grayscale conversion of the masked image and pixel fusing, and unity mask generation are performed. After that, VGG-16 is loaded into the system, where it acts as a pretrained pixel-level feature extraction model. In the third step, the training process is initiated on the proposed model. We have evaluated our network performance on three evaluation metrics. We obtained outstanding results with respect to classwise, as well as global and mean, accuracies. Our system achieved classwise accuracies of 97.45%, 93.34%, and 85.11% for RBCs, WBCs, and platelets, respectively, while the global and mean accuracies remained 97.18% and 91.96%, respectively. 10.1155/2020/4015323
Deep Learning Approaches Towards Skin Lesion Segmentation and Classification from Dermoscopic Images - A Review. Baig Ramsha,Bibi Maryam,Hamid Anmol,Kausar Sumaira,Khalid Shahzad Current medical imaging BACKGROUND:Automated intelligent systems for unbiased diagnosis are a primary requirement for pigmented lesion analysis, and they have gained the attention of researchers in the last few decades. These systems involve multiple phases such as pre-processing, feature extraction, segmentation, classification and post-processing. It is crucial to accurately localize and segment the skin lesion. It is observed that recent enhancements in machine learning algorithms and dermoscopic techniques have reduced the misclassification rate; therefore, the focus on computer-aided systems has increased exponentially in recent years. Computer-aided diagnostic systems are a reliable source for dermatologists to analyze the type of cancer, but it is widely acknowledged that even higher accuracy is needed for computer-aided diagnostic systems to be adopted practically in the diagnostic process of life-threatening diseases. INTRODUCTION:Skin cancer is one of the most threatening cancers. It occurs through the abnormal multiplication of cells. The three core types of skin cells are squamous cells, basal cells and melanocytes, and there are two broad classes of skin cancer: melanocytic and non-melanocytic. It is difficult to differentiate between benign and malignant melanoma, and dermatologists therefore sometimes misclassify them. Melanoma is estimated to be the 19th most frequent cancer; it is riskier than basal and squamous cell carcinomas because it spreads rapidly throughout the body. Hence, to lower the risk of death, it is critical to diagnose the correct type of cancer in the early rudimentary phases. It can occur on any part of the body, but it is most likely to occur on the chest, back and legs. METHODS:The paper presents a review of segmentation and classification techniques for skin lesion detection.
Dermoscopy and its features are discussed briefly, after which image pre-processing techniques are described. A thorough review of the segmentation and classification phases of skin lesion detection using deep learning techniques is presented, the literature is discussed, and a comparative analysis of the discussed methods is given. CONCLUSION:In this paper, we have presented a survey of more than 100 papers and a comparative analysis of state-of-the-art techniques, models and methodologies. Malignant melanoma is one of the most threatening and deadliest cancers. Over the last few decades, researchers have put extra attention and effort into the accurate diagnosis of melanoma. The main challenges of dermoscopic skin lesion images are low contrast, multiple lesions, irregular and fuzzy borders, blood vessels, regression, hairs, bubbles, variegated coloring and other kinds of distortions. The lack of a large training dataset makes these problems even more challenging. Due to recent advancements in the paradigm of deep learning, and especially its outstanding performance in medical imaging, it has become important to review the performance of deep learning algorithms in skin lesion segmentation. Here, we have discussed the results of different techniques on the basis of different evaluation parameters such as the Jaccard coefficient, sensitivity, specificity and accuracy, and the paper lists the major achievements in this domain with a detailed discussion of the techniques. In the future, results are expected to improve by combining the capabilities of deep learning frameworks with other pre- and post-processing techniques so that reliable and accurate diagnostic systems can be built. 10.2174/1573405615666190129120449
DeepQuantify: deep learning and quantification system of white blood cells in light microscopy images of injured skeletal muscles. Journal of medical imaging (Bellingham, Wash.) White blood cells (WBCs) are the most diverse types of cells observed in the healing process of injured skeletal muscles. In the recovery process, WBCs exhibit a dynamic cellular response and undergo multiple changes in protein expression. The progress of healing can be analyzed by the number of WBCs or by the number of specific proteins observed in light microscopy images obtained at different time points after injury. We propose a deep learning quantification and analysis system called DeepQuantify to analyze WBCs in light microscopy images of uninjured and injured muscles of female mice. The DeepQuantify system features segmentation using the localized iterative Otsu's thresholding method, masking postprocessing, and classification of WBCs with a convolutional neural network (CNN) classifier to achieve a high accuracy and a low manpower cost. The proposed two-layer CNN classifier, designed based on the optimization hypothesis, is evaluated and compared with other CNN classifiers. The DeepQuantify system adopting these CNN classifiers is evaluated for quantifying CD68-positive macrophages and 7/4-positive neutrophils and compared with state-of-the-art deep learning segmentation architectures. DeepQuantify achieves an accuracy of 90.64% and 89.31% for CD68-positive macrophages and 7/4-positive neutrophils, respectively. The DeepQuantify system employing the proposed two-layer CNN architecture achieves better performance than those deep segmentation architectures. A quantitative analysis of the two protein dynamics during muscle recovery is also presented. 10.1117/1.JMI.6.2.024006
White blood cells identification system based on convolutional deep neural learning networks. Shahin A I,Guo Yanhui,Amin K M,Sharawi Amr A Computer methods and programs in biomedicine BACKGROUND AND OBJECTIVES:White blood cell (WBC) differential counting yields valuable information about human health and disease. Currently developed automated cell morphology equipment performs differential counts based on blood smear image analysis. Previous identification systems for WBCs consist of successive dependent stages: pre-processing, segmentation, feature extraction, feature selection, and classification. There is a real need to employ deep learning methodologies so that the performance of previous WBC identification systems can be increased. Classifying small, limited datasets through deep learning systems is a major challenge and should be investigated. METHODS:In this paper, we propose a novel identification system for WBCs based on deep convolutional neural networks. Two methodologies based on transfer learning are followed: transfer learning based on deep activation features, and fine-tuning of existing deep networks. Deep activation features are extracted from several pre-trained networks and employed in a traditional identification system. Moreover, a novel end-to-end convolutional deep architecture called "WBCsNet" is proposed and built from scratch. Finally, a limited balanced WBC dataset classification is performed through WBCsNet as a pre-trained network. RESULTS:During our experiments, three different public WBC datasets (2551 images) containing 5 healthy WBC types have been used. The overall system accuracy achieved by the proposed WBCsNet is 96.1%, which is higher than that of the different transfer learning approaches and even the previous traditional identification system. We also present feature visualizations of the WBCsNet activations, which reflect a stronger response than those of the pre-trained networks.
CONCLUSION:A novel WBC identification system based on deep learning theory is proposed, and the high-performance WBCsNet can be employed as a pre-trained network. 10.1016/j.cmpb.2017.11.015
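The two transfer-learning recipes described in this entry share one idea: reuse fixed "deep activation features" from a pre-trained network and train only a lightweight classifier head. A toy NumPy sketch of that idea, with a frozen random projection standing in for a pre-trained network (none of WBCsNet's actual weights or data are used; all names and numbers here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(x, w_frozen):
    """Frozen 'pre-trained' layer: a fixed projection followed by ReLU."""
    return np.maximum(x @ w_frozen, 0.0)

# Toy two-class data: two Gaussian blobs standing in for WBC image descriptors.
x0 = rng.normal(-1.0, 0.5, size=(50, 8))
x1 = rng.normal(+1.0, 0.5, size=(50, 8))
x = np.vstack([x0, x1])
y = np.array([0] * 50 + [1] * 50)

w_frozen = rng.normal(size=(8, 16))   # stands in for pre-trained weights
feats = extract_features(x, w_frozen)  # features are never re-trained

# Train only the classifier head (logistic regression via gradient descent).
w = np.zeros(16)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w)))
    w -= 0.1 * feats.T @ (p - y) / len(y)

accuracy = np.mean(((feats @ w) > 0).astype(int) == y)
print(f"head-only training accuracy: {accuracy:.2f}")
```

Fine-tuning, the second recipe, differs only in that the "frozen" weights would also receive gradient updates, usually at a smaller learning rate.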
Integrating deep learning with microfluidics for biophysical classification of sickle red blood cells adhered to laminin. PLoS computational biology Sickle cell disease, a genetic disorder affecting a sizeable global demographic, manifests in sickle red blood cells (sRBCs) with altered shape and biomechanics. sRBCs show heightened adhesive interactions with inflamed endothelium, triggering painful vascular occlusion events. Numerous studies employ microfluidic-assay-based monitoring tools to quantify characteristics of adhered sRBCs from high resolution channel images. The current image analysis workflow relies on detailed morphological characterization and cell counting by a specially trained worker. This is time and labor intensive, and prone to user bias artifacts. Here we establish a morphology based classification scheme to identify two naturally arising sRBC subpopulations-deformable and non-deformable sRBCs-utilizing novel visual markers that link to underlying cell biomechanical properties and hold promise for clinically relevant insights. We then set up a standardized, reproducible, and fully automated image analysis workflow designed to carry out this classification. This relies on a two part deep neural network architecture that works in tandem for segmentation of channel images and classification of adhered cells into subtypes. Network training utilized an extensive data set of images generated by the SCD BioChip, a microfluidic assay which injects clinical whole blood samples into protein-functionalized microchannels, mimicking physiological conditions in the microvasculature. Here we carried out the assay with the sub-endothelial protein laminin. 
The machine learning approach segmented the resulting channel images with 99.1±0.3% mean IoU on the validation set across 5 k-folds, classified detected sRBCs with 96.0±0.3% mean accuracy on the validation set across 5 k-folds, and matched trained personnel in overall characterization of whole channel images with R2 = 0.992, 0.987 and 0.834 for total, deformable and non-deformable sRBC counts respectively. Average analysis time per channel image was also improved by two orders of magnitude (∼ 2 minutes vs ∼ 2-3 hours) over manual characterization. Finally, the network results show an order of magnitude less variance in counts on repeat trials than humans. This kind of standardization is a prerequisite for the viability of any diagnostic technology, making our system suitable for affordable and high throughput disease monitoring. 10.1371/journal.pcbi.1008946
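The R² values quoted above measure how closely the network's per-image sRBC counts track those of trained personnel. A short sketch of the coefficient of determination on hypothetical counts (the numbers are illustrative, not from the study):

```python
import numpy as np

def r_squared(y_true, y_pred) -> float:
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)   # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

# Hypothetical counts: manual sRBC counts vs. network counts per channel image.
manual  = [120, 85, 143, 97, 110]
network = [118, 88, 140, 99, 112]
print(round(r_squared(manual, network), 3))
```

R² near 1 means the network's counts explain almost all of the variation in the manual counts, which is the sense in which the paper's values of 0.992 and 0.987 indicate close agreement.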
Automatic segmentation of blood cells from microscopic slides: A comparative analysis. Depto Deponker Sarker,Rahman Shazidur,Hosen Md Mekayel,Akter Mst Shapna,Reme Tamanna Rahman,Rahman Aimon,Zunair Hasib,Rahman M Sohel,Mahdy M R C Tissue & cell With the recent developments in deep learning, automatic cell segmentation from images of microscopic examination slides seems to be a solved problem, as recent methods have achieved comparable results on existing benchmark datasets. However, most of the existing cell segmentation benchmark datasets either contain a single cell type, contain only a few instances of cells, or are not publicly available. Therefore, it is unclear whether the performance improvements can generalize to more diverse datasets. In this paper, we present a large and diverse cell segmentation dataset, BBBC041Seg, which consists of both uninfected cells (i.e., red blood cells/RBCs and leukocytes) and infected cells (i.e., gametocytes, rings, trophozoites, and schizonts). Additionally, the cell types do not all have equal numbers of instances, which encourages researchers to develop algorithms for learning from imbalanced classes in a few-shot learning paradigm. Furthermore, we conduct a comparative study using both classical rule-based and recent deep learning state-of-the-art (SOTA) methods for automatic cell segmentation and provide them as strong baselines. We believe the introduction of BBBC041Seg will promote future research towards clinically applicable cell segmentation methods from microscopic examinations, which can later be used for downstream tasks such as detecting hematological diseases (e.g., malaria). 10.1016/j.tice.2021.101653
Robust Blood Cell Image Segmentation Method Based on Neural Ordinary Differential Equations. Computational and mathematical methods in medicine For the analysis of medical images, one of the most basic methods is to diagnose diseases by examining blood smears through a microscope to check the morphology, number, and ratio of red blood cells and white blood cells. Therefore, accurate segmentation of blood cell images is essential for cell counting and identification. The aim of this paper is to perform blood smear image segmentation by combining neural ordinary differential equations (NODEs) with U-Net networks to improve the accuracy of image segmentation. In order to study the effect of the ODE solver on the speed and accuracy of the network, the ODE-block module was added to the nine convolutional layers in the U-Net network. First, blood cell images were preprocessed to enhance the contrast between the regions to be segmented; second, the training and testing sets were drawn from the same dataset to assess segmentation results. According to the experimental results, we select the location where the ordinary differential equation block (ODE-block) module is added and select the appropriate error tolerance, balancing computation time against segmentation accuracy to obtain the best performance; finally, the error tolerance of the ODE-block is adjusted to increase the network depth, and the trained NODEs-UNet network model is used for cell image segmentation. Using our proposed network model to segment blood cell images in the testing set, we achieve 95.3% pixel accuracy and 90.61% mean intersection over union. Compared with the U-Net and ResNet networks, the pixel accuracy of our network model is increased by 0.88% and 0.46%, respectively, and the mean intersection over union is increased by 2.18% and 1.13%, respectively. Our proposed network model improves the accuracy of blood cell image segmentation and reduces the computational cost of the network.
10.1155/2021/5590180
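Pixel accuracy and mean intersection over union, the two metrics reported for the NODEs-UNet above, both derive from a pixel-level confusion matrix. A minimal NumPy sketch on a toy two-class matrix (illustrative counts, not the paper's results):

```python
import numpy as np

def pixel_accuracy_and_miou(conf):
    """Pixel accuracy and mean IoU from a class-by-class confusion matrix
    (rows = ground truth, columns = prediction, entries = pixel counts)."""
    conf = np.asarray(conf, dtype=float)
    pixel_acc = np.trace(conf) / conf.sum()
    # Per-class IoU: true positives over (GT pixels + predicted pixels - TP).
    ious = np.diag(conf) / (conf.sum(axis=0) + conf.sum(axis=1) - np.diag(conf))
    return pixel_acc, ious.mean()

# Toy 2-class (background / cell) confusion matrix.
conf = [[900, 50],
        [30, 520]]
acc, miou = pixel_accuracy_and_miou(conf)
print(f"pixel accuracy = {acc:.3f}, mean IoU = {miou:.3f}")
```

Mean IoU is typically lower than pixel accuracy, as in the paper's 90.61% vs. 95.3%, because it weights each class equally regardless of how many pixels it occupies.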
Clustering-Based Dual Deep Learning Architecture for Detecting Red Blood Cells in Malaria Diagnostic Smears. IEEE journal of biomedical and health informatics Computer-assisted algorithms have become a mainstay of biomedical applications to improve accuracy and reproducibility of repetitive tasks like manual segmentation and annotation. We propose a novel pipeline for red blood cell detection and counting in thin blood smear microscopy images, named RBCNet, using a dual deep learning architecture. RBCNet consists of a U-Net first stage for cell-cluster or superpixel segmentation, followed by a second refinement stage, Faster R-CNN, for detecting small cell objects within the connected component clusters. RBCNet uses cell clustering instead of region proposals, which is robust to cell fragmentation, is highly scalable for detecting small objects or fine-scale morphological structures in very large images, can be trained using non-overlapping tiles, and during inference is adaptive to the scale of cell clusters with a low memory footprint. We tested our method on an archived collection of human malaria smears with nearly 200,000 labeled cells across 965 images from 193 patients, acquired in Bangladesh, with each patient contributing five images. Cell detection accuracy using RBCNet was higher than 97%. The novel dual cascade RBCNet architecture provides more accurate cell detections because the foreground cell-cluster masks from U-Net adaptively guide the detection stage, resulting in notably higher true positive and lower false alarm rates compared to traditional and other deep learning methods. The RBCNet pipeline implements a crucial step towards automated malaria diagnosis. 10.1109/JBHI.2020.3034863
Computational Intelligence Method for Detection of White Blood Cells Using Hybrid of Convolutional Deep Learning and SIFT. Manthouri Mohammad,Aghajari Zhila,Safary Sheida Computational and mathematical methods in medicine Infectious diseases are among the top global issues, with negative impacts on health, the economy, and society as a whole. One of the most effective ways to detect these diseases is to analyse microscopic images of blood cells. Artificial intelligence (AI) techniques are now widely used to detect these blood cells and explore their structures. In recent years, deep learning architectures have been utilized as they are powerful tools for big data analysis. In this work, we present a deep neural network for processing microscopic images of blood cells. Processing these images is particularly important, as white blood cells and their structures are used to diagnose different diseases. In this research, we design and implement a reliable processing system for blood samples and classify five different types of white blood cells in microscopic images. We use the Gram-Schmidt algorithm for segmentation purposes. For the classification of the different types of white blood cells, we combine the Scale-Invariant Feature Transform (SIFT) feature detection technique with a deep convolutional neural network. To evaluate our work, we tested our method on the LISC and WBCis databases. We achieved 95.84% and 97.33% segmentation accuracy for these datasets, respectively. Our work illustrates that deep learning models can be promising in designing and developing a reliable system for microscopic image processing. 10.1155/2022/9934144
A Deep Learning Approach for Segmentation of Red Blood Cell Images and Malaria Detection. Delgado-Ortet Maria,Molina Angel,Alférez Santiago,Rodellar José,Merino Anna Entropy (Basel, Switzerland) Malaria is an endemic life-threatening disease caused by unicellular protozoan parasites of the genus Plasmodium. Confirming the presence of parasites early in all malaria cases ensures species-specific antimalarial treatment, reducing the mortality rate, and points to other illnesses in negative cases. However, the gold standard remains the light microscopy of May-Grünwald-Giemsa (MGG)-stained thin and thick peripheral blood (PB) films. This is a time-consuming procedure, dependent on a pathologist's skills, meaning that healthcare providers may encounter difficulty in diagnosing malaria in places where it is not endemic. This work presents a novel three-stage pipeline to (1) segment erythrocytes, (2) crop and mask them, and (3) classify them as malaria-infected or not. The first and third steps involved the design, training, validation and testing of a Segmentation Neural Network and a Convolutional Neural Network from scratch using a Graphics Processing Unit. Segmentation achieved a global accuracy of 93.72% over the test set, and the specificity for malaria detection in red blood cells (RBCs) was 87.04%. This work shows the potential that deep learning has in the digital pathology field and opens the way for future improvements, as well as for broadening the use of the created networks. 10.3390/e22060657
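Specificity, the figure of merit quoted above for malaria detection in RBCs, and its companion sensitivity come directly from the confusion counts of an infected-vs-healthy classifier. A minimal sketch, with hypothetical per-cell counts (not the paper's data) chosen so that the specificity reproduces the quoted 87.04%:

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: 470 of 540 healthy RBCs correctly rejected
# gives specificity 470/540 ≈ 0.8704; 80 of 100 infected cells caught
# gives sensitivity 0.80. Counts are illustrative only.
sens, spec = sensitivity_specificity(tp=80, fn=20, tn=470, fp=70)
print(f"sensitivity={sens:.4f}, specificity={spec:.4f}")
```

High specificity matters here because a low false-positive rate keeps healthy cells from being flagged as infected in screening settings where malaria is rare.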
Examination of blood samples using deep learning and mobile microscopy. Pfeil Juliane,Nechyporenko Alina,Frohme Marcus,Hufert Frank T,Schulze Katja BMC bioinformatics BACKGROUND:Microscopic examination of human blood samples is an excellent opportunity to assess general health status and diagnose diseases. Conventional blood tests are performed in medical laboratories by specialized professionals and are time and labor intensive. The development of a point-of-care system based on a mobile microscope and powerful algorithms would be beneficial for providing care directly at the patient's bedside. For this purpose human blood samples were visualized using a low-cost mobile microscope, an ocular camera and a smartphone. Training and optimisation of different deep learning methods for instance segmentation are used to detect and count the different blood cells. The accuracy of the results is assessed using quantitative and qualitative evaluation standards. RESULTS:Instance segmentation models such as Mask R-CNN, Mask Scoring R-CNN, D2Det and YOLACT were trained and optimised for the detection and classification of all blood cell types. These networks were not designed to detect very small objects in large numbers, so extensive modifications were necessary. Thus, segmentation of all blood cell types and their classification was feasible with great accuracy: qualitatively evaluated, mean average precision of 0.57 and mean average recall of 0.61 are achieved for all blood cell types. Quantitatively, 93% of ground truth blood cells can be detected. CONCLUSIONS:Mobile blood testing as a point-of-care system can be performed with diagnostic accuracy using deep learning methods. In the future, this application could enable very fast, cheap, location- and knowledge-independent patient care. 10.1186/s12859-022-04602-4
Automated White Blood Cell Counting in Nailfold Capillary Using Deep Learning Segmentation and Video Stabilization. Sensors (Basel, Switzerland) White blood cells (WBCs) are essential components of the immune system in the human body. Various invasive and noninvasive methods to monitor the condition of WBCs have been developed. Among them, a noninvasive method exploits an optical characteristic of WBCs in nailfold capillary images, where they appear as visual gaps. This method is inexpensive and could possibly be implemented on a portable device. However, recent studies on this method use manual or semimanual image segmentation, which depends on recognizable features and the intervention of experts, hindering its scalability and applicability. We address and solve this problem by proposing an automated method for detecting and counting WBCs that appear as visual gaps in nailfold capillary images. The proposed method consists of an automatic capillary segmentation method using deep learning, video stabilization, and WBC event detection algorithms. The performances of the three segmentation algorithms (manual, conventional, and deep learning) with and without video stabilization were benchmarked. Experimental results demonstrate that the proposed method improves the performance of WBC event counting and outperforms conventional approaches. 10.3390/s20247101