Identification of individual cells from z-stacks of bright-field microscopy images.
Lugagne Jean-Baptiste,Jain Srajan,Ivanovitch Pierre,Ben Meriem Zacchary,Vulin Clément,Fracassi Chiara,Batt Gregory,Hersen Pascal
Obtaining single-cell data from time-lapse microscopy images is critical for quantitative biology, but bottlenecks in cell identification and segmentation must be overcome. We propose a novel, versatile method that uses machine learning classifiers to identify cell morphologies from z-stacks of bright-field microscopy images. We show that axial information alone is enough to successfully classify the pixels of an image, without the need to consider in-focus morphological features. This fast, robust method can be used to identify different cell morphologies, including the features of E. coli, S. cerevisiae and epithelial cells, even in mixed cultures. Our method demonstrates the potential of acquiring and processing z-stacks for single-layer, single-cell imaging and segmentation.
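The core idea of this abstract, that the intensity profile along z at each pixel is itself a discriminative feature vector, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the nearest-centroid classifier and the `centroids` input are assumptions standing in for the trained machine learning classifiers described in the paper.

```python
import numpy as np

def classify_pixels_by_axial_profile(zstack, centroids):
    """Label each (y, x) pixel by the nearest class centroid of its
    axial intensity profile (the vector of values along z).

    zstack:    array of shape (Z, H, W), one bright-field image per focal plane
    centroids: array of shape (n_classes, Z), a mean axial profile per class
               (hypothetical stand-in for a trained classifier)
    """
    Z, H, W = zstack.shape
    profiles = zstack.reshape(Z, -1).T                 # (H*W, Z) feature vectors
    # Squared Euclidean distance of every profile to every class centroid
    d2 = ((profiles[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1).reshape(H, W)             # per-pixel class map
```

Note that no in-focus plane is ever selected: the whole axial profile is the feature, which is the point the abstract makes.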
A patch-based tensor decomposition algorithm for M-FISH image classification.
Wang Min,Huang Ting-Zhu,Li Jingyao,Wang Yu-Ping
Cytometry. Part A: the journal of the International Society for Analytical Cytology
Multiplex-fluorescence in situ hybridization (M-FISH) is a chromosome imaging technique which can be used to detect chromosomal abnormalities such as translocations, deletions, duplications, and inversions. Chromosome classification from M-FISH imaging data is a key step in implementing the technique. In the classified M-FISH image, each pixel in a chromosome is labeled with a class index and drawn with a pseudo-color so that geneticists can easily conduct diagnosis, for example, identifying chromosomal translocations by examining color changes between chromosomes. However, the information of pixels in a neighborhood is often overlooked by existing approaches. In this work, we assume that the pixels in a patch belong to the same class and use the patch to represent the center pixel's class information, which allows us to exploit the correlations of neighboring pixels and the structural information across different spectral channels for the classification. On the basis of this assumption, we propose a patch-based classification algorithm using higher order singular value decomposition (HOSVD). The developed method has been tested on a comprehensive M-FISH database that we established, demonstrating improved performance. When compared with other pixel-wise M-FISH image classifiers such as fuzzy c-means clustering (FCM), adaptive fuzzy c-means clustering (AFCM), improved adaptive fuzzy c-means clustering (IAFCM), and sparse representation classification (SparseRC) methods, the proposed method gave the highest correct classification ratio (CCR), which can translate into improved diagnosis of genetic diseases and cancers. © 2016 International Society for Advancement of Cytometry.
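The decomposition at the heart of this method, HOSVD, can be sketched in a few lines of numpy. This is a generic truncated HOSVD, not the paper's full classifier: how the patch tensor is built from spectral channels and how classes are assigned afterwards are omitted, and the `ranks` truncation is an assumption.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move axis `mode` to the front, flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd(tensor, ranks):
    """Truncated higher-order SVD.

    The left singular vectors of each mode-n unfolding give one
    orthonormal factor matrix per mode; projecting the tensor onto all
    of them yields the core tensor, shape = ranks.
    """
    factors = []
    core = tensor
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
        U = U[:, :r]
        factors.append(U)
        # Contract the current leading axis with its factor; the reduced
        # mode is appended at the end, so after all modes the core's axes
        # are back in the original order.
        core = np.tensordot(core, U, axes=([0], [0]))
    return core, factors
```

In the patch-based setting, the tensor would stack patch rows, patch columns, spectral channels, and patch index into one multiway array before decomposition.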
Semantic segmentation of mFISH images using convolutional networks.
Pardo Esteban,Morgado José Mário T,Malpica Norberto
Cytometry. Part A: the journal of the International Society for Analytical Cytology
Multicolor in situ hybridization (mFISH) is a karyotyping technique used to detect major chromosomal alterations using fluorescent probes and imaging techniques. Manual interpretation of mFISH images is a time-consuming step that can be automated using machine learning; in previous works, pixel- or patch-wise classification was employed, overlooking spatial information which can help identify chromosomes. In this work, we propose a fully convolutional semantic segmentation network for the interpretation of mFISH images, which uses both spatial and spectral information to classify each pixel in an end-to-end fashion. The semantic segmentation network developed was tested on samples extracted from a public dataset using cross validation. Despite having no labeling information from the image it was tested on, our algorithm yielded an average correct classification ratio (CCR) of 87.41%. Previously, this level of accuracy was only achieved by state-of-the-art algorithms when classifying pixels from the same image on which the classifier had been trained. These results provide evidence that fully convolutional semantic segmentation networks may be employed in the computer-aided diagnosis of genetic diseases with improved performance over current image analysis methods. © 2018 International Society for Advancement of Cytometry.
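The defining property of a fully convolutional segmentation network is that convolutional score maps are turned directly into a per-pixel label map, with no patch cropping. A minimal numpy forward pass illustrates just that mechanism; the real network has many learned layers, so the single hand-set convolution here is purely illustrative.

```python
import numpy as np

def conv2d_same(x, kernels):
    """'Same'-padded 2D convolution.
    x: (C_in, H, W); kernels: (C_out, C_in, kH, kW) -> scores (C_out, H, W)."""
    C_out, C_in, kH, kW = kernels.shape
    H, W = x.shape[1:]
    xp = np.pad(x, ((0, 0), (kH // 2, kH // 2), (kW // 2, kW // 2)))
    out = np.zeros((C_out, H, W))
    for o in range(C_out):
        for c in range(C_in):
            for i in range(kH):
                for j in range(kW):
                    out[o] += kernels[o, c, i, j] * xp[c, i:i + H, j:j + W]
    return out

def segment(image, kernels):
    """Per-pixel class map from convolutional class scores.
    (Argmax of a softmax equals argmax of the raw scores.)"""
    scores = conv2d_same(image, kernels)
    return scores.argmax(axis=0)           # (H, W) label map, one class per pixel
```

Because every pixel gets a score vector in one forward pass, spatial context enters through the kernel's receptive field rather than through isolated patches.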
Automated discrimination of dicentric and monocentric chromosomes by machine learning-based image processing.
Li Yanxin,Knoll Joan H,Wilkins Ruth C,Flegal Farrah N,Rogan Peter K
Microscopy research and technique
Dose from radiation exposure can be estimated from dicentric chromosome (DC) frequencies in metaphase cells of peripheral blood lymphocytes. We automated DC detection by extracting features in Giemsa-stained metaphase chromosome images and classifying objects by machine learning (ML). DC detection involves (i) intensity thresholded segmentation of metaphase objects, (ii) chromosome separation by watershed transformation and elimination of inseparable chromosome clusters, fragments and staining debris using a morphological decision tree filter, (iii) determination of chromosome width and centreline, (iv) derivation of centromere candidates, and (v) distinction of DCs from monocentric chromosomes (MC) by ML. Centromere candidates are inferred from 14 image features input to a Support Vector Machine (SVM). Sixteen features derived from these candidates are then supplied to a Boosting classifier and a second SVM which determines whether a chromosome is a DC or an MC. The SVM was trained with 292 DCs and 3135 MCs, and then tested with cells exposed to either low (1 Gy) or high (2-4 Gy) radiation doses. Results were then compared with those of three experts. True positive rates (TPR) and positive predictive values (PPV) were determined for the tuning parameter, σ. At larger σ, PPV decreases and TPR increases. At high dose, for σ = 1.3, TPR = 0.52 and PPV = 0.83, while at σ = 1.6, TPR = 0.65 and PPV = 0.72. At low dose and σ = 1.3, TPR = 0.67 and PPV = 0.26. The algorithm differentiates DCs from MCs, overlapped chromosomes and other objects with acceptable accuracy over a wide range of radiation exposures.
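Step (i) of the pipeline, intensity-thresholded segmentation of metaphase objects, can be sketched with a classic automatic threshold. Otsu's method is an assumption here (the abstract does not name the thresholding rule); it picks the cut that maximizes between-class variance of foreground vs. background intensities.

```python
import numpy as np

def otsu_threshold(image, n_bins=256):
    """Otsu's method: choose the intensity cut that maximizes the
    between-class variance of the two resulting pixel populations."""
    hist, edges = np.histogram(image, bins=n_bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                       # background weight up to each cut
    w1 = 1.0 - w0                           # foreground weight
    mu = np.cumsum(p * centers)             # cumulative intensity mean
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_total * w0 - mu) ** 2 / (w0 * w1)
    between = np.nan_to_num(between)        # empty classes contribute nothing
    return centers[np.argmax(between)]
```

In the actual pipeline this binary mask would then feed the watershed separation and the morphological decision tree filter of steps (ii)-(v).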
A fully automated artificial intelligence method for non-invasive, imaging-based identification of genetic alterations in glioblastomas.
Calabrese Evan,Villanueva-Meyer Javier E,Cha Soonmee
Glioblastoma is the most common malignant brain parenchymal tumor, yet it remains challenging to treat. The current standard of care, resection and chemoradiation, is limited in part by the genetic heterogeneity of glioblastoma. Previous studies have identified several tumor genetic biomarkers that are frequently present in glioblastoma and can alter clinical management. Currently, genetic biomarker status is confirmed with tissue sampling, which is costly and only available after tumor resection or biopsy. The purpose of this study was to evaluate a fully automated artificial intelligence approach for predicting the status of several common glioblastoma genetic biomarkers on preoperative MRI. We retrospectively analyzed multisequence preoperative brain MRI from 199 adult patients with glioblastoma who subsequently underwent tumor resection and genetic testing. Radiomics features extracted from fully automated deep learning-based tumor segmentations were used to predict nine common glioblastoma genetic biomarkers with random forest regression. The proposed fully automated method was useful for predicting IDH mutations (sensitivity = 0.93, specificity = 0.88), ATRX mutations (sensitivity = 0.94, specificity = 0.92), chromosome 7/10 aneuploidies (sensitivity = 0.90, specificity = 0.88), and CDKN2 family mutations (sensitivity = 0.76, specificity = 0.86).
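The "radiomics features extracted from tumor segmentations" step can be illustrated with a handful of first-order intensity statistics computed inside a binary mask. The feature names and bin count below are illustrative assumptions, a toy stand-in for the full radiomics vector the study feeds to its random forest.

```python
import numpy as np

def first_order_radiomics(volume, mask):
    """First-order intensity features inside a binary tumor mask.

    volume: 3D image array (e.g. one MRI sequence)
    mask:   same-shape binary segmentation (1 = tumor voxel)
    """
    vals = volume[mask.astype(bool)].astype(float)
    hist = np.histogram(vals, bins=32)[0] / vals.size
    hist = hist[hist > 0]                        # drop empty bins for entropy
    return {
        "volume_voxels": int(vals.size),
        "mean": float(vals.mean()),
        "std": float(vals.std()),
        "p10": float(np.percentile(vals, 10)),
        "p90": float(np.percentile(vals, 90)),
        "entropy": float(-(hist * np.log2(hist)).sum()),
    }
```

One such dictionary per sequence and per tumor sub-region, concatenated into a vector, is the kind of input a random forest regressor would consume.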
Detection of chromosome structural variation by targeted next-generation sequencing and a deep learning application.
Park Hosub,Chun Sung-Min,Shim Jooyong,Oh Ji-Hye,Cho Eun Jeong,Hwang Hee Sang,Lee Ji-Young,Kim Deokhoon,Jang Se Jin,Nam Soo Jeong,Hwang Changha,Sohn Insuk,Sung Chang Ohk
Molecular testing is increasingly important in cancer diagnosis. Targeted next-generation sequencing (NGS) is a widely accepted method, but structural variation (SV) detection by targeted NGS remains challenging. In brain tumors, identification of molecular alterations, including 1p/19q co-deletion, is essential for accurate glial tumor classification. Hence, we used targeted NGS to detect 1p/19q co-deletion using a newly developed deep learning (DL) model in 61 tumors, including 19 oligodendroglial tumors. An ensemble 1-dimensional convolutional neural network was developed and used to detect the 1p/19q co-deletion. External validation was performed using 427 low-grade glial tumors from The Cancer Genome Atlas (TCGA). Manual review of the copy number plot from the targeted NGS identified the 1p/19q co-deletion in all 19 oligodendroglial tumors. Our DL model also perfectly detected the 1p/19q co-deletion (area under the curve, AUC = 1) in the testing set, and yielded reproducible results (AUC = 0.9652) in the validation set (n = 427), although the validation data were generated on a completely different platform (SNP Array 6.0 platform). In conclusion, targeted NGS using a cancer gene panel is a promising approach for classifying glial tumors, and DL can be successfully integrated for SV detection in NGS data.
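What the manual copy-number-plot review (and, implicitly, the learned 1-D CNN) decides can be sketched as a simple rule on per-arm log2 copy-number ratios. The cutoff value and the rule itself are assumptions, a heuristic stand-in: the paper's ensemble 1-D CNN learns this decision from the raw target-level profiles instead of applying a fixed threshold.

```python
import numpy as np

def call_1p19q_codeletion(log2_1p, log2_19q, cutoff=-0.3):
    """Call a 1p/19q co-deletion when the mean log2 copy-number ratio of
    targets on BOTH arms falls below `cutoff` (hypothetical heuristic).

    log2_1p, log2_19q: 1D arrays of per-target log2 ratios on 1p and 19q
    """
    return float(np.mean(log2_1p)) < cutoff and float(np.mean(log2_19q)) < cutoff
```

A co-deletion call requires both arms to drop: loss of only 1p or only 19q, which occurs in other glial tumors, must not trigger it.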