Results 1 - 20 of 25
1.
Med Image Anal ; 97: 103294, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39128377

ABSTRACT

Multiple instance learning (MIL)-based methods have been widely adopted to process whole slide images (WSIs) in computational pathology. Due to sparse slide-level supervision, these methods usually localize tumor regions poorly, leading to poor interpretability, and they lack robust uncertainty estimation of prediction results, leading to poor reliability. To address these two limitations, we propose an explainable and evidential multiple instance learning (E2-MIL) framework for whole slide image classification. E2-MIL is composed of three modules: a detail-aware attention distillation module (DAM), a structure-aware attention refinement module (SRM), and an uncertainty-aware instance classifier (UIC). Specifically, DAM helps the global network locate more detail-aware positive instances by using complementary sub-bags to learn detailed attention knowledge from the local network. A masked self-guidance loss is also introduced to bridge the gap between slide-level labels and the instance-level classification task. SRM generates a structure-aware attention map that locates the entire tumor region structure by effectively modeling the spatial relations between clustered instances. UIC provides accurate instance-level classification results and robust predictive uncertainty estimation, improving model reliability on the basis of subjective logic theory. Extensive experiments on three large multi-center subtyping datasets demonstrate the superiority of E2-MIL at both slide level and instance level.
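
For readers new to the subjective logic machinery mentioned above, here is a minimal sketch of how non-negative evidence can be mapped to Dirichlet parameters, per-class belief masses, and an uncertainty score. It illustrates the general technique only; the function name and shapes are assumptions, not the E2-MIL implementation.

```python
import torch
import torch.nn.functional as F

def evidential_uncertainty(logits: torch.Tensor):
    """Map raw instance logits to subjective-logic belief masses and an
    uncertainty score. Evidence >= 0 via softplus; alpha are Dirichlet
    parameters. Illustrative sketch only, not the E2-MIL code."""
    evidence = F.softplus(logits)               # (n_instances, n_classes)
    alpha = evidence + 1.0                      # Dirichlet parameters
    strength = alpha.sum(dim=-1, keepdim=True)  # total evidence S
    belief = evidence / strength                # per-class belief mass
    uncertainty = logits.shape[-1] / strength   # u = K / S
    prob = alpha / strength                     # expected class probability
    return belief, uncertainty.squeeze(-1), prob

logits = torch.randn(8, 2)                      # 8 instances, 2 classes
b, u, p = evidential_uncertainty(logits)
```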


Subject(s)
Image Interpretation, Computer-Assisted; Humans; Image Interpretation, Computer-Assisted/methods; Reproducibility of Results; Algorithms; Machine Learning
2.
Front Oncol ; 14: 1275769, 2024.
Article in English | MEDLINE | ID: mdl-38746682

ABSTRACT

Background: Whole slide image (WSI) analysis, driven by deep learning algorithms, has the potential to revolutionize tumor detection, classification, and treatment response prediction. However, challenges persist, such as limited model generalizability across various cancer types, the labor-intensive nature of patch-level annotation, and the necessity of integrating multi-magnification information to attain a comprehensive understanding of pathological patterns. Methods: In response to these challenges, we introduce MAMILNet, a multi-scale attentional multi-instance learning framework for WSI analysis. The incorporation of attention mechanisms into MAMILNet contributes to its generalizability across diverse cancer types and prediction tasks. The model treats whole slides as "bags" and individual patches as "instances." This approach eliminates the requirement for intricate patch-level labeling, significantly reducing the manual workload of pathologists. To enhance prediction accuracy, the model employs a multi-scale "consultation" strategy that aggregates test outcomes from various magnifications. Results: Our assessment of MAMILNet covers 1,171 cases spanning a wide range of cancer types, showcasing its effectiveness on complex prediction tasks. For breast cancer tumor detection, the area under the curve (AUC) was 0.8872, with an accuracy of 0.8760. For lung cancer subtype diagnosis, it achieved an AUC of 0.9551 and an accuracy of 0.9095. For predicting drug therapy response in ovarian cancer, MAMILNet achieved an AUC of 0.7358 and an accuracy of 0.7341. Conclusion: These outcomes underscore the potential of MAMILNet to advance precision medicine and individualized treatment planning in oncology. By addressing challenges related to model generalization, annotation workload, and multi-magnification integration, MAMILNet shows promise in improving outcomes for cancer patients. Its success in detecting breast tumors, diagnosing lung cancer subtypes, and predicting ovarian cancer therapy responses paves the way for improved patient care.
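
The "bags of instances" formulation described above is commonly realized with attention-based MIL pooling. Below is a minimal, self-contained sketch in the style of Ilse et al. (2018); the layer sizes are assumptions, and this is not the MAMILNet code.

```python
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    """Gated attention pooling over a bag of patch embeddings: each
    instance receives a learned weight, and the slide representation
    is the weighted sum. Generic sketch of attention-based MIL."""
    def __init__(self, dim=512, hidden=128, n_classes=2):
        super().__init__()
        self.attn_V = nn.Linear(dim, hidden)
        self.attn_U = nn.Linear(dim, hidden)
        self.attn_w = nn.Linear(hidden, 1)
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, instances):                     # (n_patches, dim)
        gate = torch.tanh(self.attn_V(instances)) * torch.sigmoid(self.attn_U(instances))
        a = torch.softmax(self.attn_w(gate), dim=0)   # (n_patches, 1)
        slide_repr = (a * instances).sum(dim=0)       # weighted mean of bag
        return self.classifier(slide_repr), a

bag = torch.randn(1000, 512)                          # one WSI as a bag of patches
logits, attn = AttentionMILPooling()(bag)
```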

3.
Comput Methods Programs Biomed ; 244: 107936, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38016392

ABSTRACT

BACKGROUND AND OBJECTIVE: Esophageal cancer is a serious disease with a high prevalence in Eastern Asia. Histopathology tissue analysis stands as the gold standard in diagnosing esophageal cancer. In recent years, there has been a shift towards digitizing histopathological images into whole slide images (WSIs), progressively integrating them into cancer diagnostics. However, the gigapixel sizes of WSIs present significant storage and processing challenges, and they often lack localized annotations. To address this issue, multi-instance learning (MIL) has been introduced for WSI classification, using weakly supervised learning for diagnostic analysis. Applying the principles of MIL to WSI analysis can reduce the workload of pathologists by facilitating the generation of localized annotations. Nevertheless, the approach's effectiveness is hindered by the traditional simple aggregation operation and by the domain shift that results from the prevalent use of convolutional feature extractors pretrained on ImageNet. METHODS: We propose a MIL-based framework for WSI analysis and cancer classification. Concurrently, we pretrain the feature extractor with self-supervised learning, which obviates the need for manual annotation and demonstrates versatility across tasks. This enhances the extraction of representative features from esophageal WSIs for MIL, ensuring more robust and accurate performance. RESULTS: We build a comprehensive dataset of esophageal whole slide images and conduct extensive experiments on it. The performance on our dataset demonstrates the efficiency of our proposed MIL framework and the pretraining process, with our framework outperforming existing methods and achieving an accuracy of 93.07% and an AUC (area under the curve) of 95.31%. CONCLUSION: This work proposes an effective MIL method for classifying WSIs of esophageal cancer. The promising results indicate that our cancer classification framework holds great potential for promoting automatic analysis of whole esophageal slide images.
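
The abstract does not name the self-supervised objective used for pretraining, so the sketch below shows one widely used option: a SimCLR-style NT-Xent contrastive loss between two augmented views of the same patches. Treat the choice of objective as an assumption, not the paper's method.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss (SimCLR-style) between two augmented
    views of the same patches, a common way to pretrain a feature
    extractor without labels."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d)
    sim = z @ z.t() / temperature                        # cosine similarities
    n = z1.shape[0]
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim.masked_fill_(mask, float('-inf'))                # drop self-pairs
    # positive of view i is view i+n (and vice versa)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(64, 128), torch.randn(64, 128)      # two views per patch
loss = nt_xent_loss(z1, z2)
```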


Subject(s)
Esophageal Neoplasms; Humans; Esophageal Neoplasms/diagnostic imaging; Electric Power Supplies; Image Processing, Computer-Assisted; Workload
4.
Comput Biol Med ; 167: 107607, 2023 12.
Article in English | MEDLINE | ID: mdl-37890421

ABSTRACT

Multiple instance learning (MIL) models have achieved remarkable success in analyzing whole slide images (WSIs) for disease classification. However, for giga-pixel WSI classification, current MIL models are often incapable of differentiating a WSI with extremely small tumor lesions: the minute tumor-to-normal area ratio in a MIL bag keeps the attention mechanism from properly weighting the areas corresponding to minor tumor lesions. To overcome this challenge, we propose salient instance inference MIL (SiiMIL), a weakly supervised MIL model for WSI classification. We introduce a novel representation learning approach for histopathology images to identify representative normal keys. These keys facilitate the selection of salient instances within WSIs, forming bags with high tumor-to-normal ratios. Finally, an attention mechanism is employed for slide-level classification based on the formed bags. Our results show that salient instance inference can improve the tumor-to-normal area ratio in tumor WSIs. As a result, SiiMIL achieves 0.9225 AUC and 0.7551 recall on the Camelyon16 dataset, outperforming existing MIL models. In addition, SiiMIL can generate tumor-sensitive attention heatmaps that are more interpretable to pathologists than those of the widely used attention-based MIL methods. Our experiments imply that SiiMIL can accurately identify tumor instances, which may take up less than 1% of a WSI, so that the ratio of tumor to normal instances within a bag can increase by two to four times.
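
A minimal sketch of the salience idea described above: score each instance by its similarity to a bank of learned "normal" keys and keep the least normal-like instances, forming a bag with a higher tumor-to-normal ratio. How the keys are learned is omitted, and the names and shapes are assumptions rather than the SiiMIL code.

```python
import torch
import torch.nn.functional as F

def select_salient_instances(feats, normal_keys, k=256):
    """Keep the k instances least similar to a bank of 'normal' key
    embeddings, raising the tumor-to-normal ratio of the MIL bag."""
    f = F.normalize(feats, dim=1)              # (n_instances, d)
    keys = F.normalize(normal_keys, dim=1)     # (n_keys, d)
    sim_to_normal = (f @ keys.t()).max(dim=1).values  # closest normal key
    idx = sim_to_normal.argsort()[:k]          # least normal-like first
    return feats[idx], idx

feats = torch.randn(5000, 512)                 # all patches of one WSI
keys = torch.randn(32, 512)                    # learned normal keys (assumed)
bag, kept = select_salient_instances(feats, keys)
```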


Subject(s)
Image Interpretation, Computer-Assisted; Machine Learning; Neoplasms; Humans; Neoplasms/diagnostic imaging
5.
Med Image Anal ; 88: 102885, 2023 08.
Article in English | MEDLINE | ID: mdl-37423055

ABSTRACT

Image analysis and machine learning algorithms operating on multi-gigapixel whole-slide images (WSIs) often process a large number of tiles (sub-images) and must aggregate predictions from those tiles in order to predict WSI-level labels. In this paper, we present a review of the existing literature on various types of aggregation methods, with a view to helping guide future research in the area of computational pathology (CPath). We propose a general CPath workflow with three pathways that consider multiple levels and types of data and the nature of computation to analyse WSIs for predictive modelling. We categorize aggregation methods according to the context and representation of the data, features of computational modules, and CPath use cases. We compare and contrast different methods based on the principle of multiple instance learning, perhaps the most commonly used aggregation method, covering a wide range of CPath literature. To provide a fair comparison, we consider a specific WSI-level prediction task and compare various aggregation methods for that task. Finally, we conclude with a list of objectives and desirable attributes of aggregation methods in general, the pros and cons of the various approaches, some recommendations, and possible future directions.
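
For concreteness, here are three classic tile-score aggregation baselines of the kind the review discusses (mean, max, and top-k mean pooling), wrapped in one illustrative helper; the 5% top-k fraction is an arbitrary choice for the sketch.

```python
import torch

def aggregate(patch_logits, method="mean"):
    """Aggregate per-tile class scores into one slide-level score using
    three common baselines. Illustrative only."""
    if method == "mean":                      # every tile votes equally
        return patch_logits.mean(dim=0)
    if method == "max":                       # most suspicious tile decides
        return patch_logits.max(dim=0).values
    if method == "topk":                      # robust middle ground
        k = max(1, int(0.05 * patch_logits.shape[0]))
        return patch_logits.topk(k, dim=0).values.mean(dim=0)
    raise ValueError(method)

logits = torch.randn(2000, 2)                 # per-tile class scores
slide_score = aggregate(logits, "topk")
```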


Subject(s)
Algorithms; Machine Learning; Humans; Image Processing, Computer-Assisted/methods
6.
Comput Biol Med ; 161: 107034, 2023 07.
Article in English | MEDLINE | ID: mdl-37230019

ABSTRACT

In recent years, with the advancement of computer-aided diagnosis (CAD) technology and whole slide imaging (WSI), histopathological WSIs have gradually come to play a crucial role in the diagnosis and analysis of diseases. To increase the objectivity and accuracy of pathologists' work, artificial neural network (ANN) methods are generally needed for the segmentation, classification, and detection of histopathological WSIs. However, existing review papers focus only on equipment hardware, development status, and trends, and do not summarize in detail the state-of-the-art neural networks used for whole-slide image analysis. In this paper, WSI analysis methods based on ANNs are reviewed. First, the development status of WSI and ANN methods is introduced. Second, we summarize the common ANN methods. Next, we discuss publicly available WSI datasets and evaluation metrics. The ANN architectures for WSI processing are then divided into classical neural networks and deep neural networks (DNNs) and analyzed. Finally, the application prospects of these analytical methods in the field are discussed. An important potential direction is Vision Transformers.


Subject(s)
Diagnosis, Computer-Assisted; Neural Networks, Computer; Diagnosis, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods
7.
Comput Med Imaging Graph ; 104: 102176, 2023 03.
Article in English | MEDLINE | ID: mdl-36682215

ABSTRACT

Classification of subtype and grade is imperative in the clinical diagnosis and prognosis of cancer. Many deep learning-based studies of cancer classification build on pathology and genomics. However, most of them are late-fusion-based and require full supervision for pathology image analysis. To address these problems, we present an integrated framework for cancer classification with pathology and genomics data. This framework consists of two major parts: a weakly supervised model for extracting patch features from whole slide images (WSIs), and a hierarchical multimodal fusion model. The weakly supervised model makes full use of WSI labels and mitigates the effects of label noise through a self-training strategy. The generic multimodal fusion model captures deep interaction information through multi-level attention mechanisms and controls the expressiveness of each modal representation. We validate our approach on glioma and lung cancer datasets from The Cancer Genome Atlas (TCGA). The results demonstrate that the proposed method achieves superior performance compared to state-of-the-art methods, with competitive AUCs of 0.872 and 0.977 on the two datasets, respectively. This paper provides insight into how to build deep networks on multimodal biomedical data and proposes a more general framework for pathology image analysis without pixel-level annotation.
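
A minimal sketch of gated multimodal fusion, where each modality's representation is modulated by a gate computed from the joint embedding, controlling its expressiveness as described above. The dimensions and gating form are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Attention-gated fusion of a pathology embedding and a genomics
    embedding; each modality's contribution is scaled by a sigmoid gate
    computed from the concatenated (joint) representation."""
    def __init__(self, dim=256):
        super().__init__()
        self.gate_p = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.gate_g = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.head = nn.Linear(2 * dim, 2)

    def forward(self, h_path, h_gen):
        joint = torch.cat([h_path, h_gen], dim=-1)
        h_path = self.gate_p(joint) * h_path   # genomics-aware gating
        h_gen = self.gate_g(joint) * h_gen     # pathology-aware gating
        return self.head(torch.cat([h_path, h_gen], dim=-1))

out = GatedFusion()(torch.randn(4, 256), torch.randn(4, 256))
```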


Subject(s)
Glioma; Lung Neoplasms; Humans; Genomics; Image Processing, Computer-Assisted
8.
Patterns (N Y) ; 3(12): 100642, 2022 Dec 09.
Article in English | MEDLINE | ID: mdl-36569545

ABSTRACT

Pathologists diagnose prostate cancer from core needle biopsies. In low-grade, low-volume cases, they look for a few malignant glands out of hundreds within a core, and may miss a few malignant glands, resulting in repeat biopsies or missed therapeutic opportunities. This study developed a multi-resolution deep-learning pipeline to assist pathologists in detecting malignant glands in core needle biopsies of low-grade, low-volume cases. By analyzing a gland at multiple resolutions, our model exploited morphology and neighborhood information, which are crucial in prostate gland classification. We developed and tested our pipeline on slides from a local cohort of 99 patients in Singapore. We have also made the images publicly available as the first digital histopathology dataset of patients of Asian ancestry with prostatic carcinoma. Our multi-resolution classification model achieved an area under the receiver operating characteristic curve (AUROC) of 0.992 (95% confidence interval [CI]: 0.985-0.997) in the external validation study, showing the generalizability of our multi-resolution approach.
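
The multi-resolution idea, one crop capturing gland morphology at high magnification and a wider crop capturing neighborhood context, can be sketched as a two-stream classifier like the one below; the toy backbones and input sizes are placeholders, not the published pipeline.

```python
import torch
import torch.nn as nn

class TwoResolutionClassifier(nn.Module):
    """Classify a gland jointly from a high-magnification crop
    (morphology) and a low-magnification crop centred on the same spot
    (neighborhood context), then fuse both streams."""
    def __init__(self):
        super().__init__()
        self.high = nn.Sequential(nn.Conv2d(3, 16, 3, 2, 1), nn.ReLU(),
                                  nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.low = nn.Sequential(nn.Conv2d(3, 16, 3, 2, 1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(32, 2)           # benign vs malignant

    def forward(self, x_high, x_low):
        return self.head(torch.cat([self.high(x_high), self.low(x_low)], dim=1))

logits = TwoResolutionClassifier()(torch.randn(1, 3, 256, 256),
                                   torch.randn(1, 3, 256, 256))
```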

10.
Diagnostics (Basel) ; 12(9)2022 Sep 06.
Article in English | MEDLINE | ID: mdl-36140562

ABSTRACT

In this paper, we propose a novel approach to segment tumor and normal regions in human breast tissue. Cancer is the second most common cause of death in our society; every eighth woman will be diagnosed with breast cancer in her lifetime. Histological diagnosis is key to the process in which oncotherapy is administered. Because the analysis is time-consuming and specialists are scarce, obtaining a timely diagnosis is often difficult in healthcare institutions, so there is an urgent need for improvement in diagnostics. To reduce costs and speed up the process, an automated algorithm could aid routine diagnostics. We propose an area-based annotation approach, generalized by a new rule template, to solve high-resolution biological segmentation tasks accurately and time-efficiently. These algorithmic and implementation rules give pathologists an alternative that supports decisions as accurate as manual evaluation. This research is based on an individual database from Semmelweis University containing 291 high-resolution, bright-field microscopy breast tumor tissue images. A total of 70% of the images, cut into 128 × 128-pixel patches (206,174 patches), were used to train a convolutional neural network to learn the features of normal and tumor tissue samples. Evaluating these small regions yields a high-resolution histopathological image segmentation; the optimal parameters were calculated on the validation dataset (29 images, 10%), considering both accuracy and run time. The algorithm was tested on the test dataset (61 images, 20%), reaching a 99.10% F1 score in pixel-level evaluation within 3 min on average. Besides the quantitative analyses, the system's accuracy was assessed qualitatively by a histopathologist, who confirmed that the algorithm was also accurate in regions not previously annotated.
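
A minimal sketch of the patch-based inference loop implied above: classify each 128 × 128 tile and assemble the decisions into a coarse segmentation mask. `predict_patch` stands in for the trained CNN and is an assumed interface, not the authors' model.

```python
import numpy as np

def patchwise_segmentation(image, predict_patch, patch=128):
    """Slide a patch-sized window over the tissue image, classify each
    tile as tumor (1) or normal (0), and upsample the tile decisions
    into a full-resolution segmentation mask."""
    h, w = image.shape[:2]
    mask = np.zeros((h // patch, w // patch), dtype=np.uint8)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            tile = image[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            mask[i, j] = predict_patch(tile)   # 1 = tumor, 0 = normal
    # repeat each tile decision over its pixel footprint
    return np.kron(mask, np.ones((patch, patch), dtype=np.uint8))

img = np.random.rand(1024, 1024, 3)
seg = patchwise_segmentation(img, lambda t: int(t.mean() > 0.5))
```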

11.
Clin Transl Radiat Oncol ; 36: 106-112, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35993091

ABSTRACT

Background: The microscopic tumor extension before, during, or after radiochemotherapy (RCHT) and its correlation with the tumor microenvironment (TME) are presently unknown. This information is, however, crucial in the era of image-guided, adaptive high-precision photon or particle therapy. Materials and methods: In this pilot study, we analyzed formalin-fixed paraffin-embedded (FFPE) tumor resection specimens from patients with histologically confirmed squamous cell carcinoma (SCC; n = 10) or adenocarcinoma (AC; n = 10) of the esophagus who had undergone neoadjuvant radiochemotherapy followed by resection (NRCHT + R) or resection alone (R). FFPE tissue sections were analyzed by immunohistochemistry for tumor hypoxia (HIF-1α), proliferation (Ki67), immune status (PD1), cancer cell stemness (CXCR4), and p53 mutation status. Marker expression in HIF-1α subvolumes was examined in a sub-analysis. Statistical analyses were performed using one-sided Mann-Whitney tests and Bland-Altman analysis. Results: In both SCC and AC patients, the overall percentages of tumor cells positive for the five TME markers (HIF-1α, Ki67, p53, CXCR4, and PD1) were lower after NRCHT than in the R cohort. However, only PD1 in SCC and Ki67 in AC reached statistical significance (Ki67: p = 0.03; PD1: p = 0.02). In the sub-analysis of hypoxic subvolumes among AC patients, the percentages of positive tumor cells within hypoxic regions were significantly lower in the NRCHT than in the R cohort for all markers except PD1. Conclusion: In this pilot study, we showed changes in the TME induced by NRCHT in both SCC and AC. These findings will be correlated with microscopic tumor extension measurements in a subsequent patient cohort.
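
The one-sided Mann-Whitney test used for the cohort comparisons above can be reproduced with SciPy as below; the marker percentages are synthetic placeholders, not study data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# One-sided Mann-Whitney U test: do NRCHT + R tumors show a lower
# fraction of marker-positive cells than R-only tumors? Values invented
# purely for illustration (n = 10 per cohort, as in the study design).
nrcht_pct = np.array([12.0, 8.5, 15.2, 6.3, 9.9, 11.1, 7.4, 10.0, 13.5, 5.8])
r_pct = np.array([22.1, 18.4, 25.0, 30.2, 19.7, 27.3, 21.5, 24.8, 17.9, 28.6])
stat, p = mannwhitneyu(nrcht_pct, r_pct, alternative="less")
print(f"U = {stat:.1f}, one-sided p = {p:.4f}")
```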

12.
Comput Med Imaging Graph ; 99: 102093, 2022 07.
Article in English | MEDLINE | ID: mdl-35752000

ABSTRACT

Despite the progress made during the last two decades in the surgery and chemotherapy of ovarian cancer, more than 70% of patients with advanced disease suffer recurrence and die. Surgical debulking of tumors followed by chemotherapy is the conventional treatment for advanced carcinoma, but patients given this treatment remain at great risk of recurrence and of developing drug resistance, and only about 30% of the women affected will be cured. Bevacizumab is a humanized monoclonal antibody that blocks VEGF signaling in cancer, inhibits angiogenesis, and causes tumor shrinkage; it has recently been approved by the FDA for advanced ovarian cancer in combination with chemotherapy. Considering its cost and potential toxicity, and the finding that only a portion of patients will benefit from the drug, the identification of new predictive methods for the treatment of ovarian cancer remains an urgent unmet medical need. In this study, we develop weakly supervised deep learning approaches that accurately predict the therapeutic effect of bevacizumab in ovarian cancer patients from histopathological hematoxylin and eosin stained whole slide images, without any pathologist-provided locally annotated regions. To the authors' best knowledge, this is the first model demonstrated to be effective for predicting the therapeutic effect of bevacizumab in patients with epithelial ovarian cancer. Quantitative evaluation on a whole-section dataset shows that the proposed method achieves high accuracy, 0.882 ± 0.06; precision, 0.921 ± 0.04; recall, 0.912 ± 0.03; and F-measure, 0.917 ± 0.07, using 5-fold cross-validation, and outperforms two state-of-the-art deep learning approaches (Coudray et al., 2018; Campanella et al., 2019). On an independent TMA test set, the three proposed methods obtain promising results with high recall (sensitivity) of 0.946, 0.893, and 0.964, respectively. The results suggest that the proposed method could be useful for guiding treatment, filtering out patients without a positive therapeutic response so that they are spared further treatment while patients with a positive response remain in the treatment process. Furthermore, according to a Cox proportional hazards analysis, patients predicted by the model to be non-responders had a much higher risk of cancer recurrence (hazard ratio = 13.727) than patients predicted to respond, with statistical significance (p < 0.05).
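
The hazard-ratio analysis mentioned at the end can be sketched with the lifelines library as follows. The data frame and column names are synthetic illustrations, not the study's variables, and the fitted hazard ratio will not match the reported 13.727.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Cox proportional hazards sketch: does the model's predicted response
# group stratify recurrence risk? All values below are invented.
df = pd.DataFrame({
    "months_to_event":   [6, 24, 30, 8, 36, 5, 40, 12, 28, 9],
    "recurred":          [1, 0, 0, 1, 0, 1, 0, 1, 0, 0],
    "pred_nonresponder": [1, 0, 1, 1, 0, 1, 0, 0, 0, 1],
})
cph = CoxPHFitter()
cph.fit(df, duration_col="months_to_event", event_col="recurred")
print(cph.summary[["exp(coef)", "p"]])   # exp(coef) is the hazard ratio
```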


Subject(s)
Deep Learning; Ovarian Neoplasms; Bevacizumab/therapeutic use; Carcinoma, Ovarian Epithelial/drug therapy; Female; Humans; Ovarian Neoplasms/diagnostic imaging; Ovarian Neoplasms/drug therapy; Ovarian Neoplasms/pathology; Treatment Outcome
13.
Diagnostics (Basel) ; 12(4)2022 Apr 14.
Article in English | MEDLINE | ID: mdl-35454038

ABSTRACT

Breast cancer is the leading cause of cancer death for women globally. In clinical practice, pathologists visually scan enormous gigapixel microscopic tissue slide images, which is a tedious and challenging task. In breast cancer diagnosis, micro-metastases and especially isolated tumor cells are extremely difficult to detect and easily neglected, because tiny metastatic foci may be missed in visual examination. Yet the literature poorly explores the detection of isolated tumor cells, which could serve as a viable marker for determining the prognosis of T1N0M0 breast cancer patients. To address these issues, we present a deep learning-based framework for efficient and robust lymph node metastasis segmentation in routinely used hematoxylin and eosin (H&E)-stained whole-slide images (WSIs) in minutes. A quantitative evaluation is conducted using 188 WSIs, comprising 94 pairs of H&E-stained WSIs and immunohistochemical CK(AE1/AE3)-stained WSIs, which are used to produce a reliable and objective reference standard. The quantitative results demonstrate that the proposed method achieves 89.6% precision, 83.8% recall, 84.4% F1-score, and 74.9% mIoU, performing significantly better than eight deep learning approaches, including two recently published models (v3_DCNN and Xception-65), three variants of Deeplabv3+ with three different backbones, and U-Net, SegNet, and FCN, in precision, recall, F1-score, and mIoU (p < 0.001). Importantly, the proposed system can identify tiny metastatic foci in challenging cases with a high probability of misdiagnosis on visual inspection, where the baseline approaches tend to fail. For computational time, the proposed method takes 2.4 min to process a WSI using four NVIDIA GeForce GTX 1080 Ti GPU cards and 9.6 min using a single card, notably faster than the baselines (4 times faster than U-Net and SegNet, 5 times faster than FCN, 2 times faster than the three variants of Deeplabv3+, 1.4 times faster than v3_DCNN, and 41 times faster than Xception-65).
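
For reference, pixel-level metrics of the kind reported above (precision, recall, F1, IoU) can be computed from a binary prediction and a ground-truth mask as in this small sketch; binary IoU is shown, whereas the paper's mIoU averages over classes.

```python
import numpy as np

def seg_metrics(pred, gt):
    """Pixel-level precision, recall, F1 and IoU for binary masks."""
    tp = np.sum((pred == 1) & (gt == 1))
    fp = np.sum((pred == 1) & (gt == 0))
    fn = np.sum((pred == 0) & (gt == 1))
    precision = tp / (tp + fp + 1e-8)
    recall = tp / (tp + fn + 1e-8)
    f1 = 2 * precision * recall / (precision + recall + 1e-8)
    iou = tp / (tp + fp + fn + 1e-8)
    return precision, recall, f1, iou

pred = np.random.randint(0, 2, (512, 512))
gt = np.random.randint(0, 2, (512, 512))
print(seg_metrics(pred, gt))
```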

14.
Comput Methods Programs Biomed ; 208: 106291, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34333205

ABSTRACT

BACKGROUND AND OBJECTIVE: Computerized pathology image analysis is an important tool in research and clinical settings, enabling quantitative tissue characterization and assisting a pathologist's evaluation. The aim of our study is to systematically quantify and minimize uncertainty in the output of computer-based pathology image analysis. METHODS: Uncertainty quantification (UQ) and sensitivity analysis (SA) methods, such as variance-based decomposition (VBD) and Morris One-At-a-Time (MOAT), are employed to track and quantify uncertainty in a real-world application with large whole slide imaging datasets: 943 breast invasive carcinoma (BRCA) and 381 lung squamous cell carcinoma (LUSC) patients. Because these studies are compute-intensive, high-performance computing systems and efficient UQ/SA methods were combined to provide efficient execution. UQ/SA highlighted the application parameters that impact the results, as well as the nuclear features that carry most of the uncertainty. Using this information, we built a method for selecting stable features that minimize application output uncertainty. RESULTS: The results show that input parameter variations significantly impact all stages (segmentation, feature computation, and survival analysis) of the use-case application. We then identified and classified features according to their robustness to parameter variation; using the proposed feature selection strategy, patient grouping stability in survival analysis improved by 17% and 34% for BRCA and LUSC, respectively. CONCLUSIONS: This strategy created more robust analyses, demonstrating that SA and UQ are important methods that may increase confidence in digital pathology.
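
Morris One-At-a-Time screening of the kind described above is available in the SALib package; the sketch below runs it on a stand-in analysis function, with placeholder parameter names and bounds rather than the pipeline's real inputs.

```python
import numpy as np
from SALib.sample.morris import sample
from SALib.analyze.morris import analyze

# Morris One-At-a-Time (MOAT) screening on a toy analysis function.
# The parameter names and bounds are placeholders for illustration.
problem = {
    "num_vars": 3,
    "names": ["stain_threshold", "min_nucleus_area", "smoothing_sigma"],
    "bounds": [[0.1, 0.9], [10, 200], [0.5, 5.0]],
}
X = sample(problem, N=50)       # Morris trajectories of parameter sets
# stand-in for running the segmentation/feature pipeline on each setting
Y = np.array([x[0] * 2.0 + np.sqrt(x[1]) + 0.1 * x[2] ** 2 for x in X])
Si = analyze(problem, X, Y, print_to_console=True)  # mu*, sigma per input
```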


Subject(s)
Image Processing, Computer-Assisted; Humans; Uncertainty
15.
Front Oncol ; 11: 665929, 2021.
Article in English | MEDLINE | ID: mdl-34249702

ABSTRACT

Pancreatic ductal adenocarcinoma (PDAC) is one of the deadliest cancer types worldwide, with the lowest 5-year survival rate among all cancers. Histopathology image analysis is considered the gold standard for PDAC detection and diagnosis. However, the manual diagnosis used in current clinical practice is tedious and time-consuming, and diagnostic concordance can be low. With the development of digital imaging and machine learning, several scholars have proposed PDAC analysis approaches based on feature extraction methods that rely on field knowledge. However, feature-based classification methods are applicable only to specific problems and lack versatility, so deep learning is becoming a vital alternative to handcrafted feature extraction. This paper proposes the first deep convolutional neural network architecture for classifying and segmenting pancreatic histopathological images on a relatively large WSI dataset. Our automatic patch-level approach achieved 95.3% classification accuracy, and the WSI-level approach achieved 100%. Additionally, we visualized the classification and segmentation outcomes of histopathological images to determine which areas of an image are more important for PDAC identification. Experimental results demonstrate that our proposed model can effectively diagnose PDAC from histopathological images, illustrating the potential of this practical application.
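
One common way to lift patch-level predictions to a WSI-level call, flagging a slide when the fraction of positive patches crosses a cutoff, is sketched below; the rule and thresholds are assumptions, not the paper's aggregation scheme.

```python
import numpy as np

def slide_label(patch_probs, threshold=0.5, min_positive_frac=0.05):
    """Turn patch-level PDAC probabilities into one WSI-level call:
    flag the slide when the fraction of positive patches exceeds a
    cutoff. A common aggregation heuristic, assumed for illustration."""
    positive = patch_probs > threshold
    return int(positive.mean() > min_positive_frac), positive.mean()

probs = np.random.rand(3000)            # classifier output per patch
label, frac = slide_label(probs)
```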

16.
Front Mol Biosci ; 8: 689799, 2021.
Article in English | MEDLINE | ID: mdl-34179094

ABSTRACT

Type 1 diabetes is a chronic disease of the pancreas characterized by the loss of insulin-producing beta cells. Access to human pancreas samples for research purposes has historically been limited, restricting pathological analyses to animal models. However, intrinsic differences between animals and humans have made clinical translation very challenging. Recently, human pancreas samples have become available through several biobanks worldwide, and this has opened numerous opportunities for scientific discovery. In addition, the use of new imaging technologies has unraveled many mysteries of the human pancreas, not merely in the presence of disease but also under physiological conditions. Nowadays, multiplex immunofluorescence protocols as well as sophisticated image analysis tools can be employed. Here, we describe the use of QuPath, an open-source platform for image analysis, for the investigation of human pancreas samples. We demonstrate that QuPath can be used to analyze whole-slide images with the aim of identifying the islets of Langerhans and defining their cellular composition as well as other basic morphological characteristics. In addition, we show that QuPath can identify immune cell populations in the exocrine tissue and islets of Langerhans, accurately localizing and quantifying immune infiltrates in the pancreas. We therefore present a tool and analysis pipeline that allows for the accurate characterization of the human pancreas, enabling the study of the anatomical and physiological changes underlying pancreatic diseases such as type 1 diabetes. The standardization and implementation of these analysis tools are of critical importance for understanding disease pathogenesis, and may be informative for the design of new therapies aimed at preserving beta cell function and halting the inflammation caused by the immune attack.

17.
Phys Med Biol ; 66(14)2021 07 12.
Article in English | MEDLINE | ID: mdl-34181583

ABSTRACT

Whole slide histopathology images (WSIs) play a crucial role in diagnosing lymph node metastasis of breast cancer; they usually lack fine-grained annotations of tumor regions and have large resolutions (typically 10^5 × 10^5 pixels). Multi-instance learning has gradually become the dominant weakly supervised learning framework for WSI classification when only slide-level labels are available. In this paper, we develop a novel second-order multiple instance learning method (SoMIL) with an adaptive aggregator, stacked from an attention mechanism and a recurrent neural network (RNN), for histopathological image classification. Specifically, the proposed method applies a second-order pooling module (matrix power normalization covariance) for instance-level feature extraction in the weakly supervised learning framework, attempting to explore second-order statistics of deep features for histopathological images. Additionally, we utilize an efficient channel attention mechanism to adaptively highlight the most discriminative instance features, followed by an RNN that updates the final bag-level representation for slide classification. Experimental results on the lymph node metastasis dataset of the Camelyon16 grand challenge demonstrate the significant improvement of our proposed SoMIL framework compared with other state-of-the-art multi-instance learning methods. Moreover, in external validation on 130 WSIs, SoMIL also achieves an area-under-the-curve performance competitive with a fully supervised framework.
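
A minimal sketch of the second-order pooling step named above: form the covariance of instance features and apply matrix power (square-root) normalization via an eigendecomposition. The attention and RNN stages of SoMIL are omitted, and the shapes are assumptions.

```python
import torch

def second_order_pool(feats, eps=1e-5):
    """Second-order pooling with matrix power (square-root)
    normalization: build the covariance of instance features, then
    normalize it through a symmetric eigendecomposition."""
    x = feats - feats.mean(dim=0, keepdim=True)        # (n, d), centred
    cov = x.t() @ x / (x.shape[0] - 1)                 # (d, d) covariance
    cov = cov + eps * torch.eye(cov.shape[0])          # numerical stability
    evals, evecs = torch.linalg.eigh(cov)              # symmetric eigendecomp
    sqrt_cov = evecs @ torch.diag(evals.clamp_min(0).sqrt()) @ evecs.t()
    return sqrt_cov.flatten()                          # second-order descriptor

desc = second_order_pool(torch.randn(100, 64))         # 64*64-dim feature
```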


Subject(s)
Breast Neoplasms; Neural Networks, Computer; Breast Neoplasms/diagnostic imaging; Female; Humans; Lymphatic Metastasis
18.
Comput Med Imaging Graph ; 89: 101863, 2021 04.
Article in English | MEDLINE | ID: mdl-33578222

ABSTRACT

The mortality rate of breast cancer in women has increased, both in the West and the East. Early detection is important for improving the survival rate of cancer patients. Manual detection and identification of cancer in whole slide images are critical and difficult tasks for pathologists. In this work, we introduce PMNet, a pipeline to detect regions with invasive characteristics in whole slide images. Our method employs scaled networks for detecting breast cancer in whole slide images, classifying them at patch level into normal, benign, in situ, and invasive tumors. Our approach yielded an F1-score of 88.9 (±1.7)%, outperforming the benchmark F1-score of 81.2 (±1.3)% at patch level, and achieved an average Dice coefficient of 69.8% on 10 whole slide images, compared with the benchmark average Dice coefficient of 61.5% on the BACH dataset. Similarly, on the Dryad test dataset, which comprises 173 whole slide images, we achieved an average Dice coefficient of 82.7%, compared with the previous state of the art of 76%, without fine-tuning on this dataset. We further propose a method to generate patch-level annotations for the image-level TCGA breast cancer database, which will be useful for future deep learning methods.
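
The Dice coefficient used in the WSI-level comparisons above is simply twice the mask overlap divided by the total positive area of both masks, as in this small helper.

```python
import numpy as np

def dice_coefficient(pred, gt, eps=1e-8):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    inter = np.sum((pred == 1) & (gt == 1))
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

pred = np.random.randint(0, 2, (1024, 1024))
gt = np.random.randint(0, 2, (1024, 1024))
print(f"Dice = {dice_coefficient(pred, gt):.3f}")
```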


Subject(s)
Breast Neoplasms; Breast Neoplasms/diagnostic imaging; Female; Humans; Probability
19.
Front Med (Lausanne) ; 6: 193, 2019.
Article in English | MEDLINE | ID: mdl-31632974

ABSTRACT

Stain normalization is an important processing task for computer-aided diagnosis (CAD) systems in modern digital pathology. This task reduces the color and intensity variations present in stained images from different laboratories. Consequently, stain normalization typically increases the prediction accuracy of CAD systems. However, there are computational challenges that this normalization step must overcome, especially for real-time applications: the memory and run-time bottlenecks associated with the processing of images in high resolution, e.g., 40X. Moreover, stain normalization can be sensitive to the quality of the input images, e.g., when they contain stain spots or dirt; in this case, the algorithm may fail to accurately estimate the stain vectors. We present a high-performance system for stain normalization using a state-of-the-art unsupervised method based on stain-vector estimation. Using a highly optimized normalization engine, our architecture enables high-speed and large-scale processing of high-resolution whole-slide images. This optimized engine integrates an automated thresholding technique to determine the useful pixels and uses a novel pixel-sampling method that significantly reduces the processing time of the normalization algorithm. We demonstrate the performance of our architecture using measurements from images of different sizes and scanner formats belonging to four different datasets. The results show that our optimizations achieve up to 58x speedup compared to a baseline implementation. We also prove the scalability of our system by showing that the processing time scales almost linearly with the number of tissue pixels present in the image. Furthermore, we show that the output of the normalization algorithm can be adversely affected when the input images include artifacts. To address this issue, we enhance the stain normalization pipeline by introducing a parameter cross-checking technique that automatically detects the distortion of the algorithm's critical parameters. To assess the robustness of the proposed method, we employ a machine learning (ML) pipeline that classifies images for detection of prostate cancer. The results show that the enhanced normalization algorithm increases the classification accuracy of the ML pipeline in the presence of poor-quality input images. For an exemplary ML pipeline, our new method increases the accuracy on an unseen dataset from 0.79 to 0.87.
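
Below is a compact sketch of Macenko-style stain-vector estimation incorporating the two speedups described: optical-density (OD) thresholding to keep useful tissue pixels and random pixel sampling to cut run time. The threshold, sample size, and percentile are assumed values for illustration, not the system's tuned parameters.

```python
import numpy as np

def estimate_stain_vectors(rgb, sample_size=100_000, od_threshold=0.15,
                           alpha=1.0, seed=0):
    """Macenko-style stain-vector estimation with OD thresholding and
    random pixel subsampling. Returns a 2 x 3 stain matrix (sketch)."""
    od = -np.log((rgb.reshape(-1, 3).astype(np.float64) + 1) / 255.0)
    od = od[np.all(od > od_threshold, axis=1)]          # drop background
    rng = np.random.default_rng(seed)                   # subsample pixels
    if od.shape[0] > sample_size:
        od = od[rng.choice(od.shape[0], sample_size, replace=False)]
    _, _, v = np.linalg.svd(od, full_matrices=False)    # top-2 OD plane
    proj = od @ v[:2].T                                 # project onto plane
    phi = np.arctan2(proj[:, 1], proj[:, 0])            # pixel angles
    lo, hi = np.percentile(phi, [alpha, 100 - alpha])   # robust extremes
    stains = np.stack([v[:2].T @ [np.cos(a), np.sin(a)] for a in (lo, hi)])
    return stains

wsi_tile = (np.random.rand(512, 512, 3) * 255).astype(np.uint8)
he_vectors = estimate_stain_vectors(wsi_tile)
```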

20.
Med Image Anal ; 58: 101549, 2019 12.
Article in English | MEDLINE | ID: mdl-31499320

ABSTRACT

Whole slide histopathology images (WSIs) play a critical role in gastric cancer diagnosis. However, due to the large scale of WSIs and the varying sizes of abnormal areas, selecting informative regions and analyzing them during automatic diagnosis is quite challenging. Multi-instance learning based on the most discriminative instances can be of great benefit for whole slide gastric image diagnosis. In this paper, we design a recalibrated multi-instance deep learning method (RMDL) to address this challenging problem. We first select the discriminative instances and then diagnose diseases from these instances with the proposed RMDL approach. The designed RMDL network is capable of capturing instance-wise dependencies and recalibrating instance features according to importance coefficients learned from the fused features. Furthermore, we build a large whole-slide gastric histopathology image dataset with detailed pixel-level annotations. Experimental results on this gastric dataset demonstrate a significant accuracy improvement of our proposed framework over other state-of-the-art multi-instance learning methods. Moreover, our method is general and can be extended to diagnosis tasks for other cancer types based on WSIs.
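
The recalibration step, instance features rescaled by importance coefficients learned from a fused bag-level feature, can be sketched as a squeeze-and-excitation-style module like the one below; this illustrates the mechanism only and is not the published RMDL network.

```python
import torch
import torch.nn as nn

class InstanceRecalibration(nn.Module):
    """Recalibrate instance features with importance coefficients
    learned from a fused (bag-level) summary, in the spirit of the
    recalibration step described above."""
    def __init__(self, dim=512, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(dim, dim // reduction), nn.ReLU(),
                                nn.Linear(dim // reduction, dim), nn.Sigmoid())

    def forward(self, instances):              # (n_instances, dim)
        fused = instances.mean(dim=0)          # bag-level summary feature
        weights = self.fc(fused)               # per-channel importance
        return instances * weights             # recalibrated instances

recal = InstanceRecalibration()(torch.randn(20, 512))
```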


Subject(s)
Deep Learning; Stomach Neoplasms/pathology; Calibration; Datasets as Topic; Diagnosis, Differential; Humans; Staining and Labeling; Stomach Neoplasms/diagnostic imaging