Results 1 - 20 of 262
1.
Med Image Anal ; 97: 103294, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39128377

ABSTRACT

Multiple instance learning (MIL)-based methods have been widely adopted to process whole slide images (WSIs) in computational pathology. Due to sparse slide-level supervision, these methods usually localize tumor regions poorly, which limits interpretability, and they lack robust uncertainty estimation of their predictions, which limits reliability. To address these two limitations, we propose an explainable and evidential multiple instance learning (E2-MIL) framework for whole slide image classification. E2-MIL is composed of three modules: a detail-aware attention distillation module (DAM), a structure-aware attention refined module (SRM), and an uncertainty-aware instance classifier (UIC). Specifically, DAM helps the global network locate more detail-aware positive instances by utilizing complementary sub-bags to learn detailed attention knowledge from the local network. A masked self-guidance loss is also introduced to help bridge the gap between slide-level labels and the instance-level classification task. SRM generates a structure-aware attention map that locates the entire tumor region structure by effectively modeling the spatial relations between clustering instances. Finally, UIC provides accurate instance-level classification results and robust predictive uncertainty estimation, based on subjective logic theory, to improve model reliability. Extensive experiments on three large multi-center subtyping datasets demonstrate the slide-level and instance-level performance superiority of E2-MIL.
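The uncertainty-aware instance classifier builds on subjective logic, where non-negative class evidence parameterizes a Dirichlet distribution. A minimal sketch of that opinion formation (the function name and evidence values are illustrative, not taken from the paper):

```python
import numpy as np

def subjective_logic_opinion(evidence):
    """Turn non-negative per-class evidence into belief masses plus an
    explicit uncertainty mass, per subjective logic theory."""
    evidence = np.asarray(evidence, dtype=float)
    k = evidence.size                  # number of classes
    alpha = evidence + 1.0             # Dirichlet concentration parameters
    strength = alpha.sum()             # Dirichlet strength S
    belief = evidence / strength       # belief mass b_k = e_k / S
    uncertainty = k / strength         # vacuity u = K / S; belief.sum() + u == 1
    prob = alpha / strength            # expected class probabilities
    return belief, uncertainty, prob

# Strong evidence for class 0 gives low uncertainty...
b, u, p = subjective_logic_opinion([40.0, 1.0, 1.0])
# ...while no evidence at all gives maximal uncertainty (u = 1).
b0, u0, p0 = subjective_logic_opinion([0.0, 0.0, 0.0])
```

An instance whose prediction carries high vacuity can then be flagged as unreliable instead of being silently misclassified.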


Subject(s)
Image Interpretation, Computer-Assisted; Humans; Image Interpretation, Computer-Assisted/methods; Reproducibility of Results; Algorithms; Machine Learning
2.
Phys Med Biol ; 69(18)2024 Sep 13.
Article in English | MEDLINE | ID: mdl-39191290

ABSTRACT

Objective. In this study, we propose a semi-supervised learning (SSL) scheme using a patch-based deep learning (DL) framework to tackle the challenge of high-precision classification of seven lung tumor growth patterns, despite having only a small amount of labeled data in whole slide images (WSIs). The scheme aims to enhance generalization with limited data and reduce dependence on large amounts of labeled data, addressing the common challenge of high demand for labeled data in medical image analysis. Approach. We employ an SSL approach enhanced by a dynamic confidence threshold mechanism that adjusts based on the quantity and quality of the pseudo-labels generated. This dynamic thresholding helps avoid the pseudo-label class imbalance and the low pseudo-label yield that can result from a higher fixed threshold. Furthermore, we introduce a multi-teacher knowledge distillation (MTKD) technique that adaptively weights predictions from multiple teacher models to transfer reliable knowledge and safeguard student models from low-quality teacher predictions. Main results. The framework underwent rigorous training and evaluation on a dataset of 150 WSIs, each representing one of the seven growth patterns. The experimental results demonstrate that the framework classifies lung tumor growth patterns in histopathology images with high accuracy, performing comparably to fully supervised models and human pathologists. In addition, the framework's evaluation metrics on a publicly available dataset exceed those of previous studies, indicating good generalizability. Significance. This research demonstrates that an SSL approach can achieve results comparable to fully supervised models and expert pathologists, opening new possibilities for efficient and cost-effective medical image analysis. The implementation of dynamic confidence thresholding and MTKD represents a significant advance in applying DL to complex medical image analysis tasks, which could lead to faster and more accurate diagnoses, ultimately improving patient outcomes and fostering the overall progress of healthcare technology.
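The dynamic confidence threshold can be pictured as a per-class cutoff that relaxes for classes yielding few confident pseudo-labels, in the spirit of FlexMatch; the paper's exact rule is not reproduced here, so the scaling below is an assumption:

```python
import numpy as np

def dynamic_thresholds(confidences, pred_classes, n_classes, base_tau=0.95):
    """Per-class confidence thresholds that relax for classes producing
    few confident pseudo-labels (FlexMatch-style scaling; an assumption,
    not the paper's exact mechanism)."""
    counts = np.zeros(n_classes)
    for c, conf in zip(pred_classes, confidences):
        if conf >= base_tau:
            counts[c] += 1            # confident pseudo-labels per class
    # normalise by the best-learned class; rare classes get a lower threshold
    status = counts / max(counts.max(), 1.0)
    return base_tau * status

def select_pseudo_labels(confidences, pred_classes, thresholds):
    """Keep samples whose confidence clears their class's dynamic threshold."""
    return [i for i, (c, conf) in enumerate(zip(pred_classes, confidences))
            if conf >= thresholds[c]]

taus = dynamic_thresholds([0.99, 0.97, 0.60], [0, 0, 1], n_classes=2)
kept = select_pseudo_labels([0.99, 0.50, 0.60], [0, 0, 1], taus)
```

Classes the student already learns well keep a strict threshold, while under-represented growth patterns admit more pseudo-labels, countering class imbalance.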


Subject(s)
Adenocarcinoma of Lung; Image Processing, Computer-Assisted; Lung Neoplasms; Humans; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/pathology; Adenocarcinoma of Lung/diagnostic imaging; Adenocarcinoma of Lung/pathology; Image Processing, Computer-Assisted/methods; Supervised Machine Learning; Deep Learning
3.
J Med Imaging (Bellingham) ; 11(4): 047501, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39087085

ABSTRACT

Purpose: Endometrial cancer (EC) is one of the most common cancers affecting women. While hematoxylin and eosin (H&E) staining remains the standard for histological analysis, immunohistochemistry (IHC) provides molecular-level visualization. Our study proposes a digital staining method to generate the hematoxylin-3,3'-diaminobenzidine (H-DAB) IHC stain of Ki-67 for the whole slide image of an EC tumor from its H&E-stained counterpart. Approach: We employed a color unmixing technique to yield stain density maps from the optical density (OD) of the stains and utilized a U-Net for end-to-end inference. The effectiveness of the proposed method was evaluated using the Pearson correlation between the digital and physical stains' labeling index (LI), a key metric of tumor proliferation. Two cross-validation schemes were designed: intraslide validation and cross-case validation (CCV). In the widely used intraslide scheme, the training and validation sets may include different regions from the same slide; the more rigorous CCV scheme strictly prohibits any validation slide from contributing to training. Results: The proposed method yielded a high-resolution digital stain with preserved histological features and a reliable correlation with the physical stain in terms of the Ki-67 LI. In the intraslide scheme, using intraslide patches produced a biased accuracy (e.g., R = 0.98) significantly higher than that of CCV, whereas the CCV scheme retained a fair correlation (e.g., R = 0.66) between the LIs calculated from the digital stain and its physical IHC counterpart. Inferring the OD of the IHC stain from that of the H&E stain improved the correlation, outperforming the baseline model operating in RGB space. Conclusions: Our study shows that molecule-level insights can be obtained from H&E images using deep learning. Furthermore, the improvement brought by OD inference suggests a route toward more generalizable digital staining models via per-stain analysis.
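The color-unmixing step rests on the Beer-Lambert relation: RGB intensities are converted to optical density, and a stain matrix is inverted to recover per-stain density maps. A minimal sketch (the stain vectors below are illustrative placeholders, not calibrated H-DAB values):

```python
import numpy as np

def rgb_to_od(rgb, i0=255.0, eps=1e-6):
    """Beer-Lambert conversion of RGB intensities to optical density."""
    rgb = np.maximum(np.asarray(rgb, dtype=float), eps)
    return -np.log10(rgb / i0)

def unmix_stains(od_pixels, stain_matrix):
    """Least-squares color unmixing: recover per-stain density maps from OD.
    Rows of stain_matrix are unit OD vectors of each stain."""
    # od = densities @ stain_matrix  ->  densities = od @ pinv(stain_matrix)
    return od_pixels @ np.linalg.pinv(stain_matrix)

# Illustrative two-stain matrix (hematoxylin-like, DAB-like), rows unit-norm
stains = np.array([[0.65, 0.70, 0.29],
                   [0.27, 0.57, 0.78]])
stains = stains / np.linalg.norm(stains, axis=1, keepdims=True)

# Round-trip check: densities -> OD -> recovered densities
true_density = np.array([[0.5, 0.2]])
od_true = true_density @ stains
recovered = unmix_stains(od_true, stains)
```

Because the stain rows are linearly independent, the pseudo-inverse recovers the densities exactly for any OD vector in their span.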

4.
Quant Imaging Med Surg ; 14(8): 5831-5844, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39144041

ABSTRACT

Background: Axillary lymph node (ALN) status is a crucial prognostic indicator for breast cancer metastasis, and manual interpretation of whole slide images (WSIs) is the current standard practice. However, this method is subjective and time-consuming. Recent advances in deep learning-based medical image analysis have shown promise for improving clinical diagnosis. This study leverages these advances to develop a deep learning model, based on features extracted from primary tumor biopsies, for preoperatively identifying ALN metastasis in early-stage breast cancer patients with negative nodes. Methods: We present DLCNBC-SA, a deep learning-based network tailored to core needle biopsy and clinical data feature extraction that integrates a self-attention mechanism (CNBC-SA). The proposed model consists of a convolutional neural network (CNN) feature extractor and an improved self-attention module, which preserves the independence of features in WSIs and enhances them to provide rich feature representations. To validate the model, we conducted comparative experiments and ablation studies on publicly available datasets, with verification through quantitative analysis. Results: The comparative experiments illustrate the superior performance of the proposed model in binary classification of ALNs. Our method achieved an outstanding area under the curve (AUC) of 0.882 in this task, significantly surpassing the state-of-the-art (SOTA) method on the same dataset (AUC: 0.862). The ablation experiments reveal that RandomRotation data augmentation and the Adadelta optimizer effectively enhance the model's performance. Conclusions: The experimental results demonstrate that the proposed model outperforms the SOTA model on the same dataset, establishing its reliability as an assistant for pathologists analyzing breast cancer WSIs and significantly enhancing both the efficiency and accuracy of the diagnostic process.

5.
Virchows Arch ; 2024 Aug 07.
Article in English | MEDLINE | ID: mdl-39107524

ABSTRACT

The aim of the present study was to develop and validate a quantitative image analysis (IA) algorithm to aid pathologists in assessing bright-field HER2 in situ hybridization (ISH) tests in solid cancers. A cohort of 80 sequential cases (40 HER2-negative and 40 HER2-positive) was evaluated for HER2 gene amplification with bright-field ISH. We developed an IA algorithm using the ISH module of the HALO software to automatically quantify HER2 and CEP17 copy numbers per cell as well as the HER2/CEP17 ratio. We observed a high correlation between visual and IA quantification for the HER2/CEP17 ratio and the average HER2 and CEP17 copy numbers per cell (Pearson's correlation coefficients of 0.842, 0.916, and 0.765, respectively). IA counted from 124 to 47,044 cells per case (median 5565 cells). The margin of error decreased from a median of 0.23 to 0.02 for the HER2/CEP17 ratio and from a median of 0.49 to 0.04 for the average HER2 copy number per cell when moving from visual quantification to IA. Curve estimation regression models showed that a minimum of 469 or 953 invasive cancer cells per case is needed to reach an average margin of error below 0.1 for the HER2/CEP17 ratio or for the average HER2 copy number per cell, respectively. Lastly, a case took on average 212.1 s to execute in IA, i.e., it evaluates about 130 cells/s and requires 6.7 s/mm2. The concordance of the IA software with visual scoring was 95%, with a sensitivity of 90% and a specificity of 100%. All four discordant cases achieved concordant results after region-of-interest adjustment. In conclusion, this validation study underscores the usefulness of IA in HER2 ISH testing, displaying excellent concordance with visual scoring and significantly reducing margins of error.
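The shrinking margin of error as more cells are counted follows from a standard z-based confidence interval on the mean copy number; the study's exact margin-of-error definition is an assumption here, as are the example counts:

```python
import math

def her2_metrics(her2_counts, cep17_counts, z=1.96):
    """Average HER2 and CEP17 copy numbers per cell, the HER2/CEP17 ratio,
    and a z-based margin of error for the mean HER2 copy number."""
    n = len(her2_counts)
    mean_her2 = sum(her2_counts) / n
    mean_cep17 = sum(cep17_counts) / n
    ratio = mean_her2 / mean_cep17
    sd = math.sqrt(sum((x - mean_her2) ** 2 for x in her2_counts) / (n - 1))
    margin = z * sd / math.sqrt(n)      # shrinks ~1/sqrt(n) with more cells
    return mean_her2, mean_cep17, ratio, margin

def min_cells_for_margin(sd, target=0.1, z=1.96):
    """Smallest n with z*sd/sqrt(n) <= target."""
    return math.ceil((z * sd / target) ** 2)

# Toy per-cell counts: 4 cells, HER2 signals vs CEP17 signals
m_h, m_c, ratio, moe = her2_metrics([4, 4, 3, 5], [2, 2, 2, 2])
```

This makes the paper's point concrete: counting thousands of cells (as IA does) drives the margin of error far below what is practical with a visual count of a few dozen cells.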

6.
BMC Cancer ; 24(1): 1069, 2024 Aug 29.
Article in English | MEDLINE | ID: mdl-39210289

ABSTRACT

BACKGROUND: Thyroid cancer is a common thyroid malignancy. Most thyroid lesions require intraoperative frozen-section pathology diagnosis, which provides important information for precision surgery. With the development of digital whole slide images (WSIs), deep learning methods for histopathological classification of the thyroid gland (paraffin sections) have achieved outstanding results. Our current study clarifies whether deep learning can assist pathology diagnosis of intraoperative frozen thyroid lesions. METHODS: We propose an artificial intelligence-assisted diagnostic system for frozen thyroid lesions that applies prior knowledge in a cascade: first a dichotomous judgment of whether the lesion is cancerous, then a four-way judgment of the cancer type, categorizing frozen thyroid lesions into five classes: papillary thyroid carcinoma, medullary thyroid carcinoma, anaplastic thyroid carcinoma, follicular thyroid tumor, and non-cancerous lesion. We obtained 4409 frozen digital pathology sections (WSIs) of thyroid from the First Affiliated Hospital of Sun Yat-sen University (SYSUFH) to train and test the model, validating performance with six-fold cross validation. A further 101 papillary microcarcinoma sections were used to validate the system's sensitivity, and 1388 thyroid WSIs served as an external evaluation dataset. The deep learning models were compared on several metrics, including accuracy, F1 score, recall, precision, and area under the curve (AUC). RESULTS: We developed the first deep learning-based frozen thyroid diagnostic classifier for histopathological WSI classification of papillary carcinoma, medullary carcinoma, follicular tumor, anaplastic carcinoma, and non-carcinoma lesions. On test slides, the system achieved an accuracy of 0.9459, a precision of 0.9475, and an AUC of 0.9955. On the papillary carcinoma test slides, the system accurately predicted even lesions as small as 2 mm in diameter. With the acceleration component, cut processing completes in 346.12 s and visual inference predictions are obtained in 98.61 s, meeting the time requirements of intraoperative diagnosis. Our study employs a deep learning approach for high-precision classification of intraoperative frozen thyroid lesions in the clinical setting, with potential clinical implications for assisting pathologists and for precision thyroid surgery.
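The two-stage decision described above (cancerous vs. not, then a subtype call) can be sketched as a simple cascade; the threshold, probabilities, and class ordering are illustrative assumptions, not the paper's values:

```python
def classify_frozen_section(p_cancer, subtype_probs, cancer_threshold=0.5):
    """Two-stage decision mirroring the described pipeline: a binary
    cancerous/non-cancerous judgment, then a subtype call among the
    four carcinoma classes."""
    if p_cancer < cancer_threshold:
        return "non-cancerous lesion"
    subtypes = ["papillary thyroid carcinoma", "medullary thyroid carcinoma",
                "anaplastic thyroid carcinoma", "follicular thyroid tumor"]
    # pick the subtype with the highest predicted probability
    best = max(range(len(subtypes)), key=lambda i: subtype_probs[i])
    return subtypes[best]
```

Factoring the five-way problem into a binary gate plus a four-way subtype head lets each stage specialize, which is the stated motivation for injecting prior knowledge into the system.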


Subject(s)
Deep Learning; Frozen Sections; Thyroid Cancer, Papillary; Thyroid Neoplasms; Humans; Thyroid Neoplasms/pathology; Thyroid Neoplasms/diagnosis; Thyroid Neoplasms/surgery; Thyroid Cancer, Papillary/pathology; Thyroid Cancer, Papillary/diagnosis; Thyroid Cancer, Papillary/surgery; Carcinoma, Papillary/pathology; Carcinoma, Papillary/surgery; Carcinoma, Papillary/diagnosis; Adenocarcinoma, Follicular/pathology; Adenocarcinoma, Follicular/diagnosis; Adenocarcinoma, Follicular/surgery; Thyroid Gland/pathology; Thyroid Gland/surgery; Carcinoma, Neuroendocrine/pathology; Carcinoma, Neuroendocrine/diagnosis; Carcinoma, Neuroendocrine/surgery; Female; Male; Middle Aged; Adult; Intraoperative Period; Thyroid Carcinoma, Anaplastic/pathology; Thyroid Carcinoma, Anaplastic/diagnosis; Thyroid Carcinoma, Anaplastic/surgery
7.
Med Image Anal ; 97: 103257, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38981282

ABSTRACT

The alignment of tissue between histopathological whole slide images (WSIs) is crucial for research and clinical applications. Advances in computing, deep learning, and the availability of large WSI datasets have revolutionised WSI analysis; as a result, the current state of the art in WSI registration is unclear. To address this, we conducted the ACROBAT challenge, based on the largest WSI registration dataset to date, comprising 4,212 WSIs from 1,152 breast cancer patients. The challenge objective was to align WSIs of tissue stained with routine diagnostic immunohistochemistry to their H&E-stained counterparts. We compare the performance of eight WSI registration algorithms, including an investigation of the impact of different WSI properties and clinical covariates. We find that conceptually distinct WSI registration methods can achieve highly accurate registrations, and we identify covariates that impact performance across methods. These results provide a comparison of current WSI registration methods and guide researchers in selecting and developing them.


Subject(s)
Algorithms; Breast Neoplasms; Humans; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/pathology; Female; Image Interpretation, Computer-Assisted/methods; Immunohistochemistry
8.
Cancer Cytopathol ; 2024 Jul 14.
Article in English | MEDLINE | ID: mdl-39003588

ABSTRACT

BACKGROUND: This study evaluated the diagnostic effectiveness of the AIxURO platform, an artificial intelligence-based tool, to support urine cytology for bladder cancer management, which typically requires experienced cytopathologists and substantial diagnosis time. METHODS: One cytopathologist and two cytotechnologists reviewed 116 urine cytology slides and corresponding whole-slide images (WSIs) from urology patients. They used three diagnostic modalities: microscopy, WSI review, and AIxURO, per The Paris System for Reporting Urinary Cytology (TPS) criteria. Performance metrics, including TPS-guided and binary diagnosis, inter- and intraobserver agreement, and screening time, were compared across all methods and reviewers. RESULTS: AIxURO improved diagnostic accuracy by increasing sensitivity (from 25.0%-30.6% to 63.9%), positive predictive value (PPV; from 21.6%-24.3% to 31.1%), and negative predictive value (NPV; from 91.3%-91.6% to 95.3%) for atypical urothelial cell (AUC) cases. For suspicious for high-grade urothelial carcinoma (SHGUC) cases, it improved sensitivity (from 15.2%-27.3% to 33.3%), PPV (from 31.3%-47.4% to 61.1%), and NPV (from 91.6%-92.7% to 93.3%). Binary diagnoses exhibited an improvement in sensitivity (from 77.8%-82.2% to 90.0%) and NPV (from 91.7%-93.4% to 95.8%). Interobserver agreement across all methods showed moderate consistency (κ = 0.57-0.61), with the cytopathologist demonstrating higher intraobserver agreement than the two cytotechnologists across the methods (κ = 0.75-0.88). AIxURO significantly reduced screening time by 52.3%-83.2% from microscopy and 43.6%-86.7% from WSI review across all reviewers. Screening-positive (AUC+) cases required more time than negative cases across all methods and reviewers. CONCLUSIONS: AIxURO demonstrates the potential to improve both sensitivity and efficiency in bladder cancer diagnostics via urine cytology. 
Its integration into the cytopathological screening workflow could markedly decrease screening times, which would improve overall diagnostic processes.
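The sensitivity, PPV, and NPV figures above all derive from a 2x2 confusion matrix per diagnostic category. A minimal helper (the counts in the example are made up for illustration):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion matrix,
    as used to compare the three review modalities."""
    sensitivity = tp / (tp + fn)   # true positives among actual positives
    specificity = tn / (tn + fp)   # true negatives among actual negatives
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

sens, spec, ppv, npv = diagnostic_metrics(tp=8, fp=2, tn=8, fn=2)
```

Note that PPV and NPV, unlike sensitivity and specificity, shift with the prevalence of positive cases in the reviewed cohort, which is why all four are reported per category.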

9.
Brief Bioinform ; 25(4)2024 May 23.
Article in English | MEDLINE | ID: mdl-38960406

ABSTRACT

Spatial transcriptomics data play a crucial role in cancer research, providing a nuanced understanding of the spatial organization of gene expression within tumor tissues. Unraveling the spatial dynamics of gene expression can unveil key insights into tumor heterogeneity and aid in identifying potential therapeutic targets. However, in many large-scale cancer studies, spatial transcriptomics data are limited, while bulk RNA-seq and corresponding Whole Slide Image (WSI) data are more common (e.g., the TCGA project). To address this gap, there is a critical need for methodologies that can estimate gene expression at near-cell (spot) resolution from existing WSI and bulk RNA-seq data. Such an approach is essential for reanalyzing expansive cohort studies and uncovering novel biomarkers overlooked in initial assessments. In this study, we present STGAT (Spatial Transcriptomics Graph Attention Network), a novel approach leveraging Graph Attention Networks (GAT) to discern spatial dependencies among spots. Trained on spatial transcriptomics data, STGAT estimates gene expression profiles at spot-level resolution and predicts whether each spot represents tumor or non-tumor tissue, especially in patient samples where only WSI and bulk RNA-seq data are available. Comprehensive tests on two breast cancer spatial transcriptomics datasets demonstrated that STGAT outperformed existing methods in accurately predicting gene expression. Further analyses using the TCGA breast cancer dataset revealed that gene expression estimated from tumor-only spots (predicted by STGAT) provides more accurate molecular signatures for breast cancer subtype and tumor stage prediction, and also improves patient survival and disease-free survival analyses. Availability: Code is available at https://github.com/compbiolabucf/STGAT.
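STGAT's core operation is graph attention over neighbouring spots. A single-head layer in the style of Velickovic et al. illustrates the idea; the weights and adjacency below are toy values, not trained STGAT parameters:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gat_layer(h, adj, W, a, leaky=0.2):
    """One single-head graph-attention layer over spot features.
    h: (N, F) spot features; adj: (N, N) 0/1 adjacency with self-loops;
    W: (F, F') projection; a: (2*F',) attention vector."""
    z = h @ W                                   # project features
    n = z.shape[0]
    out = np.zeros_like(z)
    for i in range(n):
        nbrs = np.nonzero(adj[i])[0]
        # attention logit for each neighbour j of spot i
        logits = np.array([np.concatenate([z[i], z[j]]) @ a for j in nbrs])
        logits = np.where(logits > 0, logits, leaky * logits)  # LeakyReLU
        alpha = softmax(logits)                 # normalise over neighbours
        out[i] = (alpha[:, None] * z[nbrs]).sum(axis=0)
    return out

# Two fully connected spots with identity projection and zero attention
# vector: attention is uniform, so each output is the neighbourhood mean.
h = np.array([[1.0, 0.0], [0.0, 1.0]])
out = gat_layer(h, np.ones((2, 2)), W=np.eye(2), a=np.zeros(4))
```

With learned `W` and `a`, each spot's embedding becomes a data-dependent mixture of its spatial neighbours, which is what lets the model exploit spot-to-spot dependencies.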


Subject(s)
Gene Expression Profiling; RNA-Seq; Transcriptome; Humans; RNA-Seq/methods; Gene Expression Profiling/methods; Breast Neoplasms/genetics; Breast Neoplasms/metabolism; Gene Expression Regulation, Neoplastic; Computational Biology/methods; Female; Biomarkers, Tumor/genetics; Biomarkers, Tumor/metabolism
10.
Front Oncol ; 14: 1346237, 2024.
Article in English | MEDLINE | ID: mdl-39035745

ABSTRACT

Pancreatic cancer is one of the most lethal cancers worldwide, with a 5-year survival rate of less than 5%, the lowest of all cancer types. Pancreatic ductal adenocarcinoma (PDAC) is the most common and aggressive pancreatic cancer and has been classified as a health emergency over the past few decades. Histopathological diagnosis and prognosis evaluation of PDAC are time-consuming, laborious, and challenging under current clinical practice conditions. Pathological artificial intelligence (AI) research has been active lately, but accessing medical data is challenging: the amount of open pathology data is small, and the absence of open annotation data drawn by medical staff makes pathology AI research difficult. Here, we provide easily accessible, high-quality annotation data to address these obstacles. For data evaluation, we use supervised learning with a deep convolutional neural network to segment 11 PDAC histopathological whole slide images (WSIs) annotated directly by medical staff from an open WSI dataset. We visualized the segmentation results with a Dice score of 73% on the WSIs, including PDAC areas, thereby identifying regions important for PDAC diagnosis and demonstrating high data quality. Additionally, pathologists assisted by AI can significantly increase their work efficiency. The pathological AI guidelines we propose are effective for developing histopathological AI for PDAC and are significant for the clinical field.
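The reported 73% Dice score measures overlap between the predicted and annotated PDAC masks; a minimal implementation of the metric:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between binary masks: 2|A n B| / (|A| + |B|).
    eps keeps the ratio defined when both masks are empty."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Half-overlapping toy masks -> Dice of 0.5
d = dice_score([1, 1, 0, 0], [0, 1, 1, 0])
```

Unlike pixel accuracy, Dice is insensitive to the large background area of a WSI, which is why it is the standard report for sparse tumor segmentation.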

11.
Dig Dis Sci ; 69(8): 2985-2995, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38837111

ABSTRACT

BACKGROUND: Colorectal cancer (CRC) is a malignant tumor of the digestive tract with both a high incidence rate and high mortality. Early detection and intervention can improve patient clinical outcomes and survival. METHODS: This study computationally investigates a set of prognostic tissue and cell features from diagnostic tissue slides. Combined with clinical prognostic variables, the pathological image features can predict prognosis in CRC patients. Our CRC prognosis prediction pipeline consists of three sequential modules: (1) a MultiTissue Net to delineate outlines of different tissue types within the whole slide image (WSI) of CRC, for further ROI selection by pathologists; (2) three levels of quantitative image metrics related to tissue composition, cell shape, and hidden features from a deep network; and (3) fusion of the multi-level features to build a prognostic CRC model that predicts survival. RESULTS: Experimental results suggest that each group of features has a particular relationship with patient prognosis in the independent test set. In the fused-feature experiment, the accuracy of predicting patients' prognosis and survival status is 81.52%, with an AUC of 0.77. CONCLUSION: This paper constructs a model that predicts postoperative patient survival from image features and clinical information. Several features were found to be associated with patient prognosis and survival.


Subject(s)
Colorectal Neoplasms; Humans; Colorectal Neoplasms/pathology; Colorectal Neoplasms/mortality; Prognosis; Male; Female; Image Interpretation, Computer-Assisted; Predictive Value of Tests
12.
ArXiv ; 2024 Apr 11.
Article in English | MEDLINE | ID: mdl-38903738

ABSTRACT

Whole Slide Images (WSIs), obtained by high-resolution digital scanning of microscope slides at multiple scales, are the cornerstone of modern digital pathology. However, they pose a particular challenge for AI-based analysis because pathology labeling is typically done at the slide level rather than the tile level. Not only are medical diagnoses recorded at the specimen level; the detection of oncogene mutations is also experimentally obtained, and recorded by initiatives like The Cancer Genome Atlas (TCGA), at the slide level. This creates a dual challenge: (a) accurately predicting the overall cancer phenotype, and (b) finding out which cellular morphologies are associated with it at the tile level. To address these challenges, a weakly supervised Multiple Instance Learning (MIL) approach was explored for two prevalent cancer types, Invasive Breast Carcinoma (TCGA-BRCA) and Lung Squamous Cell Carcinoma (TCGA-LUSC), for tumor detection at low magnification levels and for TP53 mutations at various levels. Our results show that a novel additive implementation of MIL matched the performance of the reference implementation (AUC 0.96) and was only slightly outperformed by Attention MIL (AUC 0.97). More interestingly from the perspective of the molecular pathologist, these different AI architectures show distinct sensitivities to morphological features (through the detection of Regions of Interest, RoIs) at different amplification levels. Tellingly, TP53 mutation was most sensitive to features at the higher amplifications, where cellular morphology is resolved.
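Attention MIL, the strongest variant here (AUC 0.97), pools tile embeddings with learned attention weights in the style of Ilse et al. (2018). A minimal sketch (the parameter values are illustrative, not the study's trained weights):

```python
import numpy as np

def attention_mil_pool(instances, V, w):
    """Attention-based MIL pooling: a learned score per tile,
    softmax-normalised over the bag, then a weighted sum gives the
    slide-level embedding. instances: (N, F); V: (F, D); w: (D,)."""
    scores = np.tanh(instances @ V) @ w      # un-normalised tile scores
    e = np.exp(scores - scores.max())
    alpha = e / e.sum()                      # attention over tiles
    return alpha @ instances, alpha          # bag embedding, tile weights

# Identical tiles receive uniform attention, and the bag embedding
# reduces to the tile embedding itself.
emb, alpha = attention_mil_pool(np.ones((3, 2)), V=np.ones((2, 2)), w=np.ones(2))
```

The per-tile weights `alpha` are also what yields the Regions of Interest discussed above: high-attention tiles are the morphologies the model associates with the slide label.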

13.
Biomed Phys Eng Express ; 10(5)2024 Jul 17.
Article in English | MEDLINE | ID: mdl-38925106

ABSTRACT

Detecting Kirsten Rat Sarcoma Virus (KRAS) gene mutations is significant for colorectal cancer (CRC) patients. The KRAS gene encodes a protein involved in the epidermal growth factor receptor (EGFR) signaling pathway, and mutations in this gene can negatively impact the use of monoclonal antibodies in anti-EGFR therapy and affect treatment decisions. Currently, commonly used methods like next-generation sequencing (NGS) identify KRAS mutations but are expensive, time-consuming, and may not be suitable for every cancer patient sample. To address these challenges, we have developed KRASFormer, a novel framework that predicts KRAS gene mutations from Haematoxylin and Eosin (H&E) stained WSIs, which are widely available for most CRC patients. KRASFormer consists of two stages: the first stage filters out non-tumor regions and selects only tumour cells using a quality screening mechanism, and the second stage predicts whether the KRAS gene is 'wildtype' or 'mutant' using a Vision Transformer-based XCiT method. XCiT employs cross-covariance attention to capture clinically meaningful long-range representations of textural patterns in tumour tissue and KRAS-mutant cells. We evaluated the performance of the first stage on an independent CRC-5000 dataset; the second stage was evaluated on both The Cancer Genome Atlas colon and rectal cancer (TCGA-CRC-DX) cohort and an in-house cohort. Our experiments showed that XCiT outperformed existing state-of-the-art methods, achieving AUCs for ROC curves of 0.691 and 0.653 on the TCGA-CRC-DX and in-house datasets, respectively. Our findings emphasize three key points: the potential of H&E-stained tissue slide images for predicting KRAS gene mutations as a cost-effective and time-efficient means of guiding treatment choices for CRC patients; the improved performance of a Transformer-based model; and the value of collaboration between pathologists and data scientists in deriving a morphologically meaningful model.
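Cross-covariance attention, the key ingredient of XCiT, attends between feature channels (a d x d map) rather than between the N tile tokens, keeping cost linear in sequence length. A sketch (shapes, temperature, and normalisation axis are assumptions based on the XCiT formulation):

```python
import numpy as np

def xca(q, k, v, tau=1.0):
    """Cross-covariance attention: channel-to-channel attention map
    of shape (d, d) applied to the value tokens. q, k, v: (N, d)."""
    # L2-normalise each channel column over the N tokens
    qn = q / (np.linalg.norm(q, axis=0, keepdims=True) + 1e-8)
    kn = k / (np.linalg.norm(k, axis=0, keepdims=True) + 1e-8)
    logits = (kn.T @ qn) / tau                  # (d, d) channel map
    e = np.exp(logits - logits.max(axis=0, keepdims=True))
    attn = e / e.sum(axis=0, keepdims=True)     # softmax over channels
    return v @ attn, attn                       # (N, d) output tokens

rng = np.random.default_rng(0)
q = rng.normal(size=(5, 3))
k = rng.normal(size=(5, 3))
v = rng.normal(size=(5, 3))
out, attn = xca(q, k, v)
```

Because the attention map is d x d instead of N x N, the cost grows linearly with the number of tiles, which matters at WSI scale.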


Subject(s)
Colorectal Neoplasms; Mutation; Proto-Oncogene Proteins p21(ras); Humans; Colorectal Neoplasms/genetics; Colorectal Neoplasms/pathology; Proto-Oncogene Proteins p21(ras)/genetics; Algorithms; ErbB Receptors/genetics; High-Throughput Nucleotide Sequencing/methods; Image Processing, Computer-Assisted/methods; ROC Curve
14.
Comput Biol Med ; 178: 108710, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38843570

ABSTRACT

BACKGROUND: Efficient and precise diagnosis of non-small cell lung cancer (NSCLC) is critical for subsequent targeted therapy and immunotherapy. Since the advent of whole slide images (WSIs), the transition from traditional histopathology to digital pathology has spurred the application of convolutional neural networks (CNNs) in histopathological recognition and diagnosis. HookNet can make full use of macroscopic and microscopic information for pathological diagnosis, but it cannot integrate other high-performing CNN structures. HookEfficientNet combines the HookNet structure with EfficientNet, which performs well in the recognition of general objects. Here, a high-precision artificial intelligence-guided histopathological recognition system was established with HookEfficientNet to provide a basis for the intelligent differential diagnosis of NSCLC. METHODS: A total of 216 WSIs of lung adenocarcinoma (LUAD) and 192 WSIs of lung squamous cell carcinoma (LUSC) were collected from the First Affiliated Hospital of Zhengzhou University. Deep learning methods based on HookEfficientNet, HookNet, and EfficientNet B4-B6 were developed and compared using the area under the curve (AUC) and the Youden index. Temperature scaling was used to calibrate the heatmap and highlight the cancer region of interest. Four pathologists of different experience levels blindly reviewed 108 WSIs of LUAD and LUSC, and their diagnostic results were compared with those of the deep learning models. RESULTS: The HookEfficientNet model outperformed HookNet and EfficientNet B4-B6. After temperature scaling, it achieved AUCs of 0.973, 0.980, and 0.989 and Youden index values of 0.863, 0.899, and 0.922 for LUAD, LUSC, and normal lung tissue, respectively, in the testing set. The model's accuracy exceeded the average accuracy of experienced pathologists, and it was superior to pathologists in the diagnosis of LUSC. CONCLUSIONS: HookEfficientNet can effectively recognize LUAD and LUSC with performance superior to that of senior pathologists, especially for LUSC. The model has great potential to facilitate deep learning-assisted histopathological diagnosis of LUAD and LUSC in the future.
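Temperature scaling, used above to calibrate the heatmaps, divides the logits by a scalar T fitted on validation data, softening over-confident softmax outputs without changing the predicted class. A sketch with an assumed T:

```python
import numpy as np

def temperature_scale(logits, T):
    """Post-hoc calibration: softmax of logits divided by temperature T.
    T > 1 softens the distribution; the argmax is unchanged."""
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

p1 = temperature_scale([4.0, 0.0, 0.0], T=1.0)   # uncalibrated
p2 = temperature_scale([4.0, 0.0, 0.0], T=2.0)   # softened
```

In practice T is chosen to minimise negative log-likelihood on a held-out set, so the heatmap intensities better reflect the true probability that a region is cancerous.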


Subject(s)
Carcinoma, Non-Small-Cell Lung; Deep Learning; Lung Neoplasms; Neural Networks, Computer; Humans; Carcinoma, Non-Small-Cell Lung/pathology; Lung Neoplasms/pathology; Image Interpretation, Computer-Assisted/methods; Diagnosis, Computer-Assisted/methods
15.
Comput Biol Med ; 178: 108714, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38889627

ABSTRACT

BACKGROUND: The emergence of digital whole slide images (WSIs) has driven the development of computational pathology. However, obtaining patch-level annotations is challenging and time-consuming due to the high resolution of WSIs, which limits the applicability of fully supervised methods. We aim to address the challenges related to patch-level annotations. METHODS: We propose a universal framework for weakly supervised WSI analysis based on Multiple Instance Learning (MIL). To aggregate instance features effectively, we design a feature aggregation module that considers feature distribution, instance correlation, and instance-level evaluation. First, we implement an instance-level standardization layer and a deep projection unit to improve the separation of instances in the feature space. Then, a self-attention mechanism is employed to explore dependencies between instances, and an instance-level pseudo-label evaluation method is introduced to enhance the available information during weak supervision. Finally, a bag-level classifier produces preliminary WSI classification results. To obtain even more accurate WSI label predictions, we designed a key-instance selection module that strengthens the learning of local instance features; combining the results from both modules improves WSI prediction accuracy. RESULTS: Experiments on Camelyon16, TCGA-NSCLC, SICAPv2, PANDA, and classical MIL benchmark datasets demonstrate that our method achieves competitive performance compared with recent methods, with a maximum improvement of 14.6% in classification accuracy. CONCLUSION: Our method improves the classification accuracy of whole slide images in a weakly supervised way and detects lesion areas more accurately.


Subject(s)
Image Interpretation, Computer-Assisted , Humans , Image Interpretation, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods , Algorithms
16.
J Imaging Inform Med ; 2024 Jun 17.
Article in English | MEDLINE | ID: mdl-38886290

ABSTRACT

The efficacy of immune checkpoint inhibitors is significantly influenced by the tumor immune microenvironment (TIME). RNA sequencing of tumor tissue can offer valuable insights into TIME, but its high cost and long turnaround time severely restrict its utility in routine clinical examinations. Several recent studies have suggested that ultrahigh-resolution pathology images can be used to infer cellular and molecular characteristics. However, few studies have paid attention to the quantitative estimation of the various tumor-infiltrating immune cells from pathology images. In this paper, we integrated contrastive learning and weakly supervised learning to infer tumor-associated macrophages and potential immunotherapy benefit from whole slide images (WSIs) of H&E-stained pathological sections. We split the high-resolution WSIs into tiles and then apply contrastive learning to extract features of each tile. After aggregating the features at the tile level, we employ weak supervisory signals to fine-tune the encoder for various downstream tasks. Comprehensive experiments on two independent breast cancer cohorts and spatial transcriptomics data demonstrate that the computational pathological features accurately predict the proportion of tumor-infiltrating immune cells, particularly the infiltration level of macrophages, as well as the immune subtypes and potential immunotherapy benefit. These findings demonstrate that our model effectively captures pathological features beyond human vision, establishing a mapping between cellular composition and histological morphology, and thus expanding the clinical applications of digital pathology images.
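The first preprocessing step these WSI pipelines share — splitting the slide into non-overlapping tiles and discarding empty background — can be sketched on a plain array. The tile size and intensity threshold below are illustrative assumptions; production pipelines read pyramidal slide formats through libraries such as OpenSlide rather than a single NumPy array.

```python
import numpy as np

def tile_image(img, tile, bg_thresh=220):
    """Split a (H, W) grayscale slide array into non-overlapping tiles and
    keep only tiles whose mean intensity falls below a background threshold
    (very bright tiles on H&E slides are mostly empty glass).

    Returns a list of ((row, col), tile_array) pairs for the kept tiles.
    """
    h, w = img.shape
    kept = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            t = img[y:y + tile, x:x + tile]
            if t.mean() < bg_thresh:   # tissue-bearing tile
                kept.append(((y, x), t))
    return kept
```

The surviving tiles are what a contrastive encoder would then embed, one feature vector per tile.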

17.
Cancers (Basel) ; 16(11)2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38893251

ABSTRACT

The presence of spread through air spaces (STAS) in early-stage lung adenocarcinoma is a significant prognostic factor associated with disease recurrence and poor outcomes. Although current STAS detection relies on pathological examination, the advent of artificial intelligence (AI) offers opportunities for automated histopathological image analysis. This study developed a deep learning (DL) model for STAS prediction and investigated the correlation between the prediction results and patient outcomes. To develop the DL-based STAS prediction model, 1053 digital pathology whole-slide images (WSIs) from the competition dataset were enrolled in the training set, and 227 WSIs from the National Taiwan University Hospital were enrolled for external validation. A YOLOv5-based framework comprising preprocessing, candidate detection, false-positive reduction, and patient-based prediction was proposed for STAS prediction. The model achieved an area under the curve (AUC) of 0.83 in predicting STAS presence, with 72% accuracy, 81% sensitivity, and 63% specificity. Additionally, the DL model demonstrated prognostic value for disease-free survival comparable to that of pathological evaluation. These findings suggest that DL-based STAS prediction could serve as an adjunctive screening tool and facilitate clinical decision-making in patients with early-stage lung adenocarcinoma.
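The final patient-based prediction stage — rolling per-candidate detection scores up to one call per patient — might look like the minimal sketch below. The confidence threshold and minimum hit count are illustrative assumptions, not the study's tuned values, and the study's actual false-positive reduction step is more involved than simple thresholding.

```python
def patient_stas_prediction(det_scores, score_thresh=0.5, min_hits=2):
    """Aggregate per-candidate detection scores into a patient-level call:
    keep detections above a confidence threshold (crude false-positive
    filtering) and flag the patient as STAS-positive when enough survive.

    Returns (is_positive, n_surviving_detections).
    """
    hits = [s for s in det_scores if s >= score_thresh]
    return len(hits) >= min_hits, len(hits)
```

Requiring more than one surviving detection is a common way to trade sensitivity for specificity at the patient level.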

18.
Sci Rep ; 14(1): 13304, 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38858367

ABSTRACT

The limited field of view of high-resolution microscopic images hinders the study of biological samples in a single shot. Stitching of microscope images (tiles) captured by the whole-slide imaging (WSI) technique solves this problem. However, stitching is challenging due to the repetitive textures of tissues, the non-informative background of the slide, and the large number of tiles, which affect performance and computational time. To address these challenges, we propose the Fast and Robust Microscopic Image Stitching (FRMIS) algorithm, which relies on pairwise and global alignment. Speeded-up robust features (SURF) are extracted and matched within a small part of the overlapping region to compute the transformation and align two neighboring tiles. In cases where the transformation cannot be computed due to an insufficient number of matched features, features are extracted from the entire overlapping region. This enhances the efficiency of the algorithm, since most of the computational load lies in pairwise registration, and reduces the misalignment that can occur when matching duplicated features in tiles with repetitive textures. Global alignment is then achieved by constructing a weighted graph in which the weight of each edge is the normalized inverse of the number of matched features between two tiles. FRMIS has been evaluated on experimental and synthetic datasets from different modalities with varying numbers of tiles and overlaps, demonstrating faster stitching than existing algorithms such as the Microscopy Image Stitching Tool (MIST) toolbox: FRMIS outperforms MIST by 481% for bright-field, 259% for phase-contrast, and 282% for fluorescence modalities, while also being robust to uneven illumination.
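The global-alignment idea — a weighted graph whose edge weights are the inverse of the matched-feature count, so that well-matched tile pairs are preferred as stitching paths — reduces to a minimum spanning tree over the tile adjacency graph. Below is a generic Kruskal sketch of that step, not FRMIS's actual implementation; the edge list format is an illustrative assumption.

```python
def mst_edges(n_tiles, edges):
    """Kruskal's minimum spanning tree over the tile-adjacency graph.

    edges: list of (weight, i, j), where weight = 1 / n_matched_features,
    so tile pairs with many matched features are chosen first. Returns
    the (i, j) pairs along which pairwise transforms should be chained.
    """
    parent = list(range(n_tiles))

    def find(a):
        # Union-find with path compression.
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    tree = []
    for w, i, j in sorted(edges):
        ri, rj = find(i), find(j)
        if ri != rj:          # edge connects two components: keep it
            parent[ri] = rj
            tree.append((i, j))
    return tree
```

Chaining pairwise transforms only along MST edges avoids propagating error through poorly matched (high-weight) pairs.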

19.
Lab Invest ; 104(8): 102094, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38871058

ABSTRACT

Accurate assessment of epidermal growth factor receptor (EGFR) mutation status and subtype is critical for the treatment of non-small cell lung cancer patients. Conventional molecular testing methods for detecting EGFR mutations have limitations. In this study, an artificial intelligence-powered deep learning framework was developed for the weakly supervised prediction of EGFR mutations in non-small cell lung cancer from hematoxylin and eosin-stained histopathology whole-slide images. The study cohort was partitioned into training and validation subsets. Foreground regions containing tumor tissue were extracted from whole-slide images. A convolutional neural network employing a contrastive learning paradigm was implemented to extract patch-level morphologic features. These features were aggregated using a vision transformer-based model to predict EGFR mutation status and classify patient cases. The established prediction model was validated on unseen data sets. In internal validation with a cohort from the University of Science and Technology of China (n = 172), the model achieved patient-level areas under the receiver-operating characteristic curve (AUCs) of 0.927 and 0.907, sensitivities of 81.6% and 83.3%, and specificities of 93.0% and 92.3%, for surgical resection and biopsy specimens, respectively, in EGFR mutation subtype prediction. External validation with cohorts from the Second Affiliated Hospital of Anhui Medical University and the First Affiliated Hospital of Wannan Medical College (n = 193) yielded patient-level AUCs of 0.849 and 0.867, sensitivities of 79.2% and 80.7%, and specificities of 91.7% and 90.7% for surgical and biopsy specimens, respectively. Further validation with The Cancer Genome Atlas data set (n = 81) showed an AUC of 0.861, a sensitivity of 84.6%, and a specificity of 90.5%. 
Deep learning solutions demonstrate potential advantages for automated, noninvasive, fast, cost-effective, and accurate inference of EGFR alterations from histomorphology. Integration of such artificial intelligence frameworks into routine digital pathology workflows could augment existing molecular testing pipelines.


Subject(s)
Carcinoma, Non-Small-Cell Lung , Deep Learning , ErbB Receptors , Hematoxylin , Lung Neoplasms , Mutation , Humans , ErbB Receptors/genetics , Carcinoma, Non-Small-Cell Lung/genetics , Carcinoma, Non-Small-Cell Lung/pathology , Lung Neoplasms/genetics , Lung Neoplasms/pathology , Eosine Yellowish-(YS) , Female , Male , Middle Aged , Aged
20.
Eur J Surg Oncol ; 50(7): 108369, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38703632

ABSTRACT

BACKGROUND: TNM staging is the main reference standard for prognostic prediction in colorectal cancer (CRC), but prognostic heterogeneity among patients at the same stage remains large. This study aimed to classify the tumor microenvironment of patients with stage III CRC and to quantify the classified tumor tissues with deep learning, exploring the prognostic value of the resulting tumor risk signature (TRS). METHODS: A tissue classification model was developed to identify nine tissue types (adipose, background, debris, lymphocytes, mucus, smooth muscle, normal mucosa, stroma, and tumor) in whole-slide images (WSIs) of stage III CRC patients. This model was used to extract tumor tissue from WSIs of 265 stage III CRC patients from The Cancer Genome Atlas and 70 stage III CRC patients from the Sixth Affiliated Hospital of Sun Yat-sen University. We used three different deep learning models for tumor feature extraction and applied a Cox model to establish the TRS. Survival analysis was conducted to explore the prognostic performance of the TRS. RESULTS: The tissue classification model achieved 94.4% accuracy in identifying the nine tissue types. The TRS showed a Harrell's concordance index of 0.736, 0.716, and 0.711 in the internal training, internal validation, and external validation sets, respectively. Survival analysis showed that the TRS had significant predictive ability for prognostic prediction (hazard ratio: 3.632, p = 0.03). CONCLUSION: The TRS is an independent and significant prognostic factor for the progression-free survival (PFS) of stage III CRC patients and contributes to risk stratification of patients with different clinical stages.
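Harrell's concordance index, the evaluation metric this study reports, measures how often a higher predicted risk score goes with a shorter survival time over all comparable patient pairs. A minimal pure-Python sketch of the standard definition (O(n²), without the tie-handling refinements of library implementations such as `lifelines`):

```python
def harrell_c_index(risk, time, event):
    """Harrell's concordance index. A pair (i, j) is comparable when the
    patient with the shorter follow-up time had an observed event
    (event == 1); it is concordant when that patient also has the higher
    predicted risk. Ties in risk count as half-concordant.
    """
    concordant = 0.0
    comparable = 0
    n = len(risk)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable
```

A value of 0.5 means the risk score is no better than chance; the 0.71-0.74 range reported above indicates moderate discriminative ability.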


Subject(s)
Colorectal Neoplasms , Deep Learning , Neoplasm Staging , Tumor Microenvironment , Humans , Colorectal Neoplasms/pathology , Prognosis , Male , Female , Middle Aged , Aged , Proportional Hazards Models