1.
PLoS One ; 19(8): e0304702, 2024.
Article in English | MEDLINE | ID: mdl-39208135

ABSTRACT

BACKGROUND: The tile-based approach is widely used for slide-level prediction in whole slide image (WSI) analysis, but the irregular shapes and variable dimensions of tumor regions complicate tile extraction. To address this, we propose PathEX, a framework that integrates intersection over tile (IoT) and background over tile (BoT) algorithms to extract tile images around the boundaries of annotated regions while excluding blank tile images within those regions.

METHODS: We incorporated IoT and BoT into tile extraction and trained classification models on the CAM (239 WSIs) and PAIP (40 WSIs) datasets. By adjusting the IoT and BoT parameters, we generated eight training sets and corresponding models for each dataset. The performance of PathEX was assessed on testing sets of 13,076 tile images from 48 WSIs of the CAM dataset and 6,391 tile images from 10 WSIs of the PAIP dataset.

RESULTS: PathEX extracted tile images around the boundaries of annotated regions differently as the IoT parameter was adjusted, while blank tile images within annotated regions were excluded by setting the BoT parameter. Varying IoT from 0.1 to 1.0 and 1-BoT from 0.0 to 0.5 produced the eight training sets. Experimentation indicated that set C was the most promising candidate, although combinations of IoT values from 0.2 to 0.5 with 1-BoT values from 0.2 to 0.5 also yielded favorable outcomes.

CONCLUSIONS: PathEX integrates the IoT and BoT algorithms to extract tile images at the boundaries of annotated regions while excluding blank tiles within those regions. Researchers can conveniently set the IoT and BoT thresholds to suit tile extraction in their own studies, and the insights gained here provide practical guidance for tile image extraction in digital pathology applications.
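The IoT/BoT tile-selection rule described in the abstract can be sketched as a simple per-tile test. This is an illustrative reconstruction, not the authors' implementation: the mask names, the default thresholds, and the keep/discard convention (keep when IoT is high enough and the background fraction is low enough) are assumptions based on the parameter ranges reported above.

```python
import numpy as np

def tile_passes(annotation_mask: np.ndarray, tissue_mask: np.ndarray,
                iot_thresh: float = 0.3, bot_thresh: float = 0.5) -> bool:
    """Decide whether to extract a tile (illustrative sketch).

    annotation_mask: boolean array over the tile, True where the tile
                     overlaps the annotated (e.g. tumor) region.
    tissue_mask:     boolean array over the tile, True on tissue pixels
                     (False on blank/background pixels).
    Thresholds are hypothetical defaults within the ranges the paper reports.
    """
    tile_area = annotation_mask.size
    # Intersection over tile: fraction of the tile covered by the annotation.
    iot = annotation_mask.sum() / tile_area
    # Background over tile: fraction of the tile that is blank background.
    bot = 1.0 - tissue_mask.sum() / tile_area
    # Keep boundary tiles with enough annotated overlap and not too much blank area.
    return bool(iot >= iot_thresh and bot <= bot_thresh)
```

A tile fully inside the annotation on solid tissue passes, while an all-blank tile inside the annotation is excluded by the BoT check.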


Subject(s)
Algorithms; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Neoplasms
2.
Front Physiol ; 15: 1279982, 2024.
Article in English | MEDLINE | ID: mdl-38357498

ABSTRACT

Introduction: Early prediction of pathological complete response (pCR) helps optimize neoadjuvant chemotherapy (NAC) strategies for breast cancer. Hematoxylin and eosin (HE)-stained slices of biopsy tissue carry rich information on tumor epithelial cells and stroma, so fusing pathological image features with clinicopathological features is expected to yield a model that predicts pCR of NAC in breast cancer.

Methods: We retrospectively collected 440 breast cancer patients who underwent NAC from three hospitals. HE-stained biopsy slices were scanned into whole-slide images (WSIs), and pathological images of representative regions of interest (ROIs) were selected from each WSI at different magnifications. Building on several deep learning models, we propose a novel feature extraction method for pathological images at different magnifications. Fused with clinicopathological features, a multimodal breast cancer NAC pCR prediction model based on a support vector machine (SVM) classifier was developed and validated on two additional validation cohorts (VCs).

Results: Across the deep learning models tested, the SVM-based pCR prediction model that uses VGG16 to extract features from pathological images at ×20 magnification performed best. The areas under the curve (AUCs) of the deep learning pathological model (DPM) were 0.79, 0.73, and 0.71 for the training cohort (TC), VC1, and VC2, respectively, all exceeding 0.70. The AUCs of the clinical model (CM), a prediction model built from clinicopathological features alone, were 0.79 for TC, 0.73 for VC1, and 0.71 for VC2. The multimodal deep learning clinicopathological model (DPCM), established by fusing pathological images and clinicopathological features, improved the TC AUC from 0.79 to 0.84 and the VC2 AUC from 0.71 to 0.78.

Conclusion: Our study shows that pathological images of HE-stained slices of pre-NAC biopsy tissue can be used to build a pCR prediction model, and that combining pathological images with clinicopathological features further enhances its predictive efficacy.
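The multimodal fusion step the abstract describes (deep image features concatenated with clinicopathological variables, then fed to an SVM) can be sketched as below. This is a minimal illustration of the early-fusion design only: synthetic random arrays stand in for the real VGG16 ROI features and clinical variables, and the feature dimensions, scaler, and probability setting are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200                                   # hypothetical cohort size
deep_feats = rng.normal(size=(n, 512))    # stand-in for VGG16 features of WSI ROIs
clin_feats = rng.normal(size=(n, 8))      # stand-in for clinicopathological variables
y = rng.integers(0, 2, size=n)            # pCR label (0 = non-pCR, 1 = pCR)

# Early fusion: concatenate the two feature blocks per patient.
X = np.hstack([deep_feats, clin_feats])

# Scale features, then fit an SVM that outputs pCR probabilities.
model = make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))
model.fit(X, y)
probs = model.predict_proba(X)[:, 1]      # predicted probability of pCR
```

In practice the deep features would come from a pretrained VGG16 applied to ×20 ROI crops, and the model would be evaluated by AUC on held-out validation cohorts rather than on the training data.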
