Results 1 - 20 of 254
1.
Neural Netw ; 180: 106697, 2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39305784

ABSTRACT

Local feature extraction plays a crucial role in numerous critical visual tasks. However, there remains room for improvement in both descriptors and keypoints, particularly regarding the discriminative power of descriptors and the localization precision of keypoints. To address these challenges, this study introduces a novel local feature extraction pipeline named OSDFeat (Object and Spatial Discrimination Feature). OSDFeat employs a decoupling strategy, training descriptor and detection networks independently. Inspired by semantic correspondence, we propose an Object and Spatial Discrimination ResUNet (OSD-ResUNet). OSD-ResUNet captures features from the feature map that differentiate object appearance and spatial context, thus enhancing descriptor performance. To further improve the discriminative capability of descriptors, we propose a Discrimination Information Retained Normalization module (DIRN). DIRN complementarily integrates spatial-wise normalization and channel-wise normalization, yielding descriptors that are more distinguishable and informative. In the detection network, we propose a Cross Saliency Pooling module (CSP). CSP employs a cross-shaped kernel to aggregate long-range context in both vertical and horizontal dimensions. By enhancing the saliency of keypoints, CSP enables the detection network to effectively utilize descriptor information and achieve more precise localization of keypoints. Compared to the previous best local feature extraction methods, OSDFeat achieves a Mean Matching Accuracy of 79.4% on the local feature matching task, improving by 1.9% and achieving state-of-the-art results. Additionally, OSDFeat achieves competitive results in Visual Localization and 3D Reconstruction. The results of this study indicate that object and spatial discrimination can improve the accuracy and robustness of local features, even in challenging environments. The code is available at https://github.com/pandaandyy/OSDFeat.
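The cross-shaped aggregation idea behind CSP can be illustrated with a toy sketch; this is an assumed simplification for intuition, not the paper's implementation: each location's saliency is reinforced by the mean response along its full row and column.

```python
import numpy as np

def cross_saliency_pool(score_map: np.ndarray) -> np.ndarray:
    """For each location, aggregate long-range context along its full
    row and column (a cross-shaped receptive field) and average it
    with the local response. Shapes: (H, W) -> (H, W)."""
    row_ctx = score_map.mean(axis=1, keepdims=True)   # (H, 1) per-row mean
    col_ctx = score_map.mean(axis=0, keepdims=True)   # (1, W) per-column mean
    # Broadcast the two 1D contexts back to (H, W); salient rows and
    # columns reinforce the keypoints lying on them.
    return (score_map + row_ctx + col_ctx) / 3.0

scores = np.zeros((4, 4))
scores[1, 2] = 3.0                    # one strong keypoint response
pooled = cross_saliency_pool(scores)  # row/column of the peak get boosted
```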

2.
J Invest Dermatol ; 2024 Sep 19.
Article in English | MEDLINE | ID: mdl-39306030

ABSTRACT

The diagnosis of early-stage mycosis fungoides (MF) is challenging due to shared clinical and histopathological features with benign inflammatory dermatoses (BIDs). Recent evidence has shown that deep learning (DL) can assist pathologists in cancer classification, but this field is largely unexplored for cutaneous lymphomas. This study evaluates DL in distinguishing early-stage MF from BIDs using a unique dataset of 924 hematoxylin and eosin-stained whole-slide images from skin biopsies, including 233 early-stage MF and 353 BID patients. All MF patients were diagnosed after clinicopathological correlation. The classification accuracy of weakly-supervised DL models was benchmarked against three expert pathologists. The highest performance on a temporal test set was at 200x magnification (0.25 µm per pixel resolution), with a mean area-under-the-curve of 0.827 ± 0.044 and a mean balanced accuracy of 76.2 ± 3.9%. This nearly matched the 77.7% mean balanced accuracy of the three expert pathologists. Most (63.5%) attention heatmaps corresponded well with the pathologists' regions of interest. Considering the difficulty of the MF versus BID classification task, the results of this study show promise for future applications of weakly-supervised DL in diagnosing early-stage MF. Achieving clinical-grade performance will require larger multi-institutional datasets and improved methodologies, such as multimodal DL with incorporation of clinical data.
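For context, the two metrics reported here can be computed from first principles; the functions below are generic textbook definitions, not the study's evaluation code.

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls; robust to the MF/BID class imbalance."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.mean(recalls))

def roc_auc(y_true, score):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation:
    the probability a random positive outscores a random negative."""
    y_true, score = np.asarray(y_true), np.asarray(score)
    pos, neg = score[y_true == 1], score[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return float(wins / (len(pos) * len(neg)))

y     = [1, 1, 1, 0, 0, 0, 0, 0]
y_hat = [1, 1, 0, 0, 0, 0, 0, 1]
s     = [0.9, 0.8, 0.4, 0.3, 0.2, 0.1, 0.35, 0.7]
bacc  = balanced_accuracy(y, y_hat)
auc   = roc_auc(y, s)
```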

3.
Cancers (Basel) ; 16(17)2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39272955

ABSTRACT

Lung cancer is the leading cause of cancer-related death in the United States. Lung adenocarcinoma (LUAD) is one of the most common subtypes of lung cancer that can be treated with resection. While resection can be curative, there is a significant risk of recurrence, which necessitates close monitoring and additional treatment planning. Traditionally, microscopic evaluation of tumor grading in resected specimens is a standard pathologic practice that informs subsequent therapy and patient management. However, this approach is labor-intensive and subject to inter-observer variability. To address the challenge of accurately predicting recurrence, we propose a deep learning-based model to predict the 5-year recurrence of LUAD in patients following surgical resection. In our model, we introduce an innovative dual-attention architecture that significantly enhances computational efficiency. Our model demonstrates excellent performance in recurrence risk stratification, achieving a hazard ratio of 2.29 (95% CI: 1.69-3.09, p < 0.005), which outperforms several existing deep learning methods. This study contributes to ongoing efforts to use deep learning models for automatically learning histologic patterns from whole slide images (WSIs) and predicting LUAD recurrence risk, thereby improving the accuracy and efficiency of treatment decision making.
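Weakly supervised WSI classifiers of this kind typically score tiles with an attention mechanism and classify the attention-weighted bag feature. The sketch below is a generic single-branch attention-MIL pooling illustration with made-up weights and feature sizes, not the paper's dual-attention architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_pool(tile_feats, w_attn, w_cls):
    """Slide-level prediction from tile features: score each tile,
    softmax the scores into attention weights, then classify the
    attention-weighted average feature. tile_feats: (n_tiles, d)."""
    attn = softmax(tile_feats @ w_attn)        # (n_tiles,) attention weights
    slide_feat = attn @ tile_feats             # (d,) weighted bag feature
    logit = slide_feat @ w_cls                 # scalar slide-level logit
    return attn, 1.0 / (1.0 + np.exp(-logit))  # weights, risk probability

feats = rng.normal(size=(5, 8))                # 5 tiles, 8-dim features
w_a, w_c = rng.normal(size=8), rng.normal(size=8)
attn, risk = attention_mil_pool(feats, w_a, w_c)
```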

4.
Article in English | MEDLINE | ID: mdl-39271574

ABSTRACT

PURPOSE: Anasarca is a condition that results from organ dysfunctions, such as heart, kidney, or liver failure, characterized by the presence of edema throughout the body. The quantification of accumulated edema may have potential clinical benefits. This work focuses on accurately estimating the amount of edema non-invasively using abdominal CT scans, with minimal false positives. However, edema segmentation is challenging due to the complex appearance of edema and the lack of manually annotated volumes. METHODS: We propose a weakly supervised approach for edema segmentation using initial edema labels from the current state-of-the-art method for edema segmentation (Intensity Prior), along with labels of surrounding tissues as anatomical priors. A multi-class 3D nnU-Net was employed as the segmentation network, and training was performed using an iterative annotation workflow. RESULTS: We evaluated segmentation accuracy on a test set of 25 patients with edema. The average Dice Similarity Coefficient of the proposed method was similar to Intensity Prior (61.5% vs. 61.7%; p = 0.83). However, the proposed method reduced the average False Positive Rate significantly, from 1.8% to 1.1% (p < 0.001). Edema volumes computed using automated segmentation had a strong correlation with manual annotation (R² = 0.87). CONCLUSION: Weakly supervised learning using 3D multi-class labels and iterative annotation is an efficient way to perform high-quality edema segmentation with minimal false positives. Automated edema segmentation can produce edema volume estimates that are highly correlated with manual annotation. The proposed approach is promising for clinical applications to monitor anasarca using estimated edema volumes.
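The two evaluation metrics used here, Dice Similarity Coefficient and False Positive Rate, have precise definitions; the following is a generic sketch on toy binary masks, not the study's evaluation pipeline.

```python
import numpy as np

def dice(pred, gt):
    """Dice Similarity Coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def false_positive_rate(pred, gt):
    """Fraction of background voxels wrongly labeled as foreground."""
    fp = np.logical_and(pred, ~gt).sum()
    return fp / (~gt).sum()

gt   = np.array([[1, 1, 0, 0], [0, 0, 0, 0]], dtype=bool)
pred = np.array([[1, 0, 1, 0], [0, 0, 0, 0]], dtype=bool)
dsc = dice(pred, gt)                 # one true positive out of 2+2 voxels
fpr = false_positive_rate(pred, gt)  # one false positive in 6 background voxels
```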

5.
Front Oncol ; 14: 1389396, 2024.
Article in English | MEDLINE | ID: mdl-39267847

ABSTRACT

Introduction: Pathologists rely on whole slide images (WSIs) to diagnose cancer by identifying tumor cells and subtypes. Deep learning models, particularly weakly supervised ones, classify WSIs using image tiles but may overlook false positives and negatives due to the heterogeneous nature of tumors. Both cancerous and healthy cells can proliferate in patterns that extend beyond individual tiles, leading to errors at the tile level that result in inaccurate tumor-level classifications. Methods: To address this limitation, we introduce NATMIL (Neighborhood Attention Transformer Multiple Instance Learning), which utilizes the Neighborhood Attention Transformer to incorporate contextual dependencies among WSI tiles. NATMIL enhances multiple instance learning by integrating a broader tissue context into the model. Our approach enhances the accuracy of tumor classification by considering the broader tissue context, thus reducing errors associated with isolated tile analysis. Results: We conducted a quantitative analysis to evaluate NATMIL's performance against other weakly supervised algorithms. When applied to subtyping non-small cell lung cancer (NSCLC) and lymph node (LN) tumors, NATMIL demonstrated superior accuracy. Specifically, NATMIL achieved accuracy values of 89.6% on the Camelyon dataset and 88.1% on the TCGA-LUSC dataset, outperforming existing methods. These results underscore NATMIL's potential as a robust tool for improving the precision of cancer diagnosis using WSIs. Discussion: Our findings demonstrate that NATMIL significantly improves tumor classification accuracy by reducing errors associated with isolated tile analysis. The integration of contextual dependencies enhances the precision of cancer diagnosis using WSIs, highlighting NATMIL's potential as a robust tool in pathology.
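The core idea, restricting each tile's attention to its spatial neighborhood on the slide grid, can be sketched as follows. This is a simplified single-head illustration with assumed shapes and radius, not the Neighborhood Attention Transformer itself.

```python
import numpy as np

def neighborhood_mask(h, w, radius=1):
    """Boolean (h*w, h*w) mask: tile i may attend to tile j only if j
    lies within a (2*radius+1)^2 window around i on the tile grid."""
    ys, xs = np.divmod(np.arange(h * w), w)
    return (np.abs(ys[:, None] - ys[None, :]) <= radius) & \
           (np.abs(xs[:, None] - xs[None, :]) <= radius)

def masked_attention(q, k, v, mask):
    """Scaled dot-product attention with out-of-neighborhood scores
    set to -inf before the softmax, so they get zero weight."""
    scores = q @ k.T / np.sqrt(q.shape[1])
    scores = np.where(mask, scores, -np.inf)
    scores -= scores.max(axis=1, keepdims=True)   # row max is finite (self-score)
    wts = np.exp(scores)
    wts /= wts.sum(axis=1, keepdims=True)
    return wts, wts @ v

rng = np.random.default_rng(1)
h, w, d = 3, 3, 4                       # a 3x3 grid of tiles, 4-dim features
x = rng.normal(size=(h * w, d))
mask = neighborhood_mask(h, w)
attn, out = masked_attention(x, x, x, mask)
```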

6.
Med Phys ; 2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39140793

ABSTRACT

BACKGROUND: Recent advancements in anomaly detection have paved the way for novel radiological reading assistance tools that support the identification of findings, aimed at saving time. The clinical adoption of such applications requires a low rate of false positives while maintaining high sensitivity. PURPOSE: In light of recent interest and development in multi-pathology identification, we present a novel method, based on a recent contrastive self-supervised approach, for multiple chest-related abnormality identification including low lung density area ("LLDA"), consolidation ("CONS"), nodules ("NOD") and interstitial pattern ("IP"). Our approach alerts radiologists about abnormal regions within a computed tomography (CT) scan by providing 3D localization. METHODS: We introduce a new method for the classification and localization of multiple chest pathologies in 3D chest CT scans. Our goal is to distinguish four common chest-related abnormalities: "LLDA", "CONS", "NOD", "IP" and "NORMAL". This method is based on a 3D patch-based classifier with a ResNet backbone encoder pretrained using a recent contrastive self-supervised approach and a fine-tuned classification head. We leverage the SimCLR contrastive framework for pretraining on an unannotated dataset of randomly selected patches and we then fine-tune it on a labeled dataset. During inference, this classifier generates probability maps for each abnormality across the CT volume, which are aggregated to produce a multi-label patient-level prediction. We compare different training strategies, including random initialization, ImageNet weight initialization, frozen SimCLR pretrained weights and fine-tuned SimCLR pretrained weights. Each training strategy is evaluated on a validation set for hyperparameter selection and tested on a test set. Additionally, we explore the fine-tuned SimCLR pretrained classifier for 3D pathology localization and conduct qualitative evaluation.
RESULTS: Validated on 111 chest scans for hyperparameter selection and subsequently tested on 251 chest scans with multi-abnormalities, our method achieves an AUROC of 0.931 (95% confidence interval [CI]: [0.9034, 0.9557], p-value < 0.001) and 0.963 (95% CI: [0.952, 0.976], p-value < 0.001) in the multi-label and binary (i.e., normal versus abnormal) settings, respectively. Notably, our method surpasses the area under the receiver operating characteristic (AUROC) threshold of 0.9 for two abnormalities: IP (0.974) and LLDA (0.952), while achieving values of 0.853 and 0.791 for NOD and CONS, respectively. Furthermore, our results highlight the superiority of incorporating contrastive pretraining within the patch classifier, outperforming ImageNet pretraining weights and non-pretrained counterparts with uninitialized weights (F1 score = 0.943, 0.792, and 0.677 respectively). Qualitatively, the method achieved a satisfactory 88.8% completeness rate in localization and maintained an 88.3% accuracy rate against false positives. CONCLUSIONS: The proposed method integrates self-supervised learning algorithms for pretraining, utilizes a patch-based approach for 3D pathology localization and develops an aggregation method for multi-label prediction at patient-level. It shows promise in efficiently detecting and localizing multiple anomalies within a single scan.
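One plausible way to aggregate per-patch probability maps into a patient-level multi-label prediction is a top-k pooling rule; the rule, the 10% fraction, and the threshold below are assumptions for illustration, not the paper's aggregation method.

```python
import numpy as np

def aggregate_patient_level(patch_probs, threshold=0.5):
    """Collapse per-patch abnormality probabilities across a CT volume
    into one multi-label patient prediction: mean of the top 10% most
    confident patches per abnormality, then threshold.
    patch_probs: (n_patches, n_classes)."""
    k = max(1, patch_probs.shape[0] // 10)      # top 10% of patches (>= 1)
    top = np.sort(patch_probs, axis=0)[-k:, :]  # highest scores per class
    patient_scores = top.mean(axis=0)           # (n_classes,)
    return patient_scores, patient_scores >= threshold

probs = np.array([[0.9, 0.1],    # each row: one patch, two abnormalities
                  [0.8, 0.2],
                  [0.1, 0.1],
                  [0.2, 0.05]])
scores, labels = aggregate_patient_level(probs)
```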

7.
J Imaging ; 10(7)2024 Jul 03.
Article in English | MEDLINE | ID: mdl-39057732

ABSTRACT

Precise annotations for large medical image datasets can be time-consuming. Additionally, when dealing with volumetric regions of interest, it is typical to apply segmentation techniques on 2D slices, compromising important information for accurately segmenting 3D structures. This study presents a deep learning pipeline that simultaneously tackles both challenges. Firstly, to streamline the annotation process, we employ a semi-automatic segmentation approach using bounding boxes as masks, which is less time-consuming than pixel-level delineation. Subsequently, recursive self-training is utilized to enhance annotation quality. Finally, a 2.5D segmentation technique is adopted, wherein a slice of a volumetric image is segmented using a pseudo-RGB image. The pipeline was applied to segment the carotid artery tree in T1-weighted brain magnetic resonance images. Utilizing 42 volumetric non-contrast T1-weighted brain scans from four datasets, we delineated bounding boxes around the carotid arteries in the axial slices. Pseudo-RGB images were generated from these slices, and recursive segmentation was conducted using a Res-Unet-based neural network architecture. The model's performance was tested on a separate dataset, with ground truth annotations provided by a radiologist. After recursive training, we achieved an Intersection over Union (IoU) score of (0.68 ± 0.08) on the unseen dataset, demonstrating commendable qualitative results.
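The 2.5D pseudo-RGB construction, stacking neighboring axial slices as the three color channels, can be sketched as follows; clamping at the volume boundaries is an assumed edge-handling choice.

```python
import numpy as np

def to_pseudo_rgb(volume, z):
    """Build a 3-channel '2.5D' input for axial slice z by stacking the
    previous, current, and next slices as R, G, B channels, clamping
    slice indices at the volume boundaries. volume: (slices, H, W)."""
    zm, zp = max(z - 1, 0), min(z + 1, volume.shape[0] - 1)
    return np.stack([volume[zm], volume[z], volume[zp]], axis=-1)

vol = np.arange(12, dtype=float).reshape(3, 2, 2)  # toy (slices, H, W) volume
img = to_pseudo_rgb(vol, z=1)                      # (H, W, 3) pseudo-RGB slice
edge = to_pseudo_rgb(vol, z=0)                     # first slice: zm clamps to 0
```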

8.
Comput Biol Med ; 179: 108902, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39038392

ABSTRACT

In the field of histopathology, many studies on the classification of whole slide images (WSIs) using artificial intelligence (AI) technology have been reported. We have studied the disease progression assessment of glioma. Adult-type diffuse gliomas, a type of brain tumor, are classified into astrocytoma, oligodendroglioma, and glioblastoma. Astrocytoma and oligodendroglioma are also called low grade glioma (LGG), and glioblastoma is also called glioblastoma multiforme (GBM). LGG patients frequently have isocitrate dehydrogenase (IDH) mutations. Patients with IDH mutations have been reported to have a better prognosis than patients without IDH mutations. Therefore, IDH mutations are an essential indicator for the classification of glioma, which is why we focused on the IDH1 mutation. In this paper, we aimed to classify the presence or absence of the IDH1 mutation using WSIs and clinical data of glioma patients. Ensemble learning between the WSI model and the clinical data model is used to classify the presence or absence of the IDH1 mutation. Using slide-level labels, we combined patch-based imaging information from hematoxylin and eosin (H & E) stained WSIs with clinical data, using deep image feature extraction and a machine learning classifier to predict IDH1 mutation versus wild-type across a cohort of 546 patients. We experimented with different deep learning (DL) models, including attention-based multiple instance learning (ABMIL) models on imaging data, along with a gradient boosting machine (LightGBM) for the clinical variables. Further, we used hyperparameter optimization to find the best overall model in terms of classification accuracy. We obtained the highest areas under the curve (AUC) of 0.823 for WSIs, 0.782 for clinical data, and 0.852 for the ensemble, using the MaxViT and LightGBM combination.
Our experimental results indicate that the overall accuracy of the AI models can be improved by using both clinical data and images.
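A minimal sketch of late-fusion ensembling between an imaging model and a clinical-data model follows; the weighted-average rule and the mixing weight are hypothetical choices (in practice such a weight would be tuned on a validation set), not the paper's ensemble method.

```python
import numpy as np

def late_fusion(p_wsi, p_clinical, w=0.6):
    """Ensemble two models' mutation probabilities by a weighted
    average. w is an assumed weight on the imaging model."""
    return w * np.asarray(p_wsi) + (1 - w) * np.asarray(p_clinical)

p_img  = np.array([0.9, 0.2, 0.6])   # WSI-model probabilities (3 patients)
p_clin = np.array([0.7, 0.4, 0.3])   # clinical-model probabilities
p_ens  = late_fusion(p_img, p_clin)  # fused per-patient probabilities
```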


Subject(s)
Brain Neoplasms; Deep Learning; Glioma; Isocitrate Dehydrogenase; Mutation; Humans; Isocitrate Dehydrogenase/genetics; Brain Neoplasms/genetics; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/pathology; Glioma/genetics; Glioma/diagnostic imaging; Glioma/pathology; Male; Female; Adult; Middle Aged
9.
Comput Med Imaging Graph ; 116: 102416, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39018640

ABSTRACT

Although deep learning has achieved state-of-the-art performance for automatic medical image segmentation, it often requires a large amount of pixel-level manual annotations for training. Obtaining these high-quality annotations is time-consuming and requires specialized knowledge, which hinders the widespread applications that rely on such annotations to train a model with good segmentation performance. Using scribble annotations can substantially reduce the annotation cost, but often leads to poor segmentation performance due to insufficient supervision. In this work, we propose a novel framework named ScribSD+ that is based on multi-scale knowledge distillation and class-wise contrastive regularization for learning from scribble annotations. For a student network supervised by scribbles and the teacher based on Exponential Moving Average (EMA), we first introduce multi-scale prediction-level Knowledge Distillation (KD) that leverages soft predictions of the teacher network to supervise the student at multiple scales, and then propose class-wise contrastive regularization which encourages feature similarity within the same class and dissimilarity across different classes, thereby effectively improving the segmentation performance of the student network. Experimental results on the ACDC dataset for heart structure segmentation and a fetal MRI dataset for placenta and fetal brain segmentation demonstrate that our method significantly improves the student's performance and outperforms five state-of-the-art scribble-supervised learning methods. Consequently, the method has potential for reducing the annotation cost in developing deep learning models for clinical diagnosis.
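Two ingredients of this kind of framework, an EMA teacher and prediction-level distillation against the teacher's temperature-softened distribution, can be sketched generically; the decay and temperature values are illustrative, and this is not the ScribSD+ implementation.

```python
import numpy as np

def ema_update(teacher, student, alpha=0.99):
    """Exponential-moving-average teacher: a slowly trailing copy of
    the student's parameters."""
    return alpha * teacher + (1 - alpha) * student

def softmax(x, t=1.0):
    e = np.exp(x / t - np.max(x / t))
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, t=2.0):
    """Prediction-level distillation: KL divergence from the teacher's
    softened distribution p to the student's distribution q."""
    p = softmax(teacher_logits, t)
    q = softmax(student_logits, t)
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher_w = ema_update(np.ones(3), np.zeros(3))              # -> 0.99 everywhere
loss_same = kd_loss(np.array([1.0, 2.0]), np.array([1.0, 2.0]))  # identical -> 0
loss_diff = kd_loss(np.array([2.0, 1.0]), np.array([1.0, 2.0]))  # mismatch -> > 0
```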


Subject(s)
Deep Learning; Humans; Magnetic Resonance Imaging/methods; Image Processing, Computer-Assisted/methods; Female; Algorithms; Image Interpretation, Computer-Assisted/methods; Pregnancy; Supervised Machine Learning
10.
Med Image Anal ; 97: 103274, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39043109

ABSTRACT

The high performance of deep learning on medical image segmentation relies on large-scale pixel-level dense annotations, which poses a substantial burden on medical experts due to the laborious and time-consuming annotation process, particularly for 3D images. To reduce the labeling cost as well as maintain relatively satisfactory segmentation performance, weakly-supervised learning with sparse labels has attracted increasing attention. In this work, we present a scribble-based framework for medical image segmentation, called Dynamically Mixed Soft Pseudo-label Supervision (DMSPS). Concretely, we extend a backbone with an auxiliary decoder to form a dual-branch network to enhance the feature capture capability of the shared encoder. Considering that most pixels do not have labels and hard pseudo-labels tend to be over-confident, resulting in poor segmentation, we propose to use soft pseudo-labels generated by dynamically mixing the decoders' predictions as auxiliary supervision. To further enhance the model's performance, we adopt a two-stage approach where the sparse scribbles are expanded based on predictions with low uncertainties from the first-stage model, leading to more annotated pixels to train the second-stage model. Experiments on the ACDC dataset for cardiac structure segmentation, the WORD dataset for 3D abdominal organ segmentation and the BraTS2020 dataset for 3D brain tumor segmentation showed that: (1) compared with the baseline, our method improved the average DSC from 50.46% to 89.51%, from 75.46% to 87.56% and from 52.61% to 76.53% on the three datasets, respectively; (2) DMSPS achieved better performance than five state-of-the-art scribble-supervised segmentation methods, and is generalizable to different segmentation backbones. The code is available online at: https://github.com/HiLab-git/DMSPS.
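A simplified reading of the dynamically mixed soft pseudo-label is a random convex combination of the two decoder branches' probability maps; the sketch below illustrates that idea only, under assumed shapes, and is not the DMSPS code.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixed_soft_pseudo_label(pred_main, pred_aux):
    """Soft pseudo-label as a random convex mix of the two decoders'
    class-probability maps, so neither branch's over-confident hard
    labels dominate the unlabeled pixels. Shapes: (n_pix, n_cls)."""
    lam = rng.uniform()                     # fresh mixing weight each step
    return lam * pred_main + (1 - lam) * pred_aux

p1 = np.array([[0.9, 0.1], [0.3, 0.7]])    # branch 1 per-pixel class probs
p2 = np.array([[0.6, 0.4], [0.5, 0.5]])    # branch 2
soft = mixed_soft_pseudo_label(p1, p2)     # still a valid distribution per pixel
```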


Subject(s)
Imaging, Three-Dimensional; Humans; Imaging, Three-Dimensional/methods; Deep Learning; Supervised Machine Learning; Algorithms; Image Processing, Computer-Assisted/methods; Image Interpretation, Computer-Assisted/methods
11.
Biomed Phys Eng Express ; 10(5)2024 Aug 12.
Article in English | MEDLINE | ID: mdl-39019048

ABSTRACT

Precise segmentation of skin cancer lesions at different stages is conducive to early detection and further treatment. Considering the huge cost of obtaining pixel-perfect annotations for this task, segmentation using less expensive image-level labels has become a research direction. Most weakly supervised segmentation with image-level labels uses class activation mapping (CAM) methods. Common consequences of this approach are incomplete foreground segmentation, under-segmentation, and false negatives. At the same time, when performing weakly supervised segmentation of skin cancer lesions, ulcers, redness, and swelling may appear near the segmented areas of individual disease categories. This co-occurrence problem affects the model's accuracy in segmenting class-related tissue boundaries to a certain extent. The above two issues are determined by the loosely constrained nature of image-level labels that penalize the entire image space. Therefore, providing pixel-level constraints for weak supervision of image-level labels is the key to improving performance. To solve the above problems, this paper proposes a joint unsupervised constraint-assisted weakly supervised segmentation model (UCA-WSS). The weakly supervised part of the model adopts a dual-branch adversarial erasure mechanism to generate higher-quality CAM. The unsupervised part uses contrastive learning and clustering algorithms to generate foreground labels and fine boundary labels to assist segmentation and solve common co-occurrence problems in weakly supervised skin cancer lesion segmentation through unsupervised constraints. The proposed model is evaluated against other related models on public dermatology datasets. Experimental results show that our model performs better on the skin cancer segmentation task than other weakly supervised segmentation models, showing the potential of combining unsupervised constraint methods with weakly supervised segmentation.
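For reference, the basic CAM computation that such methods build on looks like this; it is a textbook sketch with toy feature maps, not the UCA-WSS model.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """CAM: weight each final-conv feature map by the classifier weight
    for the target class, sum, keep positive evidence, and min-max
    normalize. feature_maps: (C, H, W); class_weights: (C,)."""
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # (H, W)
    cam = np.maximum(cam, 0)                                 # ReLU: positive evidence
    lo, hi = cam.min(), cam.max()
    return (cam - lo) / (hi - lo) if hi > lo else cam

fmaps = np.stack([np.eye(3), np.ones((3, 3))])  # two toy 3x3 feature maps
cam = class_activation_map(fmaps, np.array([1.0, 0.5]))
```

Thresholding such a map gives the (often incomplete) foreground estimate that the unsupervised constraints are meant to refine.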


Subject(s)
Algorithms; Semantics; Skin Neoplasms; Humans; Skin Neoplasms/diagnostic imaging; Skin Neoplasms/pathology; Image Processing, Computer-Assisted/methods; Image Interpretation, Computer-Assisted/methods; Supervised Machine Learning; Databases, Factual; Skin/diagnostic imaging; Skin/pathology; Unsupervised Machine Learning
12.
Sensors (Basel) ; 24(11)2024 May 23.
Article in English | MEDLINE | ID: mdl-38894146

ABSTRACT

Instrument pose estimation is a key demand in computer-aided surgery, and its main challenges lie in two aspects: Firstly, the difficulty of obtaining stable corresponding image feature points due to the instruments' high refraction and complicated background, and secondly, the lack of labeled pose data. This study aims to tackle the pose estimation problem of surgical instruments in the current endoscope system using a single endoscopic image. More specifically, a weakly supervised method based on the instrument's image segmentation contour is proposed, with the effective assistance of synthesized endoscopic images. Our method consists of the following three modules: a segmentation module to automatically detect the instrument in the input image, followed by a point inference module to predict the image locations of the implicit feature points of the instrument, and a point back-propagatable Perspective-n-Point module to estimate the pose from the tentative 2D-3D corresponding points. To alleviate the over-reliance on point correspondence accuracy, the local errors of feature point matching and the global inconsistency of the corresponding contours are simultaneously minimized. Our proposed method is validated with both real and synthetic images in comparison with the current state-of-the-art methods.
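The quantity a Perspective-n-Point module drives down, the reprojection error between observed 2D feature points and the projections of their 3D counterparts, can be sketched with a pinhole camera; identity rotation and zero translation are assumed here for simplicity, and the intrinsics are made up.

```python
import numpy as np

def project(points_3d, K):
    """Pinhole projection of camera-frame 3D points to pixel coordinates.
    points_3d: (n, 3); K: (3, 3) intrinsic matrix."""
    uvw = (K @ points_3d.T).T
    return uvw[:, :2] / uvw[:, 2:3]

def reprojection_error(points_3d, points_2d, K):
    """Mean Euclidean distance between observed 2D features and the
    projections of their 3D counterparts -- the local matching term a
    PnP solver minimizes."""
    diff = project(points_3d, K) - points_2d
    return float(np.mean(np.linalg.norm(diff, axis=1)))

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
pts3 = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]])
pts2 = project(pts3, K)                   # perfect correspondences
err = reprojection_error(pts3, pts2, K)   # zero at the true pose
```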

13.
IEEE J Transl Eng Health Med ; 12: 457-467, 2024.
Article in English | MEDLINE | ID: mdl-38899144

ABSTRACT

OBJECTIVE: Pulmonary cavity lesion is one of the commonly seen lesions in lung caused by a variety of malignant and non-malignant diseases. Diagnosis of a cavity lesion is commonly based on accurate recognition of the typical morphological characteristics. A deep learning-based model to automatically detect, segment, and quantify the region of cavity lesion on CT scans has potential in clinical diagnosis, monitoring, and treatment efficacy assessment. METHODS: A weakly-supervised deep learning-based method named CSA2-ResNet was proposed to quantitatively characterize cavity lesions in this paper. The lung parenchyma was first segmented using a pretrained 2D segmentation model, and then the output with or without cavity lesions was fed into the developed deep neural network containing hybrid attention modules. Next, the visualized lesion was generated from the activation region of the classification network using gradient-weighted class activation mapping, and image processing was applied for post-processing to obtain the expected segmentation results of cavity lesions. Finally, the automatic characteristic measurement of cavity lesions (e.g., area and thickness) was developed and verified. RESULTS: The proposed weakly-supervised segmentation method achieved an accuracy, precision, specificity, recall, and F1-score of 98.48%, 96.80%, 97.20%, 100%, and 98.36%, respectively. There is a significant improvement (P < 0.05) compared to other methods. Quantitative characterization of morphology also yielded good results. CONCLUSIONS: The proposed easily-trained and high-performance deep learning model provides a fast and effective way for the diagnosis and dynamic monitoring of pulmonary cavity lesions in clinic. Clinical and Translational Impact Statement: This model used artificial intelligence to achieve the detection and quantitative analysis of pulmonary cavity lesions in CT scans.
The morphological features revealed in experiments can be utilized as potential indicators for diagnosis and dynamic monitoring of patients with cavity lesions.
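Simple morphological measurements of a segmented lesion mask can be sketched as follows; the pixel spacing and the equivalent-diameter summary are illustrative choices, not the paper's measurement protocol.

```python
import numpy as np

def lesion_metrics(mask, spacing_mm=0.7):
    """Quantify a segmented lesion mask: physical area in mm^2 from the
    pixel count, plus the diameter of the circle with equal area.
    spacing_mm is a hypothetical in-plane pixel spacing."""
    area = mask.sum() * spacing_mm ** 2
    eq_diam = 2.0 * np.sqrt(area / np.pi)
    return area, eq_diam

mask = np.zeros((10, 10), dtype=bool)
mask[3:7, 3:7] = True                       # 16-pixel toy lesion
area, diam = lesion_metrics(mask, spacing_mm=1.0)
```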


Subject(s)
Deep Learning; Lung; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Lung/diagnostic imaging; Lung/pathology; Radiographic Image Interpretation, Computer-Assisted/methods; Lung Diseases/diagnostic imaging; Lung Diseases/pathology; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/pathology; Neural Networks, Computer; Supervised Machine Learning; Algorithms
14.
Med Image Anal ; 97: 103247, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38941857

ABSTRACT

The automated segmentation of Intracranial Arteries (IA) in Digital Subtraction Angiography (DSA) plays a crucial role in the quantification of vascular morphology, significantly contributing to computer-assisted stroke research and clinical practice. Current research primarily focuses on the segmentation of single-frame DSA using proprietary datasets. However, these methods face challenges due to the inherent limitation of single-frame DSA, which only partially displays vascular contrast, thereby hindering accurate vascular structure representation. In this work, we introduce DIAS, a dataset specifically developed for IA segmentation in DSA sequences. We establish a comprehensive benchmark for evaluating DIAS, covering full, weak, and semi-supervised segmentation methods. Specifically, we propose the vessel sequence segmentation network, in which the sequence feature extraction module effectively captures spatiotemporal representations of intravascular contrast, achieving intracranial artery segmentation in 2D+Time DSA sequences. For weakly-supervised IA segmentation, we propose a novel scribble learning-based image segmentation framework, which, under the guidance of scribble labels, employs cross pseudo-supervision and consistency regularization to improve the performance of the segmentation network. Furthermore, we introduce the random patch-based self-training framework, aimed at alleviating the performance constraints encountered in IA segmentation due to the limited availability of annotated DSA data. Our extensive experiments on the DIAS dataset demonstrate the effectiveness of these methods as potential baselines for future research and clinical applications. The dataset and code are publicly available at https://doi.org/10.5281/zenodo.11401368 and https://github.com/lseventeen/DIAS.
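Scribble supervision rests on a partial cross-entropy that penalizes only annotated pixels; a minimal sketch follows, with unlabeled pixels marked -1 by assumption. This is the generic loss, not the paper's full cross pseudo-supervision framework.

```python
import numpy as np

def partial_cross_entropy(probs, scribble):
    """Cross-entropy over scribble-annotated pixels only; unlabeled
    pixels (-1) contribute nothing, which is the core of
    scribble-supervised training. probs: (n_pix, n_cls);
    scribble: (n_pix,) with class ids or -1."""
    labeled = scribble >= 0
    idx = scribble[labeled]
    # Mixed boolean/integer indexing picks probs[i, scribble[i]]
    # for each labeled pixel i.
    return float(-np.mean(np.log(probs[labeled, idx])))

probs = np.array([[0.9, 0.1],
                  [0.2, 0.8],
                  [0.5, 0.5]])
scrib = np.array([0, 1, -1])               # last pixel is unlabeled
loss = partial_cross_entropy(probs, scrib)
```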


Subject(s)
Angiography, Digital Subtraction; Humans; Angiography, Digital Subtraction/methods; Benchmarking; Cerebral Arteries/diagnostic imaging; Algorithms; Cerebral Angiography/methods; Datasets as Topic; Image Processing, Computer-Assisted/methods; Databases, Factual
15.
Cancer Res Treat ; 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38938010

ABSTRACT

Purpose: The molecular classification of breast cancer is crucial for effective treatment. The emergence of digital pathology has ushered in a new era in which weakly supervised learning leveraging whole-slide images has gained prominence in developing deep learning models because this approach alleviates the need for extensive manual annotation. Weakly supervised learning was employed to classify the molecular subtypes of breast cancer. Methods: Our approach capitalizes on two whole-slide image datasets: one consisting of breast cancer cases from the Korea University Guro Hospital (KG) and the other originating from The Cancer Genomic Atlas dataset (TCGA). Furthermore, we visualized the inferred results using an attention-based heat map and reviewed the histomorphological features of the most attentive patches. Results: The KG+TCGA-trained model achieved an area under the receiver operating characteristic curve of 0.749. An inherent challenge lies in the imbalance among subtypes. Additionally, discrepancies between the two datasets resulted in different molecular subtype proportions. To mitigate this imbalance, we merged the two datasets, and the resulting model exhibited improved performance. The attentive patches correlated well with widely recognized histomorphologic features. The triple-negative subtype showed a high incidence of high-grade nuclei, tumor necrosis, and intratumoral tumor-infiltrating lymphocytes. The luminal A subtype showed a high incidence of collagen fibers. Conclusions: The artificial intelligence (AI) model based on weakly supervised learning showed promising performance. A review of the most attentive patches provided insights into the predictions of the AI model. AI models can become invaluable screening tools that reduce costs and workloads in practice.
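Reviewing the most attentive patches amounts to ranking tiles by their MIL attention weights; a trivial generic sketch:

```python
import numpy as np

def top_attended_patches(attn_weights, k=3):
    """Indices of the k patches the MIL model attended to most -- the
    ones a pathologist would review for histomorphologic features
    (e.g., high-grade nuclei, tumor-infiltrating lymphocytes)."""
    return np.argsort(-np.asarray(attn_weights))[:k]

attn = [0.05, 0.40, 0.10, 0.30, 0.15]   # toy per-patch attention weights
top = top_attended_patches(attn, k=2)   # review patches 1 and 3 first
```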

16.
Sensors (Basel) ; 24(12)2024 Jun 16.
Article in English | MEDLINE | ID: mdl-38931677

ABSTRACT

The annotation of magnetic resonance imaging (MRI) images plays an important role in deep learning-based MRI segmentation tasks. Semi-automatic annotation algorithms are helpful for improving the efficiency and reducing the difficulty of MRI image annotation. However, the existing semi-automatic annotation algorithms based on deep learning have poor pre-annotation performance in the case of insufficient segmentation labels. In this paper, we propose a semi-automatic MRI annotation algorithm based on semi-weakly supervised learning. In order to achieve a better pre-annotation performance in the case of insufficient segmentation labels, semi-supervised and weakly supervised learning were introduced, and a semi-weakly supervised learning segmentation algorithm based on sparse labels was proposed. In addition, in order to improve the contribution rate of a single segmentation label to the performance of the pre-annotation model, an iterative annotation strategy based on active learning was designed. The experimental results on public MRI datasets show that the proposed algorithm achieved pre-annotation performance equivalent to that of the fully supervised learning algorithm while using far fewer segmentation labels, demonstrating its effectiveness.
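An uncertainty-driven active-learning iteration, picking the next scans to annotate by predictive entropy, can be sketched as follows; this is a generic illustration, not the paper's exact selection strategy.

```python
import numpy as np

def entropy(p):
    """Per-sample predictive entropy of softmax outputs (n, n_cls)."""
    return -np.sum(p * np.log(p + 1e-12), axis=1)

def select_for_annotation(probs, k=1):
    """Active-learning step: rank unlabeled samples by uncertainty and
    return the indices of the k most uncertain ones for the expert to
    annotate in the next iteration."""
    return np.argsort(-entropy(probs))[:k]

probs = np.array([[0.98, 0.02],   # confident -> low annotation priority
                  [0.55, 0.45],   # most uncertain -> annotate first
                  [0.80, 0.20]])
picked = select_for_annotation(probs, k=1)
```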

17.
Bioengineering (Basel) ; 11(6)2024 Jun 02.
Article in English | MEDLINE | ID: mdl-38927798

ABSTRACT

Interstitial lung disease (ILD) is characterized by progressive pathological changes that require timely and accurate diagnosis; early detection and progression assessment are important for effective management. This study introduces a novel quantitative evaluation method that uses chest radiographs to analyze pixel-wise changes in ILD. Within a weakly supervised learning framework, the approach combines a contrastive unpaired translation (CUT) model with a newly developed ILD extent scoring algorithm, quantifying disease changes more precisely and objectively than conventional visual assessment. The ILD extent score calculated with this method classified ILD versus normal cases with an accuracy of 92.98%. On an ILD follow-up dataset for interval change analysis, the method assessed disease progression with an accuracy of 85.29%. These findings support the reliability of the ILD extent score as a tool for ILD monitoring and suggest that the proposed quantitative method may improve the monitoring and management of ILD.
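The abstract does not define the extent score, but an unpaired-translation setup naturally supports one: translate the radiograph to a synthesized "normal" counterpart and measure how much of the lung field differs. The function below is a hypothetical sketch of such a score, assuming intensity-normalized images, a boolean lung-field mask, and a simple absolute-difference threshold; the published algorithm is surely more elaborate.

```python
import numpy as np

def ild_extent_score(cxr, normal_synth, lung_mask, thresh=0.2):
    """Hypothetical extent score: the percentage of lung-field pixels
    whose intensity differs from the synthesized 'normal' image by more
    than `thresh`. Images are assumed normalized to [0, 1]; `lung_mask`
    is a boolean lung-field mask."""
    diff = np.abs(cxr - normal_synth)          # pixel-wise change map
    abnormal = (diff > thresh) & lung_mask     # flagged abnormal pixels
    return 100.0 * abnormal.sum() / lung_mask.sum()
```

Interval change between two visits would then be the difference of the two scores, which is the kind of quantity the 85.29% progression accuracy is evaluated on.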

18.
Comput Med Imaging Graph ; 115: 102395, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38729092

ABSTRACT

In this paper, we hypothesize that image regions of preclinical tumors can be localized in a chest X-ray (CXR) image through weakly-supervised training of a survival prediction model on a dataset containing CXR images of healthy patients and their time-to-death labels. These visual explanations can empower clinicians in early lung cancer detection and increase patient awareness of their susceptibility to the disease. To test this hypothesis, we train a censor-aware multi-class survival prediction deep learning classifier that is robust to imbalanced training, where the classes represent quantized numbers of days in the time-to-death prediction. Such a multi-class model allows us to use post-hoc interpretability methods, such as Grad-CAM, to localize image regions of preclinical tumors. For the experiments, we propose a new benchmark based on the National Lung Screening Trial (NLST) dataset for testing weakly-supervised preclinical tumor localization and survival prediction models, and the results suggest that our method achieves state-of-the-art C-index survival prediction and weakly-supervised preclinical tumor localization. To our knowledge, this constitutes a pioneering approach in the field, able to produce visual explanations of preclinical events associated with survival prediction results.
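The C-index reported above is the standard censor-aware concordance index: among patient pairs that are comparable under censoring, it counts how often the model assigns the higher risk to the patient who died earlier. A minimal sketch (the paper's exact tie handling is not given; half credit for tied risks is assumed here):

```python
def concordance_index(times, events, risks):
    """Censor-aware C-index.

    times:  observed follow-up times.
    events: 1 = death observed, 0 = censored.
    risks:  model risk scores (higher = predicted to die sooner).
    A pair (i, j) is comparable only when i's death was observed and
    occurred before j's follow-up time ended.
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0       # correct ordering
                elif risks[i] == risks[j]:
                    concordant += 0.5       # tie: half credit
    return concordant / comparable
```

A value of 0.5 corresponds to random ranking and 1.0 to perfect risk ordering, which is why the C-index is the natural headline metric for a survival classifier like the one described.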


Subject(s)
Early Detection of Cancer , Lung Neoplasms , Humans , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/mortality , Early Detection of Cancer/methods , Radiography, Thoracic , Deep Learning , Survival Analysis
19.
Article in English | MEDLINE | ID: mdl-38765185

ABSTRACT

Colorectal cancer (CRC) is the third most common cancer in the United States. Tumor budding (TB) detection and quantification are crucial yet labor-intensive steps in determining the CRC stage through the analysis of histopathology images. To help with this process, we adapt the Segment Anything Model (SAM) to CRC histopathology images to segment TBs using SAM-Adapter. In this approach, we automatically derive task-specific prompts from CRC images and train the SAM model in a parameter-efficient way. We compare the predictions of our model with those of a model trained from scratch, using a pathologist's annotations as reference. Our model achieves an intersection over union (IoU) of 0.65 and an instance-level Dice score of 0.75, which match the pathologist's TB annotations promisingly well. We believe our study offers a novel solution for identifying TBs on H&E-stained histopathology images and demonstrates the value of adapting a foundation model for pathology image segmentation tasks.
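The two metrics quoted above, IoU and Dice, are both overlap ratios between a predicted mask and a reference mask, and it is easy to confuse them. A minimal sketch of the per-mask versions (the paper's instance-level Dice additionally matches predicted instances to ground-truth instances before averaging, which is omitted here):

```python
import numpy as np

def iou(pred, gt):
    # Intersection over union of two boolean masks: |A∩B| / |A∪B|.
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def dice(pred, gt):
    # Dice coefficient: 2|A∩B| / (|A| + |B|).
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2 * inter / total if total else 1.0
```

Dice is always at least as large as IoU for the same pair of masks (Dice = 2·IoU/(1+IoU)), which is consistent with the reported 0.75 Dice versus 0.65 IoU.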

20.
Comput Methods Programs Biomed ; 250: 108164, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38718709

ABSTRACT

BACKGROUND AND OBJECTIVE: Current automatic electrocardiogram (ECG) diagnostic systems can provide classification outcomes but often lack explanations for these results, which hampers their application in clinical diagnosis. Without manual labeling of large ECG datasets, previous supervised learning methods could not highlight abnormal segments accurately enough for clinical use. METHOD: In this study, we present a multi-instance learning framework called MA-MIL, built on a multi-layer, multi-instance structure that aggregates instances step by step at different scales. We evaluated our method on the public MIT-BIH dataset and on our private dataset. RESULTS: Our model performed well both in ECG classification and in abnormal segment detection at the heartbeat and sub-heartbeat levels, with accuracy and F1 scores of 0.987 and 0.986 for ECG classification and 0.968 and 0.949 for heartbeat-level abnormality detection, respectively. Compared to visualization methods, the IoU values of MA-MIL improved by at least 17% and at most 31% across all categories. CONCLUSIONS: MA-MIL can accurately locate abnormal ECG segments, offering more trustworthy results for clinical application.
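The multi-layer, multi-instance aggregation described above can be pictured as scores flowing up two instance levels: sub-heartbeat segments into heartbeats, heartbeats into the record. The sketch below uses plain max pooling at each level purely for illustration; the actual MA-MIL aggregation is learned, and the shapes here are assumed.

```python
import numpy as np

def aggregate_scores(sub_beat_scores):
    """Step-by-step aggregation over two instance levels, in the spirit
    of a multi-layer MIL structure (max pooling stands in for the
    learned aggregators of the real model).

    sub_beat_scores: (n_beats, n_segments) abnormality scores for the
    sub-heartbeat segments of each heartbeat.
    Returns per-heartbeat scores and one record-level score, so an
    abnormal record can be traced back to the segment that caused it.
    """
    beat_scores = sub_beat_scores.max(axis=1)  # sub-heartbeat -> heartbeat
    record_score = beat_scores.max()           # heartbeat -> record
    return beat_scores, record_score
```

Because each level keeps its intermediate scores, the record-level decision stays attributable down to individual segments, which is exactly the explainability property the framework is built for.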


Subject(s)
Algorithms , Electrocardiography , Supervised Machine Learning , Electrocardiography/methods , Humans , Heart Rate , Databases, Factual , Signal Processing, Computer-Assisted