Results 1 - 5 of 5
1.
Med Image Anal; 97: 103253, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38968907

ABSTRACT

Airway-related quantitative imaging biomarkers (QIBs) are crucial for examination, diagnosis, and prognosis in pulmonary diseases. However, the manual delineation of airway structures remains prohibitively time-consuming. While significant efforts have been made towards enhancing automatic airway modelling, currently available public datasets predominantly concentrate on lung diseases with moderate morphological variations. The intricate honeycombing patterns present in the lung tissues of patients with fibrotic lung disease exacerbate the challenges, often leading to prediction errors. To address this issue, the 'Airway-Informed Quantitative CT Imaging Biomarker for Fibrotic Lung Disease 2023' (AIIB23) competition was organized in conjunction with the 2023 International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI). The airway structures were meticulously annotated by three experienced radiologists. Competitors were encouraged to develop automatic airway segmentation models with high robustness and generalization ability, and then to identify the QIB most strongly correlated with mortality. A training set of 120 high-resolution computerised tomography (HRCT) scans was publicly released with expert annotations and mortality status. The online validation set comprised 52 HRCT scans from patients with fibrotic lung disease, and the offline test set included 140 cases from patients with fibrosis or COVID-19. The results showed that the ability to extract airway trees from patients with fibrotic lung disease can be enhanced by introducing a voxel-wise weighted general union loss and a continuity loss. In addition to competitive image biomarkers for mortality prediction, a strong airway-derived biomarker (hazard ratio > 1.5, p < 0.0001) emerged for survival prognostication, outperforming existing clinical measurements, clinician assessments, and AI-based biomarkers.
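The abstract credits the segmentation gains to a voxel-wise weighted general union loss. The exact formulation is defined in the challenge paper; as an illustration only, a minimal weighted soft-overlap loss in NumPy might look like:

```python
import numpy as np

def weighted_union_loss(pred, target, weights, eps=1e-6):
    """Soft overlap loss with per-voxel weights (hypothetical sketch,
    not the exact AIIB23 'general union loss' formulation).

    pred, target, weights: float arrays of the same shape; pred holds
    probabilities in [0, 1], target holds {0, 1} labels, and weights
    up-weight hard regions such as thin peripheral airway branches.
    """
    inter = np.sum(weights * pred * target)
    union = np.sum(weights * (pred + target))
    return 1.0 - (2.0 * inter + eps) / (union + eps)
```

A perfect prediction drives the loss to 0, while a fully wrong one drives it towards 1; the weight map is where the voxel-wise emphasis on fragile airway regions would enter.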


Subject(s)
Biomarkers; Pulmonary Fibrosis; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Pulmonary Fibrosis/diagnostic imaging; Benchmarking; Radiographic Image Interpretation, Computer-Assisted/methods
2.
J Med Syst; 48(1): 14, 2024 Jan 16.
Article in English | MEDLINE | ID: mdl-38227131

ABSTRACT

Many automated approaches have been proposed in the literature to quantify clinically relevant wound features through image processing, aiming to remove human subjectivity and accelerate clinical practice. In this work we present a fully automated image processing pipeline that leverages deep learning and a large wound segmentation dataset to perform wound detection and subsequent prediction of the Photographic Wound Assessment Tool (PWAT) score, automating the clinical judgement of adequate wound healing. Starting from images acquired by smartphone cameras, a set of textural and morphological features is extracted from the wound areas, designed to mimic the typical clinical considerations in wound assessment. The extracted features can be easily interpreted by the clinician and allow a quantitative estimation of the PWAT score. The features extracted from the regions of interest detected by our pre-trained neural network model correctly predict the PWAT scale values with a Spearman's correlation coefficient of 0.85 on a set of unseen images. The obtained results agree with the current state of the art and provide a benchmark for future artificial intelligence applications in this research field.
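The agreement metric reported above is Spearman's rank correlation between predicted and clinician-assigned PWAT scores. A self-contained implementation (Pearson correlation on average-ranked data, equivalent to `scipy.stats.spearmanr`) shows what is being computed:

```python
def spearman(xs, ys):
    """Spearman's rank correlation: Pearson correlation on ranks.

    Ties receive the average of the ranks they span. Returns a
    value in [-1, 1]; 1 means perfectly monotonic agreement.
    """
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1                      # extend over a tie group
            avg = (i + j) / 2.0 + 1.0       # average rank (1-based)
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Because only ranks matter, any monotonic relation between predicted and true scores yields a coefficient of 1, which is why Spearman suits an ordinal scale like the PWAT.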


Subject(s)
Artificial Intelligence; Benchmarking; Humans; Image Processing, Computer-Assisted; Neural Networks, Computer; Photography
3.
Sci Rep; 13(1): 20409, 2023 Nov 21.
Article in English | MEDLINE | ID: mdl-37989779

ABSTRACT

Subsurface stratigraphic modeling is crucial for a variety of environmental, societal, and economic challenges. However, the need for specific sedimentological skills in sediment core analysis may constitute a limitation. Methods based on machine learning and deep learning can play a central role in automating this time-consuming procedure. In this work, using a robust dataset of high-resolution digital images from continuous sediment cores of Holocene age that reflect a wide spectrum of continental to shallow-marine depositional environments, we outline a novel deep-learning-based approach that performs automatic semantic segmentation directly on core images, leveraging the power of convolutional neural networks. To optimize the interpretation process and maximize scientific value, we use six sedimentary facies associations as target classes in lieu of ineffective classification schemes based solely on lithology. We propose an automated model that can rapidly characterize sediment cores, providing immediate guidance for stratigraphic correlation and subsurface reconstructions.
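With six facies associations as target classes, such a segmentation model is typically evaluated per class rather than with overall pixel accuracy. A minimal per-class intersection-over-union routine (the class names and count are from the abstract; the metric choice is an assumption, not necessarily the authors') could be:

```python
import numpy as np

def per_class_iou(pred, target, n_classes=6):
    """Per-class intersection-over-union for a semantic label map.

    pred, target: integer arrays of the same shape with class ids in
    [0, n_classes). Returns one IoU per class, or NaN for a class
    absent from both maps (so it does not skew the average).
    """
    ious = []
    for c in range(n_classes):
        p, t = (pred == c), (target == c)
        union = np.logical_or(p, t).sum()
        ious.append(np.logical_and(p, t).sum() / union if union else float("nan"))
    return ious
```

Averaging the non-NaN entries gives a mean IoU that treats a rare facies association with the same importance as a dominant one.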

4.
J Pers Med; 13(3), 2023 Mar 06.
Article in English | MEDLINE | ID: mdl-36983660

ABSTRACT

BACKGROUND: Benign renal tumors, such as renal oncocytoma (RO), can be erroneously diagnosed as malignant renal cell carcinomas (RCC) because of their similar imaging features. Computer-aided systems leveraging radiomic features can be used to better discriminate benign renal tumors from malignant ones. The purpose of this work was to build a machine learning model to distinguish RO from clear cell RCC (ccRCC). METHODS: We collected CT images of 77 patients, with 30 cases of RO (39%) and 47 cases of ccRCC (61%). Radiomic features were extracted both from the tumor volumes identified by the clinicians and from the tumor's zone of transition (ZOT). We used a genetic algorithm for feature selection, identifying the most descriptive set of features for tumor classification, and built a decision tree classifier to distinguish between ROs and ccRCCs. We propose two versions of the pipeline: in the first, feature selection was performed before the data were split; in the second, it was performed after the split, i.e., on the training data only. We compared the classification performance of the two pipelines. RESULTS: The ZOT features were found to be the most predictive by the genetic algorithm. The pipeline with feature selection performed on the whole dataset obtained an average ROC AUC score of 0.87 ± 0.09. The second pipeline, in which feature selection was performed on the training data only, obtained an average ROC AUC score of 0.62 ± 0.17. CONCLUSIONS: The obtained results confirm the effectiveness of ZOT radiomic features in capturing renal tumor characteristics. The significant difference in performance between the two pipelines highlights how some previously published radiomic analyses may be too optimistic about the real generalization capabilities of the models.
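The gap between the two AUC scores is a textbook data-leakage effect: selecting features on the full dataset lets test-set information shape the model. A schematic of the leakage-free ordering follows; the correlation-based selector is a toy stand-in for the paper's genetic algorithm, introduced here purely for illustration:

```python
import random

def select_top_features(X, y, k):
    """Toy stand-in for the paper's genetic-algorithm feature selection:
    rank features by absolute correlation with the label (hypothetical)."""
    def abs_corr(col):
        n = len(y)
        mx, my = sum(col) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(col, y))
        sx = sum((a - mx) ** 2 for a in col) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return abs(cov / (sx * sy)) if sx > 0 and sy > 0 else 0.0
    scores = [(abs_corr(col), j) for j, col in enumerate(zip(*X))]
    return [j for _, j in sorted(scores, reverse=True)[:k]]

def honest_pipeline(X, y, test_frac=0.3, k=3, seed=0):
    """Split FIRST, then select features on the training part only --
    the leakage-free ordering of the paper's second pipeline."""
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    cut = int(len(idx) * (1 - test_frac))
    train, test = idx[:cut], idx[cut:]
    feats = select_top_features([X[i] for i in train],
                                [y[i] for i in train], k)
    return train, test, feats
```

Running selection before the split amounts to swapping the first two statements of `honest_pipeline`, and is exactly the ordering the abstract shows to inflate the AUC from 0.62 to 0.87.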

5.
Animals (Basel); 13(6), 2023 Mar 07.
Article in English | MEDLINE | ID: mdl-36978498

ABSTRACT

Wound management is a fundamental task in standard clinical practice. Automated solutions already exist for humans, but applications for wound management in pets are lacking. Precise and efficient wound assessment helps improve diagnosis and increases the effectiveness of treatment plans for chronic wounds. In this work, we introduce a novel pipeline for the segmentation of pet wound images. Starting from a model pre-trained on human wound images, we applied a combination of transfer learning (TL) and active semi-supervised learning (ASSL) to automatically label a large dataset. Additionally, we provide guidelines for future applications of the TL+ASSL training strategy on image datasets. We compared the effectiveness of the proposed training strategy by monitoring the performance of an EfficientNet-b3 U-Net model against the lighter MobileNet-v2 U-Net model, obtaining 80% correctly segmented images after five rounds of ASSL training; the EfficientNet-b3 U-Net model significantly outperformed the MobileNet-v2 one. We also showed that the number of available samples is a key factor in the effective use of ASSL training. The proposed approach is a viable solution for reducing the time required to generate a segmentation dataset.
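One round of the TL+ASSL loop described above can be sketched schematically: the model labels the unlabeled pool, confident predictions become pseudo-labels, and uncertain samples are routed to a human annotator. The `model_predict` interface and confidence threshold below are assumptions for illustration, not the authors' actual API:

```python
def assl_round(model_predict, unlabeled, threshold=0.9):
    """One active semi-supervised learning round (schematic).

    model_predict(sample) -> (mask, confidence) is an assumed interface.
    Samples whose confidence clears the threshold are accepted as
    pseudo-labeled training data; the rest are queued for human review.
    """
    pseudo_labeled, to_review = [], []
    for sample in unlabeled:
        mask, conf = model_predict(sample)
        (pseudo_labeled if conf >= threshold else to_review).append((sample, mask))
    return pseudo_labeled, to_review
```

Repeating this round, retraining on the growing pseudo-labeled set each time, is what lets five ASSL iterations label a large dataset with limited manual effort.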
