Results 1 - 20 of 1,001
1.
Data Brief ; 56: 110852, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39281010

ABSTRACT

Detecting and screening clouds is the first step in most optical remote sensing analyses. Cloud formation is diverse, presenting many shapes, thicknesses, and altitudes. This variety poses a significant challenge to the development of effective cloud detection algorithms, as most datasets lack an unbiased representation. To address this issue, we have built CloudSEN12+, a significant expansion of the CloudSEN12 dataset. This new dataset doubles the expert-labeled annotations, making it the largest cloud and cloud shadow detection dataset for Sentinel-2 imagery to date. We have carefully reviewed and refined our previous annotations to ensure maximum trustworthiness. We expect CloudSEN12+ will be a valuable resource for the cloud detection research community.

2.
Heliyon ; 10(17): e36248, 2024 Sep 15.
Article in English | MEDLINE | ID: mdl-39286137

ABSTRACT

This proposed work explores how machine learning can be used to diagnose conjunctivitis, a common eye ailment. The main goal of the study is to capture eye images using camera-based systems, perform image pre-processing, and employ image segmentation techniques, particularly the U-Net and U-Net++ models. Additionally, the study involves extracting features from the relevant areas within the segmented images and using convolutional neural networks for classification. All of this is carried out using TensorFlow, a well-known machine-learning platform. The research involves thorough training and assessment of both the U-Net and U-Net++ segmentation models, with a comprehensive analysis focusing on their accuracy and performance. The study further evaluates these models using both the UBIRIS dataset and a custom dataset created for this specific research. The experimental results show a substantial improvement in segmentation quality with the U-Net++ model, which achieved an overall accuracy of 97.07%. The U-Net++ architecture also displays better accuracy than the traditional U-Net model. These outcomes highlight the potential of U-Net++ as a valuable advancement in the field of machine learning-based conjunctivitis diagnosis.

3.
Ultrasound Med Biol ; 2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39244483

ABSTRACT

OBJECTIVE: As metabolic dysfunction-associated steatotic liver disease (MASLD) becomes more prevalent worldwide, it is imperative to create more accurate technologies that make it easy to assess the liver in a point-of-care setting. The aim of this study is to test the performance of a new software tool implemented in Velacur (Sonic Incytes), a liver stiffness and ultrasound attenuation measurement device, on patients with MASLD. This tool employs a deep learning-based method to detect and segment shear waves in the liver tissue for subsequent analysis to improve tissue characterization for patient diagnosis. METHODS: This new tool consists of a deep learning-based algorithm, which was trained on 15,045 expert-segmented images from 103 patients, using a U-Net architecture. The algorithm was then tested on 4429 images from 36 volunteers and patients with MASLD. Test subjects were scanned at different clinics with different Velacur operators. Evaluation was performed both on individual images (image-based) and averaged across all images collected from a patient (patient-based). Ground truth was defined by expert segmentation of the shear waves within each image. For evaluation, sensitivity and specificity for correct wave detection in the image were calculated. For those images containing waves, the Dice coefficient was calculated. A prototype of the software tool was also implemented on Velacur and assessed by operators in real-world settings. RESULTS: The wave detection algorithm had a sensitivity of 81% and a specificity of 84%, with Dice coefficients of 0.74 and 0.75 for image-based and patient-based averages, respectively. The implementation of this software tool as an overlay on the B-mode ultrasound improved the quality of exams collected by operators. CONCLUSION: The shear wave algorithm performed well on a test set of volunteers and patients with metabolic dysfunction-associated steatotic liver disease. The addition of this software tool, implemented on the Velacur system, improved the quality of the liver assessments performed in a real-world, point-of-care setting.
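
As a rough illustration of the evaluation just described, the per-image detection and overlap metrics can be computed from binary labels and masks as below (an independent sketch, not the Velacur implementation; function names are ours):

```python
# Illustrative sketch: sensitivity/specificity for per-image wave detection
# and the Dice coefficient for segmentation overlap. Not the study's code.
import numpy as np

def detection_metrics(pred_has_wave, true_has_wave):
    """Sensitivity and specificity over per-image wave presence labels."""
    pred = np.asarray(pred_has_wave, dtype=bool)
    true = np.asarray(true_has_wave, dtype=bool)
    tp = np.sum(pred & true)
    tn = np.sum(~pred & ~true)
    fn = np.sum(~pred & true)
    fp = np.sum(pred & ~true)
    return tp / (tp + fn), tn / (tn + fp)

def dice(mask_a, mask_b):
    """Dice coefficient between two binary segmentation masks."""
    a, b = np.asarray(mask_a, dtype=bool), np.asarray(mask_b, dtype=bool)
    return 2.0 * np.sum(a & b) / (a.sum() + b.sum())
```

Sensitivity and specificity summarize the wave presence/absence calls, while Dice measures overlap only on images that actually contain waves.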

4.
J Imaging Inform Med ; 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39227537

ABSTRACT

Thermography is a non-invasive and non-contact method for detecting cancer in its initial stages by examining the temperature variation between the two breasts. Preprocessing methods such as resizing, ROI (region of interest) segmentation, and augmentation are frequently used to enhance the accuracy of breast thermogram analysis. In this study, we propose DTCWAU-Net, a modified U-Net architecture that uses the dual-tree complex wavelet transform (DTCWT) and attention gates to segment breast thermal images in both frontal and lateral views, aiming to outline the ROI for potential tumor detection. The proposed approach achieved an average Dice coefficient of 93.03% and a sensitivity of 94.82%, showcasing its potential for accurate breast thermogram segmentation. Classification of breast thermograms into healthy or cancerous categories was carried out by extracting texture- and histogram-based features and deep features from the segmented thermograms. Feature selection was performed using Neighborhood Component Analysis (NCA), followed by the application of machine learning classifiers. When compared with other state-of-the-art approaches for detecting breast cancer from thermograms, the proposed methodology showed a higher accuracy of 99.90% for VGG16 deep features with NCA and a Random Forest classifier. Simulation results indicate that the proposed method can be used in breast cancer screening, facilitating early detection and enhancing treatment outcomes.
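
The classification stage described above (NCA followed by a machine learning classifier) can be sketched with scikit-learn; the feature matrix, sizes, and labels below are synthetic stand-ins for the thermogram features, not the study's data:

```python
# Sketch: NCA dimensionality reduction feeding a Random Forest classifier.
# X and y are random stand-ins for the extracted thermogram features.
import numpy as np
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 8))            # 60 thermograms, 8 toy features
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # toy healthy/cancerous labels

model = make_pipeline(
    NeighborhoodComponentsAnalysis(n_components=2, random_state=0),
    RandomForestClassifier(n_estimators=50, random_state=0),
)
model.fit(X, y)
train_acc = model.score(X, y)               # accuracy on the training set
```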

5.
Skeletal Radiol ; 2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39230576

ABSTRACT

OBJECTIVE: A fully automated laminar cartilage composition (MRI-based T2) analysis method was technically and clinically validated by comparing radiographically normal knees with contra-lateral joint space narrowing (CL-JSN) and knees without contra-lateral joint space narrowing or other signs of radiographic osteoarthritis (OA; CL-noROA). MATERIALS AND METHODS: 2D U-Nets were trained on manually segmented femorotibial cartilages (n = 72) from all seven echoes (AllE), or from the 1st echo only (1stE), of multi-echo spin-echo (MESE) MRIs acquired by the Osteoarthritis Initiative (OAI). Because of its greater accuracy, only the AllE U-Net was then applied to knees from the OAI healthy reference cohort (n = 10), CL-JSN knees (n = 39), and (1:1) matched CL-noROA knees (n = 39), all of which had manual expert segmentation, and to 982 non-matched CL-noROA knees without expert segmentation. RESULTS: The agreement (Dice similarity coefficient) between automated and manual expert cartilage segmentation ranged from 0.82 ± 0.05 / 0.79 ± 0.06 (AllE/1stE) to 0.88 ± 0.03 / 0.88 ± 0.03 (AllE/1stE) across femorotibial cartilage plates. The deviation between automated and manually derived laminar T2 reached up to −2.2 ± 2.6 ms / +4.1 ± 10.2 ms (AllE/1stE). The AllE U-Net showed a similar sensitivity to cross-sectional laminar T2 differences between CL-JSN and CL-noROA knees in the matched (Cohen's D ≤ 0.54) and the non-matched (D ≤ 0.54) comparison as the matched manual analyses (D ≤ 0.48). Longitudinally, the AllE U-Net also showed a similar sensitivity to CL-JSN vs. CL-noROA differences in the matched (D ≤ 0.51) and the non-matched (D ≤ 0.43) comparison as the matched manual analyses (D ≤ 0.41). CONCLUSION: The fully automated T2 analysis showed high agreement, acceptable accuracy, and similar sensitivity to cross-sectional and longitudinal laminar T2 differences in an early OA model, compared with manual expert analysis. TRIAL REGISTRATION: Clinicaltrials.gov identifier: NCT00080171.
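
The effect sizes quoted above (Cohen's D) follow the standard pooled-standard-deviation formula; a minimal NumPy sketch, not the authors' code:

```python
# Sketch: Cohen's D effect size between two independent groups, using the
# pooled-standard-deviation formulation.
import numpy as np

def cohens_d(group_a, group_b):
    """Effect size (a.mean - b.mean) / pooled SD, with ddof=1 variances."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)
```

By the usual convention, values around 0.5, as reported in this study, correspond to a medium effect.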

6.
Technol Health Care ; 2024 Aug 19.
Article in English | MEDLINE | ID: mdl-39240595

ABSTRACT

BACKGROUND: Liver cancer poses a significant health challenge due to its high incidence rates and complexities in detection and treatment. Accurate segmentation of liver tumors using medical imaging plays a crucial role in early diagnosis and treatment planning. OBJECTIVE: This study proposes a novel approach combining U-Net and ResNet architectures with the Adam optimizer and sigmoid activation function. The method leverages ResNet's deep residual learning to address training issues in deep neural networks. At the same time, U-Net's structure facilitates capturing local and global contextual information essential for precise tumor characterization. The model aims to enhance segmentation accuracy by effectively capturing intricate tumor features and contextual details by integrating these architectures. The Adam optimizer expedites model convergence by dynamically adjusting the learning rate based on gradient statistics during training. METHODS: To validate the effectiveness of the proposed approach, segmentation experiments are conducted on a diverse dataset comprising 130 CT scans of liver cancers. Furthermore, a state-of-the-art fusion strategy is introduced, combining the robust feature learning capabilities of the UNet-ResNet classifier with Snake-based Level Set Segmentation. RESULTS: Experimental results demonstrate impressive performance metrics, including an accuracy of 0.98 and a minimal loss of 0.10, underscoring the efficacy of the proposed methodology in liver cancer segmentation. CONCLUSION: This fusion approach effectively delineates complex and diffuse tumor shapes, significantly reducing errors.
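
The residual-learning idea the hybrid relies on, in which each block outputs its input plus a learned correction so that gradients can bypass the block, can be shown in toy form (the 1-D shapes and ReLU activations here are illustrative, not the paper's exact design):

```python
# Toy sketch of deep residual learning: a block computes F(x) and returns
# an activation of F(x) + x, so the identity path eases gradient flow.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = relu(F(x) + x), with F a two-layer transform of matching width."""
    fx = relu(x @ w1) @ w2      # learned correction F(x)
    return relu(fx + x)         # skip connection adds the input back
```

With zero weights the block reduces to the identity on non-negative inputs, which is exactly why very deep stacks of such blocks remain trainable.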

7.
Sensors (Basel) ; 24(17)2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39275756

ABSTRACT

Liver cancer is one of the malignancies with high mortality rates worldwide, and its timely detection and accurate diagnosis are crucial for improving patient prognosis. To address the limitations of traditional image segmentation techniques and the U-Net network in capturing fine image features, this study proposes an improved model based on the U-Net architecture, named RHEU-Net. By replacing traditional convolution modules in the encoder and decoder with improved residual modules, the network's feature extraction capabilities and gradient stability are enhanced. A Hybrid Gated Attention (HGA) module is integrated before the skip connections, enabling the parallel processing of channel and spatial attentions, optimizing the feature fusion strategy, and effectively replenishing image details. A Multi-Scale Feature Enhancement (MSFE) layer is introduced at the bottleneck, utilizing multi-scale feature extraction technology to further enhance the expression of receptive fields and contextual information, improving the overall feature representation effect. Testing on the LiTS2017 dataset demonstrated that RHEU-Net achieved Dice scores of 95.72% for liver segmentation and 70.19% for tumor segmentation. These results validate the effectiveness of RHEU-Net and underscore its potential for clinical application.


Subject(s)
Image Processing, Computer-Assisted; Liver Neoplasms; Neural Networks, Computer; Humans; Liver Neoplasms/diagnostic imaging; Liver Neoplasms/pathology; Image Processing, Computer-Assisted/methods; Algorithms; Liver/diagnostic imaging; Liver/pathology
8.
Strahlenther Onkol ; 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39283345

ABSTRACT

BACKGROUND: Our study considers the hypothesis that changing network layers, rather than expanding their dimensions (which requires complex calculations), can increase the accuracy of dose distribution prediction. MATERIALS AND METHODS: A total of 137 prostate cancer patients treated with the tomotherapy technique were split into 80% for training and validation and 20% for testing, for both the nested UNet and UNet architectures. Mean absolute error (MAE) was used to measure the dosimetry indices of dose-volume histograms (DVHs), and geometry indices, including the structural similarity index measure (SSIM), Dice similarity coefficient (DSC), and Jaccard similarity coefficient (JSC), were used to evaluate isodose volume (IV) similarity prediction. To verify statistically significant differences, the two-sided Wilcoxon test was used at a significance level of 0.05 (p < 0.05). RESULTS: Use of a nested UNet architecture reduced the MAE of the predicted dose in the DVH indices. The MAEs for the planning target volume (PTV), bladder, rectum, and right and left femur were D98% = 1.11 ± 0.90; D98% = 2.27 ± 2.85, Dmean = 0.84 ± 0.62; D98% = 1.47 ± 12.02, Dmean = 0.77 ± 1.59; D2% = 0.65 ± 0.70, Dmean = 0.96 ± 2.82; and D2% = 1.18 ± 6.65, Dmean = 0.44 ± 1.13, respectively. Additionally, the greatest geometric similarity was observed in the mean SSIM for UNet and nested UNet (0.91 vs. 0.94, respectively). CONCLUSION: The nested UNet can be considered a suitable network due to its ability to improve the accuracy of dose distribution prediction compared to the UNet network, in an acceptable time.
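
The DVH evaluation described above pairs MAE with a two-sided Wilcoxon test; a minimal sketch with assumed, illustrative dose values (not the study's data):

```python
# Sketch: MAE between predicted and planned DVH dose indices, plus a
# two-sided Wilcoxon signed-rank test at alpha = 0.05. Values are toy.
import numpy as np
from scipy.stats import wilcoxon

def dvh_mae(predicted, planned):
    """Mean absolute error between paired dose indices."""
    p, q = np.asarray(predicted, float), np.asarray(planned, float)
    return np.abs(p - q).mean()

# Hypothetical paired D98% values (Gy) for one structure across patients:
pred = np.array([61.2, 60.8, 59.9, 61.5, 60.1, 60.7])
plan = np.array([61.0, 61.1, 60.2, 61.3, 60.4, 60.6])

mae = dvh_mae(pred, plan)
stat, p_value = wilcoxon(pred, plan, alternative="two-sided")
significant = p_value < 0.05
```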

9.
Bioinform Biol Insights ; 18: 11779322241272387, 2024.
Article in English | MEDLINE | ID: mdl-39246684

ABSTRACT

Objectives: This article focuses on the detection of cells in low-contrast brightfield microscopy images; in our case, chronic lymphocytic leukaemia cells. The automatic detection of cells from brightfield time-lapse microscopic images brings new opportunities in cell morphology and migration studies; to achieve the desired results, it is advisable to use state-of-the-art image segmentation methods that not only detect the cell but also detect its boundaries with the highest possible accuracy, thus defining its shape and dimensions. Methods: We compared eight state-of-the-art neural network architectures with different backbone encoders for image data segmentation, namely U-net, U-net++, the Pyramid Attention Network, the Multi-Attention Network, LinkNet, the Feature Pyramid Network, DeepLabV3, and DeepLabV3+. The training process involved training each of these networks for 1000 epochs using the PyTorch and PyTorch Lightning libraries. For instance segmentation, the watershed algorithm and three-class image semantic segmentation were used. We also used StarDist, a deep learning-based tool for object detection with star-convex shapes. Results: The optimal combination for semantic segmentation was the U-net++ architecture with a ResNeSt-269 backbone, with an Intersection over Union score of 0.8902 on the data set. For the cell characteristics examined (area, circularity, solidity, perimeter, radius, and shape index), the difference in mean value using different chronic lymphocytic leukaemia cell segmentation approaches was statistically significant (Mann-Whitney U test, P < .0001). Conclusion: We found that, overall, the algorithms demonstrate equal agreement with ground truth, but the comparison shows that the different approaches prefer different morphological features of the cells. Consequently, choosing the most suitable method for instance-based cell segmentation depends on the particular application, namely, the specific cellular traits being investigated.
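
The route from a three-class semantic map to instance labels can be sketched as follows; for brevity this stand-in assigns cell pixels to the nearest interior seed rather than running a true watershed, and all names are illustrative:

```python
# Sketch: instance splitting from a three-class semantic segmentation.
# Class 2 (cell interior) seeds one marker per cell; every cell pixel
# (interior or boundary) is then assigned to its nearest seed, a simple
# stand-in for the watershed growth step used in the paper.
import numpy as np
from scipy import ndimage

def instances_from_semantic(semantic):
    """semantic: 0 = background, 1 = cell boundary, 2 = cell interior."""
    interior = semantic == 2
    cells = semantic > 0                        # boundary + interior pixels
    seeds, n_cells = ndimage.label(interior)    # one marker per cell body
    # Nearest-seed assignment via the Euclidean distance transform:
    _, inds = ndimage.distance_transform_edt(seeds == 0, return_indices=True)
    instances = seeds[inds[0], inds[1]]
    instances[~cells] = 0                       # background stays unlabeled
    return instances, n_cells
```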

10.
Heliyon ; 10(16): e35933, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39258194

ABSTRACT

The growing interest in subseasonal-to-seasonal (S2S) prediction data across different industries underscores its potential for understanding weather patterns, extreme conditions, and important sectors such as agriculture and energy management. However, concerns about its accuracy have been raised, and enhancing the precision of rainfall predictions remains challenging in S2S forecasts. This study enhanced S2S prediction skill for precipitation amount and occurrence over the East Asian region by employing deep learning-based post-processing techniques. We utilized a modified U-Net architecture that wraps all its convolutional layers with TimeDistributed layers as the deep learning model. For the training datasets, precipitation prediction data from six S2S climate models and their multi-model ensemble (MME) were constructed, and daily precipitation occurrence was obtained from three threshold values: 0 % of daily precipitation for no-rain events, <33 % for light rain, and >67 % for heavy rain. Based on the precipitation amount prediction skill of the six climate models, deep learning-based post-processing outperformed post-processing using multiple linear regression (MLR) at lead times of weeks 2-4. The prediction accuracy of precipitation occurrence with MLR-based post-processing did not significantly improve, whereas deep learning-based post-processing enhanced the prediction accuracy across all lead times, demonstrating superiority over MLR. We thus enhanced the prediction accuracy of both the amount and the occurrence of precipitation in individual climate models using deep learning-based post-processing.
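
The occurrence labelling described above can be sketched as follows; the handling of the middle band between the two tercile thresholds, and the class names, are our illustrative assumptions:

```python
# Sketch: labelling daily precipitation occurrence from tercile thresholds.
# 0 mm/day -> no rain; below the 33rd percentile of rainy days -> light;
# above the 67th percentile -> heavy; the band in between -> moderate
# (the moderate class is our assumption, the abstract leaves it implicit).
import numpy as np

def occurrence_classes(precip):
    """Labels: 0 = no rain, 1 = light, 2 = moderate, 3 = heavy."""
    precip = np.asarray(precip, float)
    wet = precip[precip > 0.0]                     # rainy days only
    lo, hi = np.percentile(wet, [33, 67])          # tercile thresholds
    labels = np.full(precip.shape, 2, dtype=int)   # moderate by default
    labels[precip == 0.0] = 0
    labels[(precip > 0.0) & (precip < lo)] = 1
    labels[precip > hi] = 3
    return labels
```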

11.
Sci Rep ; 14(1): 21298, 2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39266655

ABSTRACT

Learning operators with deep neural networks is an emerging paradigm for scientific computing. Deep Operator Network (DeepONet) is a modular operator learning framework that allows for flexibility in choosing the kind of neural network to be used in the trunk and/or branch of the DeepONet. This is beneficial as it has been shown many times that different types of problems require different kinds of network architectures for effective learning. In this work, we design an efficient neural operator based on the DeepONet architecture. We introduce U-Net enhanced DeepONet (U-DeepONet) for learning the solution operator of highly complex CO2-water two-phase flow in heterogeneous porous media. The U-DeepONet is more accurate in predicting gas saturation and pressure buildup than the state-of-the-art U-Net based Fourier Neural Operator (U-FNO) and the Fourier-enhanced Multiple-Input Operator (Fourier-MIONet) trained on the same dataset. Moreover, our U-DeepONet is significantly more efficient in training times than both the U-FNO (more than 18 times faster) and the Fourier-MIONet (more than 5 times faster), while consuming less computational resources. We also show that the U-DeepONet is more data efficient and better at generalization than both the U-FNO and the Fourier-MIONet.
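
The branch/trunk structure of a DeepONet can be illustrated in a few lines: the operator output at a query point is the inner product of a branch embedding of the sampled input function and a trunk embedding of the query coordinate. Tiny random MLPs stand in for the real networks below; this is a generic sketch, not the U-DeepONet itself:

```python
# Sketch: minimal DeepONet-style forward pass, G(u)(y) ~ branch(u) . trunk(y).
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random small MLP parameters: list of (weight, bias) pairs."""
    return [(rng.standard_normal((a, b)) * 0.1, np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for i, (w, b) in enumerate(params):
        x = x @ w + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

branch = mlp([20, 32, 16])   # encodes the input function at 20 sensor points
trunk = mlp([2, 32, 16])     # encodes a query coordinate, e.g. (x, t)

def deeponet(u_sensors, coords):
    """Inner product over the shared latent dimension gives G(u)(y)."""
    b = forward(branch, u_sensors)   # shape (16,)
    t = forward(trunk, coords)       # shape (n, 16)
    return t @ b                     # shape (n,)
```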

12.
Magn Reson Med ; 2024 Sep 13.
Article in English | MEDLINE | ID: mdl-39270056

ABSTRACT

PURPOSE: To shorten CEST acquisition time by leveraging Z-spectrum undersampling combined with deep learning for CEST map construction from undersampled Z-spectra. METHODS: Fisher information gain analysis identified optimal frequency offsets (termed "Fisher offsets") for the multi-pool fitting model, maximizing information gain for the amplitude and the FWHM parameters. These offsets guided initial subsampling levels. A U-Net, trained on undersampled brain CEST images from 18 volunteers, produced CEST maps at 3 T with varied undersampling levels. Feasibility was first tested using retrospective undersampling at three levels, followed by prospective in vivo undersampling (15 of 53 offsets), reducing scan time significantly. Additionally, glioblastoma grade IV pathology was simulated to evaluate network performance in patient-like cases. RESULTS: Traditional multi-pool models failed to quantify CEST maps from undersampled images (structural similarity index [SSIM] <0.2, peak SNR <20, Pearson r <0.1). Conversely, U-Net fitting successfully addressed the challenges of undersampled data. The study suggests CEST scan time reduction is feasible by undersampling 15, 25, or 35 of 53 Z-spectrum offsets. Prospective undersampling cut scan time by 3.5 times, with a maximum mean squared error of 4.4e-4, r = 0.82, and SSIM = 0.84, compared to the ground truth. The network also reliably predicted CEST values for the simulated glioblastoma pathology. CONCLUSION: The U-Net architecture effectively quantifies CEST maps from undersampled Z-spectra at various undersampling levels.
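
The fidelity metrics quoted in the results (MSE, peak SNR, Pearson r) can be sketched for a pair of CEST maps; this assumes maps normalized to [0, 1] and is not the authors' evaluation code:

```python
# Sketch: map-fidelity metrics for a reconstructed vs. fully sampled CEST map.
import numpy as np

def cest_fidelity(pred, truth):
    """Return (MSE, peak SNR in dB, Pearson r); maps assumed in [0, 1]."""
    p = np.asarray(pred, float).ravel()
    t = np.asarray(truth, float).ravel()
    mse = np.mean((p - t) ** 2)
    psnr = 10.0 * np.log10(1.0 / mse) if mse > 0 else np.inf
    r = np.corrcoef(p, t)[0, 1]
    return mse, psnr, r
```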

13.
Comput Biol Med ; 182: 109139, 2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39270456

ABSTRACT

We developed a method for automated detection of motion and noise artifacts (MNA) in electrodermal activity (EDA) signals, based on a one-dimensional U-Net architecture. EDA has been widely employed in diverse applications to assess sympathetic function. However, EDA signals can easily be corrupted by MNA, which frequently occur in wearable systems, particularly those used for ambulatory recording. MNA can lead to false decisions, resulting in inaccurate assessment and diagnosis. Several approaches have been proposed for MNA detection; however, questions remain regarding the generalizability of the algorithms and the feasibility of implementing them in real time, especially for those involving deep learning. In this work, we propose a deep learning approach based on a one-dimensional U-Net architecture using spectrograms of EDA for MNA detection. We developed our method using four distinct datasets, including two independent testing datasets, with a total of 9602 128-s EDA segments from 104 subjects. Our proposed scheme, including data augmentation, spectrogram computation, and the 1D U-Net, yielded balanced accuracies of 80.0 ± 13.7 % and 75.0 ± 14.0 % on the two independent test datasets; these results are better than or comparable to those of five other state-of-the-art methods. Additionally, the computation time of our feature computation and machine learning classification was significantly lower than that of the other methods (p < .001). The model requires only 0.28 MB of memory, far smaller than the two deep learning approaches (4.93 and 54.59 MB) used as comparisons in our study. Our model can be implemented in real time in embedded systems, even with limited memory and an inefficient microprocessor, without compromising the accuracy of MNA detection.
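
The spectrogram front end described above can be sketched with SciPy; the sampling rate and STFT parameters below are illustrative assumptions, not the paper's exact settings:

```python
# Sketch: turning a 128-s EDA segment into a spectrogram for a 1-D U-Net.
# The 8 Hz sampling rate and window sizes are assumed, illustrative values.
import numpy as np
from scipy.signal import spectrogram

FS = 8                                            # assumed EDA sample rate, Hz
segment = np.random.default_rng(1).standard_normal(128 * FS)  # one 128-s segment

freqs, times, sxx = spectrogram(segment, fs=FS, nperseg=64, noverlap=32)
log_sxx = np.log1p(sxx)                           # compress dynamic range
```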

14.
Radiography (Lond) ; 30(5): 1442-1450, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39179459

ABSTRACT

INTRODUCTION: No study has yet investigated the minimum amount of data required for deep learning-based liver contouring. Therefore, this study aimed to investigate the feasibility of automated liver contouring using limited data. METHODS: Radiotherapy planning computed tomography (CT) images were subjected to various preprocessing methods, such as denoising and windowing. Segmentation was conducted using modified Attention U-Net and Residual U-Net networks. The two modified networks were trained separately with different training set sizes. For each architecture, the model trained with the training set size that achieved the highest Dice similarity coefficient (DSC) score was selected for further evaluation. Two unseen external datasets with distributions different from the training set were also used to examine the generalizability of the proposed method. RESULTS: The modified Residual U-Net and Attention U-Net networks achieved average DSCs of 97.62% and 96.48%, respectively, on the test set, using 62 training cases. The average Hausdorff distances (AHDs) for the modified Residual U-Net and Attention U-Net networks were 0.57 mm and 0.71 mm, respectively. Tested on two unseen external datasets, the modified Residual U-Net and Attention U-Net networks achieved DSCs of 95.35% and 95.82% on data from another center and 95.16% and 94.93% on the AbdomenCT-1K dataset, respectively. CONCLUSION: This study demonstrates that deep learning models can accurately segment livers using a small training set. The method, utilizing simple preprocessing and modified network architectures, shows strong performance on unseen datasets, indicating its generalizability. IMPLICATIONS FOR PRACTICE: This promising result suggests the method's potential for automated liver contouring in radiotherapy planning.
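
The windowing preprocessing mentioned in the methods can be sketched as clipping Hounsfield units to a window and rescaling; the liver-type window centre and width below are typical assumed values, not necessarily those used in the study:

```python
# Sketch: CT intensity windowing, clip HU to [center - width/2, center + width/2]
# and rescale to [0, 1]. Center/width are assumed, illustrative values.
import numpy as np

def apply_window(hu, center=60.0, width=160.0):
    lo, hi = center - width / 2.0, center + width / 2.0
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)
```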


Subject(s)
Deep Learning; Liver; Radiotherapy Planning, Computer-Assisted; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Radiotherapy Planning, Computer-Assisted/methods; Liver/diagnostic imaging; Liver Neoplasms/radiotherapy; Liver Neoplasms/diagnostic imaging; Feasibility Studies
15.
Comput Biol Med ; 180: 109000, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39133952

ABSTRACT

Fetal health is evaluated using biometric parameters obtained from low-resolution ultrasound images. The accuracy of biometric parameters in existing protocols typically depends on conventional image processing approaches and is hence prone to error. This study introduces the Attention Gate Double U-Net with Guided Decoder (ADU-GD) model, specifically crafted for fetal biometric parameter prediction. The attention network and guided decoder are designed to dynamically merge local features with their global dependencies, enhancing the precision of parameter estimation. The ADU-GD displays superior performance, with a mean absolute error of 0.99 mm and a segmentation accuracy of 99.1 % when benchmarked against well-established models. The proposed model consistently achieved a high Dice index score of about 99.1 ± 0.8, with a minimal Hausdorff distance of about 1.01 ± 1.07 and a low Average Symmetric Surface Distance of about 0.25 ± 0.21, demonstrating the model's excellence. In a comprehensive evaluation, ADU-GD outperformed existing deep-learning models such as Double U-Net, DeepLabv3, FCN-32s, PSPNet, SegNet, Trans U-Net, Swin U-Net, Mask-R2CNN, and RDHCformer in terms of mean absolute error for crucial fetal dimensions, including head circumference, abdomen circumference, femur length, and biparietal diameter, achieving MAE values of 2.2 mm, 2.6 mm, 0.6 mm, and 1.2 mm, respectively.
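
The Hausdorff distance reported above measures the worst-case boundary disagreement between two contours; a minimal sketch using SciPy (toy point sets, not fetal contours):

```python
# Sketch: symmetric Hausdorff distance between two contours given as
# (n, 2) point sets, via SciPy's directed Hausdorff distance.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(contour_a, contour_b):
    """Max over both directed Hausdorff distances (same units as input)."""
    a = np.asarray(contour_a, float)
    b = np.asarray(contour_b, float)
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])
```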


Subject(s)
Fetus; Ultrasonography, Prenatal; Humans; Female; Ultrasonography, Prenatal/methods; Pregnancy; Fetus/diagnostic imaging; Fetus/anatomy & histology; Biometry/methods; Image Processing, Computer-Assisted/methods; Deep Learning; Neural Networks, Computer
16.
Diagnostics (Basel) ; 14(16)2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39202266

ABSTRACT

Post-mortem (PM) imaging has potential for identifying individuals by comparing ante-mortem (AM) and PM images. Radiographic images of bones contain significant information for personal identification. However, PM images are affected by soft tissue decomposition; therefore, it is desirable to extract only images of bones that change little over time. This study evaluated the effectiveness of U-Net for bone image extraction from two-dimensional (2D) X-ray images. Two types of pseudo 2D X-ray images were created from the PM computed tomography (CT) volumetric data using ray-summation processing for training U-Net. One was a projection of all body tissues, and the other was a projection of only bones. The performance of the U-Net for bone extraction was evaluated using Intersection over Union, Dice coefficient, and the area under the receiver operating characteristic curve. Additionally, AM chest radiographs were used to evaluate its performance with real 2D images. Our results indicated that bones could be extracted visually and accurately from both AM and PM images using U-Net. The extracted bone images could provide useful information for personal identification in forensic pathology.
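
The ray-summation projection described above reduces a CT volume to a pseudo 2D radiograph by summing attenuation along one axis; a minimal sketch, with an assumed illustrative bone threshold of +150 HU:

```python
# Sketch: pseudo 2-D X-ray images from a PM-CT volume by ray summation.
# One projection keeps all tissues; the other zeroes voxels below an
# assumed bone threshold before summing, approximating a bone-only image.
import numpy as np

def raysum_projections(ct_volume, bone_hu=150.0):
    """ct_volume: (slices, rows, cols) in HU. Returns (all-tissue, bone-only)."""
    vol = np.asarray(ct_volume, float)
    all_tissue = vol.sum(axis=0)                       # sum along the ray axis
    bone = np.where(vol >= bone_hu, vol, 0.0).sum(axis=0)
    return all_tissue, bone
```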

17.
Ultrasonics ; 144: 107439, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39180922

ABSTRACT

In observatory seismology, the automated processing of seismograms is a time-consuming task. A contemporary approach to seismogram processing is based on deep neural networks, which have been successfully applied in many fields. Here, we present a 4D network, based on the U-net architecture, that simultaneously processes seismograms from an entire network of stations. We also interpret acoustic emission data from a laboratory loading experiment; these data form a very good test set, similar to real seismograms. Our neural network is designed to detect multiple events, with input data created by augmentation from previously interpreted single events. The advantage of this approach is that the positions of the (multiple) events are exactly known, so detection efficiency can be evaluated. Although the method reaches an average efficiency of only around 30% for the onsets of individual traces, the average efficiency for the detection of double events was approximately 97% for a maximum target, with a prediction difference of 20 samples. This is the main benefit of simultaneous network-wide signal processing.
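
The augmentation described above, superposing interpreted single-event records so that multi-event onset positions are exactly known, can be sketched as follows (the function name and the simple additive model are illustrative):

```python
# Sketch: synthesizing a multi-event training waveform by shifting one
# interpreted single-event record and adding it onto another, so both
# onset positions are known exactly for evaluating detection efficiency.
import numpy as np

def superpose(trace_a, trace_b, offset):
    """Shift trace_b right by `offset` samples and add it onto trace_a."""
    out = trace_a.astype(float).copy()
    n = min(len(trace_b), len(out) - offset)   # clip at the record end
    out[offset:offset + n] += trace_b[:n]
    return out
```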

18.
Biomed Phys Eng Express ; 10(6)2024 Sep 13.
Article in English | MEDLINE | ID: mdl-39214119

ABSTRACT

Echocardiography is one of the most commonly used imaging modalities for the diagnosis of congenital heart disease. Echocardiographic image analysis is crucial to obtaining accurate cardiac anatomy information. Semantic segmentation models can be used to precisely delimit the borders of the left ventricle and allow an accurate, automatic identification of the region of interest, which can be extremely useful for cardiologists. In the field of computer vision, convolutional neural network (CNN) architectures remain dominant. Existing CNN approaches have proved highly efficient for the segmentation of various medical images over the past decade. However, these solutions usually struggle to capture long-range dependencies, especially for images with objects of different scales and complex structures. In this study, we present an efficient method for the semantic segmentation of echocardiographic images that overcomes these challenges by leveraging the self-attention mechanism of the Transformer architecture. The proposed solution extracts long-range dependencies and efficiently processes objects at different scales, improving performance in a variety of tasks. We introduce Shifted Windows Transformer models (Swin Transformers), which encode both the content of anatomical structures and the relationships between them. Our solution combines the Swin Transformer and U-Net architectures, producing a U-shaped variant. The proposed method is validated with the EchoNet-Dynamic dataset used to train our model. The results show an accuracy of 0.97, a Dice coefficient of 0.87, and an Intersection over Union (IoU) of 0.78. Swin Transformer models are promising for semantically segmenting echocardiographic images and may help cardiologists automatically analyze and measure complex echocardiographic images.


Subject(s)
Algorithms; Echocardiography; Image Processing, Computer-Assisted; Neural Networks, Computer; Humans; Echocardiography/methods; Image Processing, Computer-Assisted/methods; Heart Ventricles/diagnostic imaging; Heart Defects, Congenital/diagnostic imaging; Heart/diagnostic imaging
19.
Biomed Phys Eng Express ; 10(5)2024 Aug 19.
Article in English | MEDLINE | ID: mdl-39094595

ABSTRACT

Dynamic 2-[18F]fluoro-2-deoxy-D-glucose positron emission tomography (dFDG-PET) for human brain imaging has considerable clinical potential, yet its utilization remains limited. A key challenge in the quantitative analysis of dFDG-PET is characterizing a patient-specific blood input function, traditionally reliant on invasive arterial blood sampling. This research introduces a novel approach employing non-invasive deep learning model-based computations from the internal carotid arteries (ICA) with partial volume (PV) corrections, thereby eliminating the need for invasive arterial sampling. We present an end-to-end pipeline incorporating a 3D U-Net based ICA-net for ICA segmentation, alongside a Recurrent Neural Network (RNN) based MCIF-net for the derivation of a model-corrected blood input function (MCIF) with PV corrections. The developed 3D U-Net and RNN were trained and validated using a 5-fold cross-validation approach on 50 human brain FDG PET scans. The ICA-net achieved an average Dice score of 82.18% and an Intersection over Union of 68.54% across all tested scans. Furthermore, the MCIF-net exhibited a minimal root mean squared error of 0.0052. The application of this pipeline to ground truth data for dFDG-PET brain scans resulted in the precise localization of seizure onset regions, which contributed to a successful clinical outcome, with the patient achieving a seizure-free state after treatment. These results underscore the efficacy of the ICA-net and MCIF-net deep learning pipeline in learning the ICA structure's distribution and automating MCIF computation with PV corrections. This advancement marks a significant leap in non-invasive neuroimaging.


Subject(s)
Brain; Deep Learning; Fluorodeoxyglucose F18; Positron-Emission Tomography; Humans; Positron-Emission Tomography/methods; Brain/diagnostic imaging; Brain/blood supply; Image Processing, Computer-Assisted/methods; Brain Mapping/methods; Neural Networks, Computer; Carotid Artery, Internal/diagnostic imaging; Male; Algorithms; Female; Radiopharmaceuticals
20.
J Imaging ; 10(8)2024 Aug 21.
Article in English | MEDLINE | ID: mdl-39194991

ABSTRACT

Liver segmentation technologies play vital roles in clinical diagnosis, disease monitoring, and surgical planning due to the complex anatomical structure and physiological functions of the liver. This paper provides a comprehensive review of the developments, challenges, and future directions in liver segmentation technology. We systematically analyzed high-quality research published between 2014 and 2024, focusing on liver segmentation methods, public datasets, and evaluation metrics. This review highlights the transition from manual to semi-automatic and fully automatic segmentation methods, describes the capabilities and limitations of available technologies, and provides future outlooks.
