Results 1 - 20 of 1,001
1.
J Imaging Inform Med ; 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39227537

ABSTRACT

Thermography is a non-invasive, non-contact method for detecting cancer in its initial stages by examining the temperature variation between the two breasts. Preprocessing steps such as resizing, ROI (region of interest) segmentation, and augmentation are frequently used to enhance the accuracy of breast thermogram analysis. This study proposes DTCWAU-Net, a modified U-Net architecture that uses the dual-tree complex wavelet transform (DTCWT) and attention gates to segment frontal- and lateral-view breast thermograms, outlining the ROI for potential tumor detection. The proposed approach achieved an average Dice coefficient of 93.03% and a sensitivity of 94.82%, showcasing its potential for accurate breast thermogram segmentation. Segmented thermograms were then classified as healthy or cancerous using texture- and histogram-based features together with deep features. Feature selection was performed with Neighborhood Component Analysis (NCA), followed by machine learning classifiers. Compared with other state-of-the-art approaches to thermogram-based breast cancer detection, the proposed methodology achieved a higher accuracy of 99.90% using VGG16 deep features with NCA and a Random Forest classifier. These results indicate that the proposed method can be used in breast cancer screening, facilitating early detection and improving treatment outcomes.
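The Dice coefficient and sensitivity reported above are standard overlap metrics for binary segmentation masks; a minimal sketch of how they are computed (toy masks, not the paper's data):

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom else 1.0

def sensitivity(pred, target):
    """True-positive rate: TP / (TP + FN)."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    fn = np.logical_and(~pred, target).sum()
    return tp / (tp + fn) if (tp + fn) else 1.0

# Toy 4x4 masks: the prediction misses one positive pixel.
gt   = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
pred = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
```

Here Dice is 2·3/(3+4) ≈ 0.857 and sensitivity is 3/4 = 0.75.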

2.
Skeletal Radiol ; 2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39230576

ABSTRACT

OBJECTIVE: A fully automated laminar cartilage composition (MRI-based T2) analysis method was technically and clinically validated by comparing radiographically normal knees with contra-lateral joint space narrowing (CL-JSN) and those without contra-lateral JSN or other signs of radiographic osteoarthritis (OA; CL-noROA). MATERIALS AND METHODS: 2D U-Nets were trained on manually segmented femorotibial cartilages (n = 72) from all seven echoes (AllE), or from the first echo only (1stE), of multi-echo spin-echo (MESE) MRIs acquired by the Osteoarthritis Initiative (OAI). Because of its greater accuracy, only the AllE U-Net was then applied to knees from the OAI healthy reference cohort (n = 10), CL-JSN knees (n = 39), and (1:1) matched CL-noROA knees (n = 39), all of which had manual expert segmentation, and to 982 non-matched CL-noROA knees without expert segmentation. RESULTS: The agreement (Dice similarity coefficient) between automated and manual expert cartilage segmentation was between 0.82 ± 0.05/0.79 ± 0.06 (AllE/1stE) and 0.88 ± 0.03/0.88 ± 0.03 (AllE/1stE) across femorotibial cartilage plates. The deviation between automated and manually derived laminar T2 reached up to -2.2 ± 2.6 ms / +4.1 ± 10.2 ms (AllE/1stE). The AllE U-Net showed a similar sensitivity to cross-sectional laminar T2 differences between CL-JSN and CL-noROA knees in the matched (Cohen's D ≤ 0.54) and the non-matched (D ≤ 0.54) comparison as the matched manual analyses (D ≤ 0.48). Longitudinally, the AllE U-Net also showed a similar sensitivity to CL-JSN vs. CL-noROA differences in the matched (D ≤ 0.51) and the non-matched (D ≤ 0.43) comparison as the matched manual analyses (D ≤ 0.41). CONCLUSION: Compared with manual expert analysis, the fully automated T2 analysis showed high agreement, acceptable accuracy, and similar sensitivity to cross-sectional and longitudinal laminar T2 differences in an early OA model. TRIAL REGISTRATION: ClinicalTrials.gov identifier: NCT00080171.
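Cohen's D, used above to compare laminar T2 differences between groups, is the difference in group means divided by the pooled standard deviation; a minimal sketch for two independent groups:

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's D with pooled standard deviation (two independent groups)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)
```

For example, groups [1, 2, 3, 4] and [2, 3, 4, 5] differ by one pooled standard deviation times about 0.775 (in magnitude).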

3.
Bioinform Biol Insights ; 18: 11779322241272387, 2024.
Article in English | MEDLINE | ID: mdl-39246684

ABSTRACT

Objectives: This article focuses on the detection of cells in low-contrast brightfield microscopy images, in our case chronic lymphocytic leukaemia cells. The automatic detection of cells in brightfield time-lapse microscopy images opens new opportunities in cell morphology and migration studies; to achieve the desired results, it is advisable to use state-of-the-art image segmentation methods that not only detect each cell but also delineate its boundary with the highest possible accuracy, thereby defining its shape and dimensions. Methods: We compared eight state-of-the-art neural network architectures with different backbone encoders for image segmentation, namely U-net, U-net++, the Pyramid Attention Network, the Multi-Attention Network, LinkNet, the Feature Pyramid Network, DeepLabV3, and DeepLabV3+. Each network was trained for 1000 epochs using the PyTorch and PyTorch Lightning libraries. For instance segmentation, the watershed algorithm and three-class semantic segmentation were used. We also used StarDist, a deep learning-based tool for detecting objects with star-convex shapes. Results: The optimal combination for semantic segmentation was the U-net++ architecture with a ResNeSt-269 backbone, with an intersection-over-union score of 0.8902 on our dataset. For the cell characteristics examined (area, circularity, solidity, perimeter, radius, and shape index), the differences in mean values between the chronic lymphocytic leukaemia cell segmentation approaches were statistically significant (Mann-Whitney U test, P < .0001). Conclusion: Overall, the algorithms agree equally well with the ground truth, but the comparison shows that the different approaches favour different morphological features of the cells. Consequently, choosing the most suitable method for instance-based cell segmentation depends on the particular application, namely the specific cellular traits being investigated.
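The intersection-over-union score used to rank the architectures can be computed directly from binary masks; a minimal sketch (toy masks, not the study's data):

```python
import numpy as np

def iou(pred, target):
    """Intersection-over-Union (Jaccard index) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0
```

For a prediction covering three of four ground-truth pixels and nothing else, the intersection is 3 and the union is 4, giving an IoU of 0.75.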

4.
Sensors (Basel) ; 24(17)2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39275756

ABSTRACT

Liver cancer is one of the malignancies with high mortality rates worldwide, and its timely detection and accurate diagnosis are crucial for improving patient prognosis. To address the limitations of traditional image segmentation techniques and the U-Net network in capturing fine image features, this study proposes an improved model based on the U-Net architecture, named RHEU-Net. By replacing traditional convolution modules in the encoder and decoder with improved residual modules, the network's feature extraction capabilities and gradient stability are enhanced. A Hybrid Gated Attention (HGA) module is integrated before the skip connections, enabling the parallel processing of channel and spatial attentions, optimizing the feature fusion strategy, and effectively replenishing image details. A Multi-Scale Feature Enhancement (MSFE) layer is introduced at the bottleneck, utilizing multi-scale feature extraction technology to further enhance the expression of receptive fields and contextual information, improving the overall feature representation effect. Testing on the LiTS2017 dataset demonstrated that RHEU-Net achieved Dice scores of 95.72% for liver segmentation and 70.19% for tumor segmentation. These results validate the effectiveness of RHEU-Net and underscore its potential for clinical application.


Subject(s)
Image Processing, Computer-Assisted; Liver Neoplasms; Neural Networks, Computer; Humans; Liver Neoplasms/diagnostic imaging; Liver Neoplasms/pathology; Image Processing, Computer-Assisted/methods; Algorithms; Liver/diagnostic imaging; Liver/pathology
5.
Heliyon ; 10(16): e35933, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39258194

ABSTRACT

The growing interest in subseasonal-to-seasonal (S2S) prediction data across different industries underscores its potential for understanding weather patterns, extreme conditions, and key sectors such as agriculture and energy management. However, concerns about its accuracy have been raised, and improving the precision of rainfall prediction remains challenging in S2S forecasts. This study enhanced S2S prediction skill for precipitation amount and occurrence over the East Asian region by employing deep learning-based post-processing techniques. We used a modified U-Net architecture that wraps all of its convolutional layers in TimeDistributed layers as the deep learning model. For the training datasets, precipitation predictions from six S2S climate models and their multi-model ensemble (MME) were assembled, and daily precipitation occurrence was derived from three threshold values: 0% of the daily precipitation for no-rain events, <33% for light rain, and >67% for heavy rain. For precipitation amount, deep learning-based post-processing outperformed post-processing using multiple linear regression (MLR) at lead times of weeks 2-4. The prediction accuracy of precipitation occurrence did not improve significantly with MLR-based post-processing, whereas deep learning-based post-processing improved accuracy across all lead times, demonstrating its superiority over MLR. Overall, deep learning-based post-processing improved the accuracy of both the amount and the occurrence of precipitation forecast by the individual climate models.
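Assuming the light/heavy categories correspond to tercile thresholds of rainy-day amounts (an interpretation; the abstract does not spell out the exact binning), the occurrence labelling could be sketched as follows, with `occurrence_classes` a hypothetical helper:

```python
import numpy as np

def occurrence_classes(precip):
    """Label each day: 0 = no rain, 1 = light rain, 2 = heavy rain
    (>= the 67th percentile of rainy-day amounts). Days between the two
    percentiles are labelled 1 here for simplicity -- an assumption, since
    the paper's exact binning is not given in the abstract."""
    precip = np.asarray(precip, float)
    rainy = precip[precip > 0]
    lo, hi = np.percentile(rainy, [33, 67])  # tercile thresholds
    labels = np.zeros(precip.shape, dtype=int)
    labels[precip > 0] = 1       # any rain at all
    labels[precip >= hi] = 2     # heavy-rain tercile
    return labels
```

For daily amounts [0, 1, 2, 3, 10] this yields one no-rain day, three light-rain days, and one heavy-rain day.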

6.
Sci Rep ; 14(1): 21298, 2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39266655

ABSTRACT

Learning operators with deep neural networks is an emerging paradigm for scientific computing. Deep Operator Network (DeepONet) is a modular operator learning framework that allows for flexibility in choosing the kind of neural network to be used in the trunk and/or branch of the DeepONet. This is beneficial as it has been shown many times that different types of problems require different kinds of network architectures for effective learning. In this work, we design an efficient neural operator based on the DeepONet architecture. We introduce U-Net enhanced DeepONet (U-DeepONet) for learning the solution operator of highly complex CO2-water two-phase flow in heterogeneous porous media. The U-DeepONet is more accurate in predicting gas saturation and pressure buildup than the state-of-the-art U-Net based Fourier Neural Operator (U-FNO) and the Fourier-enhanced Multiple-Input Operator (Fourier-MIONet) trained on the same dataset. Moreover, our U-DeepONet is significantly more efficient in training times than both the U-FNO (more than 18 times faster) and the Fourier-MIONet (more than 5 times faster), while consuming less computational resources. We also show that the U-DeepONet is more data efficient and better at generalization than both the U-FNO and the Fourier-MIONet.
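The DeepONet read-out combines a branch network (encoding the input function sampled at sensor points) with a trunk network (encoding the query location) via a dot product; a toy sketch with random linear maps standing in for the trained networks (an illustration of the architecture's read-out only, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "networks": the branch maps m sensor samples of the input
# function f to p coefficients; the trunk maps a query location y to
# p basis values. The operator output is their inner product.
m, p = 16, 8
W_branch = rng.standard_normal((p, m))
W_trunk = rng.standard_normal((p, 1))

def deeponet_output(f_sensors, y):
    b = W_branch @ f_sensors                # branch coefficients b_k(f)
    t = np.tanh(W_trunk @ np.array([y]))    # trunk basis t_k(y)
    return float(b @ t)                     # u(y) ~= sum_k b_k * t_k
```

The modularity the abstract highlights comes from the fact that either map can be swapped for any architecture (CNN, U-Net, MLP) without changing this read-out.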

7.
Data Brief ; 56: 110852, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39281010

ABSTRACT

Detecting and screening clouds is the first step in most optical remote sensing analyses. Cloud formation is diverse, presenting many shapes, thicknesses, and altitudes. This variety poses a significant challenge to the development of effective cloud detection algorithms, as most datasets lack an unbiased representation. To address this issue, we have built CloudSEN12+, a significant expansion of the CloudSEN12 dataset. This new dataset doubles the expert-labeled annotations, making it the largest cloud and cloud shadow detection dataset for Sentinel-2 imagery to date. We have carefully reviewed and refined our previous annotations to ensure maximum trustworthiness. We expect CloudSEN12+ to be a valuable resource for the cloud detection research community.

8.
Heliyon ; 10(17): e36248, 2024 Sep 15.
Article in English | MEDLINE | ID: mdl-39286137

ABSTRACT

This proposed work explores how machine learning can be used to diagnose conjunctivitis, a common eye ailment. The main goal of the study is to capture eye images using camera-based systems, perform image pre-processing, and apply image segmentation techniques, particularly the U-Net and U-Net++ models. Additionally, the study involves extracting features from the relevant areas within the segmented images and using convolutional neural networks for classification, all implemented in TensorFlow, a well-known machine learning platform. Both the U-Net and U-Net++ segmentation models are thoroughly trained and assessed, with a comprehensive analysis focusing on their accuracy and performance. The models are further evaluated on both the UBIRIS dataset and a custom dataset created for this specific research. The experimental results show a substantial improvement in segmentation quality with the U-Net++ model, which achieved an overall accuracy of 97.07%, outperforming the traditional U-Net model. These outcomes highlight the potential of U-Net++ as a valuable advancement in machine learning-based conjunctivitis diagnosis.

9.
Magn Reson Med ; 2024 Sep 13.
Article in English | MEDLINE | ID: mdl-39270056

ABSTRACT

PURPOSE: To shorten CEST acquisition time by leveraging Z-spectrum undersampling combined with deep learning for CEST map construction from undersampled Z-spectra. METHODS: Fisher information gain analysis identified optimal frequency offsets (termed "Fisher offsets") for the multi-pool fitting model, maximizing information gain for the amplitude and the FWHM parameters. These offsets guided initial subsampling levels. A U-NET, trained on undersampled brain CEST images from 18 volunteers, produced CEST maps at 3 T with varied undersampling levels. Feasibility was first tested using retrospective undersampling at three levels, followed by prospective in vivo undersampling (15 of 53 offsets), reducing scan time significantly. Additionally, glioblastoma grade IV pathology was simulated to evaluate network performance in patient-like cases. RESULTS: Traditional multi-pool models failed to quantify CEST maps from undersampled images (structural similarity index [SSIM] <0.2, peak SNR <20, Pearson r <0.1). Conversely, U-NET fitting successfully addressed undersampled data challenges. The study suggests CEST scan time reduction is feasible by undersampling 15, 25, or 35 of 53 Z-spectrum offsets. Prospective undersampling cut scan time by 3.5 times, with a maximum mean squared error of 4.4e-4, r = 0.82, and SSIM = 0.84, compared to the ground truth. The network also reliably predicted CEST values for simulated glioblastoma pathology. CONCLUSION: The U-NET architecture effectively quantifies CEST maps from undersampled Z-spectra at various undersampling levels.
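Peak SNR, one of the reconstruction metrics reported above, compares the maximum possible signal level to the mean squared reconstruction error; a minimal sketch:

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    if mse == 0:
        return np.inf
    return 10.0 * np.log10(data_range ** 2 / mse)
```

A uniform error of 0.1 on a unit data range gives an MSE of 0.01 and hence a PSNR of 20 dB, illustrating why the multi-pool fits with PSNR below 20 were considered failures.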

10.
Comput Biol Med ; 182: 109139, 2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39270456

ABSTRACT

We developed a method for automated detection of motion and noise artifacts (MNA) in electrodermal activity (EDA) signals, based on a one-dimensional U-Net architecture. EDA has been widely employed in diverse applications to assess sympathetic functions. However, EDA signals can be easily corrupted by MNA, which frequently occur in wearable systems, particularly those used for ambulatory recording. MNA can lead to false decisions, resulting in inaccurate assessment and diagnosis. Several approaches have been proposed for MNA detection; however, questions remain regarding the generalizability and the feasibility of implementation of the algorithms in real-time especially those involving deep learning approaches. In this work, we propose a deep learning approach based on a one-dimensional U-Net architecture using spectrograms of EDA for MNA detection. We developed our method using four distinct datasets, including two independent testing datasets, with a total of 9602 128-s EDA segments from 104 subjects. Our proposed scheme, including data augmentation, spectrogram computation, and 1D U-Net, yielded balanced accuracies of 80.0 ± 13.7 % and 75.0 ± 14.0 % for the two independent test datasets; these results are better than or comparable to those of other five state-of-the-art methods. Additionally, the computation time of our feature computation and machine learning classification was significantly lower than that of other methods (p < .001). The model requires only 0.28 MB of memory, which is far smaller than the two deep learning approaches (4.93 and 54.59 MB) which were used as comparisons to our study. Our model can be implemented in real-time in embedded systems, even with limited memory and an inefficient microprocessor, without compromising the accuracy of MNA detection.
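A magnitude spectrogram, the time-frequency representation fed to the 1D U-Net here, can be computed with a windowed short-time FFT; a minimal sketch (the window and hop sizes are illustrative, not the paper's):

```python
import numpy as np

def spectrogram(x, win=64, hop=32):
    """Magnitude spectrogram via a Hann-windowed short-time FFT.
    Returns an array of shape (frequency bins, time frames)."""
    x = np.asarray(x, float)
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T
```

For a 256-sample signal with a 64-sample window and 32-sample hop, this produces 33 frequency bins by 7 time frames.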

11.
Strahlenther Onkol ; 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39283345

ABSTRACT

BACKGROUND: Our study examined the hypothesis that changing network layers can increase the accuracy of dose distribution prediction, instead of expanding their dimensions, which requires complex calculations. MATERIALS AND METHODS: A total of 137 prostate cancer patients treated with the tomotherapy technique were split into 80% for training and validation and 20% for testing of the nested UNet and UNet architectures. Mean absolute error (MAE) was used to measure the dosimetry indices of the dose-volume histograms (DVHs), and geometry indices, including the structural similarity index measure (SSIM), Dice similarity coefficient (DSC), and Jaccard similarity coefficient (JSC), were used to evaluate isodose volume (IV) similarity. To verify statistically significant differences, the two-sided Wilcoxon test was used at a significance level of 0.05 (p < 0.05). RESULTS: Use of a nested UNet architecture reduced the predicted-dose MAE in the DVH indices. The MAE for the planning target volume (PTV), bladder, rectum, and right and left femur were D98% = 1.11 ± 0.90; D98% = 2.27 ± 2.85, Dmean = 0.84 ± 0.62; D98% = 1.47 ± 12.02, Dmean = 0.77 ± 1.59; D2% = 0.65 ± 0.70, Dmean = 0.96 ± 2.82; and D2% = 1.18 ± 6.65, Dmean = 0.44 ± 1.13, respectively. Additionally, the greatest geometric similarity was observed in the mean SSIM (0.91 for UNet vs. 0.94 for nested UNet). CONCLUSION: The nested UNet can be considered a suitable network owing to its ability to improve the accuracy of dose distribution prediction over the UNet network in an acceptable time.

12.
Technol Health Care ; 2024 Aug 19.
Article in English | MEDLINE | ID: mdl-39240595

ABSTRACT

BACKGROUND: Liver cancer poses a significant health challenge due to its high incidence rates and complexities in detection and treatment. Accurate segmentation of liver tumors using medical imaging plays a crucial role in early diagnosis and treatment planning. OBJECTIVE: This study proposes a novel approach combining U-Net and ResNet architectures with the Adam optimizer and sigmoid activation function. The method leverages ResNet's deep residual learning to address training issues in deep neural networks. At the same time, U-Net's structure facilitates capturing local and global contextual information essential for precise tumor characterization. The model aims to enhance segmentation accuracy by effectively capturing intricate tumor features and contextual details by integrating these architectures. The Adam optimizer expedites model convergence by dynamically adjusting the learning rate based on gradient statistics during training. METHODS: To validate the effectiveness of the proposed approach, segmentation experiments are conducted on a diverse dataset comprising 130 CT scans of liver cancers. Furthermore, a state-of-the-art fusion strategy is introduced, combining the robust feature learning capabilities of the UNet-ResNet classifier with Snake-based Level Set Segmentation. RESULTS: Experimental results demonstrate impressive performance metrics, including an accuracy of 0.98 and a minimal loss of 0.10, underscoring the efficacy of the proposed methodology in liver cancer segmentation. CONCLUSION: This fusion approach effectively delineates complex and diffuse tumor shapes, significantly reducing errors.
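The Adam optimizer mentioned above adapts each parameter's step size using running first- and second-moment estimates of the gradient; a single update step can be sketched as:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: bias-corrected first/second moment estimates
    scale the step per parameter. t is the 1-based step counter."""
    m = b1 * m + (1 - b1) * grad            # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2       # second-moment estimate
    m_hat = m / (1 - b1 ** t)               # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

On the very first step the bias corrections cancel the moment decay, so the parameter moves by roughly the learning rate in the direction of the gradient's sign.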

13.
Ultrasound Med Biol ; 2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39244483

ABSTRACT

OBJECTIVE: As metabolic dysfunction-associated steatotic liver disease (MASLD) becomes more prevalent worldwide, it is imperative to create more accurate technologies that make it easy to assess the liver in a point-of-care setting. The aim of this study is to test the performance of a new software tool implemented in Velacur (Sonic Incytes), a liver stiffness and ultrasound attenuation measurement device, on patients with MASLD. This tool employs a deep learning-based method to detect and segment shear waves in the liver tissue for subsequent analysis, improving tissue characterization for patient diagnosis. METHODS: The new tool consists of a deep learning-based algorithm with a U-Net architecture, trained on 15,045 expert-segmented images from 103 patients. The algorithm was then tested on 4429 images from 36 volunteers and patients with MASLD. Test subjects were scanned at different clinics by different Velacur operators. Evaluation was performed both on individual images (image-based) and averaged across all images collected from a patient (patient-based). Ground truth was defined by expert segmentation of the shear waves within each image. For evaluation, the sensitivity and specificity of correct wave detection in the image were calculated, and for images containing waves, the Dice coefficient was calculated. A prototype of the software tool was also implemented on Velacur and assessed by operators in real-world settings. RESULTS: The wave detection algorithm had a sensitivity of 81% and a specificity of 84%, with Dice coefficients of 0.74 and 0.75 for the image-based and patient-based averages, respectively. Implementing this software tool as an overlay on the B-mode ultrasound improved the quality of the exams collected by operators. CONCLUSION: The shear wave algorithm performed well on a test set of volunteers and patients with metabolic dysfunction-associated steatotic liver disease. The addition of this software tool, implemented on the Velacur system, improved the quality of liver assessments performed in a real-world, point-of-care setting.

14.
J Imaging Inform Med ; 2024 Aug 08.
Article in English | MEDLINE | ID: mdl-39117939

ABSTRACT

To propose a deep learning framework, "SpineCurve-net," for automated measurement of 3D Cobb angles from computed tomography (CT) images of presurgical scoliosis patients. A total of 116 scoliosis patients were analyzed, divided into a training set of 89 patients (average age 32.4 ± 24.5 years) and a validation set of 27 patients (average age 17.3 ± 5.8 years). Vertebral identification and curve fitting were achieved through U-net and NURBS-net, resulting in a Non-Uniform Rational B-Spline (NURBS) curve of the spine. The 3D Cobb angles were measured in two ways: the predicted 3D Cobb angle (PRED-3D-CA), the maximum value in the smoothed angle map derived from the NURBS curve, and the 2D mapping Cobb angle (MAP-2D-CA), the maximal angle formed by the tangent vectors along the projected 2D spinal curve. The model segmented spinal masks effectively, capturing easily missed vertebral bodies, and spoke-kernel filtering distinguished vertebral regions and centralized the spinal curves. The Cobb angle measurements of the SpineCurve-net method (PRED-3D-CA and MAP-2D-CA) correlated strongly with the surgeons' annotated Cobb angles (ground truth, GT) based on 2D radiographs, with high Pearson correlation coefficients of 0.983 and 0.934, respectively. This paper proposed an automated technique for calculating the 3D Cobb angle in preoperative scoliosis patients, yielding results highly correlated with traditional 2D Cobb angle measurements. Given its capacity to accurately represent the three-dimensional nature of spinal deformities, this method shows potential for aiding physicians in developing more precise surgical strategies.
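The MAP-2D-CA idea of taking the maximal angle formed by tangent vectors along the projected spinal curve can be sketched for a polyline (a simplification of the NURBS curve used in the paper):

```python
import numpy as np

def cobb_angle_2d(points):
    """Largest angle (degrees) between tangent vectors along a 2D curve,
    approximated by finite differences on a polyline."""
    pts = np.asarray(points, float)
    tangents = np.diff(pts, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    cosines = tangents @ tangents.T         # pairwise tangent cosines
    return float(np.degrees(np.arccos(np.clip(cosines.min(), -1.0, 1.0))))
```

For a polyline that turns through a right angle, the most divergent pair of tangents is perpendicular, so the measured angle is 90 degrees.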

15.
Skin Res Technol ; 30(8): e13783, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39113617

ABSTRACT

BACKGROUND: In recent years, the increasing prevalence of skin cancers, particularly malignant melanoma, has become a major concern for public health. The development of accurate automated segmentation techniques for skin lesions holds immense potential in alleviating the burden on medical professionals. It is of substantial clinical importance for the early identification and intervention of skin cancer. Nevertheless, the irregular shape, uneven color, and noise interference of the skin lesions have presented significant challenges to the precise segmentation. Therefore, it is crucial to develop a high-precision and intelligent skin lesion segmentation framework for clinical treatment. METHODS: A precision-driven segmentation model for skin cancer images is proposed based on the Transformer U-Net, called BiADATU-Net, which integrates the deformable attention Transformer and bidirectional attention blocks into the U-Net. The encoder part utilizes deformable attention Transformer with dual attention block, allowing adaptive learning of global and local features. The decoder part incorporates specifically tailored scSE attention modules within skip connection layers to capture image-specific context information for strong feature fusion. Additionally, deformable convolution is aggregated into two different attention blocks to learn irregular lesion features for high-precision prediction. RESULTS: A series of experiments are conducted on four skin cancer image datasets (i.e., ISIC2016, ISIC2017, ISIC2018, and PH2). The findings show that our model exhibits satisfactory segmentation performance, all achieving an accuracy rate of over 96%. CONCLUSION: Our experiment results validate the proposed BiADATU-Net achieves competitive performance supremacy compared to some state-of-the-art methods. It is potential and valuable in the field of skin lesion segmentation.


Subject(s)
Melanoma; Skin Neoplasms; Humans; Skin Neoplasms/diagnostic imaging; Skin Neoplasms/pathology; Melanoma/diagnostic imaging; Melanoma/pathology; Algorithms; Neural Networks, Computer; Image Processing, Computer-Assisted/methods; Image Interpretation, Computer-Assisted/methods; Dermoscopy/methods; Deep Learning
16.
Med Phys ; 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39088756

ABSTRACT

BACKGROUND: The quality of treatment plans for breast cancer can vary greatly. This variation could be reduced by using dose prediction to automate treatment planning. Our work investigates novel methods for training deep-learning models that are capable of producing high-quality dose predictions for breast cancer treatment planning. PURPOSE: The goal of this work was to compare the performance impact of two novel techniques for deep learning dose prediction models for tangent field treatments for breast cancer. The first technique, a "glowing" mask algorithm, encodes the distance from a contour into each voxel in a mask. The second, a gradient-weighted mean squared error (MSE) loss function, emphasizes the error in high-dose gradient regions in the predicted image. METHODS: Four 3D U-Net deep learning models were trained using the planning CT and contours of the heart, lung, and tumor bed as inputs. The dataset consisted of 305 treatment plans split into 213/46/46 training/validation/test sets using a 70/15/15% split. We compared the impact of novel "glowing" anatomical mask inputs and a novel gradient-weighted MSE loss function to their standard counterparts, binary anatomical masks, and MSE loss, using an ablation study methodology. To assess performance, we examined the mean error and mean absolute error (ME/MAE) in dose across all within-body voxels, the error in mean dose to heart, ipsilateral lung, and tumor bed, dice similarity coefficient (DSC) across isodose volumes defined by 0%-100% prescribed dose thresholds, and gamma analysis (3%/3 mm). RESULTS: The combination of novel glowing masks and gradient weighted loss function yielded the best-performing model in this study. This model resulted in a mean ME of 0.40%, MAE of 2.70%, an error in mean dose to heart and lung of -0.10 and 0.01 Gy, and an error in mean dose to the tumor bed of -0.01%. The median DSC at 50/95/100% isodose levels were 0.91/0.87/0.82. The mean 3D gamma pass rate (3%/3 mm) was 93%. 
CONCLUSIONS: This study found the combination of novel anatomical mask inputs and loss function for dose prediction resulted in superior performance to their standard counterparts. These results have important implications for the field of radiotherapy dose prediction, as the methods used here can be easily incorporated into many other dose prediction models for other treatment sites. Additionally, this dose prediction model for breast radiotherapy has sufficient performance to be used in an automated planning pipeline for tangent field radiotherapy and has the major benefit of not requiring a PTV for accurate dose prediction.
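The gradient-weighted MSE idea described above, up-weighting errors in high-dose-gradient regions, can be sketched as follows (the weighting form and the `alpha` factor are assumptions for illustration; the abstract does not give the exact formula):

```python
import numpy as np

def gradient_weighted_mse(pred, target, alpha=1.0):
    """MSE where each voxel's squared error is weighted by the local
    gradient magnitude of the ground-truth dose, emphasizing
    high-dose-gradient regions."""
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    gy, gx = np.gradient(target)
    weight = 1.0 + alpha * np.hypot(gx, gy)   # 1 in flat regions, >1 at edges
    return float(np.mean(weight * (pred - target) ** 2))
```

In perfectly flat dose regions the weight reduces to 1, so the loss falls back to plain MSE there.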

17.
J Imaging Inform Med ; 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39103563

ABSTRACT

Obstructive sleep apnea is characterized by a decrease or cessation of breathing due to repetitive closure of the upper airway during sleep, leading to a decrease in blood oxygen saturation. In this study, employing a U-Net model, we utilized drug-induced sleep endoscopy images to segment the major causes of airway obstruction, including the epiglottis, oropharynx lateral walls, and tongue base. The evaluation metrics included sensitivity, specificity, accuracy, and Dice score, with airway sensitivity at 0.93 (± 0.06), specificity at 0.96 (± 0.01), accuracy at 0.95 (± 0.01), and Dice score at 0.84 (± 0.03), indicating overall high performance. The results indicate the potential for artificial intelligence (AI)-driven automatic interpretation of sleep disorder diagnosis, with implications for standardizing medical procedures and improving healthcare services. The study suggests that advancements in AI technology hold promise for enhancing diagnostic accuracy and treatment efficacy in sleep and respiratory disorders, fostering competitiveness in the medical AI market.

18.
J Imaging Inform Med ; 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39103564

ABSTRACT

Retinal vessel segmentation is crucial for the diagnosis of ophthalmic and cardiovascular diseases. However, retinal vessels are densely and irregularly distributed, with many capillaries blending into the background, and exhibit low contrast. Moreover, encoder-decoder networks for retinal vessel segmentation suffer from irreversible loss of detailed features due to repeated encoding and decoding, leading to incorrect segmentation of the vessels. Meanwhile, single-dimensional attention mechanisms have limitations, neglecting the importance of multidimensional features. To solve these issues, in this paper we propose a detail-enhanced attention feature fusion network (DEAF-Net) for retinal vessel segmentation. First, the detail-enhanced residual block (DERB) module is proposed to strengthen the capacity for detailed representation, ensuring that intricate features are efficiently maintained during the segmentation of delicate vessels. Second, the multidimensional collaborative attention encoder (MCAE) module is proposed to optimize the extraction of multidimensional information. Then, the dynamic decoder (DYD) module is introduced to preserve spatial information during decoding and to reduce the information loss caused by upsampling operations. Finally, the proposed detail-enhanced feature fusion (DEFF) module, composed of the DERB, MCAE, and DYD modules, fuses feature maps from both encoding and decoding and achieves effective aggregation of multi-scale contextual information. Experiments on the DRIVE, CHASEDB1, and STARE datasets achieved sensitivities of 0.8305, 0.8784, and 0.8654, and AUCs of 0.9886, 0.9913, and 0.9911, respectively, demonstrating the performance of the proposed network, particularly in the segmentation of fine retinal vessels.

19.
Br J Radiol ; 2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39141433

ABSTRACT

OBJECTIVES: This study aims to develop an automated approach for estimating the vertical rotation of the thorax, which can be used to assess the technical adequacy of chest X-ray radiographs (CXRs). METHODS: A total of 800 chest radiographs were used to train segmentation networks for outlining the lung and spine regions in chest X-ray images. Thoracic vertical rotation was quantified by measuring the widths of the left and right lungs between the centerline of the segmented spine and the lateral sides of the segmented lungs. Additionally, a life-size, full-body anthropomorphic phantom was employed to collect chest radiographs at various specified rotation angles to assess the accuracy of the proposed approach. RESULTS: The deep learning networks effectively segmented the anatomical structures of the lungs and spine. The proposed approach demonstrated a mean estimation error of less than 2° for thoracic rotation, surpassing existing techniques. CONCLUSIONS: The proposed approach offers a robust assessment of thoracic rotation and presents new possibilities for automated image quality control in chest X-ray examinations. ADVANCES IN KNOWLEDGE: This study presents a novel deep learning-based approach for the automated estimation of vertical thoracic rotation in chest X-ray radiographs. The proposed method enables a quantitative assessment of the technical adequacy of CXR examinations and opens up new possibilities for automated screening and quality control of radiographs.
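One simple way to map a left/right lung-width asymmetry to a rotation angle is a linear calibration of the normalized width difference; both the asymmetry index and the calibration factor `k` below are illustrative assumptions, not the paper's calibrated model:

```python
def thoracic_rotation_deg(width_left, width_right, k=30.0):
    """Toy rotation estimate from lung-width asymmetry: the index
    (L - R) / (L + R) is zero for a symmetric thorax and is mapped to
    degrees by a hypothetical linear calibration factor k."""
    return k * (width_left - width_right) / (width_left + width_right)
```

A symmetric thorax yields zero rotation; widths of 12 and 8 units give an asymmetry of 0.2 and, with this calibration, an estimate of 6 degrees.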

20.
Sci Rep ; 14(1): 18895, 2024 08 14.
Article in English | MEDLINE | ID: mdl-39143126

ABSTRACT

To develop a deep learning-based model capable of segmenting the left ventricular (LV) myocardium on native T1 maps from cardiac MRI in both long-axis and short-axis orientations. Models were trained on native myocardial T1 maps from 50 healthy volunteers and 75 patients, using manual segmentation as the reference standard. Based on a U-Net architecture, we systematically optimized the model design using two different training metrics (Sørensen-Dice coefficient = DSC and Intersection-over-Union = IOU), two different activation functions (ReLU and LeakyReLU), and various numbers of training epochs. Training with the DSC metric and a ReLU activation function over 35 epochs achieved the highest overall performance (mean error in T1 10.6 ± 17.9 ms, mean DSC 0.88 ± 0.07). Limits of agreement between model results and ground truth were -35.5 to +36.1 ms, superior to the agreement between two human raters (-34.7 to +59.1 ms). Segmentation was as accurate for long-axis views (mean T1 error 6.77 ± 8.3 ms, mean DSC 0.89 ± 0.03) as for short-axis images (mean T1 error 11.6 ± 19.7 ms, mean DSC 0.88 ± 0.08). Fully automated segmentation and quantitative analysis of native myocardial T1 maps is thus possible in both long-axis and short-axis orientations with very high accuracy.
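The limits of agreement quoted above are Bland-Altman style: the mean paired difference plus or minus 1.96 standard deviations of the differences; a minimal sketch:

```python
import numpy as np

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement between paired measurements:
    mean difference +/- 1.96 * SD of the differences."""
    d = np.asarray(a, float) - np.asarray(b, float)
    mu, sd = d.mean(), d.std(ddof=1)
    return mu - 1.96 * sd, mu + 1.96 * sd
```

For differences alternating between +1 and -1 the mean difference is zero and the limits are symmetric at about ±2.26.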


Subject(s)
Deep Learning; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Male; Female; Adult; Middle Aged; Image Processing, Computer-Assisted/methods; Myocardium; Heart Ventricles/diagnostic imaging; Heart/diagnostic imaging