Results 1 - 20 of 129
1.
Pediatr Cardiol ; 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39223338

ABSTRACT

The fetal electrocardiogram (FECG) contains crucial information about the fetus during pregnancy, making extraction of the FECG signal essential for monitoring fetal health. However, extracting the FECG signal from the abdominal electrocardiogram (AECG) poses several challenges: (1) the FECG signal is often contaminated by noise, and (2) it is frequently overshadowed by the high-amplitude maternal electrocardiogram (MECG). To address these issues and improve extraction accuracy, this paper proposes an improved Cycle Generative Adversarial Network (CycleGAN) with integrated contrastive learning for FECG signal extraction. The model introduces a dual-attention mechanism in the generator, incorporating a multi-head self-attention (MSA) module and a channel-wise self-attention (CSA) module to enhance the quality of the generated signals. Additionally, a contrastive triplet loss is integrated into the CycleGAN loss function, optimizing training to increase the similarity between the extracted FECG signal and the scalp fetal electrocardiogram. The proposed method is evaluated on the ADFECG and PCDB datasets, both from PhysioNet. In terms of signal extraction quality, the Mean Squared Error is reduced to 0.036, the Mean Absolute Error (MAE) to 0.009, and the Pearson Correlation Coefficient reaches 0.924. In model validation, the Structural Similarity Index achieves 95.54%, the Peak Signal-to-Noise Ratio (PSNR) reaches 38.87 dB, and R-squared (R2) attains 95.12%. Furthermore, the positive predictive value (PPV), sensitivity (SEN), and F1-score for QRS complex detection reach 99.56%, 99.43%, and 99.50% on the ADFECG dataset and 98.24%, 98.60%, and 98.42% on the PCDB dataset, respectively, all higher than those of competing methods. The proposed model therefore has important applications in effective monitoring of fetal health during pregnancy.
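
A triplet term of the kind described can be grafted onto a CycleGAN objective in a few lines. The PyTorch sketch below is illustrative only; the tensor names, margin, and loss weights are assumptions, not the paper's actual settings.

```python
import torch
import torch.nn as nn

triplet = nn.TripletMarginLoss(margin=1.0)  # margin is an assumed value
l1 = nn.L1Loss()

def generator_loss(extracted_fecg, scalp_fecg, mecg_component,
                   reconstructed_aecg, original_aecg,
                   lambda_cyc=10.0, lambda_tri=1.0):
    # Cycle consistency: AECG -> FECG -> AECG should reproduce the input.
    cyc = l1(reconstructed_aecg, original_aecg)
    # Contrastive triplet term: pull the extracted FECG toward the scalp
    # reference (positive) and away from the maternal component (negative).
    tri = triplet(extracted_fecg, scalp_fecg, mecg_component)
    return lambda_cyc * cyc + lambda_tri * tri
```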

2.
Small ; : e2403423, 2024 Sep 10.
Article in English | MEDLINE | ID: mdl-39254289

ABSTRACT

Determining molecular structures is foundational in chemistry and biology. The notion of discerning molecular structures simply from the visual appearance of a material remained almost unthinkable until the advent of machine learning. This paper introduces a pioneering approach bridging the visual appearance of materials (at both the micro- and nanostructural levels) with traditional chemical structure analysis methods. Quaternary phosphonium salts were chosen as the model compounds, given their significant roles in diverse chemical and medicinal fields and their ability to form homologs with only minute intermolecular variances. This research results in a neural network model capable of recognizing molecular structures from electron microscopy images of the material. The performance of the model is evaluated and related to the chemical nature of the studied compounds. Additionally, unsupervised domain transfer is tested as a way to apply the resulting model to optical microscopy images, as well as to test models trained directly on optical images. The robustness of the method is further tested on a complex system of phosphonium salt mixtures. To the best of the authors' knowledge, this study offers the first evidence of the feasibility of discerning nearly indistinguishable molecular structures.

3.
Comput Med Imaging Graph ; 117: 102431, 2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39243464

ABSTRACT

CycleGAN has been leveraged to synthesize a CT image from an available MR image after being trained on unpaired data. However, due to the lack of direct constraints between the synthetic and input images, CycleGAN cannot guarantee structural consistency and often generates inaccurate mappings that shift the anatomy, which is highly undesirable for downstream clinical applications such as MRI-guided radiotherapy treatment planning and PET/MRI attenuation correction. In this paper, we propose a cycle-consistent and semantics-preserving generative adversarial network, referred to as CycleSGAN, for unpaired MR-to-CT image synthesis. Our design features a novel and generic way to incorporate semantic information into CycleGAN: a pair of three-player games within the CycleGAN framework, where each game consists of one generator and two discriminators that formulate two distinct types of adversarial learning, appearance adversarial learning and structure adversarial learning. These two types of adversarial learning are trained alternately to ensure both realistic image synthesis and semantic structure preservation. Results on unpaired hip MR-to-CT image synthesis show that our method produces better synthetic CT images in terms of both accuracy and visual quality than other state-of-the-art (SOTA) unpaired MR-to-CT synthesis methods.
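
A minimal sketch of the three-player generator update described above, assuming PyTorch; the discriminator and tensor names are hypothetical. Each synthetic CT is judged twice: once on appearance and once on its semantic layout.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def generator_adv_loss(d_appearance, d_structure, fake_ct, fake_ct_seg):
    # Appearance discriminator sees the synthetic CT image itself.
    app_logits = d_appearance(fake_ct)
    # Structure discriminator sees its predicted semantic segmentation.
    str_logits = d_structure(fake_ct_seg)
    # The generator tries to make both discriminators output "real".
    return (bce(app_logits, torch.ones_like(app_logits)) +
            bce(str_logits, torch.ones_like(str_logits)))
```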

4.
Neural Netw ; 180: 106689, 2024 Aug 31.
Article in English | MEDLINE | ID: mdl-39243510

ABSTRACT

Compared to pixel-level content loss, domain-level style loss in CycleGAN-based dehazing algorithms just imposes relatively soft constraints on the intermediate translated images, resulting in struggling to accurately model haze-free features from real hazy scenes. Furthermore, globally perceptual discriminator may misclassify real hazy images with significant scene depth variations as clean style, thereby resulting in severe haze residue. To address these issues, we propose a pseudo self-distillation based CycleGAN with enhanced local adversarial interaction for image dehazing, termed as PSD-ELGAN. On the one hand, we leverage the characteristic of CycleGAN to generate pseudo image pairs during training. Knowledge distillation is employed in this unsupervised framework to transfer the informative high-quality features from the self-reconstruction network of real clean images to the dehazing generator of paired pseudo hazy images, which effectively improves its haze-free feature representation ability without increasing network parameters. On the other hand, in the output of dehazing generator, four non-uniform image patches severely affected by residual haze are adaptively selected as input samples. The local discriminator could easily distinguish their hazy style, thereby further compelling the dehazing generator to suppress haze residues in such regions, thus enhancing its dehazing performance. Extensive experiments show that our PSD-ELGAN can achieve promising results and better generality across various datasets.
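
One plausible way to realize the adaptive patch selection described above is to score patches with a dark-channel heuristic and feed the worst offenders to the local discriminator. This PyTorch sketch is an assumption about the mechanism, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def select_hazy_patches(dehazed, patch=64, k=4):
    # Score each patch by its mean "dark channel": hazy regions keep the
    # per-pixel channel minimum high, so a higher score suggests residue.
    dark = dehazed.min(dim=1, keepdim=True).values           # (B, 1, H, W)
    scores = F.avg_pool2d(dark, kernel_size=patch, stride=patch)
    b, _, h, w = scores.shape
    top = scores.view(b, -1).topk(k, dim=1).indices          # (B, k)
    out = []
    for bi in range(b):
        for j in top[bi].tolist():
            r, c = (j // w) * patch, (j % w) * patch
            out.append(dehazed[bi, :, r:r + patch, c:c + patch])
    return torch.stack(out)                                  # (B*k, C, patch, patch)
```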

5.
Biomed Phys Eng Express ; 10(5)2024 Aug 08.
Article in English | MEDLINE | ID: mdl-39094603

ABSTRACT

Objective. Auto-segmentation in mouse micro-CT enhances the efficiency and consistency of preclinical experiments but often struggles with low-native-contrast and morphologically complex organs, such as the spleen, resulting in poor segmentation performance. While CT contrast agents can improve organ conspicuity, their use complicates experimental protocols and reduces feasibility. We developed a 3D Cycle Generative Adversarial Network (CycleGAN) incorporating anatomy-constrained U-Net models to leverage contrast-enhanced CT (CECT) insights to improve unenhanced native CT (NACT) segmentation. Approach. We employed a standard CycleGAN with an anatomical loss function to synthesize virtual CECT images from unpaired NACT scans at two different resolutions. Prior to training, two U-Nets were trained to automatically segment six major organs in NACT and CECT datasets, respectively. These pretrained 3D U-Nets were integrated during CycleGAN training, segmenting synthetic images and comparing them against ground-truth annotations. The compound loss within the CycleGAN maintained anatomical fidelity. Full-image processing was used for low-resolution datasets, while high-resolution datasets required a patch-based method due to GPU memory constraints. Automated segmentation was applied to original NACT and synthetic CECT scans to evaluate CycleGAN performance using the Dice Similarity Coefficient (DSC) and the 95th-percentile Hausdorff Distance (HD95p). Main results. High-resolution scans showed improved auto-segmentation, with an average DSC increase from 0.728 to 0.773 and a reduced HD95p from 1.19 mm to 0.94 mm. Low-resolution scans benefited more from synthetic contrast, showing a DSC increase from 0.586 to 0.682 and an HD95p reduction from 3.46 mm to 1.24 mm. Significance. Implementing CycleGAN to synthesize CECT scans substantially improved the visibility of the mouse spleen, leading to more precise auto-segmentation. This approach shows potential for preclinical imaging studies where contrast agent use is impractical.
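
The anatomical loss can be pictured as a soft-Dice penalty computed by the frozen, pretrained segmentation U-Net on each synthetic image. A minimal PyTorch sketch, with hypothetical names and a standard soft-Dice formulation:

```python
import torch

def soft_dice_loss(pred, target, eps=1e-6):
    # Soft Dice over 3D volumes; pred and target are (B, organs, D, H, W).
    inter = (pred * target).sum(dim=(2, 3, 4))
    union = pred.sum(dim=(2, 3, 4)) + target.sum(dim=(2, 3, 4))
    return 1.0 - ((2 * inter + eps) / (union + eps)).mean()

def anatomy_loss(frozen_unet, synthetic_cect, organ_labels):
    # The pretrained U-Net (parameters frozen) segments the synthetic CECT;
    # gradients still flow back to the generator through its activations,
    # so the generator cannot shift anatomy while translating contrast.
    pred = torch.softmax(frozen_unet(synthetic_cect), dim=1)
    return soft_dice_loss(pred, organ_labels)
```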


Subject(s)
Contrast Media; Imaging, Three-Dimensional; Spleen; X-Ray Microtomography; Animals; Mice; Spleen/diagnostic imaging; X-Ray Microtomography/methods; Imaging, Three-Dimensional/methods; Algorithms; Image Processing, Computer-Assisted/methods; Neural Networks, Computer
6.
Ultrasound Med Biol ; 2024 Aug 23.
Article in English | MEDLINE | ID: mdl-39181806

ABSTRACT

OBJECTIVE: Deep-learning algorithms have been widely applied to automatic kidney ultrasound (US) image segmentation. However, obtaining a large number of accurate kidney labels clinically is very difficult and time-consuming. To solve this problem, we propose an efficient cross-modal transfer learning method to improve the performance of the segmentation network on a limited labeled kidney US dataset. METHODS: We implement an improved image-to-image translation network, Seg-CycleGAN, to generate accurately annotated kidney US data from labeled abdominal computed tomography (CT) images. The Seg-CycleGAN framework consists of two structures: (i) a standard CycleGAN network to visually simulate kidney US from a publicly available labeled abdominal CT dataset; and (ii) a segmentation network to ensure accurate kidney anatomical structures in the US images. Based on the large number of simulated kidney US images and the small number of real annotated kidney US images, we then employ a fine-tuning strategy to obtain better segmentation results, as sketched below. RESULTS: To validate the effectiveness of the proposed method, we tested it on both normal and abnormal kidney US images. The proposed method achieved a Dice similarity coefficient of 0.8548 on all testing datasets and 0.7622 on the abnormal testing dataset. CONCLUSIONS: Compared with existing data augmentation and transfer learning methods, the proposed method improved the accuracy and generalization of the kidney US image segmentation network trained on a limited number of datasets. It therefore has the potential to significantly reduce annotation costs in clinical settings.
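
The two-stage fine-tuning strategy flagged above can be sketched as pretraining on the abundant simulated US and then continuing with a lower learning rate on the few real annotated images. Epoch counts and learning rates here are placeholders, not the paper's settings.

```python
import torch

def train_two_stage(seg_net, sim_loader, real_loader, loss_fn):
    # Stage 1: pretrain on simulated kidney US; stage 2: fine-tune on the
    # small real set with a smaller learning rate to avoid forgetting.
    for loader, lr, epochs in [(sim_loader, 1e-4, 50), (real_loader, 1e-5, 10)]:
        opt = torch.optim.Adam(seg_net.parameters(), lr=lr)
        for _ in range(epochs):
            for image, mask in loader:
                opt.zero_grad()
                loss_fn(seg_net(image), mask).backward()
                opt.step()
```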

7.
Bioengineering (Basel) ; 11(8)2024 Aug 08.
Article in English | MEDLINE | ID: mdl-39199763

ABSTRACT

BACKGROUND: Diffusion-weighted imaging (DWI), a key component of multiparametric magnetic resonance imaging (mpMRI), plays a pivotal role in the detection, diagnosis, and evaluation of gastric cancer. Despite its potential, DWI is often marred by substantial anatomical distortions and sensitivity artifacts, which can limit its practical utility. At present, enhancing DWI image quality requires cutting-edge hardware and extended scanning durations; a rapid technique that optimally balances shortened acquisition time with improved image quality would therefore have substantial clinical relevance. OBJECTIVES: This study aims to construct and evaluate an unsupervised learning framework, the attention dual-contrast vision transformer CycleGAN (ADCVCGAN), for enhancing image quality and reducing scanning time in gastric DWI. METHODS: The ADCVCGAN framework employs high b-value DWI (b-DWI, b = 1200 s/mm²) as a reference for generating synthetic b-value DWI (s-DWI) from acquired lower b-value DWI (a-DWI, b = 800 s/mm²). Specifically, ADCVCGAN incorporates a CBAM attention module into the CycleGAN generator to enhance feature extraction from the input a-DWI in both the channel and spatial dimensions. A vision transformer module, based on the U-Net framework, is then introduced to refine detailed features, aiming to produce s-DWI with image quality comparable to that of b-DWI. Finally, images from the source domain are added as negative samples to the discriminator, encouraging it to steer the generator toward synthesizing images distant from the source domain in the latent space, with the goal of generating more realistic s-DWI. Image quality of the s-DWI is quantitatively assessed using the peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), feature similarity index (FSIM), mean squared error (MSE), weighted peak signal-to-noise ratio (WPSNR), and weighted mean squared error (WMSE). Subjective evaluations of the different DWI images were conducted using the Wilcoxon signed-rank test. The reproducibility and consistency of b-ADC and s-ADC, calculated from b-DWI and s-DWI, respectively, were assessed using the intraclass correlation coefficient (ICC). A significance level of p < 0.05 was used. RESULTS: The s-DWI generated by ADCVCGAN scored significantly higher than a-DWI on quantitative metrics including PSNR, SSIM, FSIM, MSE, WPSNR, and WMSE (p < 0.001), performance comparable to the best of the latest synthesis algorithms. Subjective scores for lesion visibility, anatomical detail, image distortion, and overall image quality were significantly higher for s-DWI and b-DWI than for a-DWI (p < 0.001), with no significant difference between s-DWI and b-DWI (p > 0.05). The consistency of b-ADC and s-ADC readings was comparable across readers (ICC: b-ADC 0.87-0.90; s-ADC 0.88-0.89). The repeatability of b-ADC and s-ADC readings by the same reader was also comparable (Reader 1 ICC: b-ADC 0.85-0.86, s-ADC 0.85-0.93; Reader 2 ICC: b-ADC 0.86-0.87, s-ADC 0.89-0.92). CONCLUSIONS: ADCVCGAN shows excellent promise for generating gastric cancer DWI images. It effectively reduces scanning time, improves image quality, and preserves the authenticity of s-DWI images and their s-ADC values, providing a basis for assisting clinical decision-making.

8.
Sensors (Basel) ; 24(15)2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39124038

ABSTRACT

Anomaly detection systems based on artificial intelligence (AI) have demonstrated high performance and efficiency in a wide range of applications, such as power plants and smart factories. However, because AI systems inherently rely on the quality of their training data, they still perform poorly in certain environments, and deploying them remains a challenge in hazardous facilities where data collection is constrained. In this paper, we propose Generative Anomaly Detection using Prototypical Networks (GAD-PN), designed to detect anomalies using only a limited number of normal samples. GAD-PN integrates CycleGAN with Prototypical Networks (PNs), learning from metadata similar to the target environment. This approach enables the collection of data that are difficult to gather in real-world environments by using simulation or demonstration models, providing opportunities to learn a variety of environmental parameters under ideal and normal conditions. During inference, the PNs can classify normal and leak samples using only a small number of normal data from the target environment, via prototypes that represent normal and abnormal features. We also address the difficulty of collecting anomaly data by generating anomalous data from normal data using a CycleGAN trained on anomaly features; the approach can be adapted to various environments with similar anomalous scenarios, regardless of differences in environmental parameters. To validate the proposed structure, data were collected specifically targeting pipe leakage scenarios, a significant problem in environments such as power plants, and acoustic ultrasound signals were collected from pipe nozzles in three different environments. The proposed model achieved a leak detection accuracy of over 90% in all environments, even with only a small number of normal data, an average improvement of approximately 30% over traditional unsupervised learning models trained with a limited dataset.
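
The prototype classification step works by averaging embeddings per class and assigning queries to the nearest prototype. A minimal PyTorch sketch under assumed shapes:

```python
import torch

def build_prototypes(support_emb, support_labels, n_classes):
    # One prototype per class: the mean embedding of its support samples.
    return torch.stack([support_emb[support_labels == c].mean(dim=0)
                        for c in range(n_classes)])

def classify(query_emb, prototypes):
    # Assign each query to the nearest prototype (Euclidean distance).
    distances = torch.cdist(query_emb, prototypes)  # (n_query, n_classes)
    return distances.argmin(dim=1)
```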

9.
Article in English | MEDLINE | ID: mdl-39042332

ABSTRACT

PURPOSE: Technological advances in instrumentation have greatly promoted the development of positron emission tomography (PET) scanners. State-of-the-art PET scanners such as uEXPLORER can collect PET images of significantly higher quality, but they are not currently available in most local hospitals due to the high cost of manufacturing and maintenance. Our study aims to convert low-quality PET images acquired by common PET scanners into images of comparable quality to those obtained by state-of-the-art scanners, without the need for paired low- and high-quality PET images. METHODS: We propose an improved CycleGAN (IE-CycleGAN) model for unpaired PET image enhancement. The proposed method is based on CycleGAN, with a correlation coefficient loss and a patient-specific prior loss added to constrain the structure of the generated images. Furthermore, we define a normalX-to-advanced training strategy to enhance the generalization ability of the network. The method was validated on unpaired uEXPLORER datasets and Biograph Vision local-hospital datasets. RESULTS: On the uEXPLORER dataset, the proposed method achieved better results than non-local means filtering (NLM), block-matching and 3D filtering (BM3D), and deep image prior (DIP), and results comparable to supervised U-Net and supervised CycleGAN. On the Biograph Vision local-hospital datasets, it achieved higher contrast-to-noise ratios (CNR) and tumor-to-background SUVmax ratios (TBR) than NLM, BM3D, and DIP, and showed higher contrast, SUVmax, and TBR than the supervised U-Net and CycleGAN when applied to images from different scanners. CONCLUSION: The proposed unpaired PET image enhancement method outperforms NLM, BM3D, and DIP. Moreover, it performs better than the supervised U-Net and CycleGAN when applied to local-hospital datasets, demonstrating excellent generalization ability.
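
The correlation coefficient loss named above presumably rewards linear agreement between enhanced and reference images; a common formulation is one minus the per-sample Pearson r. A hedged PyTorch sketch:

```python
import torch

def correlation_loss(x, y, eps=1e-8):
    # 1 - Pearson correlation, computed per sample over flattened voxels;
    # minimizing it encourages the enhanced image to track the reference.
    x = x.flatten(1) - x.flatten(1).mean(dim=1, keepdim=True)
    y = y.flatten(1) - y.flatten(1).mean(dim=1, keepdim=True)
    r = (x * y).sum(dim=1) / (x.norm(dim=1) * y.norm(dim=1) + eps)
    return (1.0 - r).mean()
```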

10.
Front Bioeng Biotechnol ; 12: 1334643, 2024.
Article in English | MEDLINE | ID: mdl-38948382

ABSTRACT

The simulation-to-reality (sim2real) problem is a common issue when deploying simulation-trained models to real-world scenarios, especially given the extremely high imbalance between simulation and real-world data (scarce real-world data). Although the cycle-consistent generative adversarial network (CycleGAN) has shown promise in addressing some sim2real issues, it encounters limitations under data imbalance due to the lower capacity of the discriminator and the indeterminacy of the learned sim2real mapping. To overcome these problems, we propose the imbalanced Sim2Real scheme (ImbalSim2Real). Differing from CycleGAN, ImbalSim2Real segments the dataset into paired and unpaired data for two-fold training. The unpaired data incorporate discriminator-enhanced samples that further squash the solution space of the discriminator, enhancing its ability. For the paired data, a targeted regression loss term is integrated to ensure a specific, quantitative mapping and to further minimize the solution space of the generator; a sketch of this combined objective follows. The ImbalSim2Real scheme was validated through numerical experiments, demonstrating its superiority over conventional sim2real methods. As an application, we designed a finger-joint stiffness self-sensing framework in which the validation loss for estimating real-world finger-joint stiffness was reduced by roughly 41% compared with a supervised learning method trained on the scarce real-world data, and by 56% relative to a CycleGAN trained on the imbalanced dataset. The proposed scheme and framework have potential applicability to bio-signal estimation under imbalanced sim2real problems.
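
As flagged above, the combined objective can be read as adversarial learning on unpaired simulated samples plus targeted regression on the paired subset. The names and weight lam in this PyTorch sketch are assumptions.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
mse = nn.MSELoss()

def generator_objective(gen, disc, sim_unpaired, sim_paired, real_paired, lam=10.0):
    # Adversarial term: unpaired simulated data must map into the real domain.
    logits = disc(gen(sim_unpaired))
    adv = bce(logits, torch.ones_like(logits))
    # Targeted regression term: the paired subset pins the mapping down to a
    # specific, quantitative sim-to-real correspondence.
    reg = mse(gen(sim_paired), real_paired)
    return adv + lam * reg
```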

11.
Med Phys ; 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38889368

ABSTRACT

BACKGROUND: Iodine maps, derived from image processing of contrast-enhanced dual-energy computed tomography (DECT) scans, highlight differences in tissue iodine uptake. They have multiple applications in radiology, including vascular imaging, pulmonary evaluation, kidney assessment, and cancer diagnosis, and in radiation oncology they can contribute to designing more accurate and personalized treatment plans. However, DECT scanners are not commonly available in radiation therapy centers, and the use of iodine contrast agents is not suitable for all patients, especially those allergic to them, further limiting the accessibility of this technology. PURPOSE: The purpose of this work is to generate synthetic iodine map images from non-contrast single-energy CT (SECT) images using a conditional denoising diffusion probabilistic model (DDPM). METHODS: One hundred twenty-six head-and-neck patients' images were retrospectively investigated. Each patient underwent non-contrast SECT and contrast-enhanced DECT scans. Ground-truth iodine maps were generated from the contrast DECT scans using the commercial software syngo.via installed in the clinic. A conditional DDPM was implemented to synthesize iodine maps. Three-fold cross-validation was conducted, with each iteration selecting the data from 42 patients as the test dataset and the remainder as the training dataset. A pixel-to-pixel generative adversarial network (GAN) and CycleGAN served as reference methods. RESULTS: The accuracy of the proposed DDPM was evaluated using three quantitative metrics: mean absolute error (MAE) of 1.039 ± 0.345 mg/mL, structural similarity index measure (SSIM) of 0.89 ± 0.10, and peak signal-to-noise ratio (PSNR) of 25.4 ± 3.5 dB. Compared with the reference methods, the proposed technique showed superior performance across all evaluated metrics, further validated by paired two-tailed t-tests. CONCLUSION: The proposed conditional DDPM framework demonstrates the feasibility of generating synthetic iodine map images from non-contrast SECT images. This method presents a potential clinical application: providing an accurate iodine contrast map in cases where only non-contrast SECT is accessible.
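
A conditional DDPM of this kind trains a network to predict the noise added to the iodine map, with the SECT image supplied as conditioning. The training step below is a standard sketch; the channel-concatenation conditioning is an assumption about the implementation.

```python
import torch
import torch.nn.functional as F

def ddpm_train_step(model, iodine_map, sect, alphas_cumprod):
    # Noise the target iodine map to a random timestep t and train the model,
    # conditioned on the SECT image, to predict that noise.
    b = iodine_map.size(0)
    t = torch.randint(0, alphas_cumprod.numel(), (b,), device=iodine_map.device)
    a = alphas_cumprod[t].view(b, 1, 1, 1)
    noise = torch.randn_like(iodine_map)
    noisy = a.sqrt() * iodine_map + (1 - a).sqrt() * noise
    pred = model(torch.cat([noisy, sect], dim=1), t)  # conditioning by concat
    return F.mse_loss(pred, noise)
```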

12.
Phys Eng Sci Med ; 47(3): 1227-1243, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38884673

ABSTRACT

To propose a style transfer model for multi-contrast magnetic resonance imaging (MRI) images using a cycle-consistent generative adversarial network (CycleGAN), and to evaluate image quality and prognosis-prediction performance for glioblastoma (GBM) patients from the extracted radiomics features. Style transfer models between T1-weighted (T1w) and T2-weighted (T2w) MRI images were constructed with CycleGAN using the BraTS dataset and validated with The Cancer Genome Atlas Glioblastoma Multiforme (TCGA-GBM) dataset. Imaging features were extracted from real and synthesized images and transformed to rad-scores by least absolute shrinkage and selection operator (LASSO)-Cox regression. Prognostic performance was estimated by the Kaplan-Meier method. For the image quality of the real versus synthesized MRI images, the MI, RMSE, PSNR, and SSIM were 0.991 ± 2.10 × 10⁻⁴, 2.79 ± 0.16, 40.16 ± 0.38, and 0.995 ± 2.11 × 10⁻⁴ for T2w, and 0.992 ± 2.63 × 10⁻⁴, 2.49 ± 6.89 × 10⁻², 40.51 ± 0.22, and 0.993 ± 3.40 × 10⁻⁴ for T1w, respectively. Survival time differed significantly between good- and poor-prognosis groups for both real and synthesized T2w (p < 0.05), but not for real or synthesized T1w. Moreover, there was no significant difference between real and synthesized T2w within either prognosis group, and the T1w results were similar in that real and synthesized images did not differ significantly. These findings indicate that the synthesized images can be used for prognosis prediction. The proposed prognostic model using CycleGAN could reduce the cost and time of image scanning, promoting the construction of patient-outcome prediction from multi-contrast images.
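
The rad-score construction pairs LASSO feature selection with Cox regression. A sketch using the lifelines library (an L1-penalized CoxPHFitter approximates LASSO-Cox); the column names are hypothetical.

```python
import pandas as pd
from lifelines import CoxPHFitter

def rad_scores(df: pd.DataFrame) -> pd.Series:
    # df: one row per patient with radiomics feature columns plus survival
    # time ("time") and event indicator ("event") -- names are assumptions.
    cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)  # l1_ratio=1 -> LASSO-like
    cph.fit(df, duration_col="time", event_col="event")
    # The fitted model's linear predictor serves as the rad-score.
    return cph.predict_log_partial_hazard(df)
```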


Subject(s)
Glioblastoma; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Glioblastoma/diagnostic imaging; Humans; Prognosis; Male; Female; Middle Aged; Brain Neoplasms/diagnostic imaging; Neural Networks, Computer; Adult; Kaplan-Meier Estimate; Radiomics
13.
Oral Radiol ; 40(4): 508-519, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38941003

ABSTRACT

OBJECTIVES: The objective of this study was to enhance the visibility of soft tissues on cone-beam computed tomography (CBCT) using a CycleGAN network trained on CT images. METHODS: Training and evaluation of the CycleGAN were conducted using CT and CBCT images collected from Aichi Gakuin University (facility α) and Osaka Dental University (facility β). Synthesized images (sCBCT) output by the CycleGAN network were evaluated by comparison with the original images (oCBCT) and CT images, using histogram analysis and human scoring of soft-tissue anatomical structures and cystic lesions. RESULTS: Histogram analysis showed that, on sCBCT, soft-tissue anatomical structures exhibited significant shifts in voxel intensity toward values resembling those on CT, with the mean values for all structures approaching those of CT and specialists' visibility scores significantly increased. However, improvement in the visibility of cystic lesions was limited. CONCLUSIONS: Image synthesis using CycleGAN significantly improved soft-tissue visibility on CBCT, particularly from the submandibular region to the floor of the mouth. Although the effect on the visibility of cystic lesions was limited, further improvement may be possible through refinement of the training method.


Subject(s)
Cone-Beam Computed Tomography; Humans; Artificial Intelligence; Radiographic Image Interpretation, Computer-Assisted; Neural Networks, Computer; Female; Male
14.
Sensors (Basel) ; 24(9)2024 May 06.
Article in English | MEDLINE | ID: mdl-38733053

ABSTRACT

The fetal electrocardiogram (FECG) records the changing waveform of fetal cardiac action potentials during conduction, reflecting the developmental status of the fetus in utero and its physiological cardiac activity. Morphological alterations in the FECG can indicate intrauterine hypoxia, fetal distress, and neonatal asphyxia early on, enhancing maternal and fetal safety through prompt clinical intervention and thereby reducing neonatal morbidity and mortality. To reconstruct FECG signals with clear morphological information, this paper proposes a novel deep learning model, CBLS-CycleGAN. The model's generator combines spatial features extracted by a CNN with temporal features extracted by a BiLSTM network, ensuring that the reconstructed signals possess combined features with spatial and temporal dependencies. The discriminator uses PatchGAN, employing small segments of the signal as discriminative inputs to concentrate training on capturing signal details. Evaluated on two real FECG databases, the "Abdominal and Direct Fetal ECG Database" and "Fetal Electrocardiograms, Direct and Abdominal with Reference Heartbeat Annotations", the model achieved a mean MSE of 0.019 and a mean MAE of 0.006, and detected the fetal QRS (FQRS) complex with a sensitivity, positive predictive value, and F1-score of 99.51%, 99.57%, and 99.54%, respectively. The model effectively preserves the morphological information of FECG signals, capturing not only the FQRS complex but also fetal P-wave, T-wave, P-R interval, and ST-segment information, providing clinicians with crucial diagnostic insights and a scientific foundation for developing rational treatment protocols.
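
The generator's combination of convolutional and recurrent features could look like the following PyTorch sketch; layer sizes are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CnnBiLstmGenerator(nn.Module):
    # Spatial features from 1-D convolutions, temporal dependencies from a
    # bidirectional LSTM, projected back to a single-channel ECG trace.
    def __init__(self, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=7, padding=3), nn.ReLU(),
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 1)

    def forward(self, x):                    # x: (B, 1, T) abdominal ECG
        h = self.conv(x).transpose(1, 2)     # (B, T, 32)
        h, _ = self.lstm(h)                  # (B, T, 2*hidden)
        return self.out(h).transpose(1, 2)   # (B, 1, T) reconstructed FECG
```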


Subject(s)
Electrocardiography; Neural Networks, Computer; Signal Processing, Computer-Assisted; Humans; Electrocardiography/methods; Female; Pregnancy; Deep Learning; Fetal Monitoring/methods; Algorithms; Fetus
15.
Sci Rep ; 14(1): 7777, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38565939

ABSTRACT

Low-energy, efficient coal gangue sorting is crucial for environmental protection, and multispectral imaging (MSI) has emerged as a promising technology in this domain. This work addresses the low resolution and poor recognition performance of underground MSI equipment. We propose an attention-based multi-level residual network (ANIMR) within a super-resolution reconstruction model (ANIMR-GAN) inspired by CycleGAN, incorporating improvements to the discriminator and the loss function. The model was trained on 600 coal and gangue MSI samples and validated on an independent set of 120 samples. Combined with a random forest classifier, ANIMR-GAN achieved a maximum accuracy of 97.78% and an average accuracy of 93.72%, and the study identifies the 959.37 nm band as optimal for coal and gangue classification. Compared with existing super-resolution methods, ANIMR-GAN offers clear advantages, paving the way for intelligent, efficient coal gangue sorting and advancing sustainable mineral processing.
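
The downstream classification step is a standard random forest over spectral features of the super-resolved patches. A scikit-learn sketch with synthetic placeholder data (shapes and band count are assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.random((600, 25))     # 600 samples, 25 per-band features (assumed)
y_train = rng.integers(0, 2, 600)   # 0 = coal, 1 = gangue
X_test = rng.random((120, 25))

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
# Feature importances hint at the most discriminative band (e.g., ~959 nm).
print(clf.feature_importances_.argmax())
```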

16.
Molecules ; 29(7)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38611779

ABSTRACT

Drug discovery involves the crucial step of optimizing molecules with desired structural groups. In computer-aided drug discovery, deep learning has emerged as a prominent technique in molecular modeling, and deep generative models play a crucial role in generating novel molecules during optimization. However, many existing molecular generative models are limited in that they process input information in the forward direction only. To overcome this limitation, we propose an improved generative model called BD-CycleGAN, which incorporates BiLSTM (bidirectional long short-term memory) into Mol-CycleGAN (molecular cycle generative adversarial network) to preserve the information of the molecular input. We assess the proposed model by analyzing the structural distributions and evaluation metrics of generated molecules in structural transformation tasks. The results demonstrate that BD-CycleGAN achieves a higher success rate and greater diversity in molecular generation. We further demonstrate its application to molecular docking, where it successfully increases the docking scores of the generated molecules. The BD-CycleGAN architecture harnesses deep learning to generate molecules with desired structural features, offering promising advancements for drug discovery.


Subject(s)
Anti-HIV Agents; Molecular Docking Simulation; Drug Discovery; Hydrolases; Memory, Long-Term
17.
PeerJ Comput Sci ; 10: e1889, 2024.
Article in English | MEDLINE | ID: mdl-38660158

ABSTRACT

Through computer vision and deep learning methods, real-time style transfer of images becomes achievable: diverse artistic elements are fused into a single image, creating innovative works of art. This article focuses on image style transfer in art education and introduces ATT-CycleGAN, a model enriched with an attention mechanism to improve the quality and precision of style conversion. The framework enhances the generators within CycleGAN. Images first undergo encoder downsampling before entering the intermediate transformation model, where feature maps pass through four encoding residual blocks and are then fed into an attention module. Channel attention is incorporated through multi-weight optimization based on global max-pooling and global average-pooling. During training, transfer learning techniques improve model parameter initialization and training efficiency. Experimental results demonstrate the superior performance of the proposed model in image style transfer across various categories. Compared with the traditional CycleGAN model, on the Places365 and selfie2anime datasets SSIM increased by 3.19% and 1.31% and PSNR by 10.16% and 5.02%, respectively. These findings provide valuable algorithmic support and useful references for future research in art education, image segmentation, and style transfer.
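
The channel-attention step described above (a shared MLP over global max- and average-pooled descriptors) can be sketched in PyTorch as follows; the reduction ratio is an assumed hyperparameter.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    # Channel weights from global max- and average-pooled descriptors passed
    # through a shared MLP, in the style of CBAM channel attention.
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):                               # x: (B, C, H, W)
        avg = self.mlp(x.mean(dim=(2, 3)))              # average-pooled branch
        mx = self.mlp(x.amax(dim=(2, 3)))               # max-pooled branch
        w = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * w                                    # reweighted channels
```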

18.
Med Image Anal ; 94: 103149, 2024 May.
Article in English | MEDLINE | ID: mdl-38574542

ABSTRACT

Variation in histologic staining between medical centers is one of the most profound challenges in computer-aided diagnosis. The appearance disparity of pathological whole-slide images makes algorithms less reliable, which in turn impedes the widespread applicability of downstream tasks such as cancer diagnosis. Furthermore, different stainings introduce training biases that, under domain shift, negatively affect test performance. We therefore propose MultiStain-CycleGAN, a multi-domain approach to stain normalization based on CycleGAN. Our modifications to CycleGAN allow images of different origins to be normalized without retraining or using different models. We extensively evaluate the method using various metrics and compare it with commonly used multi-domain-capable methods. First, we evaluate how well the method fools a domain classifier that tries to assign a medical center to each image. We then test the normalization's effect on the tumor-classification performance of a downstream classifier, assess the image quality of the normalized images using the structural similarity index (SSIM), and measure the reduction in domain shift using the Fréchet inception distance (FID). Our method proves multi-domain capable, provides very high image quality among the compared methods, and most reliably fools the domain classifier while keeping the tumor-classifier performance high. By reducing the domain influence, biases in the data can be removed and the origin of a whole-slide image can be disguised, enhancing patient data privacy.


Subject(s)
Coloring Agents; Neoplasms; Humans; Coloring Agents/chemistry; Staining and Labeling; Algorithms; Diagnosis, Computer-Assisted; Image Processing, Computer-Assisted/methods
19.
J Histotechnol ; : 1-4, 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38648120

ABSTRACT

Hematoxylin and eosin staining can be hazardous, expensive, and prone to error and variability. To circumvent these issues, artificial intelligence/machine learning models such as generative adversarial networks (GANs) are being used to 'virtually' stain unstained tissue images so that they are indistinguishable from chemically stained tissue. Frameworks such as deep convolutional GANs (DCGAN) and conditional GANs (CGANs) have successfully generated highly reproducible 'stained' images, but their utility may be limited by the requirement for registered, paired images, which can be difficult to obtain. To avoid these dataset requirements, we used an unsupervised CycleGAN pix2pix model [5,6] to turn unpaired, unstained bright-field images into pathologist-approved, digitally 'stained' images. Using formalin-fixed, paraffin-embedded liver samples, 5 µm section images (20×) were obtained before and after staining to create "stained" and "unstained" datasets. Model implementation used Ubuntu 20.04.4 LTS, 32 GB RAM, an Intel Core i7-9750 CPU @ 2.6 GHz, an Nvidia GeForce RTX 2070 Mobile, Python 3.7.11, and TensorFlow 2.9.1. The CycleGAN framework used a U-Net-based generator and the discriminator from pix2pix, a CGAN. Because the images are unpaired, the CycleGAN used a modified objective, the cycle-consistency loss, measured in both translation directions. To our knowledge, this is the first documented application of this architecture to unpaired bright-field images. Results and suggested improvements are discussed.
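
The "loss measured twice" refers to cycle consistency enforced in both translation directions. A minimal PyTorch sketch with hypothetical generator names:

```python
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(g_stain, g_unstain, unstained, stained, lam=10.0):
    # Unpaired data, so consistency is measured in both directions:
    # unstained -> stained -> unstained, and stained -> unstained -> stained.
    forward = l1(g_unstain(g_stain(unstained)), unstained)
    backward = l1(g_stain(g_unstain(stained)), stained)
    return lam * (forward + backward)
```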

20.
Odontology ; 2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38607582

ABSTRACT

The objective of this study was to create a mutual conversion system between contrast-enhanced computed tomography (CECT) and non-CECT images for the internal jugular region using a cycle generative adversarial network (cycleGAN). Image patches were cropped from CT images of 25 patients who underwent both CECT and non-CECT imaging. Using the cycleGAN, synthetic CECT and non-CECT images were generated from original non-CECT and CECT images, respectively. The peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) were calculated. Visual Turing tests were used to determine whether oral and maxillofacial radiologists could distinguish synthetic from original images, and receiver operating characteristic (ROC) analyses assessed the radiologists' performance in discriminating lymph nodes from blood vessels. The PSNR of non-CECT images was higher than that of CECT images, while the SSIM was higher for CECT images. The Visual Turing test showed higher perceptual quality for CECT images. The area under the ROC curve showed almost perfect performance for synthetic as well as original CECT images. In conclusion, synthetic CECT images created by the cycleGAN appear to have the potential to provide effective information for patients who cannot receive contrast enhancement.
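
PSNR and SSIM of synthetic versus original patches can be computed with scikit-image; a sketch assuming 2-D grayscale arrays on a shared intensity range:

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def image_quality(original, synthetic):
    # original, synthetic: 2-D arrays (e.g., CT patches) on the same scale.
    data_range = float(original.max() - original.min())
    psnr = peak_signal_noise_ratio(original, synthetic, data_range=data_range)
    ssim = structural_similarity(original, synthetic, data_range=data_range)
    return psnr, ssim
```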
