Results 1 - 20 of 11,729
1.
J Biomed Opt ; 30(Suppl 1): S13703, 2025 Jan.
Article in English | MEDLINE | ID: mdl-39034959

ABSTRACT

Significance: Standardization of fluorescence molecular imaging (FMI) is critical for ensuring quality control in guiding surgical procedures. To accurately evaluate system performance, two metrics, the signal-to-noise ratio (SNR) and contrast, are widely employed. However, there is currently no consensus on how these metrics can be computed. Aim: We aim to examine the impact of SNR and contrast definitions on the performance assessment of FMI systems. Approach: We quantified the SNR and contrast of six near-infrared FMI systems by imaging a multi-parametric phantom. Based on approaches commonly used in the literature, we quantified seven SNRs and four contrast values considering different background regions and/or formulas. Then, we calculated benchmarking (BM) scores and respective rank values for each system. Results: We show that the performance assessment of an FMI system changes depending on the background locations and the applied quantification method. For a single system, the different metrics can vary up to ∼35 dB (SNR), ∼8.65 a.u. (contrast), and ∼0.67 a.u. (BM score). Conclusions: The definition of precise guidelines for FMI performance assessment is imperative to ensure successful clinical translation of the technology. Such guidelines can also enable quality control for the already clinically approved indocyanine green-based fluorescence image-guided surgery.
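
The point of contention above is how SNR and contrast are computed from image regions. A minimal sketch of two commonly used definitions, assuming simple rectangular signal and background ROIs on a toy image (the array names and formulas are illustrative, not the authors' benchmarking protocol):

```python
import numpy as np

def snr_db(signal_roi: np.ndarray, background_roi: np.ndarray) -> float:
    """SNR in dB: mean signal over the standard deviation of the background."""
    return 20 * np.log10(signal_roi.mean() / background_roi.std())

def weber_contrast(signal_roi: np.ndarray, background_roi: np.ndarray) -> float:
    """Weber-style contrast: (signal - background) / background, in arbitrary units."""
    s, b = signal_roi.mean(), background_roi.mean()
    return (s - b) / b

# Toy fluorescence image: bright inclusion on a noisy background.
rng = np.random.default_rng(0)
image = rng.normal(loc=10.0, scale=2.0, size=(128, 128))
image[48:80, 48:80] += 100.0              # fluorescent target
target = image[48:80, 48:80]
background = image[0:32, 0:32]            # the choice of background region changes the result

print(f"SNR      = {snr_db(target, background):.1f} dB")
print(f"Contrast = {weber_contrast(target, background):.2f} a.u.")
```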


Subject(s)
Benchmarking , Molecular Imaging , Optical Imaging , Phantoms, Imaging , Signal-To-Noise Ratio , Molecular Imaging/methods , Molecular Imaging/standards , Optical Imaging/methods , Optical Imaging/standards , Image Processing, Computer-Assisted/methods
2.
PLoS One ; 19(9): e0306706, 2024.
Article in English | MEDLINE | ID: mdl-39240820

ABSTRACT

In the field of image processing, common noise types include Gaussian noise, salt-and-pepper noise, speckle noise, uniform noise, and pulse noise. Different types of noise require different denoising algorithms and techniques to maintain image quality and fidelity. Traditional image denoising methods remove noise but also discard image detail; they cannot guarantee clean removal of the noise while preserving the true signal of the image. To address these issues, an image denoising method combining an improved threshold function with the wavelet transform is proposed. Unlike traditional threshold functions, the improved threshold function is continuous, which avoids the pseudo-Gibbs effect after denoising and improves image quality. In this method, the output of the finite ridgelet transform is first combined with the wavelet transform to improve denoising performance. Then, the improved threshold function is introduced to enhance the quality of the reconstructed image. To evaluate the performance of different algorithms, Gaussian noise of different densities was added to black-and-white and color Lena images. The results showed that when 0.01-variance Gaussian noise was added to the black-and-white image, the peak signal-to-noise ratio of the proposed method increased by 2.58 dB and the mean square error decreased by 0.10 dB. The proposed method also had a minimum denoising time of only 13 ms, saving 9 ms and 3 ms compared with the hard-threshold algorithm (Hard TA) and the soft-threshold algorithm (Soft TA), respectively. The method exhibited higher stability, with the average similarity error fluctuating within 0.89%. These results indicate that the proposed method has smaller errors and better stability in image denoising and can be applied in the field of digital image denoising, effectively promoting the development of image denoising technology.
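
As a rough illustration of the wavelet-thresholding family this paper builds on, here is a minimal sketch of classical soft-threshold wavelet denoising with PyWavelets; the paper's improved continuous threshold function and finite ridgelet stage are not reproduced:

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(image: np.ndarray, wavelet: str = "db4", level: int = 2) -> np.ndarray:
    """Soft-threshold wavelet denoising with a VisuShrink-style universal threshold."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # Estimate the noise level from the finest diagonal detail band.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(image.size))
    # Keep the approximation band, soft-threshold every detail band.
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(band, thresh, mode="soft") for band in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)

def psnr(clean: np.ndarray, test: np.ndarray, peak: float = 1.0) -> float:
    mse = np.mean((clean - test) ** 2)
    return 10 * np.log10(peak**2 / mse)

rng = np.random.default_rng(1)
clean = np.zeros((256, 256))
clean[64:192, 64:192] = 1.0
noisy = clean + rng.normal(scale=np.sqrt(0.01), size=clean.shape)  # 0.01-variance noise
print(f"noisy PSNR    = {psnr(clean, noisy):.2f} dB")
print(f"denoised PSNR = {psnr(clean, wavelet_denoise(noisy)):.2f} dB")
```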


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Signal-To-Noise Ratio , Wavelet Analysis , Image Processing, Computer-Assisted/methods , Normal Distribution
3.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 41(4): 732-741, 2024 Aug 25.
Article in Chinese | MEDLINE | ID: mdl-39218599

ABSTRACT

To address the insufficient feature-extraction ability of forehead single-channel electroencephalography (EEG) signals, which lowers fatigue detection accuracy, a fatigue feature extraction and classification algorithm based on supervised contrastive learning is proposed. First, the raw signals are filtered by empirical mode decomposition to improve the signal-to-noise ratio. Second, considering the limited information expressed by a one-dimensional signal, overlapping sampling is used to transform the signal into a two-dimensional structure that expresses its short-term and long-term changes simultaneously. The feature extraction network is built from depthwise separable convolutions to accelerate model operation. Finally, the model is globally optimized by combining the supervised contrastive loss with the mean square error loss. Experiments show that the average accuracy of the algorithm in classifying three fatigue states reaches 75.80%, a large improvement over other advanced algorithms, and the accuracy and feasibility of fatigue detection from single-channel EEG signals are significantly improved. The results provide strong support for the application of single-channel EEG signals and also offer a new idea for fatigue detection research.
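
The overlapping-sampling step can be sketched in a few lines; the window length and step below are hypothetical, chosen only to show how a 1-D EEG trace becomes a 2-D array that captures short-term (within a row) and long-term (across rows) change:

```python
import numpy as np

def to_2d_overlapping(signal: np.ndarray, win: int = 256, step: int = 64) -> np.ndarray:
    """Slice a 1-D signal into overlapping windows and stack them row-wise.

    Each row is a short-term segment; the row index tracks longer-term change,
    so the 2-D array expresses both time scales at once.
    """
    windows = np.lib.stride_tricks.sliding_window_view(signal, win)[::step]
    return np.ascontiguousarray(windows)

rng = np.random.default_rng(2)
eeg = rng.normal(size=5_000)          # stand-in for one channel of forehead EEG
image_like = to_2d_overlapping(eeg)
print(image_like.shape)               # (rows, 256), ready for a 2-D convolutional network
```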


Subject(s)
Algorithms , Electroencephalography , Fatigue , Forehead , Signal Processing, Computer-Assisted , Humans , Electroencephalography/methods , Fatigue/physiopathology , Fatigue/diagnosis , Signal-To-Noise Ratio
4.
J R Soc Interface ; 21(218): 20240222, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39226927

ABSTRACT

The use of wearable sensors to monitor vital signs is increasingly important in assessing individual health. However, their accuracy often falls short of that of dedicated medical devices, limiting their usefulness in a clinical setting. This study introduces a new Bayesian filtering (BF) algorithm that is designed to learn the statistical characteristics of signal and noise, allowing for optimal smoothing. The algorithm is able to adapt to changes in the signal-to-noise ratio (SNR) over time, improving performance through windowed analysis and Bayesian criterion-based smoothing. By evaluating the algorithm on heart-rate (HR) data collected from Garmin Vivoactive 4 smartwatches worn by individuals with amyotrophic lateral sclerosis and multiple sclerosis, it is demonstrated that BF provides superior SNR tracking and smoothing compared with non-adaptive methods. The results show that BF accurately captures SNR variability, reducing the root mean square error from 2.84 bpm to 1.21 bpm and the mean absolute relative error from 3.46% to 1.36%. These findings highlight the potential of BF as a preprocessing tool to enhance signal quality from wearable sensors, particularly in HR data, thereby expanding their applications in clinical and research settings.
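
As a toy illustration of window-wise, noise-aware smoothing (not the Bayesian filter described above), the sketch below estimates a rough per-window SNR and applies heavier averaging where the estimated noise is larger; all window sizes and constants are arbitrary assumptions:

```python
import numpy as np

def adaptive_smooth(hr: np.ndarray, win: int = 60) -> np.ndarray:
    """Toy windowed smoother: windows with lower estimated SNR get heavier averaging.

    This is only a stand-in for the Bayesian filter described in the paper.
    """
    out = hr.astype(float).copy()
    for start in range(0, len(hr), win):
        seg = hr[start:start + win].astype(float)
        noise_sd = np.std(np.diff(seg)) / np.sqrt(2)          # crude noise estimate
        signal_sd = max(np.std(seg) - noise_sd, 1e-6)
        snr = signal_sd / max(noise_sd, 1e-6)
        k = int(np.clip(round(10 / (1 + snr)), 1, 15))        # more smoothing when SNR is low
        kernel = np.ones(k) / k
        out[start:start + win] = np.convolve(seg, kernel, mode="same")
    return out

rng = np.random.default_rng(3)
t = np.arange(600)
hr_true = 70 + 10 * np.sin(t / 60)                     # slowly varying heart rate (bpm)
hr_meas = hr_true + rng.normal(scale=3.0, size=t.size)
hr_smooth = adaptive_smooth(hr_meas)
rmse = np.sqrt(np.mean((hr_smooth - hr_true) ** 2))
print(f"RMSE after smoothing: {rmse:.2f} bpm")
```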


Subject(s)
Algorithms , Bayes Theorem , Heart Rate , Signal-To-Noise Ratio , Wearable Electronic Devices , Humans , Heart Rate/physiology , Male , Female , Signal Processing, Computer-Assisted
5.
Med Eng Phys ; 131: 104232, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39284657

ABSTRACT

Different types of noise contaminating the surface electromyogram (EMG) signal may degrade the recognition performance. For noise removal, the type of noise has to first be identified. In this paper, we propose a real-time efficient system for identifying a clean EMG signal and noisy EMG signals contaminated with any one of the following three types of noise: electrocardiogram interference, spike noise, and power line interference. Two statistical descriptors, kurtosis and skewness, are used as input features for the cascading quadratic discriminant analysis classifier. An efficient simplification of kurtosis and skewness calculations that can reduce computation time and memory storage is proposed. The experimental results from the real-time system based on an ATmega 2560 microcontroller demonstrate that the kurtosis and skewness values show root mean square errors between the traditional and proposed efficient techniques of 0.08 and 0.09, respectively. The identification accuracy with five-fold cross-validation resulting from the quadratic discriminant analysis classifier is 96.00%.
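
A minimal sketch of the two-descriptor idea, using scipy's kurtosis and skewness and a single scikit-learn quadratic discriminant analysis classifier on synthetic segments; the cascading classifier structure and the simplified fixed-point computations of the paper are not reproduced:

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

def make_segment(kind: str, n: int = 1000) -> np.ndarray:
    """Synthetic 'EMG' segments: clean Gaussian-like versus spike-contaminated."""
    x = rng.normal(size=n)
    if kind == "spike":
        idx = rng.choice(n, size=10, replace=False)
        x[idx] += rng.choice([-8, 8], size=10)          # sparse large spikes
    return x

segments = [make_segment("clean") for _ in range(100)] + \
           [make_segment("spike") for _ in range(100)]
labels = np.array([0] * 100 + [1] * 100)

# Two statistical descriptors per segment, as in the paper.
features = np.array([[kurtosis(s), skew(s)] for s in segments])

clf = QuadraticDiscriminantAnalysis()
acc = cross_val_score(clf, features, labels, cv=5).mean()
print(f"5-fold accuracy: {acc:.2%}")
```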


Subject(s)
Electromyography , Signal Processing, Computer-Assisted , Electromyography/methods , Time Factors , Humans , Discriminant Analysis , Artifacts , Signal-To-Noise Ratio
6.
PLoS One ; 19(9): e0307619, 2024.
Article in English | MEDLINE | ID: mdl-39264977

ABSTRACT

Medical image security is paramount in the digital era but remains a significant challenge. This paper introduces an innovative zero-watermarking methodology tailored for medical imaging, ensuring robust protection without compromising image quality. We utilize Speeded-Up Robust Features (SURF) for high-precision feature extraction and singular value decomposition (SVD) to embed watermarks into the frequency domain, preserving the original image's integrity. Our methodology uniquely encodes watermarks in a non-intrusive manner, leveraging the robustness of the extracted features and the resilience of the SVD approach. The embedded watermark is imperceptible, maintaining the diagnostic value of medical images. Extensive experiments under various attacks, including Gaussian noise, JPEG compression, and geometric distortions, demonstrate the methodology's superior performance. The results reveal exceptional robustness, with high normalized correlation (NC) and peak signal-to-noise ratio (PSNR) values, outperforming existing techniques. Specifically, under Gaussian noise and rotation attacks, the watermark retrieved from the encrypted domain maintained an NC value close to 1.00, signifying near-perfect resilience. Even under severe attacks such as 30% cropping, the methodology exhibited a significantly higher NC than current state-of-the-art methods.
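
To make the SVD step concrete, here is a minimal sketch of embedding one watermark bit by perturbing the largest singular value of an image block; the SURF feature stage, the frequency-domain transform, and the full zero-watermarking protocol are omitted, and the block size and strength alpha are assumptions:

```python
import numpy as np

def embed_svd_watermark(block: np.ndarray, bit: int, alpha: float = 5.0) -> np.ndarray:
    """Embed one watermark bit by nudging the largest singular value of a block."""
    u, s, vt = np.linalg.svd(block, full_matrices=False)
    s[0] += alpha if bit else -alpha           # small, visually negligible shift
    return u @ np.diag(s) @ vt

def extract_svd_watermark(block: np.ndarray, reference: np.ndarray) -> int:
    """Recover the bit by comparing singular values against the unmarked reference."""
    s_marked = np.linalg.svd(block, compute_uv=False)
    s_ref = np.linalg.svd(reference, compute_uv=False)
    return int(s_marked[0] > s_ref[0])

rng = np.random.default_rng(5)
roi = rng.uniform(0, 255, size=(8, 8))         # one block of a medical image
marked = embed_svd_watermark(roi, bit=1)
print("recovered bit:", extract_svd_watermark(marked, roi))
print("max pixel change:", np.abs(marked - roi).max())
```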


Subject(s)
Algorithms , Computer Security , Humans , Diagnostic Imaging/methods , Signal-To-Noise Ratio , Image Processing, Computer-Assisted/methods , Data Compression/methods
7.
Nat Commun ; 15(1): 8062, 2024 Sep 14.
Article in English | MEDLINE | ID: mdl-39277607

ABSTRACT

Cryo-transmission electron microscopy (cryo-EM) of frozen hydrated specimens is an efficient method for the structural analysis of purified biological molecules. However, cryo-EM and cryo-electron tomography are limited by the low signal-to-noise ratio (SNR) of recorded images, making detection of smaller particles challenging. For dose-resilient samples often studied in the physical sciences, electron ptychography - a coherent diffractive imaging technique using 4D scanning transmission electron microscopy (4D-STEM) - has recently demonstrated excellent SNR and resolution down to tens of picometers for thin specimens imaged at room temperature. Here we apply 4D-STEM and ptychographic data analysis to frozen hydrated proteins, reaching sub-nanometer resolution 3D reconstructions. We employ low-dose cryo-EM with an aberration-corrected, convergent electron beam to collect 4D-STEM data for our reconstructions. The high frame rate of the electron detector allows us to record large datasets of electron diffraction patterns with substantial overlaps between the interaction volumes of adjacent scan positions, from which the scattering potentials of the samples are iteratively reconstructed. The reconstructed micrographs show strong SNR enabling the reconstruction of the structure of apoferritin protein at up to 5.8 Å resolution. We also show structural analysis of the Phi92 capsid and sheath, tobacco mosaic virus, and bacteriorhodopsin at slightly lower resolutions.


Subject(s)
Cryoelectron Microscopy , Signal-To-Noise Ratio , Cryoelectron Microscopy/methods , Scanning Transmission Electron Microscopy/methods , Imaging, Three-Dimensional/methods , Apoferritins/chemistry , Apoferritins/ultrastructure , Proteins/chemistry , Proteins/ultrastructure , Image Processing, Computer-Assisted/methods
8.
Sensors (Basel) ; 24(17)2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39275704

ABSTRACT

In vivo phosphorus-31 (31P) magnetic resonance spectroscopy (MRS) imaging (MRSI) is an important non-invasive imaging tool for studying cerebral energy metabolism, intracellular nicotinamide adenine dinucleotide (NAD) and redox ratio, and mitochondrial function. However, it is challenging to achieve high signal-to-noise ratio (SNR) 31P MRS/MRSI results owing to the low concentrations of phosphorus metabolites and the low gyromagnetic ratio (γ) of phosphorus. Many works have demonstrated that ultrahigh field (UHF) can significantly improve the 31P-MRS SNR; however, the 31P MRSI SNR has not yet been studied on a 10.5 Tesla (T) human scanner. In this study, we designed and constructed a novel 31P-1H dual-frequency loop-dipole probe that can operate at both 7 T and 10.5 T for a quantitative comparison of the 31P MRSI SNR between the two magnetic fields, taking into account the RF coil B1 fields (receive and transmit) and relaxation times. We found that the SNR of the 31P MRS signal is 1.5 times higher at 10.5 T than at 7 T, and that the power dependence of SNR on magnetic field strength (B0) is 1.9.
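
The reported field dependence amounts to fitting an exponent b in SNR ∝ B0^b. A minimal sketch of that fit is shown below; the SNR inputs are hypothetical placeholders chosen to reproduce the reported exponent of 1.9, not the paper's measured data:

```python
import numpy as np

def field_exponent(snr_low: float, snr_high: float,
                   b0_low: float = 7.0, b0_high: float = 10.5) -> float:
    """Exponent b in SNR ∝ B0**b, from SNR measured at two field strengths."""
    return np.log(snr_high / snr_low) / np.log(b0_high / b0_low)

# Hypothetical intrinsic SNR values (arbitrary units) after correcting for coil
# sensitivity and relaxation effects -- not the paper's measured data.
snr_7t, snr_105t = 1.00, 2.16
print(f"b ≈ {field_exponent(snr_7t, snr_105t):.2f}")
```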


Subject(s)
Magnetic Resonance Imaging , Magnetic Resonance Spectroscopy , Phosphorus , Signal-To-Noise Ratio , Humans , Magnetic Resonance Imaging/methods , Magnetic Resonance Imaging/instrumentation , Magnetic Resonance Spectroscopy/methods , Phosphorus/chemistry , Radio Waves , Phosphorus Isotopes , Phantoms, Imaging
9.
Sensors (Basel) ; 24(17)2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39275753

ABSTRACT

INTRODUCTION: The disco-vertebral junction (DVJ) of the lumbar spine contains thin structures with short T2 values, including the cartilaginous endplate (CEP) sandwiched between the bony vertebral endplate (VEP) and the nucleus pulposus (NP). We previously demonstrated that ultrashort-echo-time (UTE) MRI, compared to conventional MRI, is able to depict the tissues at the DVJ with improved contrast. In this study, we sought to further optimize UTE MRI by characterizing the contrast-to-noise ratio (CNR) of these tissues when either single echo or echo subtraction images are used and with varying echo times (TEs). METHODS: In four cadaveric lumbar spines, we acquired 3D Cones (a UTE sequence) images at varying TEs from 0.032 ms to 16 ms. Additionally, spin echo T1- and T2-weighted images were acquired. The CNRs of CEP-NP and CEP-VEP were measured in all source images and 3D Cones echo subtraction images. RESULTS: In the spin echo images, it was challenging to distinguish the CEP from the VEP, as both had low signal intensity. However, the 3D Cones source images at the shortest TE of 0.032 ms provided an excellent contrast between the CEP and the VEP. As the TE increased, the contrast decreased in the source images. In contrast, the 3D Cones echo subtraction images showed increasing CNR values as the second TE increased, reaching statistical significance when the second TE was above 10 ms (p < 0.05). CONCLUSIONS: Our study highlights the feasibility of incorporating UTE MRI for the evaluation of the DVJ and its advantages over conventional spin echo sequences for improving the contrast between the CEP and adjacent tissues. Additionally, modulation of the contrast for the target tissues can be achieved using either source images or subtraction images, as well as by varying the echo times.
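
A minimal sketch of the CNR measurement and of forming an echo-subtraction image from two echo times; all arrays and tissue values below are synthetic stand-ins (not the cadaveric data), so the relative CNRs here carry no physical meaning:

```python
import numpy as np

def cnr(roi_a: np.ndarray, roi_b: np.ndarray, noise_roi: np.ndarray) -> float:
    """Contrast-to-noise ratio between two tissue ROIs, normalized by background noise."""
    return abs(roi_a.mean() - roi_b.mean()) / noise_roi.std()

rng = np.random.default_rng(6)
shape, noise_sd = (16, 16), 5.0

# Stand-in magnitude ROIs at a short first echo (TE1) and a longer second echo (TE2).
cep_te1, cep_te2 = rng.normal(100, noise_sd, shape), rng.normal(40, noise_sd, shape)  # short-T2 CEP decays quickly
vep_te1, vep_te2 = rng.normal(10, noise_sd, shape), rng.normal(8, noise_sd, shape)    # bony VEP stays dark
air_te1, air_te2 = rng.normal(0, noise_sd, shape), rng.normal(0, noise_sd, shape)     # background noise ROI

print("CNR(CEP-VEP), TE1 source image:  %.1f" % cnr(cep_te1, vep_te1, air_te1))
print("CNR(CEP-VEP), subtraction image: %.1f" % cnr(cep_te1 - cep_te2, vep_te1 - vep_te2, air_te1 - air_te2))
```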


Subject(s)
Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Lumbar Vertebrae/diagnostic imaging , Intervertebral Disc/diagnostic imaging , Signal-To-Noise Ratio , Imaging, Three-Dimensional/methods , Nucleus Pulposus/diagnostic imaging
10.
PLoS One ; 19(9): e0308658, 2024.
Article in English | MEDLINE | ID: mdl-39269959

ABSTRACT

Spectral photon-counting computed tomography (SPCCT), a ground-breaking development in CT technology, has immense potential to address the persistent problem of metal artefacts in CT images. This study aims to evaluate the potential of Mars photon-counting CT technology in reducing metal artefacts, focusing on identifying and quantifying clinically significant materials in the presence of metal objects. A multi-material phantom was used, containing inserts of varying concentrations of hydroxyapatite (a mineral present in teeth, bones, and calcified plaque), iodine (used as a contrast agent), CT water (to mimic soft tissue), and adipose (as a fat substitute). Three sets of scans were acquired: with aluminium, with stainless steel, and without a metal insert as a reference dataset. Data acquisition was performed using a Mars SPCCT scanner (Microlab 5×120) operated at 118 kVp and 80 µA. The images were subsequently reconstructed into five energy bins: 7-40, 40-50, 50-60, 60-79, and 79-118 keV. Evaluation metrics including signal-to-noise ratio (SNR), linearity of attenuation profiles, root mean square error (RMSE), and area under the curve (AUC) were employed to assess the energy and material-density images with and without metal inserts. Results show decreased metal artefacts and a better signal-to-noise ratio (up to 25%) in the higher energy bins compared with the reference data. The attenuation profiles also demonstrated high linearity (R2 > 0.95) and low RMSE across all material concentrations, even in the presence of aluminium and steel. Material identification accuracy for iodine and hydroxyapatite (with and without metal inserts) remained consistent, with minimal impact on AUC values. For demonstration purposes, a biological sample was also scanned with a stainless steel volar implant and a cortical bone screw, and the images were objectively assessed to indicate the potential effectiveness of SPCCT in replicating real-world clinical scenarios.


Subject(s)
Metals , Phantoms, Imaging , Photons , Tomography, X-Ray Computed , Tomography, X-Ray Computed/methods , Metals/analysis , Metals/chemistry , Humans , Signal-To-Noise Ratio , Artifacts , Iodine/analysis , Durapatite/analysis
11.
Eur Radiol Exp ; 8(1): 103, 2024 Sep 10.
Article in English | MEDLINE | ID: mdl-39254920

ABSTRACT

BACKGROUND: We aimed to compare the capabilities of compressed sensing (CS) and deep learning reconstruction (DLR) with those of conventional parallel imaging (PI) for improving image quality while reducing examination time in female pelvic 1.5-T magnetic resonance imaging (MRI). METHODS: Fifty-two consecutive female patients with various pelvic diseases underwent MRI with T1- and T2-weighted sequences using CS and PI. All CS data were reconstructed with and without DLR. The signal-to-noise ratio (SNR) of muscle and the contrast-to-noise ratio (CNR) between fat tissue and iliac muscle on T1-weighted images (T1WI) and between myometrium and straight muscle on T2-weighted images (T2WI) were determined through region-of-interest measurements. Overall image quality (OIQ) and diagnostic confidence level (DCL) were evaluated on 5-point scales. SNRs and CNRs were compared using Tukey's test, and qualitative indexes using the Wilcoxon signed-rank test. RESULTS: SNRs of T1WI and T2WI obtained using CS with DLR were higher than those using CS without DLR or conventional PI (p < 0.010). CNRs of T1WI and T2WI obtained using CS with DLR were higher than those using CS without DLR or conventional PI (p < 0.003). OIQ of T1WI and T2WI obtained using CS with DLR was higher than that using CS without DLR or conventional PI (p < 0.001). DCL of T2WI obtained using CS with DLR was higher than that using conventional PI or CS without DLR (p < 0.001). CONCLUSION: CS with DLR provided better image quality and a shorter examination time than PI for female pelvic 1.5-T MRI. RELEVANCE STATEMENT: CS with DLR can be considered effective for attaining better image quality and a shorter examination time in female pelvic MRI at 1.5 T compared with PI. KEY POINTS: Patients underwent MRI with T1- and T2-weighted sequences using CS and PI. All CS data were reconstructed with and without DLR. CS with DLR allowed examination times significantly shorter than those of PI and provided significantly higher SNRs and CNRs, as well as OIQ.
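
For the qualitative scores, the Wilcoxon signed-rank comparison can be sketched as follows; the paired 5-point scores below are invented for illustration only:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired 5-point overall-image-quality scores for the same 12 patients.
oiq_pi     = np.array([3, 3, 4, 2, 3, 4, 3, 2, 4, 3, 3, 2])   # conventional PI
oiq_cs_dlr = np.array([4, 5, 5, 3, 4, 5, 5, 3, 5, 4, 4, 4])   # CS with DLR

stat, p = wilcoxon(oiq_cs_dlr, oiq_pi)
print(f"Wilcoxon signed-rank: statistic={stat}, p={p:.4f}")
```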


Subject(s)
Deep Learning , Magnetic Resonance Imaging , Humans , Female , Magnetic Resonance Imaging/methods , Middle Aged , Adult , Aged , Signal-To-Noise Ratio , Pelvis/diagnostic imaging , Young Adult , Aged, 80 and over
12.
Biomed Phys Eng Express ; 10(6)2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39264056

ABSTRACT

Objective. Cone beam CT (CBCT) typically has severe image artifacts and inaccurate HU values, which limits its application in radiation medicine. Scholars have proposed using the cycle-consistent generative adversarial network (Cycle-GAN) to address these issues, but the generation quality of Cycle-GAN needs to be improved. This issue is exacerbated by the inherent size discrepancies between pelvic CT scans from different patients, as well as varying slice positions within the same patient, which introduce a scaling problem during training. Approach. We introduced the Enhanced Edge and Mask (EEM) approach in our structurally constrained Cycle-EEM-GAN. This approach is designed not only to solve the scaling problem but also to significantly improve the generation quality of the synthetic CT images. Data from sixty pelvic patients were then used to investigate the generation of synthetic CT (sCT) from CBCT. Main results. The mean absolute error (MAE), root mean square error (RMSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and spatial nonuniformity (SNU) are used to assess the quality of the sCT generated from CBCT. Compared with CBCT images, the MAE improved from 53.09 to 37.74, RMSE from 185.22 to 146.63, SNU from 0.38 to 0.35, PSNR from 24.68 to 32.33, and SSIM from 0.624 to 0.981. Cycle-EEM-GAN also outperformed Cycle-GAN in terms of visual evaluation and loss. Significance. Cycle-EEM-GAN improves the quality of CBCT images, making structural details clear while preventing image scaling during generation, which further promotes the application of CBCT in radiotherapy.
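
The evaluation metrics named above are easy to reproduce in a few lines; the sketch below assumes scikit-image for PSNR and SSIM and uses random arrays as stand-ins for CBCT/sCT slices:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def image_quality_report(reference: np.ndarray, test: np.ndarray, data_range: float) -> dict:
    """MAE, RMSE, PSNR, and SSIM between a synthetic CT and its reference CT."""
    err = test - reference
    return {
        "MAE": float(np.mean(np.abs(err))),
        "RMSE": float(np.sqrt(np.mean(err ** 2))),
        "PSNR": peak_signal_noise_ratio(reference, test, data_range=data_range),
        "SSIM": structural_similarity(reference, test, data_range=data_range),
    }

rng = np.random.default_rng(7)
ct = rng.uniform(-1000, 1000, size=(128, 128))     # stand-in for a planning CT slice (HU)
sct = ct + rng.normal(scale=40, size=ct.shape)     # stand-in for a synthetic CT slice
print(image_quality_report(ct, sct, data_range=2000.0))
```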


Subject(s)
Algorithms , Cone-Beam Computed Tomography , Image Processing, Computer-Assisted , Humans , Cone-Beam Computed Tomography/methods , Image Processing, Computer-Assisted/methods , Signal-To-Noise Ratio , Pelvis/diagnostic imaging , Neural Networks, Computer , Artifacts
13.
Rev Sci Instrum ; 95(9)2024 Sep 01.
Article in English | MEDLINE | ID: mdl-39248622

ABSTRACT

Ambulatory electrocardiogram (ECG) testing plays a crucial role in the early detection, diagnosis, treatment evaluation, and prevention of cardiovascular diseases. Clear ECG signals are essential for the subsequent analysis of these conditions. However, ECG signals obtained during exercise are susceptible to various noise interferences, including electrode motion artifact, baseline wander, and muscle artifact. These interferences can blur the characteristic ECG waveforms, potentially leading to misjudgment by physicians. To suppress noise in ECG signals more effectively, this paper proposes a novel deep learning-based noise reduction method. This method enhances the diffusion model network by introducing conditional noise, designing a multi-kernel convolutional transformer network structure based on noise prediction, and integrating the diffusion model inverse process to achieve noise reduction. Experiments were conducted on the QT database and MIT-BIH Noise Stress Test Database and compared with the algorithms in other papers to verify the effectiveness of the present method. The results indicate that the proposed method achieves optimal noise reduction performance across both statistical and distance-based evaluation metrics as well as waveform visualization, surpassing eight other state-of-the-art methods. The network proposed in this paper demonstrates stable performance in addressing electrode motion artifact, baseline wander, muscle artifact, and the mixed complex noise of these three types, and it is anticipated to be applied in future noise reduction analysis of clinical dynamic ECG signals.


Subject(s)
Algorithms , Artifacts , Humans , Electrocardiography, Ambulatory/instrumentation , Electrocardiography, Ambulatory/methods , Signal-To-Noise Ratio , Signal Processing, Computer-Assisted
14.
JASA Express Lett ; 4(9)2024 Sep 01.
Article in English | MEDLINE | ID: mdl-39248676

ABSTRACT

A test is proposed to characterize the performance of speech recognition systems. The QuickSIN test is used by audiologists to measure the ability of humans to recognize continuous speech in noise. This test yields the signal-to-noise ratio at which individuals can correctly recognize 50% of the keywords in low-context sentences. It is argued that a metric for automatic speech recognizers will ground the performance of automatic speech-in-noise recognizers to human abilities. Here, it is demonstrated that the performance of modern recognizers, built using millions of hours of unsupervised training data, is anywhere from normal to mildly impaired in noise compared to human participants.
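
The core quantity, the SNR at which 50% of keywords are recognized, can be recovered by interpolating a recognizer's accuracy across the test's SNR steps. A minimal sketch with hypothetical accuracy values (the QuickSIN sentence lists and scoring rules are not reproduced):

```python
import numpy as np

# Hypothetical keyword accuracy of a recognizer at QuickSIN-style SNR steps (dB).
snr_db   = np.array([25, 20, 15, 10, 5, 0])
accuracy = np.array([0.98, 0.95, 0.88, 0.72, 0.45, 0.20])

# SNR-50: the signal-to-noise ratio at which 50% of keywords are recognized,
# found by interpolating accuracy as a function of SNR.
order = np.argsort(accuracy)                       # np.interp needs increasing x values
snr50 = np.interp(0.5, accuracy[order], snr_db[order])
print(f"SNR-50 ≈ {snr50:.1f} dB")
```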


Subject(s)
Noise , Signal-To-Noise Ratio , Speech Perception , Speech Recognition Software , Humans , Speech Perception/physiology , Adult , Male , Female
15.
Chem Pharm Bull (Tokyo) ; 72(9): 800-803, 2024.
Article in English | MEDLINE | ID: mdl-39231692

ABSTRACT

A noise filter, which is usually attached to a chromatography detector, was applied to improve the signal-to-noise ratio (S/N) of a chromatogram. The objective of this paper is to elucidate the effect of noise filtering in the UV detector of an ultra-HPLC (UHPLC) system on the statistical reliability of repeatability evaluated chemometrically by the function of mutual information (FUMI) theory. To examine this reliability in the UHPLC system with noise filtering, the standard deviation (SD) values of the area of baseline fluctuations over a peak region k (s(k)) were obtained from six chromatograms acquired with noise filtering. The average of the s(k) values (σ̂) was then calculated (n = 6) and applied as a surrogate for the population SD. All s(k)/σ̂ values were within the 95% confidence intervals (CIs) at 50 degrees of freedom, indicating that the chemometrically estimated relative SD (RSD) of a peak area and the RSD obtained from at least 50 repeated measurements have equivalent reliability.
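
A minimal sketch of this kind of check, assuming the standard chi-square-based 95% confidence interval for the ratio of a sample SD to a population SD at 50 degrees of freedom; the s(k) values are placeholders and the FUMI-theory estimation itself is not reproduced:

```python
import numpy as np
from scipy.stats import chi2

df = 50                                               # degrees of freedom used in the paper
lo, hi = np.sqrt(chi2.ppf([0.025, 0.975], df) / df)   # 95% CI for s/sigma under normal errors

# Placeholder SD values of baseline-fluctuation area from six filtered chromatograms.
s_k = np.array([0.98, 1.03, 1.01, 0.97, 1.05, 0.99])
sigma_hat = s_k.mean()                                # used in place of the population SD
ratios = s_k / sigma_hat

print(f"95% CI for s/sigma at df={df}: [{lo:.3f}, {hi:.3f}]")
print("all ratios inside CI:", bool(np.all((ratios >= lo) & (ratios <= hi))))
```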


Subject(s)
Signal-To-Noise Ratio , Chromatography, High Pressure Liquid , Reproducibility of Results , Ultraviolet Rays , Spectrophotometry, Ultraviolet
16.
Med Image Anal ; 98: 103306, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39163786

ABSTRACT

Positron emission tomography (PET) imaging is widely used in medical imaging for analyzing neurological disorders and related brain diseases. Full-dose imaging ensures PET image quality but raises concerns about the potential health risks of radiation exposure. The contradiction between reducing radiation exposure and maintaining diagnostic performance can be effectively addressed by reconstructing low-dose PET (L-PET) images to the same high quality as full-dose PET (F-PET). This paper introduces the Multi Pareto Generative Adversarial Network (MPGAN) to achieve 3D end-to-end denoising of L-PET images of the human brain. MPGAN consists of two key modules: the diffused multi-round cascade generator (GDmc) and the dynamic Pareto-efficient discriminator (DPed), which play a zero-sum game for n (n ∈ {1, 2, 3}) rounds to ensure the quality of the synthesized F-PET images. A Pareto-efficient dynamic discrimination process is introduced in DPed to adaptively adjust the weights of the sub-discriminators for improved discrimination output. We validated the performance of MPGAN using three datasets, including two independent datasets and one mixed dataset, and compared it with 12 recent competing models. Experimental results indicate that the proposed MPGAN provides an effective solution for 3D end-to-end denoising of L-PET images of the human brain, meets clinical standards, and achieves state-of-the-art performance on commonly used metrics.


Subject(s)
Brain , Positron-Emission Tomography , Humans , Positron-Emission Tomography/methods , Brain/diagnostic imaging , Signal-To-Noise Ratio , Radiation Dosage , Algorithms , Neural Networks, Computer , Imaging, Three-Dimensional/methods , Image Processing, Computer-Assisted/methods
17.
Med Image Anal ; 98: 103327, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39191093

ABSTRACT

Low-dose computed tomography (LDCT) denoising tasks face significant challenges in practical imaging scenarios. Supervised methods encounter difficulties in real-world scenarios as there are no paired data for training. Moreover, when applied to datasets with varying noise patterns, these methods may experience decreased performance owing to the domain gap. Conversely, unsupervised methods do not require paired data and can be directly trained on real-world data. However, they often exhibit inferior performance compared to supervised methods. To address this issue, it is necessary to leverage the strengths of these supervised and unsupervised methods. In this paper, we propose a novel domain adaptive noise reduction framework (DANRF), which integrates both knowledge transfer and style generalization learning to effectively tackle the domain gap problem. Specifically, an iterative knowledge transfer method with knowledge distillation is selected to train the target model using unlabeled target data and a pre-trained source model trained with paired simulation data. Meanwhile, we introduce the mean teacher mechanism to update the source model, enabling it to adapt to the target domain. Furthermore, an iterative style generalization learning process is also designed to enrich the style diversity of the training dataset. We evaluate the performance of our approach through experiments conducted on multi-source datasets. The results demonstrate the feasibility and effectiveness of our proposed DANRF model in multi-source LDCT image processing tasks. Given its hybrid nature, which combines the advantages of supervised and unsupervised learning, and its ability to bridge domain gaps, our approach is well-suited for improving practical low-dose CT imaging in clinical settings. Code for our proposed approach is publicly available at https://github.com/tyfeiii/DANRF.
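
The mean-teacher mechanism mentioned above reduces to an exponential-moving-average update of the teacher's parameters. A minimal sketch on plain arrays, not the DANRF code:

```python
import numpy as np

def ema_update(teacher: dict, student: dict, decay: float = 0.99) -> dict:
    """Mean-teacher update: the teacher is an exponential moving average of the student.

    Parameters are represented here as plain numpy arrays keyed by name.
    """
    return {name: decay * teacher[name] + (1.0 - decay) * student[name]
            for name in teacher}

# Toy "models" with a single weight tensor each.
teacher = {"w": np.zeros((3, 3))}
student = {"w": np.ones((3, 3))}

for step in range(200):          # pretend the student is being trained on target-domain data
    teacher = ema_update(teacher, student)
print(teacher["w"][0, 0])        # slowly approaches the student's weight (≈ 0.87 after 200 steps)
```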


Subject(s)
Tomography, X-Ray Computed , Humans , Signal-To-Noise Ratio , Algorithms , Machine Learning , Image Processing, Computer-Assisted/methods
18.
Med Image Anal ; 98: 103325, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39208560

ABSTRACT

Recent advances in generative models have paved the way for enhanced generation of natural and medical images, including synthetic brain MRIs. However, the mainstay of current AI research focuses on optimizing synthetic MRIs with respect to visual quality (such as signal-to-noise ratio) while lacking insights into their relevance to neuroscience. To generate high-quality T1-weighted MRIs relevant for neuroscience discovery, we present a two-stage diffusion probabilistic model (called BrainSynth) to synthesize high-resolution MRIs conditionally dependent on metadata (such as age and sex). We then propose a novel procedure to assess the quality of BrainSynth according to how well its synthetic MRIs capture macrostructural properties of brain regions and how accurately they encode the effects of age and sex. Results indicate that more than half of the brain regions in our synthetic MRIs are anatomically plausible, i.e., the effect size between real and synthetic MRIs is small relative to biological factors such as age and sex. Moreover, the anatomical plausibility varies across cortical regions according to their geometric complexity. As it stands, the MRIs generated by BrainSynth significantly improve the training of a predictive model to identify accelerated aging effects in an independent study. These results indicate that our model accurately captures the brain's anatomical information and could thus enrich the data of underrepresented samples in a study. The code of BrainSynth will be released as part of the MONAI project at https://github.com/Project-MONAI/GenerativeModels.
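
The anatomical-plausibility criterion rests on effect sizes between real and synthetic regional measures. A minimal sketch of a Cohen's d comparison with hypothetical regional values (not the paper's data):

```python
import numpy as np

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d: standardized mean difference between two samples."""
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled_sd

rng = np.random.default_rng(8)
# Hypothetical cortical thickness (mm) of one region measured on real vs. synthetic MRIs.
real_mri      = rng.normal(2.50, 0.15, size=200)
synthetic_mri = rng.normal(2.52, 0.15, size=200)
d = cohens_d(synthetic_mri, real_mri)
print(f"effect size real vs. synthetic: d = {d:.2f} (small => anatomically plausible)")
```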


Subject(s)
Imaging, Three-Dimensional , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Imaging, Three-Dimensional/methods , Female , Male , Metadata , Brain/diagnostic imaging , Adult , Middle Aged , Signal-To-Noise Ratio
19.
Article in English | MEDLINE | ID: mdl-39213274

ABSTRACT

The EMG filling curve characterizes the EMG filling process and the change in shape of the EMG probability density function (PDF) over the entire force range of a muscle. We aim to understand the relation between physiological and recording variables and the resulting EMG filling curves. We therefore present an analytical and simulation study to explain how the filling-curve patterns relate to specific changes in the motor unit potential (MUP) waveforms and motor unit (MU) firing rates, the two main factors affecting the EMG PDF, as well as to recording conditions in terms of noise level. We compare the analytical results with simulated cases, verifying perfect agreement with the analytical model. Finally, we present a set of real EMG filling curves with distinct patterns to explain the information about MUP amplitudes, MU firing rates, and noise level that these patterns provide in light of the analytical study. Our findings show that the filling factor increases when the firing rate increases or when newly recruited motor units have potentials of amplitude smaller than or equal to that of the earlier ones; conversely, the filling factor decreases when newly recruited potentials are larger in amplitude than the previous ones. Filling curves are shown to be consistent under changes of the MUP waveform and stretched under MUP amplitude scaling. Our findings also show how additive noise affects the filling curve and can even prevent reliable information from being obtained from the EMG PDF statistics.


Subject(s)
Action Potentials , Algorithms , Computer Simulation , Electromyography , Motor Neurons , Muscle, Skeletal , Signal-To-Noise Ratio , Electromyography/methods , Humans , Motor Neurons/physiology , Muscle, Skeletal/physiology , Action Potentials/physiology , Muscle Contraction/physiology , Reproducibility of Results , Recruitment, Neurophysiological/physiology , Models, Statistical
20.
Phys Med Biol ; 69(18)2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39137803

ABSTRACT

Objective. Multi-energy CT performed with a photon-counting detector has a wide range of applications, especially in imaging of multiple contrast agents. However, static multi-energy (SME) CT imaging suffers from higher statistical noise because of the increased number of energy bins with static energy thresholds. Our team has proposed a dynamic dual-energy (DDE) CT detector model and a corresponding iterative reconstruction algorithm to solve this problem. However, a rigorous and detailed analysis of the statistical noise characteristics of DDE CT has been lacking. Approach. Starting from the properties of the Poisson random variable, this paper analyzes the noise characteristics of DDE CT and compares them with those of SME CT. It is proved that the multi-energy CT projections and reconstructed images calculated by the proposed DDE CT algorithm have less statistical noise than those of SME CT. Main results. Simulations and experiments verify that the expectations of the multi-energy CT projections calculated from DDE CT are the same as those of the SME projections, while the variance of the former is smaller. We further analyze the convergence of the iterative DDE CT algorithm through simulations and show that the derived noise characteristics hold under different CT imaging configurations. Significance. The low statistical noise characteristics demonstrate the value of DDE CT imaging technology.
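
The analysis starts from the defining property of Poisson photon counts, that the variance equals the mean, so the relative noise of a projection falls as counts increase. A minimal numerical illustration:

```python
import numpy as np

rng = np.random.default_rng(9)

# Photon counts in one detector pixel follow a Poisson distribution whose
# variance equals its mean, so relative noise shrinks as counts grow.
for mean_counts in (100, 1_000, 10_000):
    counts = rng.poisson(mean_counts, size=100_000)
    snr = counts.mean() / counts.std()
    print(f"mean={mean_counts:>6}: measured variance={counts.var():.0f}, "
          f"SNR={snr:.1f} (≈ sqrt(mean)={np.sqrt(mean_counts):.1f})")
```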


Subject(s)
Image Processing, Computer-Assisted , Signal-To-Noise Ratio , Tomography, X-Ray Computed , Tomography, X-Ray Computed/methods , Image Processing, Computer-Assisted/methods , Algorithms , Phantoms, Imaging