Results 1 - 20 of 8,620
1.
Biometrics; 80(3), 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39248123

ABSTRACT

We present a new method for constructing valid covariance functions of Gaussian processes for spatial analysis in irregular, non-convex domains such as bodies of water. Standard covariance functions based on geodesic distances are not guaranteed to be positive definite on such domains, while existing non-Euclidean approaches fail to respect the partially Euclidean nature of these domains where the geodesic distance agrees with the Euclidean distances for some pairs of points. Using a visibility graph on the domain, we propose a class of covariance functions that preserve Euclidean-based covariances between points that are connected in the domain while incorporating the non-convex geometry of the domain via conditional independence relationships. We show that the proposed method preserves the partially Euclidean nature of the intrinsic geometry on the domain while maintaining validity (positive definiteness) and marginal stationarity of the covariance function over the entire parameter space, properties which are not always fulfilled by existing approaches to construct covariance functions on non-convex domains. We provide useful approximations to improve computational efficiency, resulting in a scalable algorithm. We compare the performance of our method with those of competing state-of-the-art methods using simulation studies on synthetic non-convex domains. The method is applied to data regarding acidity levels in the Chesapeake Bay, showing its potential for ecological monitoring in real-world spatial applications on irregular domains.
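
The motivation above can be illustrated with a small numerical check: an exponential covariance built from geodesic (shortest-path) distances on a non-convex domain is not guaranteed to be positive definite, whereas the Euclidean-distance version is. The sketch below (NumPy/SciPy) uses a rectangular notch as the non-convex obstacle and a crude segment-sampling visibility test; the domain shape, length scale, and graph construction are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import shortest_path

rng = np.random.default_rng(1)

# Sample points in a C-shaped domain: the unit square minus a rectangular notch.
pts = rng.uniform(0, 1, size=(400, 2))
in_notch = (pts[:, 0] > 0.4) & (pts[:, 1] > 0.25) & (pts[:, 1] < 0.75)
pts = pts[~in_notch][:150]

d_euc = cdist(pts, pts)

def visible(a, b, n=20):
    # Straight segment between a and b stays inside the domain (avoids the notch).
    seg = a + np.linspace(0.0, 1.0, n)[:, None] * (b - a)
    return not np.any((seg[:, 0] > 0.4) & (seg[:, 1] > 0.25) & (seg[:, 1] < 0.75))

# Visibility graph weighted by Euclidean length; np.inf marks non-edges.
W = np.full_like(d_euc, np.inf)
for i in range(len(pts)):
    for j in range(i + 1, len(pts)):
        if visible(pts[i], pts[j]):
            W[i, j] = W[j, i] = d_euc[i, j]
d_geo = shortest_path(W, method="D")            # geodesic (graph) distances

K_euc = np.exp(-d_euc / 0.2)                    # exponential covariance, Euclidean distance
K_geo = np.exp(-d_geo / 0.2)                    # same form, geodesic distance

print("min eigenvalue, Euclidean:", np.linalg.eigvalsh(K_euc).min())  # nonnegative up to round-off
print("min eigenvalue, geodesic: ", np.linalg.eigvalsh(K_geo).min())  # not guaranteed nonnegative
```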


Subject(s)
Algorithms, Computer Simulation, Spatial Analysis, Statistical Models, Normal Distribution, Biometry/methods
2.
PLoS One; 19(9): e0306706, 2024.
Article in English | MEDLINE | ID: mdl-39240820

ABSTRACT

In the field of image processing, common noise types include Gaussian noise, salt-and-pepper noise, speckle noise, uniform noise, and pulse noise. Different types of noise require different denoising algorithms and techniques to maintain image quality and fidelity. Traditional image denoising methods not only remove noise but also discard image detail; they cannot guarantee clean removal of the noise while preserving the true signal of the image. To address these issues, an image denoising method combining an improved threshold function with the wavelet transform is proposed. Unlike traditional threshold functions, the improved threshold function is continuous, which avoids the pseudo-Gibbs effect after denoising and improves image quality. In the proposed approach, the output of the finite ridgelet (ridge wave) transform is first combined with the wavelet transform to improve denoising performance; an improved threshold function is then introduced to enhance the quality of the reconstructed image. To evaluate the performance of different algorithms, Gaussian noise of different densities was added to black-and-white and color Lena images. The results showed that, when Gaussian noise of variance 0.01 was added to the black-and-white image, the peak signal-to-noise ratio of the proposed method increased by 2.58 dB and the mean square error decreased by 0.10 dB. The proposed method also had a minimum denoising time of only 13 ms, saving 9 ms and 3 ms compared with the hard-threshold algorithm (Hard TA) and soft-threshold algorithm (Soft TA), respectively, and it exhibited higher stability, with an average similarity error fluctuating within 0.89%. These results indicate that the method has smaller errors and better stability in image denoising and can be applied to digital image denoising, helping to advance image denoising technology.
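
As a rough illustration of the wavelet-thresholding step, the sketch below denoises a synthetic image with PyWavelets, using a continuous (soft) threshold as a stand-in for the paper's improved threshold function, whose exact form is not given in the abstract; the finite ridgelet stage and the Lena test images are omitted.

```python
import numpy as np
import pywt

def continuous_threshold(c, t):
    # Soft thresholding: continuous in c, used here as a stand-in for the
    # paper's improved threshold function.
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def wavelet_denoise(image, wavelet="db4", level=2, sigma=0.1):
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    t = sigma * np.sqrt(2.0 * np.log(image.size))        # universal threshold
    new_coeffs = [coeffs[0]]                              # keep approximation band
    for detail in coeffs[1:]:
        new_coeffs.append(tuple(continuous_threshold(d, t) for d in detail))
    rec = pywt.waverec2(new_coeffs, wavelet)
    return rec[:image.shape[0], :image.shape[1]]          # crop possible padding

def psnr(ref, img):
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)

rng = np.random.default_rng(0)
clean = np.zeros((128, 128)); clean[32:96, 32:96] = 1.0   # synthetic test image
noisy = clean + rng.normal(0.0, 0.1, clean.shape)         # Gaussian noise, variance 0.01
print("PSNR noisy:   ", psnr(clean, noisy))
print("PSNR denoised:", psnr(clean, wavelet_denoise(noisy)))
```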


Subject(s)
Algorithms, Computer-Assisted Image Processing, Signal-to-Noise Ratio, Wavelet Analysis, Computer-Assisted Image Processing/methods, Normal Distribution
3.
Phys Med Biol; 69(18), 2024 Sep 10.
Article in English | MEDLINE | ID: mdl-39159667

ABSTRACT

Objective. Acollinearity of annihilation photons (APA) introduces spatial blur in positron emission tomography (PET) imaging. This phenomenon increases proportionally with the scanner diameter and it has been shown to follow a Gaussian distribution. This last statement can be interpreted in two ways: the magnitude of the acollinearity angle, or the angular deviation of annihilation photons from perfect collinearity. As the former constitutes the partial integral of the latter, a misinterpretation could have significant consequences on the resulting spatial blurring. Previous research investigating the impact of APA in PET imaging has assumed the Gaussian nature of its angular deviation, which is consistent with experimental results. However, a comprehensive analysis of several simulation software packages for PET data acquisition revealed that the magnitude of APA was implemented as a Gaussian distribution. Approach. We quantified the impact of this misinterpretation of APA by comparing simulations obtained with GATE, which is one of these simulation programs, to an in-house modification of GATE that models APA deviation as following a Gaussian distribution. Main results. We show that the APA misinterpretation not only alters the spatial blurring profile in image space, but also considerably underestimates the impact of APA on spatial resolution. For an ideal PET scanner with a diameter of 81 cm, the APA point source response simulated under the first interpretation has a cusp shape with 0.4 mm FWHM. This is significantly different from the expected Gaussian point source response of 2.1 mm FWHM reproduced under the second interpretation. Significance. Although this misinterpretation has been found in several PET simulation tools, it has had a limited impact on the simulated spatial resolution of current PET scanners due to its small magnitude relative to the other factors. However, the inaccuracy it introduces in estimating the overall spatial resolution of PET scanners will increase as the performance of newer devices improves.
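
The two readings of "APA is Gaussian" can be compared with a short Monte Carlo sketch: sampling the magnitude of the acollinearity angle from a Gaussian (with a uniform azimuth) produces a cusp-shaped transverse blur, while sampling the angular deviation itself as Gaussian produces a Gaussian blur. The angular width and lever arm below are illustrative assumptions, not values taken from the paper or from GATE.

```python
import numpy as np

rng = np.random.default_rng(0)
R = 405.0                       # lever arm in mm (half of an 81 cm scanner diameter), illustrative
sigma = np.deg2rad(0.13)        # illustrative width of the acollinearity distribution
n = 1_000_000

# Interpretation 1 (common misimplementation): the MAGNITUDE of the acollinearity
# angle is Gaussian, with an isotropic azimuthal direction.
theta_mag = np.abs(rng.normal(0.0, sigma, n))
phi = rng.uniform(0.0, 2.0 * np.pi, n)
dx_mag = R * theta_mag * np.cos(phi)           # transverse displacement, cusp-shaped

# Interpretation 2 (physical): the angular DEVIATION in each transverse direction
# is Gaussian, so the projected displacement is Gaussian as well.
dx_dev = R * rng.normal(0.0, sigma, n)

def fwhm(dx, bins=201, lim=5.0):
    h, edges = np.histogram(dx, bins=bins, range=(-lim, lim))
    above = edges[:-1][h >= h.max() / 2.0]
    return above[-1] - above[0]

print(f"magnitude-as-Gaussian blur FWHM : {fwhm(dx_mag):.2f} mm (cusp-shaped)")
print(f"deviation-as-Gaussian blur FWHM : {fwhm(dx_dev):.2f} mm (Gaussian)")
```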


Subject(s)
Monte Carlo Method, Positron Emission Tomography, Positron Emission Tomography/instrumentation, Computer-Assisted Image Processing/methods, Photons, Normal Distribution
4.
J Chem Inf Model; 64(16): 6623-6635, 2024 Aug 26.
Article in English | MEDLINE | ID: mdl-39143923

ABSTRACT

Tunnels are structural conduits in biomolecules responsible for transporting chemical compounds and solvent molecules from the active site. They have been shown to be present in a wide variety of enzymes across all functional and structural classes. However, the study of such pathways is experimentally challenging, because they are typically transient. Computational methods, such as molecular dynamics (MD) simulations, have been successfully proposed to explore tunnels. Conventional MD (cMD) provides structural details to characterize tunnels but suffers from sampling limitations to capture rare tunnel openings on longer time scales. Therefore, in this study, we explored the potential of Gaussian accelerated MD (GaMD) simulations to improve the exploration of complex tunnel networks in enzymes. We used the haloalkane dehalogenase LinB and its two variants with engineered transport pathways, which are not only well-known for their application potential but have also been extensively studied experimentally and computationally regarding their tunnel networks and their importance in multistep catalytic reactions. Our study demonstrates that GaMD efficiently improves tunnel sampling and allows the identification of all known tunnels for LinB and its two mutants. Furthermore, the improved sampling provided insight into a previously unknown transient side tunnel (ST). The extensive conformational landscape explored by GaMD simulations allowed us to investigate in detail the mechanism of ST opening. We determined variant-specific dynamic properties of ST opening, which were previously inaccessible due to limited sampling of cMD. Our comprehensive analysis supports multiple indicators of the functional relevance of the ST, emphasizing its potential significance beyond structural considerations. In conclusion, our research proves that the GaMD method can overcome the sampling limitations of cMD for the effective study of tunnels in enzymes, providing further means for identifying rare tunnels in enzymes with the potential for drug development, precision medicine, and rational protein engineering.
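
For reference, the GaMD boost potential has a standard harmonic form applied below an energy threshold. The sketch below applies the generic equations (recalled from the GaMD method literature, not from this study) to a synthetic potential-energy trace; all numbers are made up, and a real application would use energies from a cMD equilibration run.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic potential-energy trace standing in for a short cMD equilibration (kcal/mol).
V = rng.normal(-1000.0, 3.0, size=5000)

Vmax, Vmin, Vavg, sigV = V.max(), V.min(), V.mean(), V.std()
sigma0 = 6.0   # user-chosen upper limit on the boost-potential standard deviation (kcal/mol)

# "Lower bound" GaMD settings: threshold E = Vmax, force constant k = k0 / (Vmax - Vmin).
k0 = min(1.0, (sigma0 / sigV) * (Vmax - Vmin) / (Vmax - Vavg))
k = k0 / (Vmax - Vmin)
E = Vmax

dV = np.where(V < E, 0.5 * k * (E - V) ** 2, 0.0)   # harmonic boost added whenever V < E
print(f"k0 = {k0:.3f}, mean boost = {dV.mean():.2f} kcal/mol, std = {dV.std():.2f} kcal/mol")

# Energetic reweighting factors for recovering unbiased ensemble averages
# (cumulant expansions are common in practice; the exact exponential is shown here).
kB_T = 0.001987 * 300.0
weights = np.exp(dV / kB_T)
weights /= weights.sum()
```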


Subject(s)
Hydrolases, Molecular Dynamics Simulation, Hydrolases/chemistry, Hydrolases/metabolism, Protein Conformation, Normal Distribution, Catalytic Domain, Proteins/chemistry, Proteins/metabolism
5.
J Chem Inf Model; 64(17): 6880-6898, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39197061

ABSTRACT

Binding of partners and mutations highly affects the conformational dynamics of KRAS4B, which is of significance for deeply understanding its function. Gaussian accelerated molecular dynamics (GaMD) simulations followed by deep learning (DL) and principal component analysis (PCA) were carried out to probe the effect of G12C and binding of three partners NF1, RAF1, and SOS1 on the conformation alterations of KRAS4B. DL reveals that G12C and binding of partners result in alterations in the contacts of key structure domains, such as the switch domains SW1 and SW2 together with the loops L4, L5, and P-loop. Binding of NF1, RAF1, and SOS1 constrains the structural fluctuation of SW1, SW2, L4, and L5; on the contrary, G12C leads to the instability of these four structure domains. The analyses of free energy landscapes (FELs) and PCA also show that binding of partners maintains the stability of the conformational states of KRAS4B while G12C induces greater mobility of the switch domains SW1 and SW2, which produces significant impacts on the interactions of GTP with SW1, L4, and L5. Our findings suggest that partner binding and G12C play important roles in the activity and allosteric regulation of KRAS4B, which may theoretically aid in further understanding the function of KRAS4B.
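
A minimal sketch of the PCA and free-energy-landscape step described above, using scikit-learn on stand-in data; in practice the input would be aligned atomic coordinates of the switch regions extracted from the GaMD trajectories, with GaMD reweighting factors optionally supplied as histogram weights.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)

# Stand-in for aligned Cartesian coordinates over a trajectory: (n_frames, 3 * n_atoms).
X = rng.normal(size=(2000, 60)) @ rng.normal(size=(60, 60)) * 0.1

pca = PCA(n_components=2)
pcs = pca.fit_transform(X)                      # projections onto PC1 and PC2

# Free energy landscape F(PC1, PC2) = -kT ln p(PC1, PC2), in kcal/mol at 300 K.
kT = 0.001987 * 300.0
H, xedges, yedges = np.histogram2d(pcs[:, 0], pcs[:, 1], bins=40, density=True)
F = -kT * np.log(np.where(H > 0, H, np.nan))    # empty bins left undefined
F -= np.nanmin(F)                               # shift the global minimum to zero

print("explained variance ratio of PC1/PC2:", pca.explained_variance_ratio_)
```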


Subject(s)
Deep Learning, Mutation, Protein Conformation, Proto-Oncogene Proteins p21(ras), Humans, Molecular Dynamics Simulation, Normal Distribution, Principal Component Analysis, Protein Binding, Proto-Oncogene Proteins p21(ras)/chemistry, Proto-Oncogene Proteins p21(ras)/genetics, Proto-Oncogene Proteins p21(ras)/metabolism
6.
Article in English | MEDLINE | ID: mdl-39208037

ABSTRACT

Features from EEG microstate models, such as time-domain statistical features and state transition probabilities, are typically manually selected based on experience. However, traditional microstate models assume abrupt transitions between states, and the classification features can vary among individuals due to personal differences. To date, both empirical and theoretical classification results of EEG microstate features have not been entirely satisfactory. Here, we introduce an enhanced feature extraction method that combines Joint label-Common and label-Specific Feature Exploration (JCSFE) with Gaussian Mixture Models (GMM) to explore microstate features. First, GMMs are employed to represent the smooth transitions of EEG spatiotemporal features within microstate models. Second, category-common and category-specific features are identified by applying regularization constraints to linear classifiers. Third, a graph regularizer is used to extract subject-invariant microstate features. Experimental results on publicly available datasets demonstrate that the proposed model effectively encodes microstate features and improves the accuracy of motor imagery recognition across subjects. The primary code is accessible for download from the website: https://github.com/liaoliao3450/GMM-JCSFE.
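
A minimal sketch of the GMM idea, i.e. replacing hard microstate assignments with smooth posterior memberships, using scikit-learn on stand-in EEG data; the JCSFE classifier and graph regularizer described above are not reproduced here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)

# Stand-in for preprocessed EEG topographies: (n_samples, n_channels).
eeg = rng.normal(size=(5000, 32))

# Classical microstate analysis assigns each sample to exactly one map (abrupt transitions).
# A GMM instead yields smooth posterior responsibilities over the states.
gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0).fit(eeg)
soft = gmm.predict_proba(eeg)       # (n_samples, 4): smooth state memberships
hard = soft.argmax(axis=1)          # conventional abrupt state sequence, for comparison

# The soft memberships (or statistics derived from them) would then feed the
# JCSFE-style classifier; that step is beyond this sketch.
print(soft[:3].round(2))
print(hard[:10])
```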


Subject(s)
Algorithms, Electroencephalography, Imagination, Humans, Imagination/physiology, Electroencephalography/methods, Normal Distribution, Brain-Computer Interfaces, Reproducibility of Results, Male, Female
7.
Math Biosci Eng; 21(6): 6225-6262, 2024 Jun 13.
Article in English | MEDLINE | ID: mdl-39176425

ABSTRACT

Models intended to describe the time evolution of a gene network must somehow include transcription, the DNA-templated synthesis of RNA, and translation, the RNA-templated synthesis of proteins. In eukaryotes, the DNA template for transcription can be very long, often consisting of tens of thousands of nucleotides, and lengthy pauses may punctuate this process. Accordingly, transcription can last for many minutes, in some cases hours. There is a long history of introducing delays in gene expression models to take the transcription and translation times into account. Here we study a family of detailed transcription models that includes initiation, elongation, and termination reactions. We establish a framework for computing the distribution of transcription times, and work out these distributions for some typical cases. For elongation, a fixed delay is a good model provided elongation is fast compared to initiation and termination, and there are no sites where long pauses occur. The initiation and termination phases of the model then generate a nontrivial delay distribution, and elongation shifts this distribution by an amount corresponding to the elongation delay. When initiation and termination are relatively fast, the distribution of elongation times can be approximated by a Gaussian. A convolution of this Gaussian with the initiation and termination time distributions gives another analytic approximation to the transcription time distribution. If there are long pauses during elongation, because of the modularity of the family of models considered, the elongation phase can be partitioned into reactions generating a simple delay (elongation through regions where there are no long pauses), and reactions whose distribution of waiting times must be considered explicitly (initiation, termination, and motion through regions where long pauses are likely). In these cases, the distribution of transcription times again involves a nontrivial part and a shift due to fast elongation processes.
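
The convolution picture described above can be checked with a small Monte Carlo sketch: exponential initiation and termination steps plus an elongation phase composed of many fast steps, whose sum is well approximated by a Gaussian shifted by the mean elongation delay. The rates and gene length below are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000

# Initiation and termination modeled as single exponential steps (rates in 1/s);
# elongation as L fast steps, each exponential with rate k_elong (per second).
k_init, k_term = 0.05, 0.2
L, k_elong = 20_000, 50.0

t_init = rng.exponential(1.0 / k_init, n)
t_term = rng.exponential(1.0 / k_term, n)
t_elong = rng.gamma(shape=L, scale=1.0 / k_elong, size=n)   # sum of L exponential steps

t_total = t_init + t_elong + t_term                          # full transcription time

# Gaussian approximation of elongation: a shift by the mean elongation delay plus a
# Gaussian of matching width, convolved with the initiation/termination distribution.
mu, sd = L / k_elong, np.sqrt(L) / k_elong
approx = t_init + t_term + rng.normal(mu, sd, n)

print(f"elongation delay ~ {mu:.0f} s +/- {sd:.1f} s")
print(f"mean total time: exact {t_total.mean():.0f} s, Gaussian approximation {approx.mean():.0f} s")
```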


Subject(s)
Genetic Models, Genetic Transcription, Gene Regulatory Networks, Computer Simulation, Algorithms, Normal Distribution, Protein Biosynthesis, DNA/genetics, Time Factors, RNA/genetics, Humans
8.
J R Soc Interface; 21(217): 20240194, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39173147

ABSTRACT

Blood flow reconstruction in the vasculature is important for many clinical applications. However, in clinical settings, the available data are often quite limited. For instance, transcranial Doppler ultrasound is a non-invasive clinical tool that is commonly used to measure blood velocity waveforms at several locations. This amount of data is grossly insufficient for training machine learning surrogate models, such as deep neural networks or Gaussian process regression. In this work, we propose a Gaussian process regression approach based on empirical kernels constructed from data generated by physics-based simulations, enabling near-real-time reconstruction of blood flow in data-poor regimes. We introduce a novel methodology to reconstruct the kernel within the vascular network. The proposed kernel encodes both spatiotemporal and vessel-to-vessel correlations, thus enabling blood flow reconstruction in vessels that lack direct measurements. We demonstrate that any prediction made with the proposed kernel satisfies the conservation of mass principle. The kernel is constructed by running stochastic one-dimensional blood flow simulations, where the stochasticity captures the epistemic uncertainties, such as lack of knowledge about boundary conditions and uncertainties in vasculature geometries. We demonstrate the performance of the model on three test cases, namely a simple Y-shaped bifurcation, the abdominal aorta, and the circle of Willis in the brain.
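
A minimal numerical sketch of the empirical-kernel idea: an ensemble of simulated waveforms defines a mean and covariance over all vessels and time points, and standard Gaussian conditioning then reconstructs an unmeasured vessel from measured ones. The synthetic waveform model, vessel scalings, and noise levels below are placeholders, not the paper's one-dimensional blood-flow simulations, and the mass-conservation property of the paper's kernel is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic ensemble standing in for stochastic 1D blood-flow simulations:
# velocity waveforms at 3 vessels, 50 time points each, stacked per simulation.
n_sims, n_t = 500, 50
base = np.sin(np.linspace(0.0, 2.0 * np.pi, n_t)) + 1.5
amp = 1.0 + 0.2 * rng.normal(size=(n_sims, 1))             # shared "cardiac" factor
scales = (1.0, 0.7, 0.4)                                    # illustrative per-vessel scaling
sims = np.concatenate(
    [c * amp * base[None, :] + rng.normal(0.0, 0.05, (n_sims, n_t)) for c in scales],
    axis=1)                                                 # shape (n_sims, 3 * n_t)

mu = sims.mean(axis=0)
K = np.cov(sims, rowvar=False) + 1e-6 * np.eye(3 * n_t)     # empirical kernel + jitter

m = np.arange(0, 2 * n_t)          # indices of the "measured" vessels 0 and 1
u = np.arange(2 * n_t, 3 * n_t)    # indices of the unmeasured vessel 2

truth = np.concatenate([c * 1.3 * base for c in scales])    # a new, unseen case
y = truth[m] + rng.normal(0.0, 0.02, m.size)                # pretend Doppler measurements

# Gaussian conditioning: posterior mean of the unmeasured block given the measured one.
post = mu[u] + K[np.ix_(u, m)] @ np.linalg.solve(K[np.ix_(m, m)], y - mu[m])
print("RMSE of reconstructed vessel-2 waveform:",
      np.sqrt(np.mean((post - truth[u]) ** 2)))
```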


Subject(s)
Cardiovascular Models, Humans, Normal Distribution, Blood Flow Velocity/physiology, Cerebrovascular Circulation/physiology
9.
Neural Netw; 179: 106488, 2024 Nov.
Article in English | MEDLINE | ID: mdl-38991390

ABSTRACT

The objective of cross-domain sequential recommendation is to forecast upcoming interactions by leveraging past interactions across diverse domains. Most methods aim to utilize single-domain and cross-domain information as much as possible for personalized preference extraction and effective integration. However, on one hand, most models ignore that cross-domain information is composed of multiple single domains when generating representations. They still treat cross-domain information the same way as single-domain information, resulting in noisy representations. Only by imposing certain constraints on cross-domain information during representation generation can subsequent models minimize interference when considering user preferences. On the other hand, some methods neglect the joint consideration of users' long-term and short-term preferences and reduce the weight of cross-domain user preferences to minimize noise interference. To better exploit the mutual promotion between cross-domain and single-domain factors, we propose a novel model (C2DREIF) that utilizes Gaussian graph encoders to handle information, effectively constraining the correlation of information and capturing useful contextual information more accurately. It also employs a Top-down transformer to accurately extract user intents within each domain, taking into account the user's long-term and short-term preferences. Additionally, entropy regularization is applied to enhance contrastive learning and mitigate the impact of randomness caused by negative sample composition.


Subject(s)
Intention, Humans, Neural Networks (Computer), Algorithms, Entropy, Normal Distribution
10.
Biometrics; 80(3), 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38949889

ABSTRACT

The response envelope model proposed by Cook et al. (2010) is an efficient method to estimate the regression coefficient under the context of the multivariate linear regression model. It improves estimation efficiency by identifying material and immaterial parts of responses and removing the immaterial variation. The response envelope model has been investigated only for continuous response variables. In this paper, we propose the multivariate probit model with latent envelope, in short, the probit envelope model, as a response envelope model for multivariate binary response variables. The probit envelope model takes into account relations between Gaussian latent variables of the multivariate probit model by using the idea of the response envelope model. We address the identifiability of the probit envelope model by employing the essential identifiability concept and suggest a Bayesian method for the parameter estimation. We illustrate the probit envelope model via simulation studies and real-data analysis. The simulation studies show that the probit envelope model has the potential to gain efficiency in estimation compared to the multivariate probit model. The real data analysis shows that the probit envelope model is useful for multi-label classification.
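
For concreteness, the latent-variable structure of the multivariate probit model can be sketched as below (simulation only; the latent envelope subspace and its Bayesian estimation are not reproduced, and the dimensions and parameters are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(7)
n, p, r = 500, 3, 4            # observations, predictors, binary responses

X = rng.normal(size=(n, p))
B = rng.normal(size=(p, r))

# Correlated Gaussian latent variables with unit variances (probit identifiability).
A = rng.normal(size=(r, r))
Sigma = A @ A.T
Sigma = Sigma / np.sqrt(np.outer(np.diag(Sigma), np.diag(Sigma)))   # correlation matrix

Z = X @ B + rng.multivariate_normal(np.zeros(r), Sigma, size=n)     # latent responses
Y = (Z > 0).astype(int)                                             # observed binary labels

# An envelope on the latent scale would assume only a low-dimensional subspace of Z
# carries material variation related to X; estimating that subspace (e.g., by MCMC
# as in the paper) is beyond this sketch.
print("marginal response rates:", Y.mean(axis=0).round(2))
```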


Subject(s)
Bayes Theorem, Computer Simulation, Statistical Models, Multivariate Analysis, Humans, Linear Models, Biometry/methods, Normal Distribution
11.
Biomed Phys Eng Express; 10(5), 2024 Jul 11.
Article in English | MEDLINE | ID: mdl-38955134

ABSTRACT

Invasive ductal carcinoma (IDC) in breast specimens has been detected in the quadrant breast areas, (I) upper outer, (II) upper inner, (III) lower inner, and (IV) lower outer, by electrical impedance tomography implemented with a Gaussian relaxation-time distribution (EIT-GRTD). The EIT-GRTD method consists of two steps: (1) selection of the optimum frequency f_opt and (2) time-constant enhancement of the breast imaging reconstruction. f_opt is characterized by a peak in the majority of measurement pairs of the relaxation-time distribution function γ, which indicates the presence of IDC. γ represents the inverse of conductivity and indicates the response of breast tissues to electrical currents across varying frequencies, based on the Voigt circuit model. EIT-GRTD was quantitatively evaluated by multiphysics simulations using a hemispherical container mimicking the breast, consisting of IDC and adipose tissue as normal breast tissue, under one condition with known IDC in quadrant breast area II. The simulation results show that EIT-GRTD is able to detect the IDC in four layers at f_opt = 30, 170 Hz. EIT-GRTD was then applied to real breasts by employing six mastectomy specimens from IDC patients. The placement of the mastectomy specimens in the hemispherical container is an important factor in the success of quadrant breast area reconstruction. For the evaluation, EIT-GRTD reconstruction images were compared to CT scan images. The experimental results demonstrate that EIT-GRTD is proficient in detecting IDC in quadrant breast areas when compared qualitatively to CT scan images.


Subject(s)
Breast Neoplasms, Ductal Breast Carcinoma, Electric Impedance, Tomography, Humans, Female, Breast Neoplasms/diagnostic imaging, Tomography/methods, Ductal Breast Carcinoma/diagnostic imaging, Normal Distribution, Breast/diagnostic imaging, Computer Simulation, Algorithms, Computer-Assisted Image Processing/methods
12.
Biometrics; 80(3), 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39036985

ABSTRACT

The dynamics that govern disease spread are hard to model because infections are functions of both the underlying pathogen as well as human or animal behavior. This challenge is increased when modeling how diseases spread between different spatial locations. Many proposed spatial epidemiological models require trade-offs to fit, either by abstracting away theoretical spread dynamics, fitting a deterministic model, or by requiring large computational resources for many simulations. We propose an approach that approximates the complex spatial spread dynamics with a Gaussian process. We first propose a flexible spatial extension to the well-known SIR stochastic process, and then we derive a moment-closure approximation to this stochastic process. This moment-closure approximation yields ordinary differential equations for the evolution of the means and covariances of the susceptibles and infectious through time. Because these ODEs are a bottleneck to fitting our model by MCMC, we approximate them using a low-rank emulator. This approximation serves as the basis for our hierarchical model for noisy, underreported counts of new infections by spatial location and time. We demonstrate using our model to conduct inference on simulated infections from the underlying, true spatial SIR jump process. We then apply our method to model counts of new Zika infections in Brazil from late 2015 through early 2016.
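
As a starting point for the model above, the underlying (non-spatial) SIR jump process can be simulated exactly with the Gillespie algorithm, as sketched below; the spatial extension, moment-closure ODEs, and low-rank emulator are beyond this sketch, and the rates are illustrative.

```python
import numpy as np

def gillespie_sir(beta=0.3, gamma=0.1, S0=990, I0=10, t_end=200.0, seed=0):
    """Exact stochastic simulation of the (non-spatial) SIR jump process."""
    rng = np.random.default_rng(seed)
    S, I, R, t = S0, I0, 0, 0.0
    N = S0 + I0
    times, states = [t], [(S, I, R)]
    while t < t_end and I > 0:
        rate_inf = beta * S * I / N      # infection:  S + I -> 2I
        rate_rec = gamma * I             # recovery:   I -> R
        total = rate_inf + rate_rec
        t += rng.exponential(1.0 / total)            # time to the next event
        if rng.random() < rate_inf / total:
            S, I = S - 1, I + 1
        else:
            I, R = I - 1, R + 1
        times.append(t)
        states.append((S, I, R))
    return np.array(times), np.array(states)

t, x = gillespie_sir()
print("peak infectious:", x[:, 1].max(), "| final susceptible:", x[-1, 0])
```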


Subject(s)
Computer Simulation, Stochastic Processes, Zika Virus Infection, Humans, Normal Distribution, Zika Virus Infection/epidemiology, Zika Virus Infection/transmission, Epidemiological Models, Statistical Models, Markov Chains
13.
Neural Comput; 36(8): 1449-1475, 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39028957

ABSTRACT

Dimension reduction on neural activity paves the way for unsupervised neural decoding by dissociating the measurement of internal neural pattern reactivation from the measurement of external variable tuning. With assumptions only on the smoothness of latent dynamics and of internal tuning curves, the Poisson Gaussian-process latent variable model (P-GPLVM; Wu et al., 2017) is a powerful tool to discover the low-dimensional latent structure of high-dimensional spike trains. However, when given novel neural data, the original model lacks a method to infer their latent trajectories in the learned latent space, limiting its ability to estimate neural reactivation. Here, we extend the P-GPLVM to enable latent variable inference for new data, constrained by previously learned smoothness and mapping information. We also describe a principled approach for constrained latent variable inference for temporally compressed patterns of activity, such as those found in population burst events during hippocampal sharp-wave ripples, as well as metrics for assessing the validity of neural pattern reactivation and inferring the encoded experience. Applying these approaches to hippocampal ensemble recordings during active maze exploration, we replicate the result that the P-GPLVM learns a latent space encoding the animal's position. We further demonstrate that this latent space can differentiate one maze context from another. By inferring the latent variables of new neural data recorded during running, certain neural patterns are observed to reactivate in accordance with the similarity of experiences encoded by nearby neural trajectories in the training data manifold. Finally, reactivation of neural patterns can also be estimated for neural activity during population burst events, allowing the identification of replay events for versatile behaviors and more general experiences. Thus, our extension of the P-GPLVM framework for unsupervised analysis of neural activity can be used to answer critical questions related to scientific discovery.


Subject(s)
Hippocampus, Neurological Models, Neurons, Animals, Normal Distribution, Poisson Distribution, Neurons/physiology, Hippocampus/physiology, Action Potentials/physiology, Unsupervised Machine Learning, Rats
14.
Molecules; 29(14), 2024 Jul 18.
Article in English | MEDLINE | ID: mdl-39064955

ABSTRACT

Inhibiting the MDM2-p53 interaction is considered an efficient mode of cancer treatment. In our current study, Gaussian accelerated molecular dynamics (GaMD), deep learning (DL), and binding free energy calculations were combined to probe the binding mechanism of the non-peptide inhibitors K23 and 0Y7 and the peptide inhibitors PDI6W and PDI to MDM2. The GaMD trajectory-based DL approach successfully identified significant functional domains, predominantly located at the helices α2 and α2', as well as the β-strands and loops between α2 and α2'. The post-processing analysis of the GaMD simulations indicated that inhibitor binding strongly influences the structural flexibility and collective motions of MDM2. Calculations of molecular mechanics-generalized Born surface area (MM-GBSA) and solvated interaction energy (SIE) not only suggest that the ranking of the calculated binding free energies agrees with that of the experimental results, but also verify that van der Waals interactions are the primary forces responsible for inhibitor-MDM2 binding. Our findings also indicate that the peptide inhibitors yield more interaction contacts with MDM2 than the non-peptide inhibitors. Principal component analysis (PCA) and free energy landscape (FEL) analysis indicated that the piperidinone inhibitor 0Y7 shows the most pronounced impact on the free energy profiles of MDM2, demonstrating higher fluctuation amplitudes along the primary eigenvectors. The hot spots of MDM2 revealed by residue-based free energy estimation provide target sites for drug design toward MDM2. This study is expected to provide useful theoretical aid for the development of selective inhibitors of MDM2 family members.


Subject(s)
Deep Learning, Molecular Dynamics Simulation, Peptides, Protein Binding, Proto-Oncogene Proteins c-mdm2, Proto-Oncogene Proteins c-mdm2/antagonists & inhibitors, Proto-Oncogene Proteins c-mdm2/chemistry, Proto-Oncogene Proteins c-mdm2/metabolism, Peptides/chemistry, Peptides/pharmacology, Humans, Thermodynamics, Binding Sites, Normal Distribution
15.
J Phys Chem B; 128(30): 7332-7340, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39041172

ABSTRACT

Predicting protein-peptide interactions is crucial for understanding peptide binding processes and designing peptide drugs. However, traditional computational modeling approaches face challenges in accurately predicting peptide-protein binding structures due to the slow dynamics and high flexibility of the peptides. Here, we introduce a new workflow termed "PepBinding" for predicting peptide binding structures, which combines peptide docking, all-atom enhanced sampling simulations using the Peptide Gaussian accelerated Molecular Dynamics (Pep-GaMD) method, and structural clustering. PepBinding has been demonstrated on seven distinct model peptides. In peptide docking using HPEPDOCK, the peptide backbone root-mean-square deviations (RMSDs) of their bound conformations relative to X-ray structures ranged from 3.8 to 16.0 Å, corresponding to the medium to inaccurate quality models according to the Critical Assessment of PRediction of Interactions (CAPRI) criteria. The Pep-GaMD simulations performed for only 200 ns significantly improved the docking models, resulting in five medium and two acceptable quality models. Therefore, PepBinding is an efficient workflow for predicting peptide binding structures and is publicly available at https://github.com/MiaoLab20/PepBinding.
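
The backbone RMSD used to grade the docking and Pep-GaMD models can be computed with the standard Kabsch superposition, sketched below on toy coordinates; this is generic RMSD code applied to matched backbone atoms, not part of the PepBinding workflow itself.

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD after optimal superposition (Kabsch algorithm).

    P, Q: (n_atoms, 3) arrays of matched coordinates (e.g., peptide backbone atoms).
    """
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                                      # 3x3 covariance of the two point sets
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against improper rotations
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1)))

# Toy check: a rigidly rotated copy of random coordinates gives RMSD ~ 0.
rng = np.random.default_rng(8)
P = rng.normal(size=(40, 3))
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
print(kabsch_rmsd(P @ Rz.T, P))
```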


Subject(s)
Molecular Dynamics Simulation, Peptides, Peptides/chemistry, Peptides/metabolism, Protein Binding, Molecular Docking Simulation, Workflow, Binding Sites, Normal Distribution
16.
Neural Netw; 178: 106482, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38945116

ABSTRACT

In practical engineering, obtaining labeled, high-quality fault samples poses challenges. Conventional fault diagnosis methods based on deep learning struggle to discern the underlying causes of mechanical faults from a fine-grained perspective due to the scarcity of annotated data. To tackle this issue, we propose a novel semi-supervised Gaussian Mixed Variational Autoencoder method, SeGMVAE, aimed at acquiring unsupervised representations that can be transferred across fine-grained fault diagnostic tasks, enabling the identification of previously unseen faults using only a small number of labeled samples. Initially, Gaussian mixtures are introduced as a multimodal prior distribution for the Variational Autoencoder. This distribution is dynamically optimized for each task through an expectation-maximization (EM) algorithm, constructing a latent representation of the bridging task and unlabeled samples. Subsequently, a set variational posterior approach is presented to encode each task sample into the latent space, facilitating meta-learning. Finally, semi-supervised EM integrates the posterior of labeled data by acquiring task-specific parameters for diagnosing unseen faults. Results from two experiments demonstrate that SeGMVAE excels in identifying new fine-grained faults and exhibits outstanding performance in cross-domain fault diagnosis across different machines. Our code is available at https://github.com/zhiqan/SeGMVAE.


Subject(s)
Algorithms, Normal Distribution, Neural Networks (Computer), Supervised Machine Learning, Deep Learning
17.
Int J Comput Assist Radiol Surg; 19(9): 1909-1917, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38896405

ABSTRACT

PURPOSE: The conventional method to reconstruct the bone level for orbital defects, which is based on mirroring and manual adaptation, is time-consuming, and its accuracy highly depends on the expertise of the clinical engineer. The aim of this study is to propose and evaluate an automated reconstruction method utilizing a Gaussian process morphable model (GPMM). METHODS: Sixty-five computed tomography (CT) scans of healthy midfaces were used to create a GPMM that can model shape variations of the orbital region. Parameter optimization was performed by evaluating several quantitative metrics inspired by the shape modeling literature, e.g. generalization and specificity. The reconstruction error was estimated by reconstructing artificial defects created in orbits from fifteen CT scans that were not included in the GPMM. The developed algorithms utilize the existing framework of Gaussian process morphable models, as implemented in the Scalismo software. RESULTS: By evaluating the proposed quality metrics, adequate parameters were chosen for non-rigid registration and reconstruction. The resulting median reconstruction error using the GPMM was lower (0.35 ± 0.16 mm) than that of the mirroring method (0.52 ± 0.18 mm). In addition, the GPMM-based reconstruction is automated and can be applied to large bilateral defects, with a median reconstruction error of 0.39 ± 0.11 mm. CONCLUSION: The GPMM-based reconstruction proves to be less time-consuming and more accurate than reconstruction by mirroring. Further validation through clinical studies on patients with orbital defects is warranted. Nevertheless, the results underscore the potential of GPMM-based reconstruction as a promising alternative for designing patient-specific implants.


Subject(s)
Algorithms, Orbit, X-Ray Computed Tomography, Humans, X-Ray Computed Tomography/methods, Orbit/diagnostic imaging, Orbit/surgery, Normal Distribution, Three-Dimensional Imaging/methods
18.
J Radiol Prot; 44(2), 2024 Jun 17.
Article in English | MEDLINE | ID: mdl-38834053

ABSTRACT

A Monte Carlo (MC) programme was written using the dose point kernel method to calculate doses in the roof zone of a building from nearby releases of radioactive gases. A Gaussian Plume Model (GPM) was parameterised to account for near-field building effects on plume spread and reflection from the roof. Rooftop recirculation zones and building-generated plume spread effects were accounted for in a novel Dual Gaussian Plume (DGP) formulation used with the MC model, which allowed for the selection of the angle of approach flow, the plume release height in relation to the building, and the position of the release point in relation to the leading edge of the building. Three-dimensional wind tunnel concentration field data were used for the parameterisation. The MC code used the parameterised concentration field to calculate the contributions to effective dose from inhalation, cloud immersion from positron/beta particles, and gamma-ray dose for a wide range of receptor positions in the roof zone, including receptor positions at different heights above the roof. Broad trends in predicted radiation dose with angle of approach flow, release position in relation to the building, and release height are shown. Alternative approaches for the derivation of the concentration field are discussed.
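
For reference, a standard Gaussian plume concentration with a reflection term takes the form sketched below; the dispersion-parameter power laws are generic Briggs-type placeholders, not the wind-tunnel parameterisation or the dual-plume formulation developed in the paper.

```python
import numpy as np

def gaussian_plume(x, y, z, Q=1.0, u=2.0, H=10.0, sigma_y=None, sigma_z=None):
    """Standard Gaussian plume concentration (g/m^3) with a reflection term.

    x: downwind distance, y: crosswind offset, z: height above the reflecting surface (m).
    Q: release rate (g/s), u: wind speed (m/s), H: effective release height (m).
    sigma_y, sigma_z: dispersion parameters; simple power laws are used here as
    placeholders for wind-tunnel-derived values.
    """
    sigma_y = 0.08 * x * (1 + 0.0001 * x) ** -0.5 if sigma_y is None else sigma_y
    sigma_z = 0.06 * x * (1 + 0.0015 * x) ** -0.5 if sigma_z is None else sigma_z
    coeff = Q / (2.0 * np.pi * u * sigma_y * sigma_z)
    lateral = np.exp(-y ** 2 / (2.0 * sigma_y ** 2))
    vertical = (np.exp(-(z - H) ** 2 / (2.0 * sigma_z ** 2))
                + np.exp(-(z + H) ** 2 / (2.0 * sigma_z ** 2)))   # reflection term
    return coeff * lateral * vertical

# Concentration 30 m downwind, on the plume centreline, 2 m above the reflecting surface.
print(gaussian_plume(x=30.0, y=0.0, z=2.0))
```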


Subject(s)
Radioactive Air Pollutants, Monte Carlo Method, Radiation Dosage, Normal Distribution, Radioactive Air Pollutants/analysis, Radiation Monitoring/methods, Indoor Air Pollution/analysis, Humans, Computer Simulation
19.
Comput Methods Programs Biomed; 252: 108234, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38823206

ABSTRACT

BACKGROUND AND OBJECTIVE: Patient-specific 3D computational fluid dynamics (CFD) models are increasingly being used to understand and predict transarterial radioembolization procedures used for hepatocellular carcinoma treatment. While sensitivity analyses of these CFD models can help to determine the most impactful input parameters, such analyses are computationally costly. Therefore, we aim to use surrogate modelling to allow relatively cheap sensitivity analysis. As an example, we compute Sobol's sensitivity indices for three input waveform shape parameters. METHODS: We extracted three characteristic shape parameters from our input mass flow rate waveform (peak systolic mass flow rate, heart rate, systolic duration) and defined our 3D input parameter space by varying these parameters within 75 %-125 % of their nominal values. To fit our surrogate model with a minimal number of costly CFD simulations, we developed an adaptive design of experiments (ADOE) algorithm. The ADOE uses 100 Latin hypercube sampled points in 3D input space to define the initial design of experiments (DOE). Subsequently, we re-sample input space with 10,000 Latin Hypercube sampled points and cheaply estimate the outputs using the surrogate model. In each of 27 equivolume bins which divide our input space, we determine the most uncertain prediction of the 10,000 points, compute the true outputs using CFD, and add these points to the DOE. For each ADOE iteration, we calculate Sobol's sensitivity indices, and we continue to add batches of 27 samples to the DOE until the Sobol indices have stabilized. RESULTS: We tested our ADOE algorithm on the Ishigami function and showed that we can reliably obtain Sobol's indices with an absolute error <0.1. Applying ADOE to our waveform sensitivity problem, we found that the first-order sensitivity indices were 0.0550, 0.0191 and 0.407 for the peak systolic mass flow rate, heart rate, and the systolic duration, respectively. CONCLUSIONS: Although the current study was an illustrative case, the ADOE allows reliable sensitivity analysis with a limited number of complex model evaluations, and performs well even when the optimal DOE size is a priori unknown. This enables us to identify the highest-impact input parameters of our model, and other novel, costly models in the future.
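
A much-simplified sketch of the ingredients above (Latin hypercube sampling, a Gaussian process surrogate, and first-order Sobol' indices estimated from surrogate predictions), applied to the Ishigami test function. The adaptive re-sampling loop of the ADOE algorithm is not reproduced, and the binned Sobol' estimator is a generic approximation rather than the paper's method.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def ishigami(X, a=7.0, b=0.1):
    return np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2 + b * X[:, 2] ** 4 * np.sin(X[:, 0])

# Initial design of experiments: 100 Latin hypercube samples in [-pi, pi]^3.
sampler = qmc.LatinHypercube(d=3, seed=0)
X_doe = qmc.scale(sampler.random(100), [-np.pi] * 3, [np.pi] * 3)
y_doe = ishigami(X_doe)

# Cheap surrogate fitted to the "expensive" model evaluations.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=[1.0, 1.0, 1.0]),
                              normalize_y=True, alpha=1e-6).fit(X_doe, y_doe)

# Re-sample densely and estimate first-order Sobol' indices from the surrogate
# by binning each input: S_i ~ Var(E[Y | X_i]) / Var(Y).
X_big = qmc.scale(qmc.LatinHypercube(d=3, seed=1).random(10_000),
                  [-np.pi] * 3, [np.pi] * 3)
y_big = gp.predict(X_big)

def first_order_sobol(xi, y, bins=25):
    edges = np.quantile(xi, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.digitize(xi, edges[1:-1]), 0, bins - 1)
    cond_means = np.array([y[idx == k].mean() for k in range(bins)])
    return cond_means.var() / y.var()

print([round(first_order_sobol(X_big[:, i], y_big), 3) for i in range(3)])
# Analytical first-order indices for Ishigami, for reference: ~0.314, ~0.442, 0.0
```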


Subject(s)
Algorithms, Hepatocellular Carcinoma, Therapeutic Embolization, Liver Neoplasms, Humans, Liver Neoplasms/radiotherapy, Hepatocellular Carcinoma/radiotherapy, Therapeutic Embolization/methods, Normal Distribution, Liver, Computer Simulation, Hydrodynamics, Regression Analysis, Three-Dimensional Imaging
20.
J Math Biol; 89(2): 19, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38916625

ABSTRACT

In the study of biological populations, the Allee effect detects a critical density below which the population is severely endangered and at risk of extinction. This effect supersedes the classical logistic model, in which low densities are favorable due to lack of competition, and includes situations related to deficit of genetic pools, inbreeding depression, mate limitations, unavailability of collaborative strategies due to lack of conspecifics, etc. The goal of this paper is to provide a detailed mathematical analysis of the Allee effect. After recalling the ordinary differential equation related to the Allee effect, we will consider the situation of a diffusive population. The dispersal of this population is quite general and can include the classical Brownian motion, as well as a Lévy flight pattern, and also a "mixed" situation in which some individuals perform classical random walks and others adopt Lévy flights (which is also a case observed in nature). We study the existence and nonexistence of stationary solutions, which are an indication of the survival chance of a population at the equilibrium. We also analyze the associated evolution problem, in view of monotonicity in time of the total population, energy consideration, and long-time asymptotics. Furthermore, we also consider the case of an "inverse" Allee effect, in which low density populations may access additional benefits.
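
The ODE underlying the (strong) Allee effect can be integrated directly to show the threshold behaviour described above; the sketch below uses SciPy with illustrative parameter values and omits the diffusive and Lévy-flight terms analysed in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def allee(t, N, r=1.0, K=1.0, A=0.3):
    """Logistic growth with a strong Allee effect: growth is negative below density A."""
    return r * N * (1.0 - N / K) * (N / A - 1.0)

t_span = (0.0, 30.0)
t_eval = np.linspace(*t_span, 300)
below = solve_ivp(allee, t_span, [0.25], t_eval=t_eval)   # starts below the Allee threshold
above = solve_ivp(allee, t_span, [0.35], t_eval=t_eval)   # starts above the threshold

print(f"N(30) starting at 0.25: {below.y[0, -1]:.4f}  (decays toward extinction)")
print(f"N(30) starting at 0.35: {above.y[0, -1]:.4f}  (approaches carrying capacity K = 1)")
```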


Subject(s)
Ecosystem, Mathematical Concepts, Biological Models, Population Dynamics, Animals, Population Dynamics/statistics & numerical data, Biological Evolution, Population Density, Normal Distribution, Biological Extinction