Results 1 - 20 of 202
1.
World Psychiatry ; 23(3): 400-410, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39279417

ABSTRACT

The concept of ultra-high risk for psychosis (UHR) has been at the forefront of psychiatric research for several decades, with the ultimate goal of preventing the onset of psychotic disorder in high-risk individuals. Orygen (Melbourne, Australia) has led a range of observational and intervention studies in this clinical population. These datasets have now been integrated into the UHR 1000+ cohort, consisting of a sample of 1,245 UHR individuals with a follow-up period ranging from 1 to 16.7 years. This paper describes the cohort, presents a clinical prediction model of transition to psychosis in this cohort, and examines how predictive performance is affected by changes in UHR samples over time. We analyzed transition to psychosis using a Cox proportional hazards model. Clinical predictors for transition to psychosis were investigated in the entire cohort using multiple imputation and Rubin's rule. To assess performance drift over time, data from 1995-2016 were used for initial model fitting, and models were subsequently validated on data from 2017-2020. Over the follow-up period, 220 cases (17.7%) developed a psychotic disorder. Pooled hazard ratio (HR) estimates showed that the Comprehensive Assessment of At-Risk Mental States (CAARMS) Disorganized Speech subscale severity score (HR=1.12, 95% CI: 1.02-1.24, p=0.024), the CAARMS Unusual Thought Content subscale severity score (HR=1.13, 95% CI: 1.03-1.24, p=0.009), the Scale for the Assessment of Negative Symptoms (SANS) total score (HR=1.02, 95% CI: 1.00-1.03, p=0.022), the Social and Occupational Functioning Assessment Scale (SOFAS) score (HR=0.98, 95% CI: 0.97-1.00, p=0.036), and time between onset of symptoms and entry to UHR service (log transformed) (HR=1.10, 95% CI: 1.02-1.19, p=0.013) were predictive of transition to psychosis. UHR individuals who met the brief limited intermittent psychotic symptoms (BLIPS) criteria had a higher probability of transitioning to psychosis than those who met the attenuated psychotic symptoms (APS) criteria (HR=0.48, 95% CI: 0.32-0.73, p=0.001) and those who met the Trait risk criteria (a first-degree relative with a psychotic disorder or a schizotypal personality disorder plus a significant decrease in functioning during the previous year) (HR=0.43, 95% CI: 0.22-0.83, p=0.013). Models based on data from 1995-2016 displayed good calibration at initial model fitting, but showed a drift of 20.2-35.4% in calibration when validated on data from 2017-2020. Large-scale longitudinal data such as those from the UHR 1000+ cohort are required to develop accurate psychosis prediction models. It is critical to assess existing and future risk calculators for temporal drift, which may reduce their utility in clinical practice over time.
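
As a rough illustration of the analysis pipeline named in this abstract (Cox proportional hazards regression on multiply imputed data, pooled with Rubin's rules), the sketch below uses lifelines and scikit-learn; the column names, number of imputations, and imputation settings are assumptions, not the authors' code.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from lifelines import CoxPHFitter

# df: one row per UHR participant, with predictor columns plus follow-up time
# and transition indicator; the column names below are hypothetical placeholders.
predictors = ["caarms_speech", "caarms_utc", "sans_total", "sofas", "log_delay"]
m = 20  # number of imputed datasets
coefs, variances = [], []
for i in range(m):
    imputed = df.copy()
    imputer = IterativeImputer(sample_posterior=True, random_state=i)
    imputed[predictors] = imputer.fit_transform(df[predictors])
    cph = CoxPHFitter()
    cph.fit(imputed[predictors + ["time", "event"]],
            duration_col="time", event_col="event")
    coefs.append(cph.params_.values)
    variances.append(cph.standard_errors_.values ** 2)

coefs, variances = np.array(coefs), np.array(variances)
pooled = coefs.mean(axis=0)                         # Rubin's rules: pooled estimate
total_var = variances.mean(axis=0) + (1 + 1 / m) * coefs.var(axis=0, ddof=1)
hr = np.exp(pooled)                                 # pooled hazard ratios
ci = np.exp(pooled[:, None] + np.array([-1.96, 1.96]) * np.sqrt(total_var)[:, None])
```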

2.
Ophthalmol Sci ; 4(6): 100555, 2024.
Article in English | MEDLINE | ID: mdl-39253549

ABSTRACT

Objective: The aim of our research is to enhance the calibration of machine learning models for glaucoma classification through a specialized loss function named Confidence-Calibrated Label Smoothing (CC-LS) loss. This approach is specifically designed to refine model calibration without compromising accuracy by integrating label smoothing and confidence penalty techniques, tailored to the specifics of glaucoma detection. Design: This study focuses on the development and evaluation of a calibrated deep learning model. Participants: The study employs fundus images from both external datasets-the Online Retinal Fundus Image Database for Glaucoma Analysis and Research (482 normal, 168 glaucoma) and the Retinal Fundus Glaucoma Challenge (720 normal, 80 glaucoma)-and an extensive internal dataset (4639 images per category), aiming to bolster the model's generalizability. The model's clinical performance is validated using a comprehensive test set (47 913 normal, 1629 glaucoma) from the internal dataset. Methods: The CC-LS loss function seamlessly integrates label smoothing, which tempers extreme predictions to avoid overfitting, with confidence-based penalties. These penalties deter the model from expressing undue confidence in incorrect classifications. Our study aims at training models using the CC-LS and comparing their performance with those trained using conventional loss functions. Main Outcome Measures: The model's precision is evaluated using metrics like the Brier score, sensitivity, specificity, and the false positive rate, alongside qualitative heatmap analyses for a holistic accuracy assessment. Results: Preliminary findings reveal that models employing the CC-LS mechanism exhibit superior calibration metrics, as evidenced by a Brier score of 0.098, along with notable accuracy measures: sensitivity of 81%, specificity of 80%, and weighted accuracy of 80%. Importantly, these enhancements in calibration are achieved without sacrificing classification accuracy. Conclusions: The CC-LS loss function presents a significant advancement in the pursuit of deploying machine learning models for glaucoma diagnosis. By improving calibration, the CC-LS ensures that clinicians can interpret and trust the predictive probabilities, making artificial intelligence-driven diagnostic tools more clinically viable. From a clinical standpoint, this heightened trust and interpretability can potentially lead to more timely and appropriate interventions, thereby optimizing patient outcomes and safety. Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
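
A minimal PyTorch sketch of the two ingredients the abstract names, label smoothing plus a confidence penalty, is shown below; the exact weighting and gating used in the published CC-LS loss may differ, so treat this as the general idea rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def cc_ls_loss(logits, targets, smoothing=0.1, penalty_weight=0.1):
    """Label-smoothed cross-entropy plus a confidence (negative-entropy) penalty."""
    ce = F.cross_entropy(logits, targets, label_smoothing=smoothing)
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1).mean()
    # Subtracting the entropy term penalizes over-confident (low-entropy) outputs.
    return ce - penalty_weight * entropy
```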

3.
Fungal Biol ; 128(6): 2022-2031, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39174237

ABSTRACT

Understanding species habitat preferences is essential for conservation and management efforts, as it enables the identification of areas with a higher likelihood of species presence. Lactarius deliciosus (L.) Gray, an economically important edible mushroom, is influenced by various environmental variables, yet information regarding its ecological niche remains elusive. Therefore, in this study, we aim to address this gap by modeling the fundamental niche of L. deliciosus. Specifically, we explore its distribution patterns in response to large-scale environmental factors, including long-term temperature averages and topography. We employed 242 presence-only georeferenced points in Europe obtained from the Global Biodiversity Information Facility (GBIF). Utilizing the Kuenm R package, we constructed 210 models incorporating five sets of environmental variables, 14 regularization multiplier values, and three feature class combinations. Evaluation metrics included statistical significance, predictive power, and model complexity. The final model was transferred to Turkiye, with careful consideration of extrapolation risk using MESS (multivariate similarity surface) and MoD (most dissimilar variable) metrics. In alignment with all three evaluation criteria, the algorithm implemented in Kuenm identified the best model as the linear-quadratic combination with a regularization multiplier of 0.2, based on variables selected by the contribution importance method. Results underscore temperature-related variables as critical determinants of L. deliciosus habitat preferences within the calibration area, with solar radiation also playing a significant role in the final model. These results underscored the effectiveness of ecological niche modeling (ENM) in understanding how climatic patterns may alter the distribution of species like L. deliciosus. The findings contribute to the development of informed conservation strategies and decision-making in dynamic environments. Emphasizing a comprehensive approach to ecological modeling is crucial for promoting sustainable forest management.
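
The extrapolation check mentioned above (MESS, with the most dissimilar variable as MoD) has a standard formulation (Elith et al., 2010); a plain numpy version is sketched below as a rough equivalent of what ENM toolkits such as kuenm report, under the assumption that reference and projection variables are supplied as arrays.

```python
import numpy as np

def mess(reference, projection):
    """reference: (n_ref, n_vars) calibration-area values;
    projection: (n_proj, n_vars) transfer-area values.
    Returns per-point MESS (min over variables; negative = extrapolation)
    and the index of the most dissimilar variable (MoD)."""
    ref_min, ref_max = reference.min(axis=0), reference.max(axis=0)
    span = np.where(ref_max > ref_min, ref_max - ref_min, 1.0)
    # f: percentage of reference values lying below each projected value
    f = 100.0 * (reference[None, :, :] < projection[:, None, :]).mean(axis=1)
    sim = np.where(f == 0, 100.0 * (projection - ref_min) / span,
          np.where(f <= 50, 2.0 * f,
          np.where(f < 100, 2.0 * (100.0 - f),
                   100.0 * (ref_max - projection) / span)))
    return sim.min(axis=1), sim.argmin(axis=1)
```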


Subject(s)
Ecosystem, Europe, Basidiomycota/physiology, Temperature, Biological Models
4.
Comput Methods Programs Biomed ; 254: 108299, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38959599

ABSTRACT

BACKGROUND AND OBJECTIVE: Data from electro-anatomical mapping (EAM) systems are playing an increasingly important role in computational modeling studies for the patient-specific calibration of digital twin models. However, data exported from commercial EAM systems are challenging to access and parse. Converting to data formats that are easily amenable to be viewed and analyzed with commonly used cardiac simulation software tools such as openCARP remains challenging. We therefore developed an open-source platform, pyCEPS, for parsing and converting clinical EAM data conveniently to standard formats widely adopted within the cardiac modeling community. METHODS AND RESULTS: pyCEPS is an open-source Python-based platform providing the following functions: (i) access and interrogate the EAM data exported from clinical mapping systems; (ii) efficient browsing of EAM data to preview mapping procedures, electrograms (EGMs), and electro-cardiograms (ECGs); (iii) conversion to modeling formats according to the openCARP standard, to be amenable to analysis with standard tools and advanced workflows as used for in silico EAM data. Documentation and training material to facilitate access to this complementary research tool for new users is provided. We describe the technological underpinnings and demonstrate the capabilities of pyCEPS first, and showcase its use in an exemplary modeling application where we use clinical imaging data to build a patient-specific anatomical model. CONCLUSION: With pyCEPS we offer an open-source framework for accessing EAM data, and converting these to cardiac modeling standard formats. pyCEPS provides the core functionality needed to integrate EAM data in cardiac modeling research. We detail how pyCEPS could be integrated into model calibration workflows facilitating the calibration of a computational model based on EAM data.


Subject(s)
Computer Simulation, Software, Humans, Calibration, Electrocardiography, Cardiovascular Models, Heart/physiology, Cardiac Electrophysiology
5.
Am J Physiol Heart Circ Physiol ; 327(2): H473-H503, 2024 08 01.
Article in English | MEDLINE | ID: mdl-38904851

ABSTRACT

Computational, or in silico, models are an effective, noninvasive tool for investigating cardiovascular function. These models can be used in the analysis of experimental and clinical data to identify possible mechanisms of (ab)normal cardiovascular physiology. Recent advances in computing power and data management have led to innovative and complex modeling frameworks that simulate cardiovascular function across multiple scales. While commonly used in multiple disciplines, there is a lack of concise guidelines for the implementation of computer models in cardiovascular research. In line with recent calls for more reproducible research, it is imperative that scientists adhere to credible practices when developing and applying computational models to their research. The goal of this manuscript is to provide a consensus document that identifies best practices for in silico computational modeling in cardiovascular research. These guidelines provide the necessary methods for mechanistic model development, model analysis, and formal model calibration using fundamentals from statistics. We outline rigorous practices for computational, mechanistic modeling in cardiovascular research and discuss its synergistic value to experimental and clinical data.
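
As a toy example of the formal model calibration step these guidelines cover, the sketch below fits the two parameters of a two-element Windkessel model to pressure data with scipy; the inflow waveform and data arrays are stand-ins, not part of the consensus document.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def q_in(t, period=0.8, stroke_volume=70.0, systole_frac=0.3):
    """Half-sine inflow during systole (placeholder waveform, mL/s)."""
    phase = t % period
    ts = systole_frac * period
    return np.where(phase < ts,
                    stroke_volume * np.pi / (2 * ts) * np.sin(np.pi * phase / ts), 0.0)

def windkessel(t, p, R, C):
    return (q_in(t) - p / R) / C        # C dP/dt = Q(t) - P/R

def residuals(theta, t_data, p_data):
    R, C = theta
    sol = solve_ivp(windkessel, (t_data[0], t_data[-1]), [p_data[0]],
                    t_eval=t_data, args=(R, C), max_step=1e-2)
    return sol.y[0] - p_data

# With measured (t_data, p_data):
# fit = least_squares(residuals, x0=[1.0, 1.5], args=(t_data, p_data), bounds=(0, np.inf))
```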


Subject(s)
Computer Simulation, Cardiovascular Models, Humans, Biomedical Research/standards, Animals, Cardiovascular Physiological Phenomena, Cardiovascular Diseases/physiopathology, Consensus
6.
Neural Netw ; 178: 106457, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38908166

ABSTRACT

This study introduces a novel hyperparameter in the Softmax function to regulate the rate of gradient decay, which is dependent on sample probability. Our theoretical and empirical analyses reveal that both model generalization and calibration are significantly influenced by the gradient decay rate, particularly as confidence probability increases. Notably, the gradient decay varies in a convex or concave manner with rising sample probability. When employing a smaller gradient decay, we observe a curriculum learning sequence. This sequence highlights hard samples only after easy samples are adequately trained, and allows well-separated samples to receive a higher gradient, effectively reducing intra-class distances. However, this approach has a drawback: small gradient decay tends to exacerbate model overconfidence, shedding light on the calibration issues prevalent in modern neural networks. In contrast, a larger gradient decay addresses these issues effectively, surpassing even models that utilize post-calibration methods. Our findings provide substantial evidence that large margin Softmax can influence the local Lipschitz constraint by manipulating the probability-dependent gradient decay rate. This research contributes a fresh perspective and understanding of the interplay between large margin Softmax, curriculum learning, and model calibration through an exploration of gradient decay rates. Additionally, we propose a novel warm-up strategy that dynamically adjusts the gradient decay for a smoother Lipschitz constraint in early training, thereby mitigating overconfidence in the final model.
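
The abstract does not state the exact hyperparameterization, so the snippet below only illustrates the underlying observation with a generic, focal-style modulation: for plain softmax cross-entropy the gradient with respect to the true-class logit is p - 1, and an exponent on (1 - p) makes that gradient decay faster or slower as confidence grows. This is a stand-in illustration, not the paper's Softmax hyperparameter.

```python
import numpy as np

p = np.linspace(0.05, 0.99, 8)        # probability assigned to the true class
print("softmax CE grad:", np.round(p - 1.0, 3))   # d(CE)/d(true logit) = p - 1
for gamma in (0.5, 1.0, 2.0):
    # magnitude of d/dp of a focal-style loss -(1-p)^gamma * log(p):
    g = np.abs(gamma * (1 - p) ** (gamma - 1) * np.log(p) - (1 - p) ** gamma / p)
    print(f"gamma={gamma}:", np.round(g, 3))      # larger gamma -> faster decay near p = 1
```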


Subject(s)
Neural Networks (Computer), Calibration, Algorithms, Probability, Humans, Machine Learning
7.
Polymers (Basel) ; 16(12), 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38931990

ABSTRACT

The prediction of mechanical behavior and fatigue life is of major importance for design and for replacing costly and time-consuming tests. The proposed approach for polymers is a combination of a fatigue model and a governing constitutive model, which is formulated using the Haward-Thackray viscoplastic model (1968) and is capable of capturing large deformations. The fatigue model integrates high- and low-cycle fatigue and is based on the concept of damage evolution and a moving endurance surface in the stress space, thereby memorizing the load history without requiring vague cycle-counting approaches. The proposed approach is applicable to materials in which the fatigue development is ductile, i.e., damage during the formation of microcracks controls most of the fatigue life (up to 90%). Moreover, damage evolution shows one asymptote at the low-cycle fatigue limit, a second asymptote at the high-cycle fatigue limit (which is near zero), and a curvature that determines how rapidly the transition between the asymptotes is reached. Interestingly, many polymers, like metals, satisfy these constraints. Therefore, all the model parameters for fatigue can be given in terms of the Basquin and Coffin-Manson model parameters, i.e., in terms of well-defined parameters.
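
Since the abstract states that the fatigue parameters can be expressed through Basquin and Coffin-Manson constants, a small sketch of that strain-life relation is given below; the material constants are hypothetical placeholders, not values from the paper.

```python
import numpy as np
from scipy.optimize import brentq

E, sigma_f, b = 3000.0, 90.0, -0.08    # elastic (Basquin) constants, MPa
eps_f, c = 0.35, -0.55                 # plastic (Coffin-Manson) constants

def strain_amplitude(two_n):
    """Strain-life curve: eps_a = (sigma_f'/E)(2N)^b + eps_f'(2N)^c."""
    return (sigma_f / E) * two_n ** b + eps_f * two_n ** c

def reversals_to_failure(eps_a):
    return brentq(lambda x: strain_amplitude(x) - eps_a, 1.0, 1e9)

print(reversals_to_failure(0.01))      # reversals 2N at 1% strain amplitude
```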

8.
Bioresour Technol ; 403: 130902, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38801955

ABSTRACT

This study applied granular activated carbon (GAC) to improve the anaerobic digestion of long-chain fatty acids (LCFA). New kinetics were considered to describe the effect of GAC on LCFA degradation, including i) the adsorption kinetics of GAC for LCFA, ii) the β-oxidation pathway of LCFA, and iii) the enhancement of attached biomass by direct interspecies electron transfer (DIET). The developed model simulated the anaerobic digestion of stearic acid, palmitic acid, myristic acid, and lauric acid with 1.00 and 2.00 g L-1 of GAC. The simulation results suggested that adding GAC led to an increase in the kinetic parameters km,Cn,GAC and km,ac,GAC. As the concentration of GAC increased, the values of the kinetic parameters increased while the accumulated acetate concentration decreased. Thus, GAC improved the kinetic parameters of the attached syntrophic communities.
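
A deliberately simplified sketch of the kind of kinetics described here is shown below: Monod-type beta-oxidation of dissolved LCFA, first-order adsorption onto GAC, and a larger uptake rate constant for the GAC-attached (DIET-enhanced) community. It is not the authors' ADM1-based model, and all parameter values are placeholders.

```python
from scipy.integrate import solve_ivp

km, km_gac, Ks, Y, k_ads = 6.0, 9.0, 0.4, 0.06, 0.5   # hypothetical parameters

def rhs(t, y, gac_on):
    S, S_ads, X = y                                # dissolved LCFA, adsorbed LCFA, biomass
    ads = k_ads * S if gac_on else 0.0             # adsorption onto GAC
    uptake_dis = km * X * S / (Ks + S)             # beta-oxidation of dissolved LCFA
    k_att = km_gac if gac_on else km
    uptake_ads = k_att * X * S_ads / (Ks + S_ads)  # DIET-enhanced attached community
    return [-uptake_dis - ads, ads - uptake_ads, Y * (uptake_dis + uptake_ads)]

sol = solve_ivp(rhs, (0, 30), [2.0, 0.0, 0.1], args=(True,), max_step=0.1)
```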


Subject(s)
Charcoal, Fatty Acids, Kinetics, Anaerobiosis, Fatty Acids/metabolism, Adsorption, Charcoal/chemistry, Electron Transport, Biomass, Computer Simulation, Environmental Biodegradation
9.
Sensors (Basel) ; 24(9), 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38732969

ABSTRACT

The recent scientific literature abounds in proposals of seizure forecasting methods that exploit machine learning to automatically analyze electroencephalogram (EEG) signals. Deep learning algorithms seem to achieve a particularly remarkable performance, suggesting that the implementation of clinical devices for seizure prediction might be within reach. However, most of the research evaluated the robustness of automatic forecasting methods through randomized cross-validation techniques, while clinical applications require much more stringent validation based on patient-independent testing. In this study, we show that automatic seizure forecasting can be performed, to some extent, even on independent patients who have never been seen during the training phase, thanks to the implementation of a simple calibration pipeline that can fine-tune deep learning models, even on a single epileptic event recorded from a new patient. We evaluate our calibration procedure using two datasets containing EEG signals recorded from a large cohort of epileptic subjects, demonstrating that the forecast accuracy of deep learning methods can increase on average by more than 20%, and that performance improves systematically in all independent patients. We further show that our calibration procedure works best for deep learning models, but can also be successfully applied to machine learning algorithms based on engineered signal features. Although our method still requires at least one epileptic event per patient to calibrate the forecasting model, we conclude that focusing on realistic validation methods allows different machine learning approaches for seizure prediction to be compared more reliably, enabling the implementation of robust and effective forecasting systems that can be used in daily healthcare practice.
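
A minimal PyTorch sketch of the calibration idea, freezing a forecaster pretrained on other patients and fine-tuning only its output head on a single recorded event from the new patient, is given below; `PretrainedForecaster`, the checkpoint path, and the calibration tensors are hypothetical names, not the authors' code.

```python
import torch
from torch import nn

model = PretrainedForecaster()                         # hypothetical architecture
model.load_state_dict(torch.load("pretrained.pt"))
for p in model.parameters():                           # freeze the feature extractor
    p.requires_grad = False
for p in model.classifier.parameters():                # ... except the output head
    p.requires_grad = True

optim = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()
model.train()
for _ in range(20):                                    # a few epochs on one event
    optim.zero_grad()
    loss = loss_fn(model(calib_eeg), calib_labels)     # single pre-ictal/inter-ictal event
    loss.backward()
    optim.step()
```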


Subject(s)
Algorithms, Deep Learning, Electroencephalography, Seizures, Humans, Electroencephalography/methods, Seizures/diagnosis, Seizures/physiopathology, Calibration, Computer-Assisted Signal Processing, Epilepsy/diagnosis, Epilepsy/physiopathology, Machine Learning
10.
Front Plant Sci ; 15: 1346192, 2024.
Article in English | MEDLINE | ID: mdl-38766470

ABSTRACT

Currently the determination of cyanidin 3-rutinoside content in plant petals usually requires chemical assays or high performance liquid chromatography (HPLC), which are time-consuming and laborious. In this study, we aimed to develop a low-cost, high-throughput method to predict cyanidin 3-rutinoside content, and developed a cyanidin 3-rutinoside prediction model using near-infrared (NIR) spectroscopy combined with partial least squares regression (PLSR). We collected spectral data from Michelia crassipes (Magnoliaceae) tepals and used five different preprocessing methods and four variable selection algorithms to calibrate the PLSR model to determine the best prediction model. The results showed that (1) the PLSR model built by combining the blockScale (BS) preprocessing method and the Significance multivariate correlation (sMC) algorithm performed the best; (2) The model has a reliable prediction ability, with a coefficient of determination (R2) of 0.72, a root mean square error (RMSE) of 1.04%, and a residual prediction deviation (RPD) of 2.06. The model can be effectively used to predict the cyanidin 3-rutinoside content of the perianth slices of M. crassipes, providing an efficient method for the rapid determination of cyanidin 3-rutinoside content.
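
A compact scikit-learn sketch of the modeling step, PLS regression on preprocessed spectra with the reported figures of merit (R2, RMSE, RPD), is shown below; blockScale preprocessing and sMC variable selection have no sklearn equivalents, so a standard scaler stands in, and the data loading is assumed.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# X: NIR spectra (n_samples, n_wavelengths); y: cyanidin 3-rutinoside content (%)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)
pls = PLSRegression(n_components=10).fit(scaler.transform(X_tr), y_tr)
y_hat = pls.predict(scaler.transform(X_te)).ravel()

rmse = np.sqrt(mean_squared_error(y_te, y_hat))
rpd = np.std(y_te, ddof=1) / rmse                 # residual prediction deviation
print(f"R2={r2_score(y_te, y_hat):.2f}  RMSE={rmse:.2f}  RPD={rpd:.2f}")
```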

11.
PNAS Nexus ; 3(4): pgae063, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38560526

ABSTRACT

Network structures underlie the dynamics of many complex phenomena, from gene regulation and food webs to power grids and social media. Yet, as they often cannot be observed directly, their connectivities must be inferred from observations of the dynamics to which they give rise. In this work, we present a powerful computational method to infer large network adjacency matrices from time series data using a neural network, in order to provide uncertainty quantification on the prediction in a manner that reflects both the degree to which the inference problem is underdetermined and the noise on the data. This is a feature that other approaches have hitherto lacked. We demonstrate our method's capabilities by inferring line failure locations in the British power grid from its response to a power cut, providing probability densities on each edge and allowing the use of hypothesis testing to make meaningful probabilistic statements about the location of the cut. Our method is significantly more accurate than both Markov-chain Monte Carlo sampling and least squares regression on noisy data and when the problem is underdetermined, while naturally extending to the case of nonlinear dynamics, which we demonstrate by learning an entire cost matrix for a nonlinear model of economic activity in Greater London. Not having been specifically engineered for network inference, this method in fact represents a general parameter estimation scheme that is applicable to any high-dimensional parameter space.

12.
Sensors (Basel) ; 24(8), 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38676155

ABSTRACT

This study aims to enhance diagnostic capabilities for optimising the performance of the anaerobic sewage treatment lagoon at Melbourne Water's Western Treatment Plant (WTP) through a novel machine learning (ML)-based monitoring strategy. This strategy employs ML to make accurate probabilistic predictions of biogas performance by leveraging diverse real-life operational and inspection sensor and other measurement data for asset management, decision making, and structural health monitoring (SHM). The paper commences with data analysis and preprocessing of complex irregular datasets to facilitate efficient learning in an artificial neural network. Subsequently, a Bayesian mixture density neural network model incorporating an attention-based mechanism in bidirectional long short-term memory (BiLSTM) was developed. This probabilistic approach uses a distribution output layer based on the Gaussian mixture model and Monte Carlo (MC) dropout technique in estimating data and model uncertainties, respectively. Furthermore, systematic hyperparameter optimisation revealed that the optimised model achieved a negative log-likelihood (NLL) of 0.074, significantly outperforming other configurations. It achieved an accuracy approximately 9 times greater than the average model performance (NLL = 0.753) and 22 times greater than the worst performing model (NLL = 1.677). Key factors influencing the model's accuracy, such as the input window size and the number of hidden units in the BiLSTM layer, were identified, while the number of neurons in the fully connected layer was found to have no significant impact on accuracy. Moreover, model calibration using the expected calibration error was performed to correct the model's predictive uncertainty. The findings suggest that the inherent data significantly contribute to the overall uncertainty of the model, highlighting the need for more high-quality data to enhance learning. This study lays the groundwork for applying ML in transforming high-value assets into intelligent structures and has broader implications for ML in asset management, SHM applications, and renewable energy sectors.
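
The probabilistic output described here, a Gaussian-mixture density layer trained with the negative log-likelihood, plus Monte Carlo dropout at prediction time, can be sketched in a few lines of PyTorch; the encoder (attention-based BiLSTM) is reduced to a placeholder, and the component count is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MDNHead(nn.Module):
    """Mixture-density output: mixture weights, means, and log std devs."""
    def __init__(self, hidden, n_comp=3):
        super().__init__()
        self.pi = nn.Linear(hidden, n_comp)
        self.mu = nn.Linear(hidden, n_comp)
        self.log_sigma = nn.Linear(hidden, n_comp)
    def forward(self, h):
        return self.pi(h), self.mu(h), self.log_sigma(h)

def mdn_nll(pi_logits, mu, log_sigma, y):
    log_pi = F.log_softmax(pi_logits, dim=-1)
    log_norm = torch.distributions.Normal(mu, log_sigma.exp()).log_prob(y.unsqueeze(-1))
    return -torch.logsumexp(log_pi + log_norm, dim=-1).mean()

# MC dropout: keep dropout layers active (model.train()) during prediction and
# average many stochastic forward passes to estimate model (epistemic) uncertainty.
```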


Subject(s)
Bayes Theorem, Biofuels, Neural Networks (Computer), Anaerobiosis, Calibration, Monte Carlo Method, Sewage, Machine Learning
13.
J Chromatogr A ; 1720: 464805, 2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38471300

ABSTRACT

The current landscape of biopharmaceutical production necessitates an ever-growing set of tools to meet the demands for shorter development times and lower production costs. One path towards meeting these demands is the implementation of digital tools in the development stages. Mathematical modelling of process chromatography, one of the key unit operations in the biopharmaceutical downstream process, is one such tool. However, obtaining parameter values for such models is a time-consuming task that grows in complexity with the number of compounds in the mixture being purified. In this study, we tackle this issue by developing an automated model calibration procedure for purification of a multi-component mixture by linear gradient ion exchange chromatography. The procedure was implemented using the Orbit software (Lund University, Department of Chemical Engineering), which both generates a mathematical model structure and performs the experiments necessary to obtain data for model calibration. The procedure was extended to suggest operating points for the purification of one of the components in the mixture by means of multi-objective optimization using three different objectives. The procedure was tested on a three-component protein mixture and was able to generate a calibrated model capable of reproducing the experimental chromatograms to a satisfactory degree, using a total of six assays. An additional seventh experiment was performed to validate the model response under one of the suggested optimum conditions, respecting a 95 % purity requirement. All of the above was automated and set in motion by the push of a button. With these results, we have taken a step towards fully automating model calibration and thus accelerating digitalization in the development stages of new biopharmaceuticals.


Subject(s)
Theoretical Models, Proteins, Humans, Calibration, Ion-Exchange Chromatography/methods, Proteins/chemistry, High-Pressure Liquid Chromatography
14.
Metab Eng Commun ; 18: e00232, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38501051

ABSTRACT

This paper reviews the key building blocks needed to develop a mechanistic model for use as an operational production tool. The Chinese Hamster Ovary (CHO) cell, one of the most widely used hosts for antibody production in the pharmaceutical industry, is considered as a case study. CHO cell metabolism is characterized by two main phases, exponential growth followed by a stationary phase with strong protein production. This process presents an appropriate degree of complexity to outline the modeling strategy. The paper is organized into four main steps: (1) CHO systems and data collection; (2) metabolic analysis; (3) formulation of the mathematical model; and finally, (4) numerical solution, calibration, and validation. The overall approach can build a predictive model of target variables. According to the literature, one of the main current modeling challenges lies in understanding and predicting the spontaneous metabolic shift. Possible candidates for the trigger of the metabolic shift include the concentration of lactate and carbon dioxide. In our opinion, ammonium, which is also an inhibiting product, should be further investigated. Finally, the expected progress in the emerging field of hybrid modeling, which combines the best of mechanistic modeling and machine learning, is presented as a fascinating breakthrough. Note that the modeling strategy discussed here is a general framework that can be applied to any bioprocess.
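
As a toy illustration of step (3), formulating the mathematical model, the sketch below encodes a textbook-style fed-batch description: Monod growth on glucose with lactate inhibition, lactate overflow, and non-growth-associated antibody production. Parameter values and structure are generic placeholders, not the reviewed CHO model.

```python
from scipy.integrate import solve_ivp

mu_max, K_glc, K_i_lac = 0.04, 0.5, 40.0     # 1/h, mM, mM (hypothetical)
Y_xg, Y_lg, q_p = 1.0e8, 1.5, 1.0e-10        # cells/mmol, mol/mol, mg/cell/h

def cho(t, y):
    X, glc, lac, mab = y
    mu = mu_max * glc / (K_glc + glc) * K_i_lac / (K_i_lac + lac)
    return [mu * X,                 # viable cells
            -mu * X / Y_xg,         # glucose consumption
            Y_lg * mu * X / Y_xg,   # lactate overflow
            q_p * X]                # antibody production

sol = solve_ivp(cho, (0, 240), [2e8, 30.0, 0.0, 0.0], max_step=1.0)
```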

15.
Sensors (Basel) ; 24(3), 2024 Jan 31.
Article in English | MEDLINE | ID: mdl-38339637

ABSTRACT

Surface electromyogram (sEMG)-based gesture recognition has emerged as a promising avenue for developing intelligent prostheses for upper limb amputees. However, the temporal variations in sEMG have rendered recognition models less efficient than anticipated. By using cross-session calibration and increasing the amount of training data, it is possible to reduce these variations. The impact of varying the amount of calibration and training data on gesture recognition performance for amputees is still unknown. To assess these effects, we present four datasets for the evaluation of calibration data and examine the impact of the amount of training data on benchmark performance. Two amputees who had undergone amputations years prior were recruited, and seven sessions of data were collected for analysis from each of them. Ninapro DB6, a publicly available database containing data from ten healthy subjects across ten sessions, was also included in this study. The experimental results show that the calibration data improved the average accuracy by 3.03%, 6.16%, and 9.73% for the two subjects and Ninapro DB6, respectively, compared to the baseline results. Moreover, it was discovered that increasing the number of training sessions was more effective in improving accuracy than increasing the number of trials. Three potential strategies are proposed in light of these findings to enhance cross-session models further. We consider these findings to be of the utmost importance for the commercialization of intelligent prostheses, as they demonstrate the criticality of gathering calibration and cross-session training data, while also offering effective strategies to maximize the utilization of the entire dataset.


Subject(s)
Amputees, Artificial Limbs, Humans, Electromyography/methods, Calibration, Automated Pattern Recognition/methods, Upper Extremity, Algorithms
16.
Sensors (Basel) ; 24(2), 2024 Jan 14.
Article in English | MEDLINE | ID: mdl-38257613

ABSTRACT

The use of low-cost sensors (LCSs) for the mobile monitoring of oil and gas emissions is an understudied application of low-cost air quality monitoring devices. To assess the efficacy of low-cost sensors as a screening tool for the mobile monitoring of fugitive methane emissions stemming from well sites in eastern Colorado, we colocated an array of low-cost sensors (XPOD) with a reference-grade methane monitor (Aeris Ultra) on a mobile monitoring vehicle from 15 August through 27 September 2023. Fitting our low-cost sensor data with a bootstrap-aggregated random forest model, we found a high correlation between the reference and XPOD CH4 concentrations (r = 0.719) and a low experimental error (RMSD = 0.3673 ppm). Other calibration models, including multilinear regression and artificial neural networks (ANN), were either unable to distinguish individual methane spikes above baseline or had a significantly elevated error (RMSD_ANN = 0.4669 ppm) when compared to the random forest model. Using out-of-bag predictor permutations, we found that sensors that showed the highest correlation with methane displayed the greatest significance in our random forest model. As we reduced the percentage of colocation data employed in the random forest model, errors did not significantly increase until a specific threshold (50 percent of total calibration data). Using a peak-finding algorithm, we found that our model was able to predict 80 percent of methane spikes above 2.5 ppm throughout the duration of our field campaign, with a false response rate of 35 percent.
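
A scikit-learn sketch of the calibration model described above, a bagged random forest mapping the XPOD channels to the reference CH4 reading, is shown below; held-out permutation importances stand in for the out-of-bag predictor permutations, and the data loading and column layout are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# X: XPOD sensor channels (plus temperature/humidity); y: Aeris reference CH4 (ppm)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
rf = RandomForestRegressor(n_estimators=500, bootstrap=True, oob_score=True,
                           n_jobs=-1, random_state=0).fit(X_tr, y_tr)
rmsd = np.sqrt(np.mean((rf.predict(X_te) - y_te) ** 2))
imp = permutation_importance(rf, X_te, y_te, n_repeats=20, random_state=0)
print(f"OOB R2={rf.oob_score_:.3f}  RMSD={rmsd:.4f} ppm")
print("channel importances:", np.round(imp.importances_mean, 3))
```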

17.
J Mech Behav Biomed Mater ; 150: 106194, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38091922

ABSTRACT

The study deals with the estimation of material parameters from uniaxial test data of arterial tissue and focuses on the role of transverse strains. Two fitting strategies are analyzed and their impact on the predictive and descriptive capabilities of the resulting model is evaluated. The standard fitting procedure (strategy A), based on longitudinal stress-strain curves, is compared with the enhanced approach (strategy B), which also takes the transverse strain test data into account. The study is performed on a large set of material data adopted from the literature and for a variety of constitutive models developed for fibrous soft tissues. The standard procedure (A), which ignores the transverse strain test data, is found to be rather hazardous, often leading to unrealistic model predictions exhibiting auxetic behaviour. In contrast, the alternative fitting method (B) ensures a realistic strain response of the model and proves superior since it requires neither significant additional computational effort nor additional testing. The results presented in this paper show that even artificial transverse strain data (i.e., not measured during testing but generated ex post based on an assumed Poisson's ratio) are much less hazardous than totally disregarding the transverse strain response.
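
The enhanced strategy B amounts to stacking two residual vectors in the least-squares objective, as in the schematic sketch below; `model_response` is a placeholder for whichever constitutive model is being fitted and is not specified in the abstract.

```python
import numpy as np
from scipy.optimize import least_squares

def model_response(params, lam_long):
    """Placeholder: return (predicted stress, predicted transverse stretch)
    for the given longitudinal stretches under the chosen tissue model."""
    raise NotImplementedError

def residuals(params, lam_long, stress_exp, lam_trans_exp, w=1.0):
    stress_pred, lam_trans_pred = model_response(params, lam_long)
    # Strategy B: longitudinal stress error plus weighted transverse strain error
    return np.concatenate([(stress_pred - stress_exp) / stress_exp.max(),
                           w * (lam_trans_pred - lam_trans_exp)])

# fit = least_squares(residuals, x0, args=(lam_long, stress_exp, lam_trans_exp))
```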


Subject(s)
Arteries, Biological Models
18.
Appl Radiat Isot ; 204: 111135, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38071857

ABSTRACT

In this work, a classical approach based on linear least squares optimization was used to calibrate the GESPECOR detector model for computing the full-energy peak efficiency of p-type coaxial HPGe detectors. The key element of the work is the multiplicative model developed for approximating the values of the full-energy peak efficiency provided by the GESPECOR code. It was linearized using a logarithmic transformation to allow straightforward use of linear least squares optimization. A procedure was also developed to estimate the optimal values of the parameters describing the p-type coaxial HPGe detectors. Its application to a Canberra GC3018 detector showed that it is possible to determine accurate values of the full-energy peak efficiency computed by the GESPECOR code using the optimized parameter values.
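
The log-linearization idea can be illustrated generically: if the peak efficiency is approximated by a multiplicative model, taking logarithms turns the calibration into an ordinary linear least squares problem. The basis below (a polynomial in the log-transformed detector parameters) is only an illustrative stand-in, not the parameterization used with GESPECOR.

```python
import numpy as np

def fit_multiplicative(P, eff):
    """P: (n_points, n_params) detector-parameter grid; eff: simulated efficiencies."""
    logP = np.log(P)
    A = np.column_stack([np.ones(len(P)), logP, logP ** 2])   # design matrix
    coef, *_ = np.linalg.lstsq(A, np.log(eff), rcond=None)    # linear LSQ in log space
    return coef

def predict_efficiency(coef, P):
    logP = np.log(P)
    A = np.column_stack([np.ones(len(P)), logP, logP ** 2])
    return np.exp(A @ coef)
```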

19.
Math Biosci Eng ; 20(10): 17625-17645, 2023 Sep 15.
Article in English | MEDLINE | ID: mdl-38052529

ABSTRACT

The goal of this study is to develop a mathematical model that captures the interaction between evofosfamide, immunotherapy, and the hypoxic landscape of the tumor in the treatment of tumors. Recently, we showed that evofosfamide, a hypoxia-activated prodrug, can synergistically improve treatment outcomes when combined with immunotherapy, while evofosfamide alone showed no effects in an in vivo syngeneic model of colorectal cancer. However, the mechanisms behind the interaction between the tumor microenvironment in the context of oxygenation (hypoxic, normoxic), immunotherapy, and tumor cells are not fully understood. To begin to understand this issue, we develop a system of ordinary differential equations to simulate the growth and decline of tumors and their vascularization (oxygenation) in response to treatment with evofosfamide and immunotherapy (6 combinations of scenarios). The model is calibrated to data from in vivo experiments on mice implanted with colon adenocarcinoma cells and longitudinally imaged with [18F]-fluoromisonidazole ([18F]FMISO) positron emission tomography (PET) to quantify hypoxia. The results show that evofosfamide is able to rescue the immune response and sensitize hypoxic tumors to immunotherapy. In the hypoxic scenario, evofosfamide reduces tumor burden by 45.07 ± 2.55%, compared to immunotherapy alone, as measured by tumor volume. The model accurately predicts the temporal evolution of five different treatment scenarios, including control, hypoxic tumors that received immunotherapy, normoxic tumors that received immunotherapy, evofosfamide alone, and hypoxic tumors that received combination immunotherapy and evofosfamide. The average concordance correlation coefficient (CCC) between predicted and observed tumor volume is 0.86 ± 0.05. Interestingly, the model, with parameter values fit to those five treatment arms, was unable to accurately predict the response of normoxic tumors to combination evofosfamide and immunotherapy (CCC = -0.064 ± 0.003). However, guided by a sensitivity analysis ranking the parameters most influential on tumor volume, we found that increasing the tumor death rate due to immunotherapy by a factor of 18.6 ± 9.3 increases the CCC to 0.981 ± 0.001. To the best of our knowledge, this is the first study to mathematically predict and describe the increased efficacy of immunotherapy following evofosfamide.
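
The agreement metric quoted throughout this abstract, Lin's concordance correlation coefficient, is simple to compute; a numpy version is given below for predicted versus observed tumor volumes.

```python
import numpy as np

def ccc(observed, predicted):
    """Lin's concordance correlation coefficient."""
    obs = np.asarray(observed, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    mo, mp = obs.mean(), pred.mean()
    cov = ((obs - mo) * (pred - mp)).mean()
    return 2 * cov / (obs.var() + pred.var() + (mo - mp) ** 2)
```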


Subject(s)
Adenocarcinoma, Colonic Neoplasms, Mice, Animals, Colonic Neoplasms/diagnostic imaging, Colonic Neoplasms/therapy, Cell Hypoxia, Adenocarcinoma/diagnostic imaging, Adenocarcinoma/therapy, Animal Disease Models, Tumor Cell Line, Hypoxia/therapy, Immunotherapy, Tumor Microenvironment
20.
J Anim Sci ; 101, 2023 Jan 03.
Article in English | MEDLINE | ID: mdl-37997927

ABSTRACT

Constructing dynamic mathematical models of biological systems requires estimating unknown parameters from available experimental data, usually using a statistical fitting procedure. This procedure is usually called parameter identification, parameter estimation, model fitting, or model calibration. In animal science, parameter identification is often performed without analytical consideration of whether unique values of the model parameters can be determined. Such analytical studies concern the mathematical property of structural identifiability, which refers to the theoretical ability to recover unique values of the model parameters from the measurements defined in an experimental setup, using the model structure as the sole basis. Structural identifiability analysis is a powerful tool for model construction because it informs whether the parameter identification problem is well-posed (i.e., the problem has a unique solution). It also helps determine which actions (e.g., model reparameterization, choice of new data measurements, and change of the model structure) are needed to render the model parameters identifiable (when possible). The mathematical technicalities associated with structural identifiability analysis are very sophisticated. However, the development of dedicated, freely available software tools enables the application of identifiability analysis without needing to be an expert in mathematics and computer programming. We refer to such a non-expert user as a practitioner for hands-on purposes. However, a practitioner should be familiar with the model construction and software implementation process. In this paper, we propose to adopt a practitioner approach that takes advantage of available software tools to integrate identifiability analysis in the modeling practice in the animal science field. Applying structural identifiability analysis implies shifting our view of the parameter identification problem from a downstream process (after data collection) to an upstream process (before data collection) in which experiment design is applied to guarantee identifiability. This upstream approach will substantially improve the workflow of model construction toward robust and valuable models in animal science. Illustrative examples with different levels of complexity support our work. The source code of the examples is provided for learning purposes and to promote open science practices.


When modeling biological systems, one major step of the modeling exercise is connecting the theory (the model) with the reality (the data). Such a connection passes through the resolution of the parameter identification (model calibration) problem, which aims at finding a set of parameters that best fits the variables predicted by the model to the data. Traditionally, the parameter identification step is often addressed like a downstream process (after data collection). Using this traditional approach, the modeler has minimal room for maneuvering to improve the model's accuracy. This paper discusses the benefits of adopting an upstream approach (before data collection) during the model construction phase. This approach capitalizes on the identifiability analysis, a powerful tool seldom applied in dynamic models of the animal science domain, likely because of the lack of awareness or the specialized mathematical technicalities involved in the identifiability analysis. In this paper, we illustrate that the modeling community in animal science can easily integrate identifiability analysis in their model developments following a practitioner approach taking advantage of a variety of freely available software tools dedicated to identifiability testing.
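
A minimal symbolic example of the structural (non-)identifiability issue discussed in this paper: in a model whose output depends only on the product of two rate constants, the two constants cannot be recovered individually even from perfect, noise-free data. The sympy check below is an illustrative sketch, not one of the paper's worked examples.

```python
import sympy as sp

t, x0, k1, k2, c = sp.symbols("t x0 k1 k2 c", positive=True)
# Model: dx/dt = -k1*k2*x, y = x, x(0) = x0  =>  y(t) = x0*exp(-k1*k2*t)
y = x0 * sp.exp(-k1 * k2 * t)
# Rescale the parameters in opposite directions: k1 -> c*k1, k2 -> k2/c
y_rescaled = y.subs({k1: c * k1, k2: k2 / c})
print(sp.simplify(y - y_rescaled))   # 0: the output is unchanged, so only the
                                     # product k1*k2 is structurally identifiable
```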


Subject(s)
Biological Models, Theoretical Models, Animals, Software, Research Design