Results 1 - 20 of 80
1.
An. psicol ; 40(2): 344-354, May-Sep, 2024. ilus, tab, graf
Article in Spanish | IBECS | ID: ibc-232727

ABSTRACT

Several types of intervals are usually reported in meta-analyses, a fact that has generated some confusion when interpreting them. Confidence intervals reflect the uncertainty related to a single number, the parametric mean effect size. Prediction intervals reflect the probable parametric effect size in any study of the same class as those included in a meta-analysis. Their interpretation and applications are different. In this article we explain in detail their different nature and how they can be used to answer specific questions. Numerical examples are included, as well as their computation with the metafor package in R. (AU)
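As a concrete illustration of the distinction this abstract draws, the following R sketch uses the metafor package and its bundled dat.bcg example data (not the article's own data) to fit a random-effects model and report both the confidence interval for the mean effect and the prediction interval for the effect in a new study of the same class.

```r
# Minimal sketch with the metafor package; dat.bcg is the package's example
# data set, not the data analysed in the article.
library(metafor)

dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg,
              data = dat.bcg)            # log risk ratios and their variances
res <- rma(yi, vi, data = dat)           # random-effects model (REML)

summary(res)                             # mean effect with its 95% confidence interval
predict(res, digits = 3)                 # adds the 95% prediction interval for the
                                         # true effect in a new study of the same class
```

In recent metafor versions, predict() labels the prediction-interval bounds pi.lb and pi.ub (older versions used cr.lb and cr.ub).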


Subject(s)
Humans, Male, Female, Confidence Intervals, Forecasting, Statistical Data Interpretation
2.
Sensors (Basel) ; 24(13)2024 Jun 28.
Article in English | MEDLINE | ID: mdl-39000999

ABSTRACT

This study utilizes artificial neural networks (ANN) to estimate prediction intervals (PI) for the seismic performance assessment of buildings subjected to long-duration ground motion. To address uncertainty quantification in structural health monitoring (SHM), quality-driven lower upper bound estimation (QD-LUBE) was adopted for probabilistic assessment of damage at the local and global levels, unlike traditional methods. A distribution-free machine learning model was used for enhanced reliability in quantifying uncertainty and ensuring robustness in post-earthquake probabilistic assessments and early warning systems. The distribution-free model quantifies uncertainty with higher accuracy than previous approaches such as the bootstrap method. This research demonstrates the efficacy of the QD-LUBE method in complex seismic risk assessment scenarios, thereby contributing a significant enhancement to building resilience and disaster management strategies. The study also validates the findings through fragility curve analysis, offering comprehensive insights into structural damage assessment and mitigation strategies.

3.
Biochem Med (Zagreb) ; 34(2): 020101, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38665871

ABSTRACT

Monitoring is indispensable for assessing disease prognosis and evaluating the effectiveness of treatment strategies, both of which rely on serial measurements of patients' data. It also plays a critical role in maintaining the stability of analytical systems, which is achieved through serial measurements of quality control samples. Accurate monitoring can be achieved through data collection, following a strict preanalytical and analytical protocol, and the application of a suitable statistical method. In a stable process, future observations can be predicted based on historical data collected during periods when the process was deemed reliable. This can be evaluated using the statistical prediction interval. Statistically, a prediction interval gives a range, based on historical data, within which future measurement results will be located with a specified probability such as 95%. A prediction interval consists of two primary components: (i) the set point and (ii) the total variation around the set point, which determines the upper and lower limits of the interval. Both can be calculated using the repeated measurement results obtained from the process during its steady state. In this paper, (i) the theoretical bases of prediction intervals are outlined, and (ii) their practical application is explained through examples, aiming to facilitate the implementation of prediction intervals in routine laboratory medicine practice as a robust tool for monitoring patients' data and analytical systems.
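A minimal sketch of this calculation, assuming n historical results from a steady-state period that are independent and approximately normally distributed (the numbers are illustrative, not from the paper):

```r
# Prediction interval for a single future result, given n historical results x
# collected while the process was stable; assumes approximate normality.
x <- c(4.9, 5.1, 5.0, 5.2, 4.8, 5.0, 5.1, 4.9, 5.0, 5.2)   # illustrative data

n      <- length(x)
center <- mean(x)                          # the set point
s      <- sd(x)                            # variation around the set point
t_crit <- qt(0.975, df = n - 1)

half_width <- t_crit * s * sqrt(1 + 1/n)   # the 1/n term adds the uncertainty of the mean
c(lower = center - half_width, upper = center + half_width)
```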


Subject(s)
Statistical Models, Physiological Monitoring, Humans, Physiological Monitoring/methods
8.
Res Synth Methods ; 15(2): 354-368, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37940120

ABSTRACT

In any meta-analysis, it is critically important to report the dispersion in effects as well as the mean effect. If an intervention has a moderate clinical impact on average, we also need to know whether the impact is moderate for all relevant populations, or whether it varies from trivial in some to major in others, or indeed whether the intervention is beneficial in some cases but harmful in others. Researchers typically report a series of statistics such as the Q-value, the p-value, and I², which are intended to address this issue. Often, they use these statistics to classify the heterogeneity as being low, moderate, or high and then use these classifications when considering the potential utility of the intervention. While this practice is ubiquitous, it is nevertheless incorrect. The statistics mentioned above do not actually tell us how much the effect size varies. Classifications of heterogeneity based on these statistics are uninformative at best, and often misleading. My goal in this paper is to explain what these statistics do tell us, and that none of them tells us how much the effect size varies. Then I will introduce the prediction interval, the statistic that does tell us how much the effect size varies, and that addresses the question we have in mind when we ask about heterogeneity. This paper is adapted from a chapter in "Common Mistakes in Meta-Analysis and How to Avoid Them." A free PDF of the book is available at https://www.Meta-Analysis.com/rsm.
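For concreteness, the prediction interval referred to here is commonly computed from the estimated mean effect, its standard error, and the between-study variance; a hedged R sketch with illustrative inputs (not results from the paper):

```r
# Approximate 95% prediction interval for the true effect in a new study,
# using the common t-based formula with k - 2 degrees of freedom; the inputs
# below are illustrative, not taken from the paper.
mu_hat <- 0.45     # estimated mean effect size
se_mu  <- 0.08     # standard error of the mean effect
tau2   <- 0.09     # estimated between-study variance
k      <- 12       # number of studies

half_width <- qt(0.975, df = k - 2) * sqrt(tau2 + se_mu^2)
c(pi_lower = mu_hat - half_width, pi_upper = mu_hat + half_width)
```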

10.
Eur J Popul ; 39(1): 33, 2023 Nov 13.
Article in English | MEDLINE | ID: mdl-37955802

ABSTRACT

Demographic forecasters must be realistic about how well they can predict future populations, and it is important that they include estimates of uncertainty in their forecasts. Here we focus on the future development of the immigrant population of Norway and their Norwegian-born children ("second generation"), grouped by three categories of country background: 1. West European countries plus the United States, Canada, Australia, and New Zealand; 2. Central and East European countries that are members of the European Union; 3. other countries. We show how to use a probabilistic forecast to assess the reliability of projections of the immigrant population and their children. We employ the method of random shares using data for immigrants and their children for 2000-2021. We model their age- and sex-specific shares relative to the whole population. Relational models are used for the age patterns in these shares, and time series models to extrapolate the parameters of the age patterns. We compute a probabilistic forecast for six population sub-groups with immigration background, and one for non-immigrants. The probabilistic forecast is calibrated against Statistics Norway's official population projection. We find that a few population trends are quite certain: strong increases to 2060 in the size of the immigrant population (more specifically those who belong to country group 3) and of Norwegian-born children of immigrants. However, prediction intervals around the forecasts of immigrants and their children by one-year age groups are so wide that these forecasts are not reliable.

11.
Article in English | MEDLINE | ID: mdl-37973293

ABSTRACT

For reporting toxicology studies, the presentation of historical control data and the validation of the concurrent control group against historical control limits have become requirements. However, many regulatory guidelines fail to define how such limits should be calculated and what kind of target value(s) they should cover. Hence, this manuscript aims to give a brief review of the methods in use for calculating historical control limits, as well as of their theoretical background. Furthermore, it aims to identify open issues in the use of historical control limits that need to be discussed by the community. It seems that, even after 40 years of discussion, more issues remain open than solved, both with regard to the available methodology and to its implementation in user-friendly software. Since several of these topics apply equally to other research fields, this manuscript is addressed to all relevant stakeholders who deal with historical control data obtained from toxicological studies, regardless of their background or field of research.


Subject(s)
Control Groups, Toxicology
12.
J Clin Med ; 12(20)2023 Oct 11.
Article in English | MEDLINE | ID: mdl-37892600

ABSTRACT

(1) Background: Chronic kidney disease (CKD) is extremely common against the backdrop of type 2 diabetes (T2D), accounting for nearly 30-40% of cases. The conventional management strategy relies predominantly on metabolic control and renin-angiotensin-aldosterone system (RAAS) blockade. In the last decade, sodium-glucose cotransporter 2 inhibitors (SGLT-2is) have emerged as the leading molecules for preventing the development of CKD and retarding its progression. Although the evidence in support of SGLT-2is is overwhelming, the definition of the renal composite outcome varied considerably across trials. The aim of the present meta-analysis was to explore the robustness of the renal composite benefit using a uniform definition. (2) Methods: A web-based search was conducted using the Cochrane Library to identify the relevant articles for meta-analysis. RStudio (1 July 2022, Build 554) software was used to conduct the meta-analysis. The hazard ratio (HR) was the effect size used to estimate the renal composite benefit, and the prediction interval was used to detect heterogeneity. In view of the differing baseline characteristics of the trials as well as the different molecules used, a random-effects model was applied. (3) Results: Twelve trials including 78,781 patients were identified using the search strategy, and the five-point Cochrane risk-of-bias tool was used to assess the quality of the publications. In the overall estimation (irrespective of the definition used for the renal composite), the HR was 0.68 (95% CI 0.60-0.76, prediction interval 0.48-0.95) in favour of SGLT-2is, devoid of heterogeneity. Using a uniform definition of eGFR decline ≥ 40%, ESKD, or renal death, the HR was 0.64 (95% CI 0.53-0.78); using eGFR decline ≥ 50%, ESKD, or renal death, the HR was 0.75 (95% CI 0.59-0.97); and with doubling of serum creatinine, renal replacement therapy, or renal death, the HR was 0.67 (95% CI 0.55-0.83), all in favour of SGLT-2is. However, significant heterogeneity was encountered with all three of these definitions. (4) Conclusion: There is a need to analyse renal outcomes using a uniform definition in future trials. The heterogeneity might disappear with the pooling of a larger number of trials. However, if it persists, we need to identify other clinical or laboratory attributes (in addition to SGLT-2is) responsible for the positive renal outcomes.
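Because hazard ratios are pooled on the log scale, a prediction interval like the one reported here is typically obtained by back-transforming; a sketch with the metafor R package under assumed inputs (the hazard ratios and confidence limits below are placeholders, not the trial data analysed in this meta-analysis):

```r
# Sketch: pool hazard ratios on the log scale and report the back-transformed
# prediction interval; hr, lcl and ucl are placeholder values, not trial data.
library(metafor)

hr  <- c(0.70, 0.61, 0.75, 0.66, 0.71)          # reported hazard ratios
lcl <- c(0.59, 0.50, 0.62, 0.55, 0.58)          # lower 95% CI limits
ucl <- c(0.83, 0.74, 0.91, 0.79, 0.87)          # upper 95% CI limits

yi  <- log(hr)
sei <- (log(ucl) - log(lcl)) / (2 * 1.96)       # SE recovered from the CI width

res <- rma(yi = yi, sei = sei, method = "REML") # random-effects model
predict(res, transf = exp)                      # HR scale: CI and prediction interval
```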

13.
BMC Bioinformatics ; 24(1): 331, 2023 Sep 04.
Article in English | MEDLINE | ID: mdl-37667175

ABSTRACT

BACKGROUND: Over the past several decades, metrics have been defined to assess the quality of various types of models and to compare their performance depending on their capacity to explain the variance found in real-life data. However, available validation methods are mostly designed for statistical regressions rather than for mechanistic models. To our knowledge, in the latter case there are no consensus standards, for instance for the validation of predictions against real-world data given the variability and uncertainty of the data. In this work, we focus on the prediction of time-to-event curves, using as an application example a mechanistic model of non-small cell lung cancer. We designed four empirical methods to assess both model performance and the reliability of predictions: two methods based on bootstrapped versions of statistical tests (log-rank and combined weighted log-ranks, MaxCombo), and two methods based on bootstrapped prediction intervals, referred to here as raw coverage and the juncture metric. We also introduced the notion of observation-time uncertainty to take into consideration the real-life delay between the moment an event happens and the moment it is observed and reported. RESULTS: We highlight the advantages and disadvantages of these methods according to their application context. We have shown that the context of use of the model has an impact on the model validation process. Thanks to the use of several validation metrics, we highlighted the model's limited ability to predict the evolution of the disease in the whole population of mutations at once, and showed that it was more effective when making specific predictions in the target mutation populations. The choice and use of a single metric could have led to an erroneous validation of the model and its context of use. CONCLUSIONS: With this work, we stress the importance of making judicious choices for a metric, and how using a combination of metrics could be more relevant, with the objective of validating a given model and its predictions within a specific context of use. We also show how the reliability of the results depends both on the metric and on the statistical comparisons, and that the conditions of application and the type of available information need to be taken into account to choose the best validation strategy.
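The raw-coverage idea (the fraction of observed points that fall inside a bootstrapped prediction band) can be sketched generically with the survival package in R; this simplified illustration uses the package's lung data as a stand-in and is not the authors' mechanistic model or their exact metric implementation.

```r
# Simplified sketch: bootstrap a Kaplan-Meier curve, form a pointwise 95%
# percentile band, and compute the fraction of "observed" points it covers.
# survival::lung is used as stand-in data.
library(survival)

set.seed(1)
dat   <- lung
times <- seq(100, 700, by = 100)
B     <- 500

surv_at <- function(d) {
  fit <- survfit(Surv(time, status) ~ 1, data = d)
  summary(fit, times = times, extend = TRUE)$surv
}

boot_surv <- replicate(B, surv_at(dat[sample(nrow(dat), replace = TRUE), ]))
band <- apply(boot_surv, 1, quantile, probs = c(0.025, 0.975))

observed <- surv_at(dat)                     # stand-in for external observations
coverage <- mean(observed >= band[1, ] & observed <= band[2, ])
coverage                                     # share of time points inside the band
```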


Subject(s)
Lung Adenocarcinoma, Non-Small Cell Lung Carcinoma, Lung Neoplasms, Humans, Non-Small Cell Lung Carcinoma/genetics, Reproducibility of Results, Uncertainty, Lung Neoplasms/genetics, Lung Adenocarcinoma/genetics, ErbB Receptors/genetics
14.
Res Synth Methods ; 14(6): 794-806, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37399809

ABSTRACT

Network meta-analysis has played an important role in evidence-based medicine for assessing the comparative effectiveness of multiple available treatments. The prediction interval has been one of the standard outputs in recent network meta-analysis as an effective measure that enables simultaneous assessment of uncertainties in treatment effects and heterogeneity among studies. To construct the prediction interval, a large-sample approximating method based on the t-distribution has generally been applied in practice; however, recent studies have shown that similar t-approximation methods for conventional pairwise meta-analyses can substantially underestimate the uncertainty under realistic situations. In this article, we performed simulation studies to assess the validity of the current standard method for network meta-analysis, and we show that its validity can also be violated under realistic situations. To address the invalidity issue, we developed two new methods to construct more accurate prediction intervals through bootstrap and Kenward-Roger-type adjustment. In simulation experiments, the two proposed methods exhibited better coverage performance and generally provided wider prediction intervals than the ordinary t-approximation. We also developed an R package, PINMA (https://cran.r-project.org/web/packages/PINMA/), to perform the proposed methods using simple commands. We illustrate the effectiveness of the proposed methods through applications to two real network meta-analyses.


Subject(s)
Network Meta-Analysis, Computer Simulation
15.
Biom J ; 65(7): e2200046, 2023 10.
Article in English | MEDLINE | ID: mdl-37078835

ABSTRACT

This study compares the performance of statistical methods for predicting age-standardized cancer incidence, including Poisson generalized linear models, age-period-cohort (APC) and Bayesian age-period-cohort (BAPC) models, autoregressive integrated moving average (ARIMA) time series, and simple linear models. The methods are evaluated via leave-future-out cross-validation, and performance is assessed using the normalized root mean square error, the interval score, and the coverage of prediction intervals. Methods were applied to cancer incidence from the three Swiss cancer registries of Geneva, Neuchatel, and Vaud combined, considering the five most frequent cancer sites: breast, colorectal, lung, prostate, and skin melanoma, with all other sites grouped together in a final category. Best overall performance was achieved by ARIMA models, followed by linear regression models. Prediction methods based on model selection using the Akaike information criterion resulted in overfitting. The widely used APC and BAPC models were found to be suboptimal for prediction, particularly in the case of a trend reversal in incidence, as was observed for prostate cancer. In general, we do not recommend predicting cancer incidence for periods far into the future but rather updating predictions regularly.
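For readers unfamiliar with how ARIMA prediction intervals are produced, a minimal R sketch with the forecast package on a simulated incidence series (not the Swiss registry data) looks like this:

```r
# Minimal sketch: fit an ARIMA model to an annual incidence series and obtain
# point forecasts with 95% prediction intervals; the series is simulated,
# not the Swiss registry data used in the study.
library(forecast)

set.seed(42)
incidence <- ts(80 + 0.8 * (1:30) + rnorm(30, sd = 3), start = 1990)

fit <- auto.arima(incidence)          # automatic order selection
fc  <- forecast(fit, h = 5, level = 95)
fc                                    # point forecasts with lower/upper 95% limits
```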


Subject(s)
Statistical Models, Prostatic Neoplasms, Male, Humans, Incidence, Switzerland/epidemiology, Bayes Theorem, Prostatic Neoplasms/epidemiology
16.
J Am Soc Mass Spectrom ; 34(5): 939-947, 2023 May 03.
Article in English | MEDLINE | ID: mdl-37018384

ABSTRACT

Semiquantitation of suspect per- and polyfluoroalkyl substances (PFAS) in complex mixtures is challenging due to the increasing number of suspect PFAS. Traditional 1:1 matching strategies require selecting calibrants (target-surrogate standard pairs) based on head group, fluorinated chain length, and retention time, which is time-consuming and requires expert knowledge. Lack of uniformity in calibrant selection for estimating suspect concentrations among different laboratories makes comparing reported suspect concentrations difficult. In this study, a practical approach was developed whereby the area counts for 50 anionic and 5 zwitterionic/cationic target PFAS were ratioed to the average area of their respective stable-isotope-labeled surrogates to create "average PFAS calibration curves" for suspects detected in negative- and positive-ionization-mode liquid chromatography quadrupole time-of-flight mass spectrometry. The calibration curves were fitted with log-log and weighted linear regression models. The two models were evaluated for their accuracy and prediction intervals in predicting the target PFAS concentrations. The average PFAS calibration curves were then used to estimate the suspect PFAS concentrations in a well-characterized aqueous film-forming foam. Weighted linear regression resulted in more target PFAS falling within 70-130% of their known standard values and narrower prediction intervals than the log-log transformation approach. The summed suspect PFAS concentrations calculated by weighted linear regression and log-log transformation were within 8% and 16%, respectively, of those estimated by a 1:1 matching strategy. The average PFAS calibration curve can easily be expanded and can be applied to any suspect PFAS, even if confidence in the suspect structure is low or unknown.
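The weighted-regression step with a prediction interval for a new measurement can be illustrated generically in base R; the calibration concentrations and area ratios below are invented placeholders, not the study's PFAS data.

```r
# Sketch: weighted linear calibration curve and 95% prediction intervals for
# new points; conc and area_ratio values are invented, not the study's data.
conc       <- c(0.5, 1, 2, 5, 10, 20, 50, 100)          # ng/mL, illustrative
area_ratio <- c(0.04, 0.09, 0.19, 0.48, 1.02, 1.95, 5.1, 9.8)

fit <- lm(area_ratio ~ conc, weights = 1 / conc^2)      # 1/x^2 weighting

new_points <- data.frame(conc = c(3, 30))
predict(fit, newdata = new_points, interval = "prediction", level = 0.95,
        weights = 1 / new_points$conc^2)                # fit, lwr, upr per point
```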

17.
Molecules ; 29(1)2023 Dec 19.
Article in English | MEDLINE | ID: mdl-38202602

ABSTRACT

We used the extreme gradient boosting (XGB) algorithm to predict the experimental solubility of chemical compounds in water and organic solvents and to select significant molecular descriptors. The accuracy of prediction of our forward stepwise top-importance XGB (FSTI-XGB) on curated solubility data sets, in terms of RMSE, was found to be 0.59-0.76 Log(S) for two water data sets, while for organic solvent data sets it was 0.69-0.79 Log(S) for the Methanol data set, 0.65-0.79 Log(S) for the Ethanol data set, and 0.62-0.70 Log(S) for the Acetone data set. That was the first step. In the second step, we used uncurated and curated AquaSolDB data sets for applicability domain (AD) tests of the Drugbank, PubChem, and COCONUT databases and determined that more than 95% of the ca. 500,000 compounds studied were within the AD. In the third step, we applied conformal prediction to obtain narrow prediction intervals, which we successfully validated using the test sets' true solubility values. With the prediction intervals obtained in the fourth and last step, we were able to estimate individual error margins and the accuracy class of the solubility prediction for molecules within the AD of the three public databases. All of this was possible without knowledge of the experimental solubilities in those databases. We find these four steps novel because solubility-related works usually study only the first step or the first two steps.
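The conformal-prediction step (the third step above) can be sketched with split conformal prediction; this toy R example uses a simple linear model and simulated data rather than FSTI-XGB and the solubility databases.

```r
# Split conformal prediction sketch: calibrate a symmetric interval half-width
# from absolute residuals on a held-out set; toy linear model, not FSTI-XGB.
set.seed(7)
n <- 300
x <- runif(n, 0, 10)
y <- 2 + 0.5 * x + rnorm(n, sd = 0.8)            # invented "solubility" data
dat <- data.frame(x, y)

train <- dat[1:150, ]; calib <- dat[151:300, ]

fit <- lm(y ~ x, data = train)

alpha  <- 0.05
scores <- abs(calib$y - predict(fit, calib))      # conformity scores
k      <- ceiling((nrow(calib) + 1) * (1 - alpha))
q_hat  <- sort(scores)[k]                         # conformal quantile

x_new <- data.frame(x = c(2.5, 7.5))
pred  <- predict(fit, x_new)
data.frame(x = x_new$x, lower = pred - q_hat, upper = pred + q_hat)
```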

18.
Integr Med Res ; 12(4): 101014, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38938910

ABSTRACT

In any meta-analysis it is important to report not only the mean effect size but also how the effect size varies across studies. A treatment that has a moderate clinical impact in all studies is very different from a treatment whose impact is moderate on average but large in some studies and trivial (or even harmful) in others. A treatment that has no impact in any study is very different from a treatment that has no impact on average because it is helpful in some studies but harmful in others. The majority of meta-analyses use the I-squared index to quantify heterogeneity. While this practice is common, it is nevertheless incorrect. I-squared does not tell us how much the effect size varies (except when I-squared is zero percent). The statistic that does convey this information is the prediction interval. It allows us to report, for example, that a treatment has a clinically trivial or moderate effect in roughly 10% of studies, a large effect in roughly 50%, and a very large effect in roughly 40%. This is the information that researchers or clinicians have in mind when they ask about heterogeneity. It is the information that researchers believe (incorrectly) is provided by I-squared.
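Statements such as "a trivial or moderate effect in roughly 10% of studies" follow from treating the true effects as approximately normal with mean mu and standard deviation tau; a sketch with illustrative numbers (not taken from this article):

```r
# Share of true effects falling in clinically defined ranges, assuming the
# true effects are roughly normal(mu, tau); all numbers are illustrative.
mu  <- 0.55    # estimated mean effect size
tau <- 0.25    # estimated SD of true effects

p_small <- pnorm(0.5, mu, tau)                 # true effect below 0.5
p_large <- pnorm(0.8, mu, tau) - p_small       # between 0.5 and 0.8
p_very  <- 1 - pnorm(0.8, mu, tau)             # above 0.8

round(c(below_0.5 = p_small, between = p_large, above_0.8 = p_very), 2)
```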

19.
F1000Res ; 12: 996, 2023.
Article in English | MEDLINE | ID: mdl-38273963

ABSTRACT

Background: Measurement uncertainty is typically expressed in terms of a symmetric interval y±U, where y denotes the measurement result and U the expanded uncertainty. However, in the case of heteroscedasticity, symmetric uncertainty intervals can be misleading. In this paper, a different approach for the calculation of uncertainty intervals is introduced. Methods: This approach is applicable when a validation study has been conducted with samples with known concentrations. In a first step, test results are obtained at the different known concentration levels. Then, on the basis of precision estimates, a prediction range is calculated. The measurement uncertainty for a given test result can then be obtained by projecting the intersection of the test result with the limits of the prediction range back onto the axis of the known values, now interpreted as representing the measurand. Results: It will be shown how, under certain circumstances, asymmetric uncertainty intervals arise quite naturally and lead to more reliable uncertainty intervals. Conclusions: This article establishes a conceptual framework in which measurement uncertainty can be derived from precision whenever the relationship between the latter and concentration has been characterized. This approach is applicable for different types of distributions. Closed expressions for the limits of the uncertainty interval are provided for the simple case of normally distributed test results and constant relative standard deviation.
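One way such asymmetry arises can be sketched as follows, assuming normally distributed results with a constant relative standard deviation; this illustrates the idea only and is not necessarily the paper's exact closed-form expression.

```r
# Sketch of how an asymmetric interval arises under a constant relative SD:
# if a result y is normal with mean x and SD cv*x, the 95% prediction range at
# x is x*(1 - 1.96*cv) .. x*(1 + 1.96*cv); projecting a result y back onto the
# x-axis (the measurand) gives the interval below. Illustrative numbers only.
cv <- 0.10          # constant relative standard deviation
y  <- 50            # test result, illustrative units
z  <- qnorm(0.975)

c(lower = y / (1 + z * cv), upper = y / (1 - z * cv))
# the lower half-width is smaller than the upper one, so the interval is
# asymmetric around y
```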


Subject(s)
Reproducibility of Results, Uncertainty
20.
J Pharmacokinet Pharmacodyn ; 49(5): 487-491, 2022 10.
Article in English | MEDLINE | ID: mdl-35927373

ABSTRACT

Variability and estimation uncertainty are important sources of variation in pharmacometric simulations. Different combinations of the uncertainty and variability components lead to a variety of types of simulation intervals, and many realized and unrealized confusions exist among pharmacometricians regarding their interpretation and usage. This commentary aims to clarify some of the important underlying concepts and to provide a convenient guideline on the conduct and interpretation of pharmacometric simulations.
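The distinction can be made concrete with a toy nested simulation: sampling only between-subject variability yields one type of interval, while also resampling the population parameters from their estimation uncertainty yields a wider one. The one-compartment model and all numbers below are invented for illustration and are not from the commentary.

```r
# Toy sketch: intervals from variability alone vs. variability plus parameter
# uncertainty; the model (concentration at a fixed time after an IV bolus)
# and all numbers are invented for illustration.
set.seed(123)
conc <- function(cl, v, dose = 100, t = 4) (dose / v) * exp(-(cl / v) * t)

cl_pop <- 5; v_pop <- 40          # point estimates of population parameters
omega_cl <- 0.3; omega_v <- 0.2   # between-subject variability (log-normal SD)
se_cl <- 0.05; se_v <- 0.04       # estimation uncertainty (log scale)

# (a) variability only: population parameters fixed at their estimates
sim_var <- conc(cl_pop * exp(rnorm(5000, 0, omega_cl)),
                v_pop  * exp(rnorm(5000, 0, omega_v)))

# (b) variability plus uncertainty: redraw population parameters per replicate
sim_both <- replicate(1000, {
  cl_i <- cl_pop * exp(rnorm(1, 0, se_cl))
  v_i  <- v_pop  * exp(rnorm(1, 0, se_v))
  conc(cl_i * exp(rnorm(5, 0, omega_cl)), v_i * exp(rnorm(5, 0, omega_v)))
})

quantile(sim_var,  c(0.05, 0.95))   # interval reflecting variability only
quantile(sim_both, c(0.05, 0.95))   # wider: variability plus uncertainty
```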


Subject(s)
Uncertainty, Computer Simulation