Results 1 - 4 of 4
1.
Contemp Clin Trials; 145: 107646, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39084407

ABSTRACT

In medical research, publication bias (PB) poses great challenges to the conclusions from systematic reviews and meta-analyses. Most methodological research on classic PB has focused on the potential suppression of studies reporting effects close to the null or statistically non-significant results. Such suppression is common, particularly when the study outcome concerns the effectiveness of a new intervention. On the other hand, attention has recently been drawn to the so-called inverse publication bias (IPB) within the evidence synthesis community. It can occur when assessing adverse events, because researchers may favor evidence showing a similar safety profile for an adverse event between a new intervention and a control group. Compared with classic PB, IPB is much less recognized in the current literature; methods designed for classic PB may be misapplied to address IPB, potentially leading to entirely incorrect conclusions. This article aims to provide a collection of accessible methods to assess IPB for adverse events. Specifically, we discuss the relevance of and differences between classic PB and IPB. We also demonstrate visual assessment through contour-enhanced funnel plots tailored to adverse events, as well as popular quantitative methods, including Egger's regression test, Peters' regression test, and the trim-and-fill method for such cases. Three real-world examples are presented to illustrate the bias in various scenarios, and the implementations are demonstrated with statistical code. We hope this article offers valuable insights for evaluating IPB in future systematic reviews of adverse events.


Subject(s)
Publication Bias, Humans, Meta-Analysis as Topic, Research Design
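The abstract above names Egger's regression test, Peters' regression test, and the trim-and-fill method as quantitative diagnostics. As a rough, hypothetical sketch of just one of these (not the article's own code, and using invented data), Egger's test could be applied to adverse-event log odds ratios as follows:

```python
# Minimal illustrative sketch of Egger's regression test for funnel-plot
# asymmetry. The data (yi = log odds ratios, sei = standard errors) are
# hypothetical; this is not the article's published code.
import numpy as np
import statsmodels.api as sm

def eggers_test(yi, sei):
    """Regress standardized effects on precision; an intercept far from
    zero suggests small-study effects, which for adverse-event outcomes
    may point toward inverse publication bias."""
    yi, sei = np.asarray(yi, float), np.asarray(sei, float)
    z = yi / sei            # standardized effect sizes
    precision = 1.0 / sei   # inverse standard errors
    X = sm.add_constant(precision)
    fit = sm.OLS(z, X).fit()
    return fit.params[0], fit.pvalues[0]   # intercept and its p-value

# Hypothetical adverse-event log ORs and standard errors:
yi = [0.10, -0.05, 0.20, 0.02, -0.15, 0.30, 0.05, -0.02]
sei = [0.25, 0.30, 0.15, 0.40, 0.35, 0.20, 0.28, 0.22]
intercept, p_value = eggers_test(yi, sei)
print(f"Egger intercept = {intercept:.3f}, p = {p_value:.3f}")
```

Note that, per the abstract, the interpretation differs for IPB: for adverse events, asymmetry may reflect suppression of studies showing harm rather than of null results, so the funnel-plot contours and the sign of the intercept should be read with that in mind.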
2.
BMC Med; 21(1): 112, 2023 Mar 29.
Article in English | MEDLINE | ID: mdl-36978059

ABSTRACT

BACKGROUND: Studies included in a meta-analysis are often heterogeneous. Traditional random-effects models assume their true effects to follow a normal distribution, but it is unclear whether this critical assumption holds in practice. Violations of this between-study normality assumption could lead to problematic meta-analytical conclusions. We aimed to examine empirically whether this assumption is valid in published meta-analyses.
METHODS: In this cross-sectional study, we collected meta-analyses available in the Cochrane Library with at least 10 studies and with between-study variance estimates > 0. For each extracted meta-analysis, we performed the Shapiro-Wilk (SW) test to quantitatively assess the between-study normality assumption. For binary outcomes, we assessed between-study normality for odds ratios (ORs), relative risks (RRs), and risk differences (RDs). Subgroup analyses based on sample sizes and event rates were used to rule out potential confounders. In addition, we obtained quantile-quantile (Q-Q) plots of study-specific standardized residuals for visually assessing between-study normality.
RESULTS: Based on 4234 eligible meta-analyses with binary outcomes and 3433 with non-binary outcomes, the proportion of meta-analyses with statistically significant non-normality ranged from 15.1% to 26.2%. RDs and non-binary outcomes led to more frequent non-normality issues than ORs and RRs. For binary outcomes, between-study non-normality was found more frequently in meta-analyses with larger sample sizes and with event rates away from 0 and 100%. Agreement between two independent researchers assessing normality from the Q-Q plots was fair to moderate.
CONCLUSIONS: The between-study normality assumption is commonly violated in Cochrane meta-analyses. It should be routinely assessed when performing a meta-analysis, and when it may not hold, alternative meta-analysis methods that do not make this assumption should be considered.


Subject(s)
Cross-Sectional Studies, Humans, Sample Size, Odds Ratio
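As a rough illustration of the kind of check described in this abstract (not the study's actual analysis pipeline), the sketch below fits a DerSimonian-Laird random-effects model and applies the Shapiro-Wilk test to standardized study-level residuals; the effect sizes and within-study variances are invented:

```python
# Hypothetical sketch: Shapiro-Wilk test on standardized residuals from a
# DerSimonian-Laird random-effects meta-analysis (not the study's own code).
import numpy as np
from scipy import stats

def dersimonian_laird(yi, vi):
    """Return the DL estimate of tau^2 and the random-effects pooled mean."""
    yi, vi = np.asarray(yi, float), np.asarray(vi, float)
    w = 1.0 / vi
    mu_fixed = np.sum(w * yi) / np.sum(w)
    Q = np.sum(w * (yi - mu_fixed) ** 2)
    k = len(yi)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1.0 / (vi + tau2)
    return tau2, np.sum(w_re * yi) / np.sum(w_re)

def shapiro_on_residuals(yi, vi):
    """Small p-values suggest the between-study normality assumption fails."""
    tau2, mu = dersimonian_laird(yi, vi)
    resid = (np.asarray(yi, float) - mu) / np.sqrt(np.asarray(vi, float) + tau2)
    return stats.shapiro(resid)

# Invented log odds ratios and within-study variances for 10 studies:
yi = [0.30, -0.10, 0.50, 0.20, 0.00, 0.40, -0.20, 0.60, 0.10, 0.35]
vi = [0.04, 0.09, 0.05, 0.06, 0.08, 0.03, 0.07, 0.05, 0.06, 0.04]
print(shapiro_on_residuals(yi, vi))
```

The simple residuals used here condition on the overall pooled estimate; the paper's Q-Q plots are based on study-specific standardized residuals that may be computed differently (e.g., using leave-one-out estimates), so this sketch conveys the general idea only.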
3.
Article in English | MEDLINE | ID: mdl-33801771

ABSTRACT

Bayesian methods are an important set of tools for performing meta-analyses. They avoid some potentially unrealistic assumptions required by conventional frequentist methods. More importantly, meta-analysts can incorporate prior information from many sources, including experts' opinions and prior meta-analyses. Nevertheless, Bayesian methods are used less frequently than conventional frequentist methods, primarily because they require nontrivial statistical coding, whereas frequentist approaches can be implemented via many user-friendly software packages. This article aims to provide a practical review of implementations for Bayesian meta-analyses with various prior distributions. We present Bayesian methods for meta-analyses with a focus on odds ratios for binary outcomes. We summarize commonly used choices of prior distribution for the between-study heterogeneity variance, a critical parameter in meta-analyses; they include the inverse-gamma, uniform, and half-normal distributions, as well as evidence-based informative log-normal priors. Five real-world examples are presented to illustrate their performance. We provide all of the statistical code for future use by practitioners. Under certain circumstances, Bayesian methods can produce markedly different results from those of frequentist methods, including a change in the decision on statistical significance. When data information is limited, the choice of priors may have a large impact on meta-analytic results, in which case sensitivity analyses are recommended. Moreover, the algorithm for implementing Bayesian analyses may not converge for extremely sparse data, and caution is needed in interpreting the corresponding results; convergence should therefore be routinely examined. When certain statistical assumptions made by conventional frequentist methods are violated, Bayesian methods provide a reliable alternative for performing a meta-analysis.


Subject(s)
Algorithms, Software, Bayes Theorem
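To make the prior choices mentioned in this abstract concrete, here is a minimal, hypothetical sketch of a Bayesian random-effects meta-analysis of log odds ratios with a half-normal prior on the between-study standard deviation. It uses PyMC and invented data, and is not the article's own code (which may use different software and parameterizations):

```python
# Hypothetical sketch: Bayesian random-effects meta-analysis of log odds
# ratios with a half-normal prior on the between-study SD, using PyMC.
import numpy as np
import pymc as pm
import arviz as az

yi = np.array([0.20, -0.10, 0.40, 0.00, 0.30])   # invented log ORs
sei = np.array([0.15, 0.20, 0.18, 0.25, 0.22])   # invented standard errors

with pm.Model():
    mu = pm.Normal("mu", mu=0.0, sigma=10.0)      # vague prior on the pooled log OR
    tau = pm.HalfNormal("tau", sigma=1.0)         # half-normal prior on heterogeneity SD
    theta = pm.Normal("theta", mu=mu, sigma=tau, shape=len(yi))  # study-specific effects
    pm.Normal("y", mu=theta, sigma=sei, observed=yi)             # within-study likelihood
    trace = pm.sample(draws=2000, tune=1000, chains=4, random_seed=1)

# As the abstract cautions, convergence should be checked routinely
# (e.g., R-hat close to 1) before interpreting the posterior summaries.
print(az.summary(trace, var_names=["mu", "tau"]))
```

Swapping the half-normal line for an inverse-gamma prior on tau-squared or a uniform prior on tau would mimic the other prior choices listed in the abstract; with only a few sparse studies these choices can noticeably shift the posterior, which is the situation in which the abstract recommends sensitivity analyses.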