Results 1 - 20 of 119
1.
BMC Med Res Methodol ; 24(1): 195, 2024 Sep 07.
Article in English | MEDLINE | ID: mdl-39244581

ABSTRACT

The inability to correctly account for unmeasured confounding can lead to bias in parameter estimates, invalid uncertainty assessments, and erroneous conclusions. Sensitivity analysis is an approach to investigating the impact of unmeasured confounding in observational studies, but its adoption has been slow given the lack of accessible software. An extensive review of available R packages to account for unmeasured confounding lists deterministic sensitivity analysis methods, but no R packages were listed for probabilistic sensitivity analysis. The R package unmconf is the first available package for probabilistic sensitivity analysis via a Bayesian unmeasured confounding model. The package allows for normal, binary, Poisson, or gamma responses, accounting for one or two unmeasured confounders from the normal or binomial distribution. The goal of unmconf is to provide a user-friendly package that performs Bayesian modeling in the presence of unmeasured confounders, with simple commands on the front end and more intensive computation on the back end. We investigate the applicability of this package through novel simulation studies. The results indicate that credible intervals have near-nominal coverage probability and smaller bias when the unmeasured confounder(s) are modeled, for varying levels of internal/external validation data across various combinations of response-unmeasured confounder distributional families.
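The probabilistic approach the abstract describes can be illustrated outside of unmconf. Below is a minimal Python sketch (not the package's R interface) of a simple probabilistic bias analysis for one binary unmeasured confounder: bias parameters are drawn from assumed distributions and an observed risk ratio is repeatedly corrected. The prior ranges are purely illustrative assumptions.

```python
import random

def probabilistic_bias_analysis(rr_obs, n_draws=10_000, seed=1):
    """Monte Carlo sensitivity analysis for one binary unmeasured confounder.

    Draws bias parameters from assumed (hypothetical) distributions and
    returns a 95% simulation interval of confounding-corrected risk ratios.
    """
    rng = random.Random(seed)
    corrected = []
    for _ in range(n_draws):
        # Illustrative priors: confounder-outcome risk ratio and confounder
        # prevalence among the exposed (p1) and unexposed (p0).
        rr_cd = rng.uniform(1.5, 3.0)
        p1 = rng.uniform(0.4, 0.6)
        p0 = rng.uniform(0.1, 0.3)
        # Standard bias factor for a binary confounder on the risk-ratio scale.
        bias = (p1 * (rr_cd - 1) + 1) / (p0 * (rr_cd - 1) + 1)
        corrected.append(rr_obs / bias)
    corrected.sort()
    return corrected[int(0.025 * n_draws)], corrected[int(0.975 * n_draws)]

lo, hi = probabilistic_bias_analysis(2.0)
```

Because the assumed confounder is more prevalent among the exposed, every corrected draw here is pulled below the observed risk ratio of 2.0.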


Subject(s)
Bayes Theorem , Confounding Factors, Epidemiologic , Software , Humans , Computer Simulation , Models, Statistical , Algorithms , Bias , Regression Analysis
3.
Stat Med ; 2024 Jul 29.
Article in English | MEDLINE | ID: mdl-39075028

ABSTRACT

Principal stratification has become a popular tool to address a broad class of causal inference questions, particularly in dealing with non-compliance and truncation-by-death problems. The causal effects within principal strata, which are determined by joint potential values of the intermediate variable, also known as the principal causal effects, are often of interest in these studies. The analysis of principal causal effects from observational studies mostly relies on the ignorability assumption of treatment assignment, which requires practitioners to accurately measure as many covariates as possible so that all potential sources of confounding are captured. However, in practice, collecting all potential confounding factors can be challenging and costly, rendering the ignorability assumption questionable. In this paper, we consider the identification and estimation of causal effects when the treatment and the principal stratification are subject to unmeasured confounding. Specifically, we establish the nonparametric identification of principal causal effects using a pair of negative controls to mitigate unmeasured confounding, requiring that they have no direct effect on the outcome variable. We also provide an estimation method for principal causal effects. Extensive simulations and a leukemia study are employed for illustration.

4.
Am J Epidemiol ; 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39030722

ABSTRACT

Confounding by indication is a key challenge for pharmacoepidemiologists. Although self-controlled study designs address time-invariant confounding, indications sometimes vary over time. For example, infection might act as a time-varying confounder in a study of antibiotics and uveitis, because it is time-limited and a direct cause of both receiving antibiotics and uveitis. Methods for incorporating active comparators in self-controlled studies to address such time-varying confounding by indication have only recently been developed. In this paper we formalize these methods and provide a detailed description of how the active comparator rate ratio can be derived in a self-controlled case series (SCCS): either by explicitly comparing the regression coefficients for a drug of interest and an active comparator under certain circumstances using a simple ratio approach, or through the use of a nested regression model. The approaches are compared in two case studies, one examining the association between thiazolidinediones and fractures, and one examining the association between fluoroquinolones and uveitis using the UK Clinical Practice Research Datalink. Finally, we provide recommendations for the use of these methods, which we hope will support the design, execution, and interpretation of SCCS using active comparators and thereby increase the robustness of pharmacoepidemiological studies.
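As a concrete illustration of the simple ratio approach: in a log-linear SCCS model each exposure's rate ratio is the exponential of its regression coefficient, so the active comparator rate ratio is the exponential of the coefficient difference. A minimal sketch (the coefficient values below are hypothetical, not from either case study):

```python
import math

def active_comparator_rate_ratio(beta_drug, beta_comparator):
    # In a log-linear SCCS model, RR_drug = exp(beta_drug) and
    # RR_comparator = exp(beta_comparator), so their ratio is
    # exp(beta_drug - beta_comparator).
    return math.exp(beta_drug - beta_comparator)

# Hypothetical coefficients from a fitted SCCS model:
acrr = active_comparator_rate_ratio(0.50, 0.20)
```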

5.
Biometrics ; 80(2)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38884127

ABSTRACT

The marginal structural quantile model (MSQM) provides a unique lens through which to understand the causal effect of a time-varying treatment on the full distribution of potential outcomes. Under the semiparametric framework, we derive the efficient influence function for the MSQM, from which a new doubly robust estimator is proposed for point estimation and inference. We show that the doubly robust estimator is consistent if either the model for treatment assignment or the model for the potential outcome distributions is correctly specified, and is semiparametric efficient if both models are correct. To implement the doubly robust MSQM estimator, we propose solving a smoothed estimating equation to facilitate efficient computation of the point and variance estimates. In addition, we develop a confounding function approach to investigate the sensitivity of several MSQM estimators when the sequential ignorability assumption is violated. Extensive simulations are conducted to examine the finite-sample performance characteristics of the proposed methods. We apply the proposed methods to the Yale New Haven Health System Electronic Health Record data to study the effect of antihypertensive medications on patients with severe hypertension and assess the robustness of the findings to unmeasured baseline and time-varying confounding.


Subject(s)
Computer Simulation , Hypertension , Models, Statistical , Humans , Hypertension/drug therapy , Antihypertensive Agents/therapeutic use , Electronic Health Records/statistics & numerical data , Biometry/methods
6.
Am J Epidemiol ; 2024 May 31.
Article in English | MEDLINE | ID: mdl-38825336

ABSTRACT

BACKGROUND: Unmeasured confounding is often raised as a source of potential bias during the design of non-randomized studies, but quantifying such concerns is challenging. METHODS: We developed a simulation-based approach to assess the potential impact of unmeasured confounding during the study design stage. The approach involved generating hypothetical individual-level cohorts using realistic parameters, including a binary treatment (prevalence 25%), a time-to-event outcome (incidence 5%), 13 measured covariates, a binary unmeasured confounder (u1, 10%), and a binary measured 'proxy' variable (p1) correlated with u1. The strength of unmeasured confounding and the correlation between u1 and p1 were varied across simulation scenarios. Treatment effects were estimated with: a) no adjustment, b) adjustment for measured confounders (Level 1), and c) adjustment for measured confounders and the proxy (Level 2). We computed absolute standardized mean differences in u1 and p1 and the relative bias at each level of adjustment. RESULTS: Across all scenarios, Level 2 adjustment improved balance on u1, but this improvement was highly dependent on the correlation between u1 and p1. Level 2 adjustment also had lower relative bias than Level 1 adjustment (in strong-u1 scenarios, relative bias of 9.2%, 12.2%, and 13.5% at correlations of 0.7, 0.5, and 0.3, respectively, versus 16.4%, 15.8%, and 15.0% for Level 1). CONCLUSION: An approach using simulated individual-level data was useful for explicitly conveying the potential for bias due to unmeasured confounding while designing non-randomized studies and can help inform design choices.
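The design-stage idea can be mimicked in a few lines. The sketch below (plain Python, with illustrative parameter values that are not the paper's) generates a cohort with an unmeasured binary confounder u1 and a proxy p1, then compares a crude risk difference with one standardized over the proxy's strata, showing the partial bias reduction the abstract describes:

```python
import random

def simulate_cohort(n=400_000, corr=0.7, seed=7):
    """Toy cohort: unmeasured confounder u1 (10%), a proxy p1 that copies
    u1 with probability `corr`, a treatment made more likely by u1, and a
    binary outcome raised by both treatment and u1.
    All parameter values are illustrative, not those of the paper."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        u1 = rng.random() < 0.10
        p1 = u1 if rng.random() < corr else rng.random() < 0.10
        t = rng.random() < (0.40 if u1 else 0.23)        # ~25% treated
        y = rng.random() < 0.02 + 0.02 * t + 0.06 * u1   # true RD for t = 0.02
        rows.append((t, y, p1))
    return rows

def risk_difference(rows, stratum=None):
    """Crude (stratum=None) or proxy-stratum-specific risk difference."""
    sel = [(t, y) for t, y, p in rows if stratum is None or p == stratum]
    treated = [y for t, y in sel if t]
    untreated = [y for t, y in sel if not t]
    return sum(treated) / len(treated) - sum(untreated) / len(untreated)

rows = simulate_cohort()
crude = risk_difference(rows)                  # biased upward by u1
# Level-2-style adjustment: standardize over the proxy's strata.
w1 = sum(1 for _, _, p in rows if p) / len(rows)
adjusted = w1 * risk_difference(rows, True) + (1 - w1) * risk_difference(rows, False)
```

Because p1 only imperfectly tracks u1, the proxy-adjusted estimate moves toward the true risk difference of 0.02 without reaching it, mirroring the correlation-dependent improvement reported above.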

7.
Struct Equ Modeling ; 31(1): 132-150, 2024.
Article in English | MEDLINE | ID: mdl-38706777

ABSTRACT

Parallel process latent growth curve mediation models (PP-LGCMMs) are frequently used to longitudinally investigate the mediation effects of treatment on the level and change of outcome through the level and change of mediator. An important but often violated assumption in empirical PP-LGCMM analysis is the absence of omitted confounders of the relationships among treatment, mediator, and outcome. In this study, we analytically examined how omitting pretreatment confounders impacts the inference of mediation from the PP-LGCMM. Using the analytical results, we developed three sensitivity analysis approaches for the PP-LGCMM, including the frequentist, Bayesian, and Monte Carlo approaches. The three approaches help investigate different questions regarding the robustness of mediation results from the PP-LGCMM, and handle the uncertainty in the sensitivity parameters differently. Applications of the three sensitivity analyses are illustrated using a real-data example. A user-friendly Shiny web application is developed to conduct the sensitivity analyses.

8.
Biometrics ; 80(2)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38646999

ABSTRACT

Negative control variables are sometimes used in nonexperimental studies to detect the presence of confounding by hidden factors. A negative control outcome (NCO) is an outcome that is influenced by unobserved confounders of the exposure effects on the outcome in view, but is not causally impacted by the exposure. Tchetgen Tchetgen (2013) introduced the Control Outcome Calibration Approach (COCA) as a formal NCO counterfactual method to detect and correct for residual confounding bias. For identification, COCA treats the NCO as an error-prone proxy of the treatment-free counterfactual outcome of interest, and involves regressing the NCO on the treatment-free counterfactual, together with a rank-preserving structural model, which assumes a constant individual-level causal effect. In this work, we establish nonparametric COCA identification for the average causal effect for the treated, without requiring rank-preservation, therefore accommodating unrestricted effect heterogeneity across units. This nonparametric identification result has important practical implications, as it provides single-proxy confounding control, in contrast to recently proposed proximal causal inference, which relies for identification on a pair of confounding proxies. For COCA estimation we propose 3 separate strategies: (i) an extended propensity score approach, (ii) an outcome bridge function approach, and (iii) a doubly-robust approach. Finally, we illustrate the proposed methods in an application evaluating the causal impact of a Zika virus outbreak on birth rate in Brazil.


Subject(s)
Propensity Score , Humans , Confounding Factors, Epidemiologic , Zika Virus Infection/epidemiology , Causality , Models, Statistical , Bias , Brazil/epidemiology , Computer Simulation , Female , Pregnancy
9.
J Comp Eff Res ; 13(5): e230085, 2024 05.
Article in English | MEDLINE | ID: mdl-38567965

ABSTRACT

Aim: The first objective is to compare the performance of two-stage residual inclusion (2SRI) and two-stage prediction substitution (2SPS) with the multivariable generalized linear model (GLM) in terms of reducing unmeasured confounding bias. The second objective is to demonstrate the ability of 2SRI and 2SPS to alleviate unmeasured confounding when noncollapsibility exists. Materials & methods: This study comprises a simulation study and an empirical example from a real-world UK population health dataset (Clinical Practice Research Datalink). The instrumental variable (IV) used is based on physicians' prescribing preferences (defined by prescribing history). Results: The percent bias of 2SRI in treatment effect estimates was lower than that of GLM and 2SPS and was less than 15% in most scenarios. Further, 2SRI was found to be robust to mild noncollapsibility, with percent bias less than 50%. As the level of unmeasured confounding increased, the ability to alleviate noncollapsibility decreased. Strong IVs tended to be more robust to noncollapsibility than weak IVs. Conclusion: 2SRI tends to be less biased than GLM and 2SPS in estimating treatment effects. It can be robust to noncollapsibility when the unmeasured confounding effect is mild.
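For intuition, here is a minimal linear-model sketch of two-stage residual inclusion (in the linear case 2SRI coincides with two-stage least squares; the paper's setting uses GLMs and a preference-based IV, and the simulated data below are purely illustrative):

```python
import numpy as np

def two_stage_residual_inclusion(z, t, y):
    """2SRI with linear stages:
    1) regress treatment on the instrument and keep the residuals;
    2) regress the outcome on treatment plus the stage-1 residuals,
       which absorb the unmeasured confounding carried by treatment."""
    X1 = np.column_stack([np.ones_like(z), z])
    resid = t - X1 @ np.linalg.lstsq(X1, t, rcond=None)[0]
    X2 = np.column_stack([np.ones_like(t), t, resid])
    beta = np.linalg.lstsq(X2, y, rcond=None)[0]
    return beta[1]  # coefficient on treatment

rng = np.random.default_rng(0)
n = 50_000
u = rng.normal(size=n)                       # unmeasured confounder
z = rng.normal(size=n)                       # instrument: affects t, not y
t = 0.8 * z + u + rng.normal(size=n)
y = 2.0 * t + 3.0 * u + rng.normal(size=n)   # true effect of t is 2.0

naive = np.polyfit(t, y, 1)[0]               # confounded OLS slope
est = two_stage_residual_inclusion(z, t, y)  # close to 2.0
```

The naive slope absorbs the confounding through u, while including the stage-1 residual recovers the true coefficient.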


Subject(s)
Confounding Factors, Epidemiologic , Practice Patterns, Physicians' , Humans , Practice Patterns, Physicians'/statistics & numerical data , Bias , Linear Models , Least-Squares Analysis , United Kingdom , Computer Simulation
10.
HGG Adv ; 5(1): 100245, 2024 Jan 11.
Article in English | MEDLINE | ID: mdl-37817410

ABSTRACT

Mendelian randomization has been widely used to assess the causal effect of a heritable exposure variable on an outcome of interest, using genetic variants as instrumental variables. In practice, data on the exposure variable can be incomplete due to high cost of measurement and technical limits of detection. In this paper, we propose a valid and efficient method to handle both unmeasured and undetectable values of the exposure variable in one-sample Mendelian randomization analysis with individual-level data. We estimate the causal effect of the exposure variable on the outcome using maximum likelihood estimation and develop an expectation maximization algorithm for the computation of the estimator. Simulation studies show that the proposed method performs well in making inference on the causal effect. We apply our method to the Hispanic Community Health Study/Study of Latinos, a community-based prospective cohort study, and estimate the causal effect of several metabolites on phenotypes of interest.


Subject(s)
Mendelian Randomization Analysis , Public Health , Humans , Mendelian Randomization Analysis/methods , Prospective Studies , Causality , Hispanic or Latino/genetics
11.
Am J Epidemiol ; 193(3): 426-453, 2024 Feb 05.
Article in English | MEDLINE | ID: mdl-37851862

ABSTRACT

Uses of real-world data in drug safety and effectiveness studies are often challenged by various sources of bias. We undertook a systematic search of the published literature through September 2020 to evaluate the state of use and utility of negative controls to address bias in pharmacoepidemiologic studies. Two reviewers independently evaluated study eligibility and abstracted data. Our search identified 184 eligible studies for inclusion. Cohort studies (115, 63%) and administrative data (114, 62%) were, respectively, the most common study design and data type used. Most studies used negative control outcomes (91, 50%), and for most studies the target source of bias was unmeasured confounding (93, 51%). We identified 4 utility domains of negative controls: 1) bias detection (149, 81%), 2) bias correction (16, 9%), 3) P-value calibration (8, 4%), and 4) performance assessment of different methods used in drug safety studies (31, 17%). The most popular methodologies used were the 95% confidence interval and P-value calibration. In addition, we identified 2 reference sets with structured steps to check the causality assumption of the negative control. While negative controls are powerful tools for bias detection, we found that many studies did not check the underlying assumptions. This article is part of a Special Collection on Pharmacoepidemiology.


Subject(s)
Pharmacoepidemiology , Humans , Bias , Pharmacoepidemiology/methods
12.
J Clin Epidemiol ; 166: 111228, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38040387

ABSTRACT

OBJECTIVES: Negative controls are considered an important tool to mitigate biases in observational studies. The aim of this scoping review was to summarize current methodologies of negative controls (both negative control exposure [NCE] and negative control outcome [NCO]). STUDY DESIGN AND SETTING: We searched PubMed, Web of Science, Embase, and Cochrane Library (up to March 9, 2023) for articles on methodologies of negative controls. Two reviewers selected eligible studies and collected relevant data independently and in duplicate. We reported total numbers and percentages, and summarized methodologies narratively. RESULTS: A total of 37 relevant methodological articles were included in our review. These publications covered NCE (n = 11, 29.8%), NCO (n = 13, 35.1%), or both (n = 13, 35.1%), with most focused on bias detection (n = 14, 37.8%), bias correction (n = 16, 43.3%), and P value or confidence interval (CI) calibration (n = 5, 13.5%). Of the two remaining articles (5.4%), one discussed bias detection and P value or CI calibration, and the other covered all three functions. For bias detection, the existence of an association between the NCE (NCO) and outcome (exposure) variables of interest simply indicates that results may suffer from confounding bias, selection bias, and/or information bias. For bias correction, however, the algorithms of negative control methods need more stringent assumptions such as rank preservation, monotonicity, and linearity. CONCLUSION: Negative controls can be leveraged for bias detection, P value or CI calibration, and bias correction, among which bias correction has been the most studied methodologically. The currently available methods need some stringent assumptions to detect or remove bias. More methodological research is needed to optimize the use of negative controls.


Subject(s)
Bias , Control Groups , Research Design , Selection Bias
13.
Int J Epidemiol ; 53(1)2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38110565

ABSTRACT

BACKGROUND: The sibling comparison analysis is used to deal with unmeasured confounding. It has previously been shown that, in the presence of non-shared unmeasured confounding, the sibling comparison analysis may introduce substantial bias depending on the sharedness of the unmeasured confounder and the sharedness of the exposure. We aimed to improve awareness of this challenge of the sibling comparison analysis. METHODS: First, we simulated sibling pairs with an exposure, a confounder, and an outcome. We simulated sibling pairs with no effect of the exposure on the outcome and with positive confounding. For varying degrees of sharedness of the confounder and the exposure and for varying prevalence of the exposure, we calculated the sibling comparison odds ratio (OR). Second, we provided measures of sharedness of selected treatments based on Danish health data. RESULTS: The confounded sibling comparison OR was visualized for varying degrees of sharedness of the confounder and the exposure and for varying prevalence of the exposure. It increased with increasing sharedness of the exposure and decreased with increasing prevalence of the exposure. Measures of sharedness of treatments based on Danish health data showed that treatments of chronic diseases have the highest sharedness and treatments of non-chronic diseases the lowest. CONCLUSIONS: Researchers should be aware of the challenge regarding non-shared unmeasured confounding in the sibling comparison analysis before applying the analysis in non-randomized studies. Otherwise, the sibling comparison analysis may lead to substantial bias.


Subject(s)
Siblings , Humans , Confounding Factors, Epidemiologic , Bias , Odds Ratio
14.
Stat Med ; 42(24): 4349-4376, 2023 10 30.
Article in English | MEDLINE | ID: mdl-37828812

ABSTRACT

Medical cost data often consist of zero values as well as extremely right-skewed positive values. A two-part model is a popular choice for analyzing medical cost data, where the first part models the probability of a positive cost using logistic regression and the second part models the positive cost using a lognormal or Gamma distribution. To address the unmeasured confounding in studies on cost outcome under two-part models, two instrumental variable (IV) methods, two-stage residual inclusion (2SRI) and two-stage prediction substitution (2SPS) are widely applied. However, previous literature demonstrated that both the 2SRI and the 2SPS could fail to consistently estimate the causal effect among compliers under standard IV assumptions for binary and survival outcomes. Our simulation studies confirmed that it continued to be the case for a two-part model, which is another nonlinear model. In this article, we develop a model-based IV approach, Instrumental Variable with Two-Part model (IV2P), to obtain a consistent estimate of the causal effect among compliers for cost outcome under standard IV assumptions. In addition, we develop sensitivity analysis approaches to allow the evaluation of the sensitivity of the causal conclusions to potential quantified violations of the exclusion restriction assumption and the randomization of IV assumption. We apply our method to a randomized cash incentive study to evaluate the effect of a primary care visit on medical cost among low-income adults newly covered by a primary care program.


Subject(s)
Primary Health Care , Humans , Adult , Computer Simulation , Logistic Models , Probability , Causality
15.
Ann Am Thorac Soc ; 20(11): 1642-1653, 2023 11.
Article in English | MEDLINE | ID: mdl-37579136

ABSTRACT

Rationale: Many advocate the application of propensity-matching methods to real-world data to answer key questions around obstructive sleep apnea (OSA) management. One such question is whether identifying undiagnosed OSA impacts mortality in high-risk populations, such as those with chronic obstructive pulmonary disease (COPD). Objectives: Assess the association of sleep testing with mortality among patients with COPD and a high likelihood of undiagnosed OSA. Methods: We identified patients with COPD and a high likelihood of undiagnosed OSA. We then distinguished those receiving sleep testing within 90 days of index COPD encounters. We calculated propensity scores for testing based on 37 variables and compared long-term mortality in matched groups. In sensitivity analyses, we compared mortality using inverse propensity weighting and instrumental variable methods. We also compared the incidence of nonfatal events including adverse outcomes (hospitalizations and COPD exacerbations) and routine services that are regularly indicated in COPD (influenza vaccination and pulmonary function testing). We compared the incidence of each nonfatal event as a composite outcome with death and separately compared the marginal probability of each nonfatal event independently, with death as a competing risk. Results: Among 135,958 patients, 1,957 (1.4%) received sleep testing. We propensity matched all patients with sleep testing to an equal number without testing, achieving excellent balance on observed confounders, with standardized differences < 0.10. We observed lower mortality risk among patients with sleep testing (incidence rate ratio, 0.88; 95% confidence interval [CI], 0.79-0.99) and similar results using inverse propensity weighting and instrumental variable methods. 
Contrary to mortality, we found that sleep testing was associated with a similar or greater risk for nonfatal adverse events, including inpatient COPD exacerbations (subhazard ratio, 1.29; 95% CI, 1.02-1.62) and routine services like influenza vaccination (subhazard ratio, 1.26; 95% CI, 1.17-1.36). Conclusions: Our disparate findings can be interpreted in multiple ways. Sleep testing may indeed cause both reduced mortality and greater incidence of nonfatal adverse outcomes and routine services. However, it is also possible that our findings stem from residual confounding by patients' likelihood of accessing care. Given the limitations of propensity-based analyses, we cannot confidently distinguish these two possibilities. This uncertainty highlights the limitations of using propensity-based analyses to guide patient care and policy decisions.


Subject(s)
Influenza, Human , Pulmonary Disease, Chronic Obstructive , Sleep Apnea, Obstructive , Humans , Risk Factors , Sleep Apnea, Obstructive/complications , Sleep Apnea, Obstructive/epidemiology , Sleep Apnea, Obstructive/diagnosis , Sleep
16.
Accid Anal Prev ; 191: 107144, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37473524

ABSTRACT

INTRODUCTION: Unmeasured confounding can lead to biased interpretations of empirical findings. This paper aimed to assess the magnitude of suspected unmeasured confounding due to driving mileage and to simulate the statistical power required to detect a discrepancy in the effect of polypharmacy on road traffic crashes (RTCs) among older adults. METHODS: Based on a Monte Carlo simulation (MCS) approach, we estimated 1) the magnitude of confounding by driving mileage on the association of polypharmacy and RTCs and 2) the statistical power to detect a discrepancy from no adjusted effect. A total of 1000 studies, each of 500,000 observations, were simulated. RESULTS: Under the assumption of a modest adjusted exposure-outcome odds ratio of 1.35, the magnitude of confounding bias by driving mileage was estimated to be 16% higher, with a statistical power of 50%. Only an adjusted odds ratio of at least 1.60 would be associated with a statistical power of about 80%. CONCLUSION: This applied probabilistic bias analysis showed that not adjusting for driving mileage as a confounder can lead to an overestimation of the effect of polypharmacy on RTCs in older adults. Even with a large sample, small to moderate adjusted exposure effects were difficult to detect.


Subject(s)
Accidents, Traffic , Humans , Aged , Computer Simulation , Bias , Odds Ratio
17.
Stat Methods Med Res ; 32(8): 1576-1587, 2023 08.
Article in English | MEDLINE | ID: mdl-37338976

ABSTRACT

Unmeasured confounding is a well-known obstacle in causal inference. In recent years, negative controls have received increasing attention as an important tool to address concerns about the problem. The literature on the topic has expanded rapidly, and several authors have advocated the more routine use of negative controls in epidemiological practice. In this article, we review concepts and methodologies based on negative controls for detection and correction of unmeasured confounding bias. We argue that negative controls may lack both specificity and sensitivity to detect unmeasured confounding and that proving the null hypothesis of a null negative control association is impossible. We focus our discussion on the control outcome calibration approach, the difference-in-difference approach, and the double-negative control approach as methods for confounding correction. For each of these methods, we highlight their assumptions and illustrate the potential impact of violations thereof. Given the potentially large impact of assumption violations, it may sometimes be desirable to replace strong conditions for exact identification with weaker, easily verifiable conditions, even when these imply at most partial identification of unmeasured confounding. Future research in this area may broaden the applicability of negative controls and in turn make them better suited for routine use in epidemiological practice. At present, however, the applicability of negative controls should be carefully judged on a case-by-case basis.


Subject(s)
Confounding Factors, Epidemiologic , Bias , Causality
18.
Stat Med ; 42(21): 3838-3859, 2023 09 20.
Article in English | MEDLINE | ID: mdl-37345519

ABSTRACT

Unmeasured confounding is a major obstacle to reliable causal inference based on observational studies. Instrumented difference-in-differences (iDiD), a novel idea connecting instrumental variables and standard DiD, ameliorates the above issue by explicitly leveraging exogenous randomness in an exposure trend. In this article, we utilize the idea of iDiD and propose a novel group sequential testing method that provides valid inference even in the presence of unmeasured confounders. At each time point, we estimate the average or conditional average treatment effect under the iDiD setting using the data accumulated up to that time point, and test the significance of the treatment effect. We derive the joint distribution of the test statistics under the null using the asymptotic properties of M-estimation, and the group sequential boundaries are obtained using α-spending functions. The performance of our proposed approach is evaluated on both synthetic data and the Clinformatics Data Mart Database (OptumInsight, Eden Prairie, MN) to examine the association between rofecoxib and acute myocardial infarction, and our method detects a significant adverse effect of rofecoxib much earlier than the time when it was finally withdrawn from the market.


Subject(s)
Bias , Statistics as Topic , Humans , Myocardial Infarction , Safety-Based Drug Withdrawals
19.
BMC Med Res Methodol ; 23(1): 111, 2023 05 04.
Article in English | MEDLINE | ID: mdl-37142961

ABSTRACT

BACKGROUND: Failure to appropriately account for unmeasured confounding may lead to erroneous conclusions. Quantitative bias analysis (QBA) can be used to quantify the potential impact of unmeasured confounding or how much unmeasured confounding would be needed to change a study's conclusions. Currently, QBA methods are not routinely implemented, partly due to a lack of knowledge about accessible software. Also, comparisons of QBA methods have focused on analyses with a binary outcome. METHODS: We conducted a systematic review of the latest developments in QBA software published between 2011 and 2021. Our inclusion criteria were software that did not require adaption (i.e., code changes) before application, was still available in 2022, and accompanied by documentation. Key properties of each software tool were identified. We provide a detailed description of programs applicable for a linear regression analysis, illustrate their application using two data examples and provide code to assist researchers in future use of these programs. RESULTS: Our review identified 21 programs with [Formula: see text] created post 2016. All are implementations of a deterministic QBA with [Formula: see text] available in the free software R. There are programs applicable when the analysis of interest is a regression of binary, continuous or survival outcomes, and for matched and mediation analyses. We identified five programs implementing differing QBAs for a continuous outcome: treatSens, causalsens, sensemakr, EValue, and konfound. When applied to one of our illustrative examples, causalsens incorrectly indicated sensitivity to unmeasured confounding whereas the other four programs indicated robustness. sensemakr performs the most detailed QBA and includes a benchmarking feature for multiple unmeasured confounders. CONCLUSIONS: Software is now available to implement a QBA for a range of different analyses. 
However, the diversity of methods, even for the same analysis of interest, presents challenges to their widespread uptake. Provision of detailed QBA guidelines would be highly beneficial.
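Of the tools this review names, EValue implements perhaps the simplest QBA: the E-value of VanderWeele and Ding. A minimal sketch of that calculation (this is the standard published formula, not code from any of the reviewed packages):

```python
import math

def e_value(rr):
    """E-value for an observed risk ratio: the minimum strength of
    association, on the risk-ratio scale, that an unmeasured confounder
    would need with both treatment and outcome to fully explain away
    the observed association."""
    rr = max(rr, 1.0 / rr)              # by symmetry, work with RR > 1
    return rr + math.sqrt(rr * (rr - 1.0))

ev = e_value(2.0)                       # -> 2 + sqrt(2) ≈ 3.41
```

An observed risk ratio of 2.0 could only be explained away by a confounder associated with both treatment and outcome by a risk ratio of about 3.41 each.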


Subject(s)
Software , Humans , Confounding Factors, Epidemiologic , Bias , Linear Models , Regression Analysis
20.
Stat Med ; 42(16): 2855-2872, 2023 07 20.
Article in English | MEDLINE | ID: mdl-37186394

ABSTRACT

The augmented randomized controlled trial (RCT) with a hybrid control arm includes a randomized treatment group (RT), a smaller randomized control group (RC), and a large synthetic control (SC) group from real-world data. This kind of trial is useful when there are logistical and ethical hurdles to conducting a fully powered RCT with equal allocation, or when it is necessary to increase the power of the RCT by incorporating real-world data. A difficulty in the analysis of an augmented RCT is that the SC and RC may differ systematically in the distribution of observed and unmeasured confounding factors, causing bias when the two control groups are analyzed together as hybrid controls. We propose to use propensity score (PS) analysis to balance the observed confounders between SC and RC. The possible bias caused by unmeasured confounders can be estimated and tested by analyzing propensity-score-adjusted outcomes from SC and RC. We also propose a partial bias correction (PBC) procedure to reduce bias from unmeasured confounding. Extensive simulation studies show that the proposed PS + PBC procedures can improve efficiency and statistical power by effectively incorporating the SC into the RCT data analysis, while still controlling the estimation bias and Type I error inflation that might arise from unmeasured confounding. We illustrate the proposed statistical procedures with data from an augmented RCT in oncology.
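Balance between the SC and RC groups is typically checked covariate by covariate with the absolute standardized mean difference, before and after PS weighting. A minimal sketch (the function name and the 0.1 rule of thumb are conventions, not the paper's notation):

```python
import math

def standardized_mean_difference(x_sc, x_rc, w_sc=None):
    """Absolute standardized mean difference of one covariate between an
    (optionally PS-weighted) synthetic control and the randomized control;
    values below ~0.1 are commonly read as adequate balance."""
    if w_sc is None:
        w_sc = [1.0] * len(x_sc)
    total = sum(w_sc)
    m_sc = sum(w * x for w, x in zip(w_sc, x_sc)) / total
    m_rc = sum(x_rc) / len(x_rc)
    v_sc = sum(w * (x - m_sc) ** 2 for w, x in zip(w_sc, x_sc)) / total
    v_rc = sum((x - m_rc) ** 2 for x in x_rc) / len(x_rc)
    return abs(m_sc - m_rc) / math.sqrt((v_sc + v_rc) / 2)

smd = standardized_mean_difference([0.0, 2.0], [1.0, 3.0])  # -> 1.0
```

Passing inverse-propensity weights for the SC group via `w_sc` lets the same function report post-weighting balance.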


Subject(s)
Computer Simulation , Humans , Bias , Propensity Score