1.
Am J Epidemiol ; 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38879744

ABSTRACT

Studies often report estimates of the average treatment effect (ATE). While the ATE summarizes the effect of a treatment on average, it does not provide any information about the effect of treatment within any individual. A treatment strategy that uses an individual's information to tailor treatment to maximize benefit is known as an optimal dynamic treatment rule (ODTR). Treatment, however, is typically not limited to a single point in time; consequently, learning an optimal rule for a time-varying treatment may involve not just learning the extent to which the comparative treatments' benefits vary across the characteristics of individuals, but also the extent to which those benefits vary as relevant circumstances evolve within an individual. The goal of this paper is to provide a tutorial for applied researchers on estimating ODTRs from longitudinal observational and clinical trial data. We describe an approach that uses a doubly robust unbiased transformation of the conditional average treatment effect. We then learn a time-varying ODTR for when to increase the buprenorphine-naloxone (BUP-NX) dose to minimize return to regular opioid use among patients with opioid use disorder. Our analysis highlights the utility of ODTRs in the context of sequential decision making: the learned ODTR outperforms a clinically defined strategy.
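
The doubly robust transformation of the conditional average treatment effect mentioned above can be illustrated with a minimal single-timepoint sketch. This is not the paper's implementation; the simulated data, nuisance values, and function names are all hypothetical.

```python
import random

def dr_pseudo_outcome(y, a, e_x, mu1_x, mu0_x):
    """Doubly robust (AIPW) transformation of the outcome: its
    conditional mean equals the conditional average treatment effect
    if either the propensity model (e_x) or the outcome models
    (mu1_x, mu0_x) are correct."""
    return (mu1_x - mu0_x
            + a * (y - mu1_x) / e_x
            - (1 - a) * (y - mu0_x) / (1 - e_x))

# Toy check: randomized treatment, true effect 2.0, correct nuisance
# values; the pseudo-outcomes should average to roughly the true effect.
random.seed(0)
pseudo = []
for _ in range(5000):
    a = 1 if random.random() < 0.5 else 0
    y = 1.0 + 2.0 * a + random.gauss(0, 1)
    pseudo.append(dr_pseudo_outcome(y, a, e_x=0.5, mu1_x=3.0, mu0_x=1.0))
ate_hat = sum(pseudo) / len(pseudo)
```

Averaging the pseudo-outcomes within covariate strata estimates the conditional effect, which a learned rule can then threshold at zero to decide treatment.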

2.
Qual Life Res ; 33(4): 1085-1094, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38240915

ABSTRACT

PURPOSE: Many studies on cancer patients investigate the impact of treatment on health-related quality of life (QoL). Typically, QoL is measured longitudinally, at baseline and at predefined timepoints thereafter. The question is whether, at a given timepoint, patients who return their questionnaire (available cases, AC) have a different QoL than those who do not return their questionnaire (non-AC). METHODS: We employed augmented inverse probability weighting (AIPW) to estimate the average QoL of non-AC in two studies on advanced-stage cancer patients. The AIPW estimator assumed data to be missing at random (MAR) and used machine learning (ML)-based methods to estimate answering probabilities of individuals at given timepoints as well as their reported QoL, as a function of auxiliary variables. These auxiliary variables were selected by medical oncologists based on domain expertise. We aggregated results both by timepoint and by time until death and compared AIPW estimates to the AC averages. Additionally, we used a pattern mixture model (PMM) to check sensitivity of our AIPW estimates against violation of the MAR assumption. RESULTS: Our study included 1927 patients with advanced pancreatic and 797 patients with advanced breast cancer. The AIPW estimate for average QoL of non-AC was below the average QoL of AC when aggregated by timepoint. The difference vanished when aggregated by time until death. PMM estimates were below AIPW estimates. CONCLUSIONS: Our results indicate that non-AC have a lower average QoL than AC. However, estimates for QoL of non-AC are subject to unverifiable assumptions about the missingness mechanism.
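
The AIPW estimator of the average QoL of all patients, responders or not, can be sketched as follows under the missing-at-random setup above. In this toy the answering probabilities and the outcome model are known by construction, whereas the study estimates them with ML methods from auxiliary variables.

```python
import random

def aipw_mean(rows):
    """AIPW estimate of the mean of y when y is missing for some units.
    Each row is (r, y, p, m): response indicator r, observed y (None
    when r == 0), answering probability p, and an outcome-model
    prediction m based on auxiliary variables."""
    total = 0.0
    for r, y, p, m in rows:
        total += m + (r / p) * ((y if r else 0.0) - m)
    return total / len(rows)

# Toy MAR data: the answering probability and the QoL score both depend
# on an auxiliary variable x, so the available-case (AC) mean is biased
# upward while AIPW recovers the overall mean (50 by construction).
random.seed(1)
rows = []
for _ in range(20000):
    x = random.gauss(0, 1)
    p = 0.3 + 0.4 * (1 if x > 0 else 0)
    r = 1 if random.random() < p else 0
    y = 50.0 + 5.0 * x + random.gauss(0, 2)
    rows.append((r, y if r else None, p, 50.0 + 5.0 * x))

est = aipw_mean(rows)
obs = [y for r, y, p, m in rows if r]
ac_mean = sum(obs) / len(obs)
```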


Subject(s)
Breast Neoplasms, Quality of Life, Humans, Female, Quality of Life/psychology, Surveys and Questionnaires, Bias
3.
Front Psychol ; 14: 1266447, 2023.
Article in English | MEDLINE | ID: mdl-37809287

ABSTRACT

Despite discussions about the replicability of findings in psychological research, two issues have been largely ignored: selection mechanisms and model assumptions. Both topics address the same fundamental question: does the chosen statistical analysis tool adequately model the data-generation process? In this article, we address both issues and show, in a first step, that in the face of selective samples, and contrary to common practice, the validity of inferences, even when based on experimental designs, can be claimed without further justification and adaptation of standard methods only in very specific situations. We then broaden our perspective to discuss the consequences of violated assumptions in linear models in the context of psychological research in general, and in generalized linear mixed models as used in item response theory. These types of misspecification are often ignored in the psychological research literature. We emphasize that the above problems cannot be overcome by strategies such as preregistration, large samples, replications, or a ban on testing null hypotheses. To avoid biased conclusions, we briefly discuss tools such as model diagnostics, statistical methods that compensate for selectivity, and semi- or non-parametric estimation. At a more fundamental level, however, a twofold strategy seems indispensable: (1) iterative, cumulative theory development based on statistical methods with theoretically justified assumptions, and (2) empirical research on variables that affect (self-)selection into the observed part of the sample, and the use of this information to compensate for selectivity.

4.
Biometrics ; 79(2): 1239-1253, 2023 06.
Article in English | MEDLINE | ID: mdl-35583919

ABSTRACT

Functional principal component analysis (FPCA) has been widely used to capture major modes of variation and to reduce dimensions in functional data analysis. However, standard FPCA based on the sample covariance estimator does not work well if the data exhibit heavy-tailedness or outliers. To address this challenge, a new robust FPCA approach based on a functional pairwise spatial sign (PASS) operator, termed PASS FPCA, is introduced. We propose robust estimation procedures for eigenfunctions and eigenvalues. Theoretical properties of the PASS operator are established, showing that it adopts the same eigenfunctions as the standard covariance operator and also allows recovery of the ratios between eigenvalues. We also extend the proposed procedure to handle functional data measured with noise. Compared to existing robust FPCA approaches, the proposed PASS FPCA requires weaker distributional assumptions to conserve the eigenspace of the covariance function. Specifically, existing work is often built upon a class of functional elliptical distributions, which inherently requires symmetry. In contrast, we introduce a class of distributions called the weakly functional coordinate symmetric (weakly FCS) distributions, which allows for severe asymmetry and is much more flexible than the functional elliptical distribution family. The robustness of PASS FPCA is demonstrated via extensive simulation studies, especially its advantages in scenarios with nonelliptical distributions. The proposed method was motivated by and applied to an analysis of accelerometry data from the Objective Physical Activity and Cardiovascular Health Study, a large-scale epidemiological study investigating the relationship between objectively measured physical activity and cardiovascular health among older women.
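
A finite-dimensional toy version of a pairwise spatial sign operator may help convey the idea; the paper's operator acts on functions, and the names and data below are illustrative only. Each pairwise difference is normalized to unit length before its outer product is averaged, so a gross outlier contributes only a direction, never a magnitude.

```python
import math
import random

def pass_operator(data):
    """Sample pairwise spatial sign (PASS) operator in R^d: the average
    of s s^T over all pairs, with s = (x_i - x_j) / ||x_i - x_j||.
    Every pair contributes a unit direction, so extreme magnitudes
    cannot dominate the operator."""
    d = len(data[0])
    k = [[0.0] * d for _ in range(d)]
    npairs = 0
    for i in range(len(data)):
        for j in range(i + 1, len(data)):
            diff = [a - b for a, b in zip(data[i], data[j])]
            norm = math.sqrt(sum(v * v for v in diff))
            if norm == 0.0:
                continue
            s = [v / norm for v in diff]
            for p in range(d):
                for q in range(d):
                    k[p][q] += s[p] * s[q]
            npairs += 1
    return [[v / npairs for v in row] for row in k]

random.seed(2)
# Dominant variation along coordinate 1, plus one gross outlier in
# coordinate 2: PASS still ranks coordinate 1 first.
data = [(random.gauss(0, 3), random.gauss(0, 1)) for _ in range(200)]
data.append((0.0, 1e6))
K = pass_operator(data)
```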


Subject(s)
Principal Component Analysis, Aged, Female, Humans, Accelerometry, Exercise, Cardiovascular System
5.
Eval Health Prof ; 45(4): 362-376, 2022 12.
Article in English | MEDLINE | ID: mdl-35994023

ABSTRACT

Time-series intervention designs that include two or more phases have been widely discussed in the healthcare literature for many years. A convenient model for the analysis of these designs has a linear model part (to measure changes in level and trend) plus a second part that measures the random error structure; the error structure is assumed to follow an autoregressive time-series process. Traditional generalized linear model approaches widely used to estimate this model are less than satisfactory because they tend to provide substantially biased intervention tests and confidence intervals. We describe an updated version of the original double bootstrap approach that was developed by McKnight et al. (2000) to correct for this problem. This updated analysis and a new robust version were recently implemented in an R package (McKean & Zhang, 2018). The robust method is insensitive to outliers and problems associated with common departures from normality in the error distribution. Monte Carlo studies as well as published data are used to demonstrate the properties of both versions. The R code required to perform the analyses is provided and illustrated.
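
A single-level sketch of the bootstrap bias-correction idea for an AR(1) error process is shown below. The double bootstrap nests a second resampling level inside each replicate to calibrate tests and intervals; this simplified toy is not the McKnight et al. procedure, and the series is synthetic.

```python
import random

def simulate_ar1(phi, n, rng):
    """Generate an AR(1) series y_t = phi * y_{t-1} + e_t, e_t ~ N(0, 1)."""
    y, out = 0.0, []
    for _ in range(n):
        y = phi * y + rng.gauss(0, 1)
        out.append(y)
    return out

def ar1_ols(series):
    """OLS estimate of phi (no intercept); biased toward zero in short series."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
    return num / den

def bootstrap_bias_corrected(series, n_boot=200, rng=None):
    """One level of bootstrap bias correction: simulate from the fitted
    model, measure the average estimation bias, and subtract it. The
    double bootstrap repeats this inside every replicate to calibrate
    the resulting intervention tests and confidence intervals."""
    rng = rng or random.Random(0)
    phi_hat = ar1_ols(series)
    boots = [ar1_ols(simulate_ar1(phi_hat, len(series), rng))
             for _ in range(n_boot)]
    bias = sum(boots) / n_boot - phi_hat
    return phi_hat - bias

rng = random.Random(3)
series = simulate_ar1(0.6, 60, rng)   # true phi = 0.6, short series
phi_corrected = bootstrap_bias_corrected(series, rng=rng)
```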


Subject(s)
Research Design, Humans, Least-Squares Analysis, Monte Carlo Method, Time Factors
6.
Financ Innov ; 8(1): 47, 2022.
Article in English | MEDLINE | ID: mdl-35535250

ABSTRACT

Most financial signals show time dependency that, combined with noisy and extreme events, poses serious problems for the parameter estimation of statistical models. Moreover, when addressing asset pricing, portfolio selection, and investment strategies, accurate estimates of the relationships among assets are as necessary as they are delicate to obtain in a time-dependent context. In this regard, fundamental tools that increasingly attract research interest are the precision matrix and graphical models, which provide insights into the joint evolution of financial quantities. In this paper, we present a robust divergence estimator for a time-varying precision matrix that can manage both the extreme events and the time dependency that affect financial time series. Furthermore, we provide an algorithm for parameter estimation that uses the "maximization-minimization" approach. We apply the methodology to synthetic data to test its performance. We then consider the cryptocurrency market as a real-data application, given its remarkable suitability for the proposed method because of its volatile and unregulated nature.

7.
Transfusion ; 62(6): 1261-1268, 2022 06.
Article in English | MEDLINE | ID: mdl-35383944

ABSTRACT

BACKGROUND: Blood supply chain management requires estimates about the demand of blood products. The more accurate these estimates are, the less wastage and fewer shortages occur. While the current literature demonstrates tangible benefits from statistical forecasting approaches, it highlights issues that discourage their use in blood supply chain optimization: there is no single approach that works everywhere, and there are no guarantees that any favorable method performance continues into the future. STUDY DESIGN AND METHODS: We design a novel autonomous forecasting system to solve the aforementioned issues. We show how possible changes in blood demand could affect prediction performance using partly synthetic demand data. We use these data then to investigate the performances of different method selection heuristics. Finally, the performances of the heuristics and single method approaches were compared using historical demand data from Finland and the Netherlands. The development code is publicly accessible. RESULTS: We find that a shift in the demand signal behavior from stochastic to seasonal would affect the relative performances of the methods. Our autonomous system outperforms all examined individual methods when forecasting the synthetic demand series, exhibiting meaningful robustness. When forecasting with real data, the most accurate methods in Finland and in the Netherlands are the autonomous system and the method average, respectively. DISCUSSION: Optimal use of method selection heuristics, as with our autonomous system, may overcome the need to constantly supervise forecasts in anticipation of changes in demand while being sufficiently accurate in the absence of such changes.
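
A minimal sketch of a method-selection heuristic of the kind such a system automates: score each candidate forecaster by its recent one-step-ahead accuracy and pick the current best. The candidate methods and demand series here are toys, not the system's actual method pool.

```python
def select_method(series, methods, window=6):
    """Method-selection heuristic: score every candidate forecaster by
    its mean absolute error on one-step-ahead forecasts over the last
    `window` observations, and return the name of the best scorer."""
    scores = {}
    for name, forecast in methods.items():
        errs = [abs(forecast(series[:t]) - series[t])
                for t in range(len(series) - window, len(series))]
        scores[name] = sum(errs) / len(errs)
    return min(scores, key=scores.get), scores

# Two toy candidates: repeat the last observed demand, or use the
# overall historical mean.
methods = {
    "naive": lambda hist: hist[-1],
    "mean": lambda hist: sum(hist) / len(hist),
}

# A steadily trending demand series favors the naive method; a flat,
# purely noisy series would favor the mean instead.
trend = [10 + 2 * t for t in range(24)]
best, scores = select_method(trend, methods)
```

Re-running the selection as new demand arrives is what lets the forecasts track a shift from stochastic to seasonal behavior without manual supervision.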


Subject(s)
Forecasting, Finland, Humans, Netherlands
8.
Behav Res Methods ; 54(3): 1291-1305, 2022 06.
Article in English | MEDLINE | ID: mdl-34590287

ABSTRACT

Growth mixture modeling is a common tool for longitudinal data analysis. One of the key assumptions of traditional growth mixture modeling is that repeated measures within each class are normally distributed. When this normality assumption is violated, traditional growth mixture modeling may provide misleading model estimation results and suffer from nonconvergence. In this article, we propose a robust approach to growth mixture modeling based on conditional medians and use Bayesian methods for model estimation and inferences. A simulation study is conducted to evaluate the performance of this approach. It is found that the new approach has a higher convergence rate and less biased parameter estimation than the traditional growth mixture modeling approach when data are skewed or have outliers. An empirical data analysis is also provided to illustrate how the proposed method can be applied in practice.


Subject(s)
Statistical Models, Research Design, Bayes Theorem, Computer Simulation, Humans
9.
Stat Med ; 41(2): 407-432, 2022 01 30.
Article in English | MEDLINE | ID: mdl-34713468

ABSTRACT

The main purpose of many medical studies is to estimate the effects of a treatment or exposure on an outcome. However, it is not always possible to randomize the study participants to a particular treatment, therefore observational study designs may be used. There are major challenges with observational studies; one of which is confounding. Controlling for confounding is commonly performed by direct adjustment of measured confounders; although, sometimes this approach is suboptimal due to modeling assumptions and misspecification. Recent advances in the field of causal inference have dealt with confounding by building on classical standardization methods. However, these recent advances have progressed quickly with a relative paucity of computational-oriented applied tutorials contributing to some confusion in the use of these methods among applied researchers. In this tutorial, we show the computational implementation of different causal inference estimators from a historical perspective where new estimators were developed to overcome the limitations of the previous estimators (ie, nonparametric and parametric g-formula, inverse probability weighting, double-robust, and data-adaptive estimators). We illustrate the implementation of different methods using an empirical example from the Connors study based on intensive care medicine, and most importantly, we provide reproducible and commented code in Stata, R, and Python for researchers to adapt in their own observational study. The code can be accessed at https://github.com/migariane/Tutorial_Computational_Causal_Inference_Estimators.
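
As a rough illustration of the first estimator in that historical sequence, the nonparametric g-formula with one binary confounder amounts to standardizing stratum-specific outcome means to the confounder distribution. The data below are a toy; the authors' repository contains their actual Stata, R, and Python code.

```python
import random
from collections import defaultdict

def g_formula(rows):
    """Nonparametric g-formula (standardization) for a binary treatment
    a and a single binary confounder l: average the (a, l)-stratum
    outcome means over the marginal distribution of l."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    n_l = defaultdict(int)
    for a, l, y in rows:
        sums[(a, l)] += y
        counts[(a, l)] += 1
        n_l[l] += 1
    n = len(rows)
    def std_mean(a):
        return sum(sums[(a, l)] / counts[(a, l)] * (n_l[l] / n) for l in n_l)
    return std_mean(1) - std_mean(0)

# Confounded toy data: l raises both treatment probability and outcome,
# while the true causal effect is exactly 1.0 in every stratum.
random.seed(4)
rows = []
for _ in range(20000):
    l = 1 if random.random() < 0.5 else 0
    a = 1 if random.random() < (0.8 if l else 0.2) else 0
    y = 2.0 * l + 1.0 * a + random.gauss(0, 1)
    rows.append((a, l, y))

ate = g_formula(rows)
treated = [y for a, _, y in rows if a == 1]
control = [y for a, _, y in rows if a == 0]
naive = sum(treated) / len(treated) - sum(control) / len(control)
```

The naive treated-versus-control contrast is badly confounded here, while standardization recovers the stratum-constant effect.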


Subject(s)
Statistical Models, Research Design, Causality, Computer Simulation, Humans, Probability, Propensity Score
10.
Br J Math Stat Psychol ; 75(1): 46-58, 2022 02.
Article in English | MEDLINE | ID: mdl-33950536

ABSTRACT

Consider a two-way ANOVA design. Generally, interactions are characterized by the difference between two measures of effect size. Typically the measure of effect size is based on the difference between measures of location, with the difference between means being the most common choice. This paper deals with extending extant results to two robust, heteroscedastic measures of effect size. The first is a robust, heteroscedastic analogue of Cohen's d. The second characterizes effect size in terms of the quantiles of the null distribution. Simulation results indicate that a percentile bootstrap method yields reasonably accurate confidence intervals. Data from an actual study are used to illustrate how these measures of effect size can add perspective when comparing groups.
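
The percentile bootstrap step can be sketched generically for the difference between two 20% trimmed means, a common robust measure of location. This is not the paper's specific effect-size measure, and the two groups below are simulated with unequal variances to mimic the heteroscedastic setting.

```python
import random
import statistics

def trimmed_mean(xs, prop=0.2):
    """20% trimmed mean: drop the lowest and highest 20% of values."""
    xs = sorted(xs)
    g = int(prop * len(xs))
    return statistics.fmean(xs[g:len(xs) - g])

def percentile_boot_ci(x, y, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the difference in trimmed means:
    resample each group with replacement, recompute the difference,
    and read off empirical quantiles of the bootstrap distribution."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        xb = [rng.choice(x) for _ in x]
        yb = [rng.choice(y) for _ in y]
        diffs.append(trimmed_mean(xb) - trimmed_mean(yb))
    diffs.sort()
    lo = diffs[int(n_boot * alpha / 2)]
    hi = diffs[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

random.seed(5)
x = [random.gauss(1.0, 1.0) for _ in range(60)]   # group 1: shifted
y = [random.gauss(0.0, 2.0) for _ in range(60)]   # group 2: larger spread
lo, hi = percentile_boot_ci(x, y)
```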


Subject(s)
Research Design, Analysis of Variance, Computer Simulation
11.
Front Psychiatry ; 12: 622562, 2021.
Article in English | MEDLINE | ID: mdl-33897488

ABSTRACT

In the face of the COVID-19 pandemic, the swift response of mental health research funders and institutions, service providers, and academics enabled progress toward understanding the mental health consequences. Nevertheless, there remains an urgent need to understand the true extent of the short- and long-term effects of the COVID-19 pandemic on mental health, necessitating ongoing research. Although the speed with which mental health researchers have mobilized to respond to the pandemic so far is to be commended, there are valid concerns as to whether speed may have compromised the quality of our work. As the pandemic continues to evolve, we must take time to reflect on our initial research response and collectively consider how we can use this to strengthen ensuing COVID-19 mental health research and our response to future crises. Here, we offer our reflections as members of the UK mental health research community to discuss the continuing progress and persisting challenges of our COVID-19 response, which we hope can encourage reflection and discussion among the wider research community. We conclude that (1) Fragmentation in our infrastructure has challenged the efficient, effective and equitable deployment of resources, (2) In responding quickly, we may have overlooked the role of experts by experience, (3) Robust and open methods may have been compromised by speedy responses, and (4) This pandemic may exacerbate existing issues of inequality in our workforce.

12.
J Multivar Anal ; 183, 2021 May.
Article in English | MEDLINE | ID: mdl-33518826

ABSTRACT

Canonical correlation analysis (CCA) is a common method used to estimate the associations between two different sets of variables by maximizing the Pearson correlation between linear combinations of the two sets. We propose a version of CCA for transelliptical distributions with an elliptical copula, using pairwise Kendall's tau to estimate a latent scatter matrix. Because Kendall's tau relies only on the ranks of the data, this method does not make any assumptions about the marginal distributions of the variables and is valid even when moments do not exist. We establish consistency and asymptotic normality for canonical directions and correlations estimated using Kendall's tau. Simulations indicate that this estimator outperforms standard CCA for data generated from heavy-tailed elliptical distributions. Our method also identifies more meaningful relationships when the marginal distributions are skewed. We further propose a method for testing for non-zero canonical correlations using the bootstrap. This testing procedure does not require any assumptions on the joint distribution of the variables and works for all elliptical copulas, in contrast to permutation tests, which are only valid when the data are generated from a distribution with a Gaussian copula. The method's practical utility is shown in an analysis of the association between radial diffusivity in white matter tracts and cognitive test scores for six-year-old children from the Early Brain Development Study at UNC-Chapel Hill. An R package implementing this method is available at github.com/blangworthy/transCCA.
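
The rank-based ingredient can be sketched as follows: Kendall's tau is estimated from pair orderings and mapped to a latent correlation via sin(pi * tau / 2), which holds for elliptical copulas. The sketch computes one entry of the latent scatter matrix, not the full CCA, and the data are synthetic; note the estimate is unchanged by a skewing monotone transform of a margin.

```python
import math
import random

def kendall_tau(x, y):
    """Kendall's tau by pair counting (O(n^2); fine for a sketch)."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

def latent_correlation(x, y):
    """One entry of the latent scatter matrix for an elliptical copula:
    rho = sin(pi * tau / 2). Rank-based, so it needs no moment
    assumptions and is invariant to monotone marginal transforms."""
    return math.sin(math.pi * kendall_tau(x, y) / 2)

random.seed(6)
z = [random.gauss(0, 1) for _ in range(300)]
x = [zi + 0.5 * random.gauss(0, 1) for zi in z]
y_raw = [zi + 0.5 * random.gauss(0, 1) for zi in z]
y = [math.exp(v) for v in y_raw]   # heavily skewed monotone transform
rho = latent_correlation(x, y)
```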

13.
Biom J ; 63(4): 859-874, 2021 04.
Article in English | MEDLINE | ID: mdl-33555041

ABSTRACT

In this paper, we extend the linear M-quantile random intercept model (MQRE) to discrete data and use the proposed model to evaluate the effect of selected covariates on two count responses: the number of generic medical examinations and the number of specialised examinations for health districts in three regions of central Italy. The new approach represents an outlier-robust alternative to the generalised linear mixed model with Gaussian random effects and it allows estimating the effect of the covariates at various quantiles of the conditional distribution of the target variable. Results from a simulation experiment, as well as from real data, confirm that the method proposed here presents good robustness properties and can be in certain cases more efficient than other approaches.


Subject(s)
Statistical Models, Physicians, Humans, Linear Models, Normal Distribution, Regression Analysis
14.
Int J Epidemiol ; 50(4): 1335-1349, 2021 08 30.
Article in English | MEDLINE | ID: mdl-33393617

ABSTRACT

BACKGROUND: Previous studies have often evaluated methods for Mendelian randomization (MR) analysis based on simulations that do not adequately reflect the data-generating mechanisms in genome-wide association studies (GWAS), and there are often discrepancies in the performance of MR methods between simulations and real data sets. METHODS: We use a simulation framework that generates data on full GWAS for two traits under a realistic model for effect-size distribution, coherent with the heritability, co-heritability and polygenicity typically observed for complex traits. We further use recent data generated from GWAS of 38 biomarkers in the UK Biobank and perform down-sampling to investigate trends in estimates of the causal effects of these biomarkers on the risk of type 2 diabetes (T2D). RESULTS: Simulation studies show that weighted mode and MRMix are the only two methods that maintain the correct type I error rate in a diverse set of scenarios. Between the two, MRMix tends to be more powerful for larger GWAS, whereas the opposite is true for smaller sample sizes. Among the other methods, random-effect IVW (inverse-variance weighted method), MR-Robust and MR-RAPS (robust adjusted profile score) tend to perform best in maintaining a low mean-squared error when the InSIDE assumption is satisfied, but can produce large bias when InSIDE is violated. In real-data analysis, some biomarkers showed major heterogeneity in the estimates of their causal effects on the risk of T2D across the different methods, and estimates from many methods trended in one direction with increasing sample size, with patterns similar to those observed in the simulation studies. CONCLUSION: The relative performance of different MR methods depends heavily on the sample sizes of the underlying GWAS, the proportion of valid instruments and the validity of the InSIDE assumption. Down-sampling analysis can be used in large GWAS for the possible detection of bias in the MR methods.
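
For reference, the fixed-effect IVW estimator named above reduces to a weighted zero-intercept regression of variant-outcome on variant-exposure associations. The summary statistics below are toys constructed so the estimator recovers the effect exactly; real analyses would also compute standard errors and typically use the random-effect variant.

```python
def ivw_estimate(bx, by, se_by):
    """Fixed-effect inverse-variance weighted (IVW) estimate: the slope
    of a weighted zero-intercept regression of variant-outcome
    associations (by) on variant-exposure associations (bx), with
    weights 1 / se_by**2."""
    w = [1.0 / s ** 2 for s in se_by]
    num = sum(wi * x * y for wi, x, y in zip(w, bx, by))
    den = sum(wi * x * x for wi, x in zip(w, bx))
    return num / den

# Toy summary statistics for four valid instruments with a causal
# effect of 0.5 (by = 0.5 * bx exactly, so IVW recovers it).
bx = [0.10, 0.20, 0.15, 0.30]
by = [0.5 * x for x in bx]
se_by = [0.01, 0.02, 0.015, 0.03]
theta = ivw_estimate(bx, by, se_by)
```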


Subject(s)
Type 2 Diabetes Mellitus, Mendelian Randomization Analysis, Biomarkers, Causality, Type 2 Diabetes Mellitus/epidemiology, Type 2 Diabetes Mellitus/genetics, Genome-Wide Association Study, Humans, Single Nucleotide Polymorphism
15.
Br J Math Stat Psychol ; 74(1): 90-98, 2021 02.
Article in English | MEDLINE | ID: mdl-32369607

ABSTRACT

Recently, a multiple comparisons procedure was derived with the goal of determining whether it is reasonable to make a decision about which of J independent groups has the largest robust measure of location. This was done by testing hypotheses aimed at comparing the group with the largest estimate to the remaining J - 1 groups. It was demonstrated that for the goal of controlling the familywise error rate, meaning the probability of one or more Type I errors, well-known improvements on the Bonferroni method can perform poorly. A technique for dealing with this issue was suggested and found to perform well in simulations. However, when dealing with dependent groups, the method is unsatisfactory. This note suggests an alternative method that is designed for dependent groups.


Subject(s)
Research Design, Probability
16.
Br J Math Stat Psychol ; 74(2): 286-312, 2021 05.
Article in English | MEDLINE | ID: mdl-32926414

ABSTRACT

Growth curve models have been widely used to analyse longitudinal data in social and behavioural sciences. Although growth curve models with normality assumptions are relatively easy to estimate, practical data are rarely normal. Failing to account for non-normal data may lead to unreliable model estimation and misleading statistical inference. In this work, we propose a robust approach for growth curve modelling using conditional medians that are less sensitive to outlying observations. Bayesian methods are applied for model estimation and inference. Based on the existing work on Bayesian quantile regression using asymmetric Laplace distributions, we use asymmetric Laplace distributions to convert the problem of estimating a median growth curve model into a problem of obtaining the maximum likelihood estimator for a transformed model. Monte Carlo simulation studies have been conducted to evaluate the numerical performance of the proposed approach with data containing outliers or leverage observations. The results show that the proposed approach yields more accurate and efficient parameter estimates than traditional growth curve modelling. We illustrate the application of our robust approach using conditional medians based on a real data set from the Virginia Cognitive Aging Project.
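
The link the authors exploit can be sketched without any Bayesian machinery: maximizing an asymmetric Laplace likelihood is equivalent to minimizing the quantile check loss, whose tau = 0.5 case is absolute loss with the conditional median as minimizer. The brute-force toy below is not the proposed growth curve model; it only shows why the median target resists an outlier that drags the mean.

```python
import statistics

def check_loss(r, tau=0.5):
    """Quantile (check) loss: tau * r for r >= 0, (tau - 1) * r otherwise.
    Minimizing its sum is equivalent to maximizing an asymmetric
    Laplace likelihood; tau = 0.5 targets the conditional median."""
    return r * (tau - (1 if r < 0 else 0))

def quantile_fit(xs, tau=0.5):
    """Brute-force M-estimate over the observed points: the minimizer
    of total check loss is an empirical tau-quantile."""
    return min(xs, key=lambda c: sum(check_loss(x - c, tau) for x in xs))

data = [1.0, 2.0, 2.5, 3.0, 100.0]   # one extreme outlier
median_fit = quantile_fit(data, tau=0.5)
mean_fit = statistics.fmean(data)
```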


Subject(s)
Bayes Theorem, Computer Simulation, Monte Carlo Method
17.
J Appl Stat ; 47(7): 1144-1167, 2020.
Article in English | MEDLINE | ID: mdl-35707025

ABSTRACT

Outlier detection can be seen as a pre-processing step for locating data points in a sample that do not conform to the majority of observations. Various techniques and methods for outlier detection can be found in the literature, dealing with different types of data. However, many data sets are inflated by true zeros and, in addition, some components/variables might be of compositional nature. Important examples of such data sets are the Structural Earnings Survey, the Structural Business Statistics, the European Statistics on Income and Living Conditions, tax data or, as in this contribution, household expenditure data, which are used, for example, to estimate the Purchasing Power Parity of a country. In this work, robust univariate and multivariate outlier detection methods are compared in a complex simulation study that considers various challenges present in data sets, namely structural (true) zeros, missing values, and compositional variables. These circumstances make it difficult or impossible to flag true outliers and influential observations with well-known outlier detection methods. Our aim is to assess the performance of outlier detection methods in terms of their effectiveness in identifying outliers when applied to challenging data sets such as the household expenditure data surveyed all over the world. Moreover, the different methods are evaluated through a close-to-reality simulation study. Differences in performance between univariate and multivariate robust techniques for outlier detection, and their shortcomings, are reported. We found that robust multivariate methods outperform robust univariate methods. The best-performing methods for finding the outliers while maintaining a low false discovery rate were the generalized S estimators (GSE), the BACON-EEM algorithm and a compositional method (CoDa-Cov). These methods also performed best when the outliers are imputed based on the corresponding outlier detection method and indicators are estimated from the data sets.
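
As a baseline for the univariate methods compared, a standard robust outlier flag replaces mean/SD with median/MAD so that the outliers cannot mask themselves. This is illustrative only; the study's best-performing methods (e.g. GSE, BACON-EEM, CoDa-Cov) are multivariate.

```python
import statistics

def mad_outliers(xs, cutoff=3.5):
    """Robust univariate outlier flags: scale |x - median| by the
    normal-consistent median absolute deviation (MAD * 1.4826) and
    flag robust z-scores above `cutoff`. Median and MAD resist the
    masking that outliers cause for mean/SD-based rules."""
    med = statistics.median(xs)
    mad = statistics.median([abs(x - med) for x in xs])
    scale = 1.4826 * mad
    return [abs(x - med) / scale > cutoff for x in xs]

data = [10, 11, 9, 10, 12, 11, 10, 9, 10, 200]   # one gross outlier
flags = mad_outliers(data)
```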

18.
J Appl Biomech ; 34(4): 258-261, 2018 Aug 01.
Article in English | MEDLINE | ID: mdl-30045651

ABSTRACT

The paper reviews advances and insights relevant to comparing groups when the sample sizes are small. There are conditions under which conventional, routinely used techniques are satisfactory. But major insights regarding outliers, skewed distributions, and unequal variances (heteroscedasticity) make it clear that under general conditions they provide poor control over the type I error probability and can have relatively poor power. In practical terms, important differences among groups can be missed and poorly characterized. Many new and improved methods have been derived that are aimed at dealing with the shortcomings of classic methods. To provide a conceptual basis for understanding the practical importance of modern methods, the paper reviews some modern insights related to why methods based on means can perform poorly. Then some strategies for dealing with nonnormal distributions and unequal variances are described. For brevity, the focus is on comparing 2 independent groups or 2 dependent groups based on the usual difference scores. The paper concludes with comments on issues to consider when choosing from among the methods reviewed in the paper.


Subject(s)
Data Analysis, Sample Size, Analysis of Variance, Statistical Models
19.
Stat Sin ; 28(4): 2389-2407, 2018 Oct.
Article in English | MEDLINE | ID: mdl-31263346

ABSTRACT

This paper develops a hybrid likelihood (HL) method based on a compromise between parametric and nonparametric likelihoods. Consider the setting of a parametric model for the distribution of an observation Y with parameter θ. Suppose there is also an estimating function m(·, µ) identifying another parameter µ via E[m(Y, µ)] = 0, at the outset defined independently of the parametric model. To borrow strength from the parametric model while obtaining a degree of robustness from the empirical likelihood method, we formulate inference about θ in terms of the hybrid likelihood function H_n(θ) = L_n(θ)^(1-a) · R_n(µ(θ))^a. Here a ∈ [0, 1) represents the extent of the compromise, L_n is the ordinary parametric likelihood for θ, R_n is the empirical likelihood function, and µ is considered through the lens of the parametric model. We establish asymptotic normality of the corresponding HL estimator and a version of the Wilks theorem. We also examine extensions of these results under misspecification of the parametric model, and propose methods for selecting the balance parameter a.

20.
Eur J Epidemiol ; 32(5): 377-389, 2017 05.
Article in English | MEDLINE | ID: mdl-28527048

ABSTRACT

Mendelian randomization-Egger (MR-Egger) is an analysis method for Mendelian randomization using summarized genetic data. MR-Egger consists of three parts: (1) a test for directional pleiotropy, (2) a test for a causal effect, and (3) an estimate of the causal effect. While conventional analysis methods for Mendelian randomization assume that all genetic variants satisfy the instrumental variable assumptions, the MR-Egger method is able to assess whether genetic variants have pleiotropic effects on the outcome that differ on average from zero (directional pleiotropy), as well as to provide a consistent estimate of the causal effect, under a weaker assumption-the InSIDE (INstrument Strength Independent of Direct Effect) assumption. In this paper, we provide a critical assessment of the MR-Egger method with regard to its implementation and interpretation. While the MR-Egger method is a worthwhile sensitivity analysis for detecting violations of the instrumental variable assumptions, there are several reasons why causal estimates from the MR-Egger method may be biased and have inflated Type 1 error rates in practice, including violations of the InSIDE assumption and the influence of outlying variants. The issues raised in this paper have potentially serious consequences for causal inferences from the MR-Egger approach. We give examples of scenarios in which the estimates from conventional Mendelian randomization methods and MR-Egger differ, and discuss how to interpret findings in such cases.
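
The regression at the heart of MR-Egger can be sketched directly: a weighted least-squares fit of variant-outcome on variant-exposure associations with an unconstrained intercept, which absorbs average directional pleiotropy under InSIDE. The toy summary statistics are constructed so the decomposition is exact.

```python
def mr_egger(bx, by, se_by):
    """Weighted least squares of by on bx *with* an intercept
    (weights 1 / se_by**2). The intercept estimates average directional
    pleiotropy; the slope is the causal estimate under InSIDE."""
    w = [1.0 / s ** 2 for s in se_by]
    sw = sum(w)
    mx = sum(wi * x for wi, x in zip(w, bx)) / sw
    my = sum(wi * y for wi, y in zip(w, by)) / sw
    sxy = sum(wi * (x - mx) * (y - my) for wi, x, y in zip(w, bx, by))
    sxx = sum(wi * (x - mx) ** 2 for wi, x in zip(w, bx))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope

# Toy data: every variant carries the same pleiotropic shift of 0.02 on
# top of a causal effect of 0.5; Egger separates the two, while IVW
# (which forces a zero intercept) would be biased.
bx = [0.10, 0.20, 0.15, 0.30]
by = [0.5 * x + 0.02 for x in bx]
se_by = [0.01, 0.01, 0.01, 0.01]
intercept, slope = mr_egger(bx, by, se_by)
```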


Subject(s)
Statistical Data Interpretation, Genetic Pleiotropy, Genetic Variation, Mendelian Randomization Analysis/methods, Biological Models, Humans, Random Allocation, Risk Factors