Results 1 - 20 of 164
1.
Biometrics ; 80(3)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39177025

ABSTRACT

Interval-censored failure time data frequently arise in scientific studies where each subject undergoes periodic examinations for the occurrence of the failure event of interest, so that the failure time is only known to lie in a specific time interval. In addition, the collected data may include multiple observed variables with a certain degree of correlation, leading to severe multicollinearity issues. This work proposes a factor-augmented transformation model that analyzes interval-censored failure time data while reducing model dimensionality and avoiding the multicollinearity induced by multiple correlated covariates. We provide a joint modeling framework comprising a factor analysis model, which groups multiple observed variables into a few latent factors, and a class of semiparametric transformation models with the augmented factors, which examines their effects and those of other covariates on the failure event. Furthermore, we propose a nonparametric maximum likelihood estimation approach and develop a computationally stable and reliable expectation-maximization algorithm for its implementation. We establish the asymptotic properties of the proposed estimators and conduct simulation studies to assess the empirical performance of the proposed method. An application to the Alzheimer's Disease Neuroimaging Initiative (ADNI) study is provided. An R package, ICTransCFA, is also available for practitioners. Data used in preparation of this article were obtained from the ADNI database.


Subject(s)
Alzheimer Disease , Computer Simulation , Statistical Models , Humans , Likelihood Functions , Algorithms , Neuroimaging , Factor Analysis , Statistical Data Interpretation , Time Factors
2.
Stat Methods Med Res ; : 9622802241262523, 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39053572

ABSTRACT

An important task in health research is to characterize time-to-event outcomes such as disease onset or mortality in terms of a potentially high-dimensional set of risk factors. For example, prospective cohort studies of Alzheimer's disease (AD) typically enroll older adults for observation over several decades to assess the long-term impact of genetic and other factors on cognitive decline and mortality. The accelerated failure time model is particularly well-suited to such studies, structuring covariate effects as "horizontal" changes to the survival quantiles that conceptually reflect shifts in the outcome distribution due to lifelong exposures. However, this modeling task is complicated by the enrollment of adults at differing ages, and intermittent follow-up visits leading to interval-censored outcome information. Moreover, genetic and clinical risk factors are not only high-dimensional, but characterized by underlying grouping structures, such as by function or gene location. Such grouped high-dimensional covariates require shrinkage methods that directly acknowledge this structure to facilitate variable selection and estimation. In this paper, we address these considerations directly by proposing a Bayesian accelerated failure time model with a group-structured lasso penalty, designed for left-truncated and interval-censored time-to-event data. We develop an R package with a Markov chain Monte Carlo sampler for estimation. We present a simulation study examining the performance of this method relative to an ordinary lasso penalty and apply the proposed method to identify groups of predictive genetic and clinical risk factors for AD in the Religious Orders Study and Memory and Aging Project prospective cohort studies of AD and dementia.
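The "horizontal" quantile interpretation of the accelerated failure time model can be seen in a tiny simulation (ours, not the paper's): in a log-linear AFT model, log T = βx + ε, a unit increase in x multiplies every survival quantile by exp(β). The value of β and the error distribution below are arbitrary illustrations.

```python
import math
import random

random.seed(1)
beta = 0.5  # hypothetical acceleration parameter

def sample_times(x, n=200_000):
    # T = exp(beta * x + eps) with standard normal errors (log-normal times)
    return sorted(math.exp(beta * x + random.gauss(0.0, 1.0)) for _ in range(n))

t0 = sample_times(x=0)
t1 = sample_times(x=1)
for q in (0.25, 0.50, 0.75):
    i = int(q * len(t0))
    # each quantile ratio is close to exp(beta) ~ 1.65: a horizontal shift
    print(q, round(t1[i] / t0[i], 2))
```

Every quantile moves by the same multiplicative factor, which is the "shift in the outcome distribution" interpretation the abstract describes.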

3.
BMC Infect Dis ; 24(1): 555, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38831419

ABSTRACT

BACKGROUND: Estimation of the SARS-CoV-2 incubation time distribution is hampered by incomplete data about infection. We discuss two biases that may result from incorrect handling of such data. Notified cases may recall recent exposures more precisely (differential recall). This creates bias if the analysis is restricted to observations with well-defined exposures, as longer incubation times are more likely to be excluded. Another bias occurred in the initial estimates based on data concerning travellers from Wuhan. Only individuals who developed symptoms after their departure were included, leading to under-representation of cases with shorter incubation times (left truncation). This issue was not addressed in the analyses performed in the literature. METHODS: We performed simulations and provide a literature review to investigate the amount of bias in estimated percentiles of the SARS-CoV-2 incubation time distribution. RESULTS: Depending on the rate of differential recall, restricting the analysis to a subset of narrow exposure windows resulted in underestimation of the median and, even more so, of the 95th percentile. Failing to account for left truncation led to an overestimation of multiple days in both the median and the 95th percentile. CONCLUSION: We examined two overlooked sources of bias concerning exposure information that researchers engaged in incubation time estimation need to be aware of.
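The left-truncation mechanism can be illustrated with a small simulation (ours, not the paper's code; all parameter values are arbitrary): travellers are infected at a uniform time during a stay and only enter the sample if symptom onset occurs after departure, so the incubation time is left-truncated at the time remaining until departure, and ignoring the truncation inflates the estimated median.

```python
import random

random.seed(2)
STAY = 10.0  # hypothetical length of stay, in days

true_times, kept = [], []
for _ in range(200_000):
    t = random.lognormvariate(1.6, 0.4)  # incubation time; median ~5 days
    true_times.append(t)
    left = random.uniform(0.0, STAY)     # days from infection to departure
    if t > left:                         # symptoms after departure: observed
        kept.append(t)

true_times.sort(); kept.sort()
truth = true_times[len(true_times) // 2]
naive = kept[len(kept) // 2]
print(f"true median {truth:.1f} d, naive median (truncation ignored) {naive:.1f} d")
```

Short incubation times are preferentially dropped, so the naive median sits well above the true one, matching the direction of bias reported in the abstract.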


Subject(s)
Bias , COVID-19 , Infectious Disease Incubation Period , SARS-CoV-2 , Humans , COVID-19/epidemiology , Computer Simulation
4.
Am J Epidemiol ; 2024 May 06.
Article in English | MEDLINE | ID: mdl-38717330

ABSTRACT

Quantitative bias analysis (QBA) permits assessment of the expected impact of various imperfections of the available data on the results and conclusions of a particular real-world study. This article extends QBA methodology to multivariable time-to-event analyses with right-censored endpoints, possibly including time-varying exposures or covariates. The proposed approach employs data-driven simulations, which preserve important features of the data at hand while offering flexibility in controlling the parameters and assumptions that may affect the results. First, the steps required to perform data-driven simulations are described, and then two examples of real-world time-to-event analyses illustrate their implementation and the insights they may offer. The first example focuses on the omission of an important time-invariant predictor of the outcome in a prognostic study of cancer mortality, and permits separating the expected impact of confounding bias from non-collapsibility. The second example assesses how imprecise timing of an interval-censored event - ascertained only at sparse times of clinic visits - affects its estimated association with a time-varying drug exposure. The simulation results also provide a basis for comparing the performance of two alternative strategies for imputing the unknown event times in this setting. The R scripts that permit the reproduction of our examples are provided.

5.
Ther Innov Regul Sci ; 58(4): 721-729, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38598082

ABSTRACT

BACKGROUND: Progression-free survival (PFS) is used to evaluate treatment effects in cancer clinical trials. Disease progression (DP) in patients is typically determined by radiological testing at several scheduled tumor-assessment time points. This produces a discrepancy between the true progression time and the observed progression time. When the observed progression time is considered as the true progression time, a positively biased PFS is obtained for some patients, and the estimated survival function derived by the Kaplan-Meier method is also biased. METHODS: While the midpoint imputation method is available and replaces interval-censored data with midpoint data, it unrealistically assumes that several DPs occur at the same time point when several DPs are observed within the same tumor-assessment interval. We enhanced the midpoint imputation method by replacing interval-censored data with equally spaced timepoint data based on the number of observed interval-censored data within the same tumor-assessment interval. RESULTS: The root mean square error of the median of the enhanced method is almost always smaller than that of the midpoint imputation regardless of the tumor-assessment frequency. The coverage probability of the enhanced method is close to the nominal confidence level of 95% in most scenarios. CONCLUSION: We believe that the enhanced method, which builds upon the midpoint imputation method, is more effective than the midpoint imputation method itself.
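The two imputation rules compared above can be sketched as follows (a minimal illustration; the function names are ours, not from the paper) for k progression events observed within the same tumor-assessment interval (left, right]:

```python
def midpoint_imputation(left, right, k):
    # the standard rule: all k events are placed at the interval midpoint,
    # which unrealistically produces tied progression times
    return [(left + right) / 2.0] * k

def equally_spaced_imputation(left, right, k):
    # the enhanced rule: events are spread over k equally spaced points
    # strictly inside the interval
    step = (right - left) / (k + 1)
    return [left + step * (i + 1) for i in range(k)]

# three progressions observed in the interval (0, 6]
print(midpoint_imputation(0.0, 6.0, 3))        # [3.0, 3.0, 3.0]
print(equally_spaced_imputation(0.0, 6.0, 3))  # [1.5, 3.0, 4.5]
```

With a single event in the interval the two rules coincide; the difference appears only when several progressions share an assessment interval, which is exactly the case the enhanced method targets.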


Subject(s)
Neoplasms , Humans , Neoplasms/mortality , Neoplasms/drug therapy , Progression-Free Survival , Disease Progression , Kaplan-Meier Estimate , Time Factors , Clinical Trials as Topic
6.
Stat Med ; 43(12): 2452-2471, 2024 May 30.
Article in English | MEDLINE | ID: mdl-38599784

ABSTRACT

Many longitudinal studies are designed to monitor participants for major events related to the progression of diseases. Data arising from such longitudinal studies are usually subject to interval censoring since the events are only known to occur between two monitoring visits. In this work, we propose a new method to handle interval-censored multistate data within a proportional hazards model framework where the hazard rate of events is modeled by a nonparametric function of time and the covariates affect the hazard rate proportionally. The main idea of this method is to simplify the likelihood functions of a discrete-time multistate model through an approximation and the application of data augmentation techniques, where the assumed presence of censored information facilitates a simpler parameterization. Then the expectation-maximization algorithm is used to estimate the parameters in the model. The performance of the proposed method is evaluated by numerical studies. Finally, the method is employed to analyze a dataset on tracking the advancement of coronary allograft vasculopathy following heart transplantation.


Subject(s)
Algorithms , Heart Transplantation , Proportional Hazards Models , Humans , Likelihood Functions , Heart Transplantation/statistics & numerical data , Longitudinal Studies , Computer Simulation , Statistical Models , Statistical Data Interpretation
7.
Front Public Health ; 12: 1203631, 2024.
Article in English | MEDLINE | ID: mdl-38450147

ABSTRACT

Introduction: To examine whether perceptions of the harmfulness and addictiveness of hookah and cigarettes impact the age of initiation of hookah and cigarettes, respectively, among US youth. Youth (12-17 years old) users and never users of hookah and cigarettes during their first wave of PATH participation were analyzed by each tobacco product (TP) independently. The effect of perceptions of (i) harmfulness and (ii) addictiveness at the first wave of PATH participation on the age of initiation of ever use of hookah was estimated using interval-censoring Cox proportional hazards models. Methods: Users and never users of hookah at their first wave of PATH participation were balanced by multiplying the sampling weight and the 100 balanced repeated replicate weights with the inverse probability weight (IPW). The IPW was based on the probability of being a user in the first wave of PATH participation. A Fay's factor of 0.3 was included for variance estimation. Crude hazard ratios (HR) and 95% confidence intervals (CIs) are reported. A similar process was repeated for cigarettes. Results: Compared to youth who perceived each TP as "a lot of harm", youth who reported perceiving "some harm" had younger ages of initiation of these tobacco products, HR: 2.53 (95% CI: 2.87, 4.34) for hookah and HR: 2.35 (95% CI: 2.10, 2.62) for cigarettes. Similarly, youth who perceived each TP as "no/little harm" had an earlier age of initiation of these TPs compared to those who perceived them as "a lot of harm", with an HR: 2.23 (95% CI: 1.82, 2.71) for hookah and an HR: 1.85 (95% CI: 1.72, 1.98) for cigarettes. Compared to youth who reported "somewhat/very likely" as their perception of addictiveness of each TP, youth who reported "neither likely nor unlikely" and "very/somewhat unlikely" as their perception of the addictiveness of hookah had an older age of initiation, with an HR: 0.75 (95% CI: 0.67, 0.83) and an HR: 0.55 (95% CI: 0.47, 0.63), respectively.
Discussion: Perceptions of the harmfulness and addictiveness of these tobacco products should be addressed in education campaigns for youth to prevent early ages of initiation of cigarettes and hookah.


Subject(s)
Addictive Behavior , Tobacco Products , Adolescent , Humans , Child , Cognition , Probability , Educational Status
8.
Stat Med ; 43(9): 1708-1725, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38382112

ABSTRACT

In studies that assess disease status periodically, time of disease onset is interval censored between visits. Participants who die between two visits may have unknown disease status after their last visit. In this work, we consider an additional scenario where diagnosis requires two consecutive positive tests, such that disease status can also be unknown at the last visit preceding death. We show that this impacts the choice of censoring time for those who die without an observed disease diagnosis. We investigate two classes of models that quantify the effect of risk factors on disease outcome: a Cox proportional hazards model with death as a competing risk and an illness death model that treats disease as a possible intermediate state. We also consider four censoring strategies: participants without observed disease are censored at death (Cox model only), the last visit, the last visit with a negative test, or the second last visit. We evaluate the performance of model and censoring strategy combinations on simulated data with a binary risk factor and illustrate with a real data application. We find that the illness death model with censoring at the second last visit shows the best performance in all simulation settings. Other combinations show bias that varies in magnitude and direction depending on the differential mortality between diseased and disease-free subjects, the gap between visits, and the choice of the censoring time.
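The four censoring strategies compared above can be sketched as a small helper (names and data layout are ours, not from the paper) for a participant who dies without an observed diagnosis, given the visit history as (time, test_positive) pairs sorted by time:

```python
def censoring_times(visits, death_time):
    """Candidate censoring times for a subject who died undiagnosed.
    visits: list of (time, test_positive) pairs, sorted by time."""
    times = [t for t, _ in visits]
    negatives = [t for t, pos in visits if not pos]
    return {
        "death": death_time,                 # Cox competing-risk model only
        "last_visit": times[-1],
        "last_negative_visit": negatives[-1],
        "second_last_visit": times[-2],
    }

# a single positive test at t=4 is not yet a diagnosis (two consecutive
# positives are required), so disease status at the last visit is unknown
print(censoring_times([(1, False), (2, False), (3, False), (4, True)], death_time=5.5))
```

In this example the last-negative-visit and second-last-visit strategies coincide; they differ once the final visits mix positive and negative tests, which is where the choice matters.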


Subject(s)
Proportional Hazards Models , Humans , Computer Simulation , Risk Factors
9.
Biometrics ; 80(1)2024 Jan 29.
Article in English | MEDLINE | ID: mdl-38364799

ABSTRACT

Multivariate panel count data arise when there are multiple types of recurrent events, and the observation for each study subject consists of the number of recurrent events of each type between two successive examinations. We formulate the effects of potentially time-dependent covariates on multiple types of recurrent events through proportional rates models, while leaving the dependence structures of the related recurrent events completely unspecified. We employ nonparametric maximum pseudo-likelihood estimation under the working assumptions that all types of events are independent and each type of event is a nonhomogeneous Poisson process, and we develop a simple and stable EM-type algorithm. We show that the resulting estimators of the regression parameters are consistent and asymptotically normal, with a covariance matrix that can be estimated consistently by a sandwich estimator. In addition, we develop a class of graphical and numerical methods for checking the adequacy of the fitted model. Finally, we evaluate the performance of the proposed methods through simulation studies and analysis of a skin cancer clinical trial.


Subject(s)
Skin Neoplasms , Humans , Computer Simulation , Statistical Models , Skin Neoplasms/epidemiology , Clinical Trials as Topic
10.
Stat Med ; 42(30): 5596-5615, 2023 12 30.
Article in English | MEDLINE | ID: mdl-37867199

ABSTRACT

Panel count data and interval-censored data are two types of incomplete data that often occur in event history studies. Almost all existing statistical methods are developed for their separate analysis. In this paper, we investigate a more general situation where a recurrent event process and an interval-censored failure event occur together. To intuitively and clearly explain the relationship between the recurrent event process and the failure event, we propose a failure time-dependent mean model through a completely unspecified link function. To overcome the challenges arising from the blending of nonparametric components and parametric regression coefficients, we develop a two-stage conditional expected likelihood-based estimation procedure. We establish the consistency, the convergence rate and the asymptotic normality of the proposed two-stage estimator. Furthermore, we construct a class of two-sample tests for comparison of mean functions from different groups. The proposed methods are evaluated by extensive simulation studies and are illustrated with the skin cancer data that motivated this study.


Subject(s)
Skin Neoplasms , Humans , Likelihood Functions , Regression Analysis , Computer Simulation , Time
11.
Stat Med ; 42(28): 5113-5134, 2023 12 10.
Article in English | MEDLINE | ID: mdl-37706586

ABSTRACT

In this article, a competing risks survival model is considered in which the initial number of risks, assumed to follow a negative binomial distribution, is subject to a destructive mechanism. Assuming that the population of interest has a cure component, that the data are interval-censored, and treating both the number of initial risks and the number of risks remaining active after destruction as missing data, we develop two distinct estimation algorithms for this model. Making use of the conditional distributions of the missing data, we develop an expectation maximization (EM) algorithm, in which the conditional expected complete log-likelihood function is decomposed into simpler functions which are then maximized independently. A variation of the EM algorithm, called the stochastic EM (SEM) algorithm, is also developed with the goal of avoiding the calculation of complicated expectations and improving performance at parameter recovery. A Monte Carlo simulation study is carried out to evaluate the performance of both estimation methods through calculated bias, root mean square error, and coverage probability of the asymptotic confidence interval. We demonstrate through simulation that the proposed SEM algorithm is the preferred estimation method, and further illustrate the advantage of the SEM algorithm, as well as the use of a destructive model, with data from a children's mortality study.


Subject(s)
Algorithms , Statistical Models , Child , Humans , Likelihood Functions , Computer Simulation , Monte Carlo Method
12.
Stat Med ; 42(26): 4886-4896, 2023 11 20.
Article in English | MEDLINE | ID: mdl-37652042

ABSTRACT

The approximate Bernstein polynomial model, a mixture of beta distributions, is applied to obtain maximum likelihood estimates of the regression coefficients, the baseline density, and the survival functions in an accelerated failure time model based on interval-censored data, including current status data. The estimators of the regression coefficients and the underlying baseline density function are shown to be consistent, with almost parametric rates of convergence under some conditions, for uncensored and/or interval-censored data. Simulations show that the proposed method outperforms its competitors. The proposed method is illustrated by fitting the accelerated failure time model to the breast cosmesis and HIV infection time data.
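The Bernstein polynomial density mentioned above is a mixture of Beta(j, m − j + 1) kernels on [0, 1]. A minimal sketch of evaluating such a density (the weights below are arbitrary illustrations, not fitted values):

```python
import math

def beta_pdf(t, a, b):
    # Beta(a, b) density via the gamma function
    coef = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return coef * t ** (a - 1) * (1.0 - t) ** (b - 1)

def bernstein_density(t, weights):
    # mixture of Beta(j, m - j + 1) kernels, j = 1..m,
    # with nonnegative weights summing to one
    m = len(weights)
    return sum(w * beta_pdf(t, j, m - j + 1)
               for j, w in enumerate(weights, start=1))

weights = [0.1, 0.2, 0.4, 0.2, 0.1]  # illustrative mixture weights
# midpoint-rule check that the mixture integrates to ~1 over [0, 1]
mass = sum(bernstein_density(i / 1000 + 0.0005, weights) for i in range(1000)) / 1000
print(round(mass, 3))
```

In the paper the weights are the quantities estimated by maximum likelihood; here they are fixed by hand only to show the structure of the model.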


Subject(s)
HIV Infections , Humans , Likelihood Functions , HIV Infections/drug therapy , Statistical Models , Computer Simulation , Time Factors
13.
BMC Pregnancy Childbirth ; 23(1): 586, 2023 Aug 15.
Article in English | MEDLINE | ID: mdl-37582776

ABSTRACT

BACKGROUND: The impact of pre-pregnancy weight and the rate of gestational weight gain (GWG) together on the risk of early GDM (< 24 weeks gestation; eGDM) has not been studied in the Indian context. We aimed to study the influence of (1) pre-pregnancy weight on the risk of eGDM diagnosed in two time intervals; and (2) additionally, the rate of GWG by 12 weeks on the risk of eGDM diagnosed at 19-24 weeks. METHOD: Our study utilized real-world clinical data on pregnant women routinely collected at an antenatal care clinic at a private tertiary hospital in Pune, India. Women registering before 12 weeks of gestation (v1), with a singleton pregnancy, and having a follow-up visit at 19-24 weeks (v2) were included (n = 600). The oral glucose tolerance test was conducted universally as per Indian guidelines (DIPSI) at v1 and v2 for diagnosing eGDM. The data on the onset time of eGDM were interval censored; hence, we modeled the risk of eGDM using binomial regression to assess the influence of pre-pregnancy weight on the risk of eGDM in the two intervals. The rate of GWG by 12 weeks was added to assess its impact on the risk of eGDM diagnosed at v2. RESULT: Overall, 89 (14.8%) women (age 32 ± 4 years) were diagnosed with eGDM by 24 weeks, of whom 59 (9.8%) were diagnosed before 12 weeks and 30 of 541 (5.5%) were diagnosed at 19-24 weeks. Two-thirds (66%) of eGDM cases were diagnosed before 12 weeks of gestation. Women's pre-pregnancy weight was positively associated with the risk of eGDM in both time intervals, though the lower confidence limit was below zero in v1. The rate of GWG by 12 weeks was not observed to be associated with the risk of eGDM diagnosed at 19-24 weeks of gestation. These associations were independent of age, height, and parity. CONCLUSION: Health workers may focus on pre-pregnancy weight, a modifiable risk factor for eGDM. A larger community-based study measuring weight and GDM status more frequently may be warranted to deepen the understanding of the role of GWG as a risk factor for GDM.


Subject(s)
Gestational Diabetes , Gestational Weight Gain , Female , Humans , Male , Pregnancy , Body Mass Index , Gestational Diabetes/diagnosis , Gestational Diabetes/epidemiology , India/epidemiology , Parity , Pregnancy Outcome , Tertiary Care Centers , Infant, Newborn , Adult
14.
Biometrika ; 110(3): 815-830, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37601305

ABSTRACT

Multivariate interval-censored data arise when there are multiple types of events or clusters of study subjects, such that the event times are potentially correlated and when each event is only known to occur over a particular time interval. We formulate the effects of potentially time-varying covariates on the multivariate event times through marginal proportional hazards models while leaving the dependence structures of the related event times unspecified. We construct the nonparametric pseudolikelihood under the working assumption that all event times are independent, and we provide a simple and stable EM-type algorithm. The resulting nonparametric maximum pseudolikelihood estimators for the regression parameters are shown to be consistent and asymptotically normal, with a limiting covariance matrix that can be consistently estimated by a sandwich estimator under arbitrary dependence structures for the related event times. We evaluate the performance of the proposed methods through extensive simulation studies and present an application to data from the Atherosclerosis Risk in Communities Study.

15.
Stat Med ; 42(21): 3877-3891, 2023 09 20.
Article in English | MEDLINE | ID: mdl-37402505

ABSTRACT

Two large-scale randomized clinical trials compared fenofibrate and placebo in diabetic patients with pre-existing retinopathy (FIELD study) or risk factors (ACCORD trial) on an intention-to-treat basis and reported a significant reduction in the progression of diabetic retinopathy in the fenofibrate arms. However, their analyses involved complications due to intercurrent events, that is, treatment switching and interval censoring. This article addresses these problems in the estimation of causal effects of long-term use of fibrates in a cohort study that followed patients with type 2 diabetes for 8 years. We propose structural nested mean models (SNMMs) of time-varying treatment effects and pseudo-observation estimators for interval-censored data. The first estimator for SNMMs uses a nonparametric maximum likelihood estimator (MLE) as a pseudo-observation, while the second estimator is based on the MLE under a parametric piecewise exponential distribution. In numerical studies with real and simulated datasets, the pseudo-observation estimators of causal effects based on the nonparametric Wellner-Zhan estimator perform well even under dependent interval censoring. Application to the diabetes study revealed that the use of fibrates in the first 4 years reduced the risk of diabetic retinopathy but did not support its efficacy beyond 4 years.


Subject(s)
Diabetes Mellitus, Type 2 , Diabetic Retinopathy , Fenofibrate , Humans , Cohort Studies , Fenofibrate/therapeutic use , Diabetic Retinopathy/drug therapy , Diabetes Mellitus, Type 2/complications , Diabetes Mellitus, Type 2/drug therapy , Causality
16.
Clin Trials ; 20(5): 507-516, 2023 10.
Article in English | MEDLINE | ID: mdl-37243355

ABSTRACT

BACKGROUND: Composite time-to-event endpoints are beneficial for assessing related outcomes jointly in clinical trials, but components of the endpoint may have different censoring mechanisms. For example, in the PRagmatic EValuation of evENTs And Benefits of Lipid-lowering in oldEr adults (PREVENTABLE) trial, the composite outcome contains one endpoint that is right censored (all-cause mortality) and two endpoints that are interval censored (dementia and persistent disability). Although Cox regression is an established method for time-to-event outcomes, it is unclear how models perform under differing component-wise censoring schemes for large clinical trial data. The goal of this article is to conduct a simulation study to investigate the performance of Cox models under different scenarios for composite endpoints with component-wise censoring. METHODS: We simulated data by varying the strength and direction of the association between treatment and outcome for the two component types, the proportion of events arising from the components of the outcome (right censored and interval censored), and the method for including the interval-censored component in the Cox model (upper value or midpoint of the interval). Under these scenarios, we compared the treatment effect estimate bias, confidence interval coverage, and power. RESULTS: Based on the simulation study, Cox models generally have adequate power to achieve statistical significance for comparing treatments for composite outcomes with component-wise censoring. In our simulation study, we did not observe substantive bias for scenarios under the null hypothesis or when the treatment has a similar relative effect on each component outcome. Performance was similar regardless of whether the upper value or the midpoint of the interval-censored part of the composite outcome was used.
CONCLUSION: Cox regression is a suitable method for analysis of clinical trial data with composite time-to-event endpoints subject to different component-wise censoring mechanisms.
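A toy construction (ours, not the PREVENTABLE analysis code) of a composite time-to-event outcome under component-wise censoring: death is observed exactly, while the interval-censored component (dementia or disability) is represented by either the interval midpoint or its upper value before taking the earliest event.

```python
def composite_time(death_time, event_interval, use_midpoint=True):
    """death_time: exact event time, or None if no death was observed.
    event_interval: (left, right) bracketing the interval-censored event,
    or None if that event was not observed."""
    candidates = []
    if death_time is not None:
        candidates.append(death_time)
    if event_interval is not None:
        left, right = event_interval
        candidates.append((left + right) / 2.0 if use_midpoint else right)
    return min(candidates) if candidates else None  # None: censored, no event

print(composite_time(death_time=None, event_interval=(2.0, 3.0)))                      # 2.5
print(composite_time(death_time=None, event_interval=(2.0, 3.0), use_midpoint=False))  # 3.0
print(composite_time(death_time=1.8, event_interval=(2.0, 3.0)))                       # 1.8
```

The simulation finding above suggests the midpoint/upper-value choice has little effect on the treatment effect estimate, but both conventions must be applied consistently across arms.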


Subject(s)
Statistical Models , Humans , Aged , Randomized Controlled Trials as Topic , Proportional Hazards Models , Computer Simulation
17.
Stat Sin ; 33(2): 685-704, 2023 Apr.
Article in English | MEDLINE | ID: mdl-37234206

ABSTRACT

In this paper, we consider a class of partially linear transformation models with interval-censored competing risks data. Under a semiparametric generalized odds rate specification for the cause-specific cumulative incidence function, we obtain optimal estimators of the large number of parametric and nonparametric model components via maximizing the likelihood function over a joint B-spline and Bernstein polynomial spanned sieve space. Our specification considers a relatively simpler finite-dimensional parameter space, approximating the infinite-dimensional parameter space as n → ∞, thereby allowing us to study the almost sure consistency, and rate of convergence for all parameters, and the asymptotic distributions and efficiency of the finite-dimensional components. We study the finite sample performance of our method through simulation studies under a variety of scenarios. Furthermore, we illustrate our methodology via application to a dataset on HIV-infected individuals from sub-Saharan Africa.

18.
Lifetime Data Anal ; 29(4): 752-768, 2023 10.
Article in English | MEDLINE | ID: mdl-37210470

ABSTRACT

The Nun study is a well-known longitudinal epidemiology study of aging and dementia that recruited elderly nuns who were not yet diagnosed with dementia (i.e., incident cohort) and who had dementia prior to entry (i.e., prevalent cohort). In such a natural history of disease study, multistate modeling of the combined data from both incident and prevalent cohorts is desirable to improve the efficiency of inference. While important, the multistate modeling approaches for the combined data have been scarcely used in practice because prevalent samples do not provide the exact date of disease onset and do not represent the target population due to left-truncation. In this paper, we demonstrate how to adequately combine both incident and prevalent cohorts to examine risk factors for every possible transition in studying the natural history of dementia. We adapt a four-state nonhomogeneous Markov model to characterize all transitions between different clinical stages, including plausible reversible transitions. The estimating procedure using the combined data leads to efficiency gains for every transition compared to those from the incident cohort data only.


Subject(s)
Dementia , Nuns , Humans , Aged , Longitudinal Studies , Risk Factors , Dementia/epidemiology
19.
BMC Med Res Methodol ; 23(1): 82, 2023 04 04.
Article in English | MEDLINE | ID: mdl-37016341

ABSTRACT

BACKGROUND: Failure time data frequently occur in many medical studies and are often accompanied by various types of censoring. In some applications, left truncation may occur and can induce biased sampling, which complicates practical data analysis. Existing analysis methods for left-truncated data have some limitations, in that they either focus only on a special type of censored data or fail to flexibly utilize the distribution information of the truncation times for inference. Therefore, it is essential to develop a reliable and efficient method for the analysis of left-truncated failure time data with various types of censoring. METHOD: This paper concerns regression analysis of left-truncated failure time data with the proportional hazards model under various censoring mechanisms, including right censoring, interval censoring, and a mixture of them. The proposed pairwise pseudo-likelihood estimation method is essentially built on a combination of the conditional likelihood and the pairwise likelihood, which eliminates the nuisance truncation distribution function or avoids its estimation. To implement the presented method, a flexible EM algorithm is developed by utilizing the idea of a self-consistent estimating equation. A main feature of the algorithm is that it involves closed-form estimators of the large-dimensional nuisance parameters and is thus computationally stable and reliable. In addition, an R package, LTsurv, is developed. RESULTS: The numerical results obtained from extensive simulation studies suggest that the proposed pairwise pseudo-likelihood method performs reasonably well in practical situations and is clearly more efficient than the conditional likelihood approach, as expected. The analysis of the MHCPS data with the proposed pairwise pseudo-likelihood method indicates that males have a significantly higher risk of losing active life than females. In contrast, the conditional likelihood method finds this effect non-significant, because it loses estimation efficiency compared with the proposed method. CONCLUSIONS: The proposed method provides a general and helpful tool for Cox regression analysis of left-truncated failure time data under various types of censoring.


Subject(s)
Likelihood Functions , Humans , Statistical Data Interpretation , Proportional Hazards Models , Regression Analysis , Computer Simulation
20.
Stat Methods Med Res ; 32(6): 1100-1123, 2023 06.
Article in English | MEDLINE | ID: mdl-37039362

ABSTRACT

There are few computational and methodological tools available for the analysis of general multi-state models with interval censoring. Here, we propose a general framework for parametric inference with interval censored multi-state data. Our framework can accommodate any parametric model for the transition times, and covariates may be included in various ways. We present a general method for constructing the likelihood, which we have implemented in a ready-to-use R package, smms, available on GitHub. The R package also computes the required high-dimensional integrals in an efficient manner. Further, we explore connections between our modelling framework and existing approaches: our models fall under the class of semi-Markovian multi-state models, but with a different, and sparser parameterisation than what is often seen. We illustrate our framework through a dataset monitoring heart transplant patients. Finally, we investigate the effect of some forms of misspecification of the model assumptions through simulations.


Subject(s)
Statistical Models , Humans , Probability , Statistical Data Interpretation , Proportional Hazards Models , Survival Analysis