Results 1 - 20 of 5,590
3.
Biometrics ; 80(3)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39253987

ABSTRACT

Meta-analysis is a powerful tool for synthesizing findings from multiple studies. The normal-normal random-effects model is widely used to account for between-study heterogeneity. However, meta-analyses of sparse data, which may arise when the event rate is low for binary or count outcomes, challenge the accuracy and stability of inference under the normal-normal random-effects model, because the normal approximation in the within-study model may be poor. To reduce the bias arising from data sparsity, the generalized linear mixed model can be used instead, replacing the approximate normal within-study model with an exact model. Publication bias is one of the most serious threats to meta-analysis. Several quantitative sensitivity analysis methods for evaluating the potential impact of selective publication are available for the normal-normal random-effects model. We propose a sensitivity analysis method that extends the likelihood-based sensitivity analysis with the t-statistic selection function of Copas to several generalized linear mixed-effects models. In applications to several real-world meta-analyses and in simulation studies, the proposed method outperformed the likelihood-based sensitivity analysis based on the normal-normal model. The proposed method provides useful guidance for addressing publication bias in the meta-analysis of sparse data.


Subject(s)
Computer Simulation , Meta-Analysis as Topic , Publication Bias , Publication Bias/statistics & numerical data , Humans , Likelihood Functions , Linear Models , Data Interpretation, Statistical , Models, Statistical , Sensitivity and Specificity , Biometry/methods
4.
J Refract Surg ; 40(9): e635-e644, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39254245

ABSTRACT

PURPOSE: To investigate the impact of the back-to-front corneal radius ratio (B/F ratio) and posterior keratometry (PK) on the accuracy of intraocular lens power calculation formulas in eyes after myopic laser in situ keratomileusis (LASIK) or photorefractive keratectomy (PRK). METHODS: This retrospective, consecutive case series included 101 patients (132 eyes) with cataract after myopic LASIK/PRK. Mean prediction error (PE), mean absolute error (MAE), median absolute error (MedAE), and the percentage of eyes within ±0.25, ±0.50, and ±1.00 diopters (D) of PE were determined. RESULTS: Overall, the Barrett True-K TK formula exhibited the lowest MAE (0.59 D) and MedAE (0.48 D) and the highest percentage of eyes within ±0.50 D of PE (54.55%). In eyes with a B/F ratio of 0.70 or less and PK of -5.70 D or greater, the Potvin-Hill formula displayed the lowest MAE (0.46 to 0.67 D). CONCLUSIONS: The Barrett True-K TK formula exhibited the highest prediction accuracy in eyes after myopic LASIK/PRK overall; however, for eyes with a low B/F ratio and flat PK, the Potvin-Hill formula performed best. [J Refract Surg. 2024;40(9):e635-e644.].
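The accuracy metrics reported above (mean PE, MAE, MedAE, and the percentage of eyes within ±0.25, ±0.50, and ±1.00 D) are straightforward to compute from per-eye prediction errors. A minimal sketch, assuming prediction error is defined as actual minus formula-predicted postoperative refraction; the input numbers are illustrative, not data from the study:

```python
import numpy as np

def formula_accuracy(predicted_refraction, actual_refraction):
    """Summarize IOL-formula accuracy from per-eye prediction errors (in diopters).

    PE = actual postoperative refraction - formula-predicted refraction (assumed sign convention).
    Returns mean PE, mean absolute error (MAE), median absolute error (MedAE),
    and the percentage of eyes within +/-0.25, 0.50, and 1.00 D.
    """
    pe = np.asarray(actual_refraction) - np.asarray(predicted_refraction)
    abs_pe = np.abs(pe)
    pct_within = {d: 100.0 * np.mean(abs_pe <= d) for d in (0.25, 0.50, 1.00)}
    return {"mean_PE": pe.mean(), "MAE": abs_pe.mean(),
            "MedAE": np.median(abs_pe), "pct_within_D": pct_within}

# Illustrative values only (not data from the study):
predicted = np.array([-0.50, -0.25, 0.00, -1.00, -0.75])
actual    = np.array([-0.10, -0.80, 0.40, -1.10, -0.25])
print(formula_accuracy(predicted, actual))
```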


Subject(s)
Biometry , Cornea , Keratomileusis, Laser In Situ , Lasers, Excimer , Lens Implantation, Intraocular , Lenses, Intraocular , Myopia , Photorefractive Keratectomy , Refraction, Ocular , Visual Acuity , Humans , Myopia/surgery , Myopia/physiopathology , Keratomileusis, Laser In Situ/methods , Retrospective Studies , Photorefractive Keratectomy/methods , Female , Male , Cornea/pathology , Cornea/surgery , Refraction, Ocular/physiology , Adult , Middle Aged , Lasers, Excimer/therapeutic use , Visual Acuity/physiology , Biometry/methods , Optics and Photonics , Corneal Topography , Reproducibility of Results , Young Adult , Phacoemulsification
5.
Biometrics ; 80(3)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39248120

ABSTRACT

Prior distributions, which represent one's belief in the distributions of unknown parameters before observing the data, impact Bayesian inference in a critical and fundamental way. With the ability to incorporate external information from expert opinions or historical datasets, the priors, if specified appropriately, can improve the statistical efficiency of Bayesian inference. In survival analysis, based on the concept of unit information (UI) under parametric models, we propose the unit information Dirichlet process (UIDP) as a new class of nonparametric priors for the underlying distribution of time-to-event data. By deriving the Fisher information in terms of the differential of the cumulative hazard function, the UIDP prior is formulated to match its prior UI with the weighted average of UI in historical datasets and thus can utilize both parametric and nonparametric information provided by historical datasets. With a Markov chain Monte Carlo algorithm, simulations and real data analysis demonstrate that the UIDP prior can adaptively borrow historical information and improve statistical efficiency in survival analysis.


Subject(s)
Bayes Theorem , Computer Simulation , Markov Chains , Models, Statistical , Monte Carlo Method , Survival Analysis , Humans , Algorithms , Biometry/methods , Data Interpretation, Statistical
6.
Biometrics ; 80(3)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39248123

ABSTRACT

We present a new method for constructing valid covariance functions of Gaussian processes for spatial analysis in irregular, non-convex domains such as bodies of water. Standard covariance functions based on geodesic distances are not guaranteed to be positive definite on such domains, while existing non-Euclidean approaches fail to respect the partially Euclidean nature of these domains where the geodesic distance agrees with the Euclidean distances for some pairs of points. Using a visibility graph on the domain, we propose a class of covariance functions that preserve Euclidean-based covariances between points that are connected in the domain while incorporating the non-convex geometry of the domain via conditional independence relationships. We show that the proposed method preserves the partially Euclidean nature of the intrinsic geometry on the domain while maintaining validity (positive definiteness) and marginal stationarity of the covariance function over the entire parameter space, properties which are not always fulfilled by existing approaches to construct covariance functions on non-convex domains. We provide useful approximations to improve computational efficiency, resulting in a scalable algorithm. We compare the performance of our method with those of competing state-of-the-art methods using simulation studies on synthetic non-convex domains. The method is applied to data regarding acidity levels in the Chesapeake Bay, showing its potential for ecological monitoring in real-world spatial applications on irregular domains.


Subject(s)
Algorithms , Computer Simulation , Spatial Analysis , Models, Statistical , Normal Distribution , Biometry/methods
7.
Biometrics ; 80(3)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39222026

ABSTRACT

Testing multiple hypotheses of conditional independence with provable error rate control is a fundamental problem with various applications. To infer conditional independence with family-wise error rate (FWER) control when only summary statistics of marginal dependence are accessible, we adopt GhostKnockoff to directly generate knockoff copies of summary statistics and propose a new filter to select features conditionally dependent on the response. In addition, we develop a computationally efficient algorithm that greatly reduces the cost of generating knockoff copies without sacrificing power or FWER control. Experiments on simulated data and a real dataset of Alzheimer's disease genetics demonstrate the advantage of the proposed method over existing alternatives in both statistical power and computational efficiency.


Subject(s)
Algorithms , Alzheimer Disease , Computer Simulation , Humans , Alzheimer Disease/genetics , Models, Statistical , Data Interpretation, Statistical , Biometry/methods
8.
Biom J ; 66(6): e202300387, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39223907

ABSTRACT

Meta-analyses are commonly performed based on random-effects models, while in certain cases one might also argue in favor of a common-effect model. One such case may be given by the example of two "study twins" that are performed according to a common (or at least very similar) protocol. Here we investigate the particular case of meta-analysis of a pair of studies, for example, summarizing the results of two confirmatory clinical trials in phase III of a clinical development program. We focus on the question of the extent to which homogeneity or heterogeneity may be discernible, and include an empirical investigation of published ("twin") pairs of studies. A pair of estimates from two studies provides only very little evidence of homogeneity or heterogeneity of effects, and ad hoc decision criteria may often be misleading.
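The limited information in a pair of estimates can be illustrated with the standard inverse-variance machinery. The sketch below computes a common-effect pooled estimate, Cochran's Q, and I² for two studies; it is a generic illustration with made-up numbers, not the empirical analysis described in the abstract:

```python
import numpy as np
from scipy import stats

def pair_heterogeneity(estimates, variances):
    """Common-effect pooling plus Cochran's Q and I^2 for k studies (here k = 2)."""
    y = np.asarray(estimates, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                               # inverse-variance weights
    pooled = np.sum(w * y) / np.sum(w)        # common-effect estimate
    q = np.sum(w * (y - pooled) ** 2)         # Cochran's Q
    df = len(y) - 1
    p_het = stats.chi2.sf(q, df)              # homogeneity test (low power with k = 2)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return {"pooled": pooled, "Q": q, "p_heterogeneity": p_het, "I2_percent": i2}

# Two hypothetical "study twins" reporting log hazard ratios with standard errors:
print(pair_heterogeneity(estimates=[-0.22, -0.35], variances=[0.08**2, 0.09**2]))
```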


Subject(s)
Biometry , Biometry/methods , Humans , Meta-Analysis as Topic , Twin Studies as Topic , Models, Statistical
9.
Biometrics ; 80(3)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39282732

ABSTRACT

We develop a methodology for valid inference after variable selection in logistic regression when the responses are partially observed, that is, when one observes a set of error-prone testing outcomes instead of the true values of the responses. Aiming at selecting important covariates while accounting for missing information in the response data, we apply the expectation-maximization algorithm to compute maximum likelihood estimators subject to LASSO penalization. Subsequent to variable selection, we make inferences on the selected covariate effects by extending post-selection inference methodology based on the polyhedral lemma. Empirical evidence from our extensive simulation study suggests that our post-selection inference results are more reliable than those from naive inference methods that use the same data to perform variable selection and inference without adjusting for variable selection.


Subject(s)
Algorithms , Computer Simulation , Likelihood Functions , Humans , Logistic Models , Data Interpretation, Statistical , Biometry/methods , Models, Statistical
10.
Invest Ophthalmol Vis Sci ; 65(11): 2, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39226049

ABSTRACT

Purpose: We aimed to examine the normative profile of crystalline lens power (LP) and its associations with ocular biometric parameters including age, axial length (AL), spherical equivalent refraction (SE), corneal radius (CR), lens thickness, anterior chamber depth, and AL/CR ratio in a cynomolgus monkey colony. Methods: This population-based, cross-sectional Non-human Primate Eye Study recruited middle-aged subjects in South China. All included macaques underwent a detailed ophthalmic examination. LP was calculated using the modified Bennett's formula, with biometry data from an autorefractometer and A-scan. SPSS version 25.0 was used for statistical analysis. Results: A total of 301 macaques with an average age of 18.75 ± 2.95 years were included in this study. The mean LP was 25.40 ± 2.96 D. Greater LP was independently associated with younger age, longer AL, and lower SE (P = 0.028, P = 0.025, and P = 0.034, respectively). LP showed a positive correlation with age, SE, CR, AL, lens thickness, and anterior chamber depth, whereas no correlation was observed between LP and the AL/CR ratio. Conclusions: Our results describe the LP distribution in a nonhuman primate colony and indicate that AL and SE strongly influence LP. This study therefore contributes to a deeper understanding of the relative significance of LP in the study of the optics of the crystalline lens.


Subject(s)
Axial Length, Eye , Biometry , Lens, Crystalline , Macaca fascicularis , Refraction, Ocular , Animals , Lens, Crystalline/anatomy & histology , Cross-Sectional Studies , Refraction, Ocular/physiology , Male , Female , Biometry/methods , Anterior Chamber/anatomy & histology , Cornea/anatomy & histology
11.
Biometrics ; 80(3)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39271117

ABSTRACT

In randomized controlled trials, adjusting for baseline covariates is commonly used to improve the precision of treatment effect estimation. However, covariates often have missing values. Recently, Zhao and Ding studied two simple strategies, the single imputation method and missingness-indicator method (MIM), to handle missing covariates and showed that both methods can provide an efficiency gain compared to not adjusting for covariates. To better understand and compare these two strategies, we propose and investigate a novel theoretical imputation framework termed cross-world imputation (CWI). This framework includes both single imputation and MIM as special cases, facilitating the comparison of their efficiency. Through the lens of CWI, we show that MIM implicitly searches for the optimal CWI values and thus achieves optimal efficiency. We also derive conditions under which the single imputation method, by searching for the optimal single imputation values, can achieve the same efficiency as the MIM. We illustrate our findings through simulation studies and a real data analysis based on the Childhood Adenotonsillectomy Trial. We conclude by discussing the practical implications of our findings.
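For intuition, the missingness-indicator method (MIM) discussed above can be sketched as follows: impute a fixed value for the missing baseline covariate, add an indicator of missingness, and estimate the treatment effect by covariate-adjusted least squares. This is a generic illustration of MIM with simulated data, not the cross-world imputation framework itself:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
z = rng.integers(0, 2, n)                          # randomized treatment assignment
x = rng.normal(size=n)                             # baseline covariate
y = 1.0 + 0.5 * z + 0.8 * x + rng.normal(size=n)   # outcome
x_obs = np.where(rng.random(n) < 0.3, np.nan, x)   # ~30% of the covariate missing

# Missingness-indicator method: impute a fixed value and add an indicator of missingness.
miss = np.isnan(x_obs).astype(float)
x_imp = np.where(np.isnan(x_obs), 0.0, x_obs)      # single imputed value (0 here)

design = sm.add_constant(np.column_stack([z, x_imp, miss]))
fit = sm.OLS(y, design).fit()
print(fit.params[1], fit.bse[1])                   # adjusted treatment-effect estimate and SE
```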


Subject(s)
Computer Simulation , Models, Statistical , Randomized Controlled Trials as Topic , Randomized Controlled Trials as Topic/statistics & numerical data , Randomized Controlled Trials as Topic/methods , Humans , Data Interpretation, Statistical , Child , Biometry/methods , Adenoidectomy/statistics & numerical data , Tonsillectomy/statistics & numerical data
12.
BMC Res Notes ; 17(1): 263, 2024 Sep 13.
Article in English | MEDLINE | ID: mdl-39272141

ABSTRACT

A biometric system is essential for improving security and authentication processes across a variety of fields. Because multiple criteria and alternatives must be weighed, selecting the most suitable biometric system is a complex decision. In this study, we employ a hybrid approach that combines the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) with the Analytic Hierarchy Process (AHP). Biometric technologies are ranked using the TOPSIS method according to the relative criterion weights determined by AHP. By applying neutrosophic set theory, this approach effectively handles the ambiguity and vagueness inherent in decision-making. Seven biometric technologies are incorporated in the framework: fingerprint, face, iris, voice, hand veins, hand geometry, and signature. These technologies are evaluated against seven essential characteristics: accuracy, security, acceptability, speed and efficiency, ease of collection, universality, and distinctiveness. The model seeks to determine which biometric technology is best suited for a particular application or situation by taking these factors into account. The technique may be applied in other domains in the future.
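A minimal sketch of the crisp TOPSIS step is given below, with fixed criterion weights standing in for AHP-derived weights; the neutrosophic extension described in the abstract is not reproduced, and all scores, weights, and modality names are illustrative assumptions:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with classical TOPSIS.

    matrix:  (n_alternatives, n_criteria) scores
    weights: criterion weights (e.g., from AHP), summing to 1
    benefit: True for criteria to maximize, False for criteria to minimize
    """
    m = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    norm = m / np.linalg.norm(m, axis=0)          # vector normalization per criterion
    v = norm * w                                  # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                # closeness to the ideal solution

# Illustrative 1-9 scores for three of the seven modalities on the seven criteria
# (accuracy, security, acceptability, speed/efficiency, ease of collection,
#  universality, distinctiveness); all treated as benefit criteria.
alts = ["fingerprint", "iris", "voice"]
scores = [[8, 7, 8, 8, 8, 7, 8],
          [9, 9, 6, 7, 6, 8, 9],
          [6, 5, 8, 7, 9, 9, 5]]
weights = [0.25, 0.20, 0.10, 0.10, 0.10, 0.10, 0.15]   # stand-in for AHP weights
closeness = topsis(scores, weights, benefit=np.array([True] * 7))
for name, c in sorted(zip(alts, closeness), key=lambda t: -t[1]):
    print(f"{name}: {c:.3f}")
```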


Subject(s)
Biometry , Humans , Biometry/methods , Biometric Identification/methods , Algorithms , Fuzzy Logic
13.
Nat Commun ; 15(1): 8003, 2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39266523

ABSTRACT

Decoupling dynamic touch signals in optical tactile sensors is highly desirable for behavioral tactile applications yet challenging, because typical optical sensors mostly measure only static normal force and use imprecise multi-image averaging for dynamic force sensing. Here, we report a highly sensitive upconversion-nanocrystal-based behavioral biometric optical tactile sensor that instantaneously and quantitatively decomposes dynamic touch signals into individual components of vertical normal and lateral shear force from a single image in real time. By mimicking the sensory architecture of human skin, the unique luminescence signal obtained is axisymmetric for static normal forces and non-axisymmetric for dynamic shear forces. Our sensor enables spatiotemporal screening of small objects and recognizes fingerprints for authentication with high spatiotemporal resolution. Using a dynamic force discrimination machine learning framework, we realized a Braille-to-speech translation system and a next-generation dynamic biometric recognition system for handwriting.


Subject(s)
Touch , Humans , Touch/physiology , Dermatoglyphics , Biometry/methods , Biometry/instrumentation , Machine Learning , Nanoparticles/chemistry , Biometric Identification/methods , Biometric Identification/instrumentation
14.
Biometrics ; 80(3)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39253988

ABSTRACT

The US Food and Drug Administration launched Project Optimus to reform the dose optimization and dose selection paradigm in oncology drug development, calling for a shift from finding the maximum tolerated dose to identifying the optimal biological dose (OBD). Motivated by a real-world drug development program, we propose a master-protocol-based platform trial design to simultaneously identify OBDs of a new drug, combined with standards of care or other novel agents, in multiple indications. We propose a Bayesian latent subgroup model to accommodate treatment heterogeneity across indications and employ Bayesian hierarchical models to borrow information within subgroups. At each interim analysis, we update the subgroup membership, the dose-toxicity and dose-efficacy estimates, and the utility estimate for the risk-benefit tradeoff, based on the observed data across treatment arms, to inform arm-specific dose escalation and de-escalation decisions and to identify the OBD for each arm, defined by a combination partner and an indication. A simulation study shows that the proposed design has desirable operating characteristics, providing a highly flexible and efficient way to optimize dose. The design has great potential to shorten the drug development timeline, save costs by reducing overlapping infrastructure, and speed up regulatory approval.


Subject(s)
Antineoplastic Agents , Bayes Theorem , Computer Simulation , Dose-Response Relationship, Drug , Maximum Tolerated Dose , Humans , Antineoplastic Agents/administration & dosage , Drug Development/methods , Drug Development/statistics & numerical data , Models, Statistical , United States , United States Food and Drug Administration , Neoplasms/drug therapy , Research Design , Biometry/methods
15.
Biometrics ; 80(3)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39248122

ABSTRACT

The geometric median, which is applicable to high-dimensional data, can be viewed as a generalization of the univariate median used in 1-dimensional data. It can be used as a robust estimator for identifying the location of multi-dimensional data and has a wide range of applications in real-world scenarios. This paper explores the problem of high-dimensional multivariate analysis of variance (MANOVA) using the geometric median. A maximum-type statistic that relies on the differences between the geometric medians among various groups is introduced. The distribution of the new test statistic is derived under the null hypothesis using Gaussian approximations, and its consistency under the alternative hypothesis is established. To approximate the distribution of the new statistic in high dimensions, a wild bootstrap algorithm is proposed and theoretically justified. Through simulation studies conducted across a variety of dimensions, sample sizes, and data-generating models, we demonstrate the finite-sample performance of our geometric median-based MANOVA method. Additionally, we implement the proposed approach to analyze a breast cancer gene expression dataset.
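The building block of the test statistic is the geometric median itself, which can be computed with Weiszfeld's algorithm. The sketch below is a generic implementation of that step only, not the authors' MANOVA statistic or wild bootstrap:

```python
import numpy as np

def geometric_median(x, tol=1e-8, max_iter=500):
    """Weiszfeld's algorithm for the geometric median of the rows of x (n x p)."""
    x = np.asarray(x, dtype=float)
    m = x.mean(axis=0)                      # start at the coordinate-wise mean
    for _ in range(max_iter):
        d = np.linalg.norm(x - m, axis=1)
        if np.any(d < tol):                 # iterate coincides with a data point; stop
            break
        w = 1.0 / d
        m_new = (w[:, None] * x).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) < tol:
            return m_new
        m = m_new
    return m

rng = np.random.default_rng(1)
group = rng.standard_t(df=3, size=(50, 10))   # heavy-tailed 10-dimensional sample
print(geometric_median(group))                # robust multivariate location estimate
```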


Subject(s)
Algorithms , Breast Neoplasms , Computer Simulation , Humans , Multivariate Analysis , Breast Neoplasms/genetics , Models, Statistical , Female , Data Interpretation, Statistical , Gene Expression Profiling/statistics & numerical data , Sample Size , Biometry/methods
16.
Biometrics ; 80(3)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39248121

ABSTRACT

Recent years have witnessed a rise in the popularity of information integration without sharing of raw data. By leveraging and incorporating summary information from external sources, internal studies can achieve enhanced estimation efficiency and prediction accuracy. However, a noteworthy challenge in utilizing summary-level information is accommodating the inherent heterogeneity across diverse data sources. In this study, we delve into the issue of prior probability shift between two cohorts, wherein the difference between the two data distributions depends on the outcome. We introduce a novel semi-parametric constrained optimization-based approach to integrate information within this framework, which has not been extensively explored in the existing literature. Our proposed method tackles the prior probability shift by introducing an outcome-dependent selection function and effectively addresses the estimation uncertainty associated with summary information from the external source. Our approach facilitates valid inference even in the absence of a known variance-covariance estimate from the external source. Through extensive simulation studies, we observe the superiority of our method over existing ones, showcasing minimal estimation bias and reduced variance for both binary and continuous outcomes. We further demonstrate the utility of our method by applying it to investigate risk factors for essential hypertension, where reduced estimation variability is observed after integrating summary information from an external data source.


Subject(s)
Computer Simulation , Essential Hypertension , Probability , Humans , Models, Statistical , Risk Factors , Hypertension , Data Interpretation, Statistical , Biometry/methods
17.
JAMA ; 332(8): 649-657, 2024 08 27.
Article in English | MEDLINE | ID: mdl-39088200

ABSTRACT

Importance: Accurate assessment of gestational age (GA) is essential to good pregnancy care but often requires ultrasonography, which may not be available in low-resource settings. This study developed a deep learning artificial intelligence (AI) model to estimate GA from blind ultrasonography sweeps and incorporated it into the software of a low-cost, battery-powered device. Objective: To evaluate GA estimation accuracy of an AI-enabled ultrasonography tool when used by novice users with no prior training in sonography. Design, Setting, and Participants: This prospective diagnostic accuracy study enrolled 400 individuals with viable, single, nonanomalous, first-trimester pregnancies in Lusaka, Zambia, and Chapel Hill, North Carolina. Credentialed sonographers established the "ground truth" GA via transvaginal crown-rump length measurement. At random follow-up visits throughout gestation, including a primary evaluation window from 14 0/7 weeks' to 27 6/7 weeks' gestation, novice users obtained blind sweeps of the maternal abdomen using the AI-enabled device (index test) and credentialed sonographers performed fetal biometry with a high-specification machine (study standard). Main Outcomes and Measures: The primary outcome was the mean absolute error (MAE) of the index test and study standard, which was calculated by comparing each method's estimate to the previously established GA and considered equivalent if the difference fell within a prespecified margin of ±2 days. Results: In the primary evaluation window, the AI-enabled device met criteria for equivalence to the study standard, with an MAE (SE) of 3.2 (0.1) days vs 3.0 (0.1) days (difference, 0.2 days [95% CI, -0.1 to 0.5]). Additionally, the percentage of assessments within 7 days of the ground truth GA was comparable (90.7% for the index test vs 92.5% for the study standard). Performance was consistent in prespecified subgroups, including the Zambia and North Carolina cohorts and those with high body mass index. Conclusions and Relevance: Between 14 and 27 weeks' gestation, novice users with no prior training in ultrasonography estimated GA as accurately with the low-cost, point-of-care AI tool as credentialed sonographers performing standard biometry on high-specification machines. These findings have immediate implications for obstetrical care in low-resource settings, advancing the World Health Organization goal of ultrasonography estimation of GA for all pregnant people. Trial Registration: ClinicalTrials.gov Identifier: NCT05433519.


Subject(s)
Artificial Intelligence , Gestational Age , Ultrasonography, Prenatal , Adult , Female , Humans , Pregnancy , Biometry/methods , Crown-Rump Length , Point-of-Care Systems/economics , Pregnancy Trimester, First , Prospective Studies , Software , Ultrasonography, Prenatal/economics , Ultrasonography, Prenatal/instrumentation , Ultrasonography, Prenatal/methods , Zambia
18.
Biometrics ; 80(3)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39101548

ABSTRACT

We consider the setting where (1) an internal study builds a linear regression model for prediction based on individual-level data, (2) some external studies have fitted similar linear regression models that use only subsets of the covariates and provide coefficient estimates for the reduced models without individual-level data, and (3) there is heterogeneity across these study populations. The goal is to integrate the external model summary information into fitting the internal model to improve prediction accuracy. We adapt the James-Stein shrinkage method to propose estimators that are no worse and are oftentimes better in the prediction mean squared error after information integration, regardless of the degree of study population heterogeneity. We conduct comprehensive simulation studies to investigate the numerical performance of the proposed estimators. We also apply the method to enhance a prediction model for patella bone lead level in terms of blood lead level and other covariates by integrating summary information from published literature.
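One textbook form of the shrinkage idea is the positive-part James-Stein estimator that pulls the internal coefficient estimate toward a target built from external summary information. The sketch below uses a scalar noise-level approximation and illustrative numbers; the estimators proposed in the paper may differ in detail:

```python
import numpy as np

def js_shrink(beta_internal, cov_internal, beta_external):
    """Positive-part James-Stein shrinkage of an internal coefficient estimate
    toward a target derived from external summary information.

    A textbook form with a scalar noise-level approximation; the estimator
    derived in the paper may differ.
    """
    diff = beta_internal - beta_external
    p = diff.size
    sigma2 = np.trace(cov_internal) / p          # average variance of the internal estimate
    shrink = max(0.0, 1.0 - (p - 2) * sigma2 / float(diff @ diff)) if p > 2 else 1.0
    return beta_external + shrink * diff

# Illustrative numbers only: a 5-coefficient internal fit and an external target.
beta_int = np.array([0.9, -0.4, 0.2, 0.05, 0.3])
cov_int = 0.04 * np.eye(5)
beta_ext = np.array([0.8, -0.5, 0.25, 0.00, 0.35])
print(js_shrink(beta_int, cov_int, beta_ext))
```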


Subject(s)
Computer Simulation , Humans , Linear Models , Biometry/methods , Lead/blood , Patella , Models, Statistical , Data Interpretation, Statistical
19.
J Refract Surg ; 40(8): e562-e568, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39120018

ABSTRACT

PURPOSE: To evaluate the impact of an anterior chamber phakic intraocular lens (pIOL) on swept-source optical coherence tomography (SS-OCT) biometric measurements and IOL power calculation. METHODS: This retrospective study of 67 eyes of 49 patients with previous anterior chamber pIOL implantation assessed the accuracy of automatic segmentation of the anterior surface of the crystalline lens and its impact on anterior chamber depth (ACD, measured from epithelium to lens), lens thickness measurements, and IOL power calculation. The sample was divided into two groups: correct detection of the anterior surface of the crystalline lens and inaccurate detection. Segmentation of eyes from the inaccurate detection group was manually corrected, and ACD and lens thickness were recalculated using ImageJ software. IOL power was calculated using 7 formulas for both measurements. RESULTS: The anterior surface of the crystalline lens was misidentified in 13 (19.4%) eyes. ACD was underestimated (Δ -0.85 ± 0.33 mm, P < .001) and lens thickness was overestimated (Δ +0.81 ± 0.25 mm, P < .001). Manual correction changed the target spherical equivalent only in the Haigis formula (P = .009). After correction for segmentation bias, the Pearl DGS, Cooke K6, and EVO 2.0 formulas showed the lowest prediction error, with the Pearl DGS showing the greatest accuracy within the ±1.00 diopter prediction error range (81.0%). CONCLUSIONS: SS-OCT biometry misidentifies the anterior surface of the crystalline lens in a significant proportion of eyes, resulting in significant IOL power calculation error with the Haigis formula. Manual proofing of segmentation is mandatory in every patient with anterior chamber pIOL implantation. After correct segmentation, the Pearl DGS, Cooke K6, and EVO appear to be the best formulas. [J Refract Surg. 2024;40(8):e562-e568.].


Subject(s)
Anterior Chamber , Biometry , Lens Implantation, Intraocular , Phakic Intraocular Lenses , Tomography, Optical Coherence , Humans , Tomography, Optical Coherence/methods , Biometry/methods , Retrospective Studies , Male , Adult , Female , Anterior Chamber/diagnostic imaging , Refraction, Ocular/physiology , Middle Aged , Optics and Photonics , Lens, Crystalline/diagnostic imaging , Myopia/surgery , Myopia/physiopathology , Visual Acuity/physiology , Young Adult
20.
Biom J ; 66(6): e202300271, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39132909

ABSTRACT

Many clinical trials assess time-to-event endpoints. To describe the difference between groups in terms of time to event, we often employ hazard ratios. However, the hazard ratio is only informative in the case of proportional hazards (PHs) over time. There exist many other effect measures that do not require PHs. One of them is the average hazard ratio (AHR). Its core idea is to utilize a time-dependent weighting function that accounts for time variation. Though propagated in methodological research papers, the AHR is rarely used in practice. To facilitate its application, we unfold approaches for sample size calculation of an AHR test. We assess the reliability of the sample size calculation by extensive simulation studies covering various survival and censoring distributions with proportional as well as nonproportional hazards (N-PHs). The findings suggest that a simulation-based sample size calculation approach can be useful for designing clinical trials with N-PHs. Using the AHR can result in increased statistical power to detect differences between groups with more efficient sample sizes.
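A simulation-based sample size calculation of the kind described can be organized as a power loop over simulated trials. The sketch below uses exponential survival with administrative censoring and, as a placeholder, the ordinary log-rank test from the lifelines package rather than the AHR test; all rates and time units are illustrative assumptions:

```python
import numpy as np
from lifelines.statistics import logrank_test   # log-rank used as a placeholder test

def empirical_power(n_per_arm, hr=0.7, base_rate=0.10, admin_censor=24.0,
                    alpha=0.05, n_sim=1000, seed=0):
    """Estimate power by simulating exponential survival in two arms with
    administrative censoring at `admin_censor` months. Swap in the AHR test
    to mirror the approach described in the abstract."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        t0 = rng.exponential(1.0 / base_rate, n_per_arm)          # control arm
        t1 = rng.exponential(1.0 / (base_rate * hr), n_per_arm)   # treatment arm
        obs0, obs1 = np.minimum(t0, admin_censor), np.minimum(t1, admin_censor)
        e0, e1 = (t0 <= admin_censor), (t1 <= admin_censor)
        res = logrank_test(obs0, obs1, event_observed_A=e0, event_observed_B=e1)
        rejections += (res.p_value < alpha)
    return rejections / n_sim

# Increase n_per_arm until the simulated power reaches the target (e.g., 80%).
print(empirical_power(n_per_arm=150))
```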


Subject(s)
Proportional Hazards Models , Sample Size , Humans , Clinical Trials as Topic , Biometry/methods