Results 1 - 20 of 18,411
3.
Biometrics ; 80(3)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39253987

ABSTRACT

Meta-analysis is a powerful tool to synthesize findings from multiple studies. The normal-normal random-effects model is widely used to account for between-study heterogeneity. However, meta-analyses of sparse data, which may arise when the event rate is low for binary or count outcomes, pose a challenge to the accuracy and stability of inference under the normal-normal random-effects model, since the normal approximation in the within-study model may be poor. To reduce bias arising from data sparsity, the generalized linear mixed model can be used by replacing the approximate normal within-study model with an exact model. Publication bias is one of the most serious threats in meta-analysis. Several quantitative sensitivity analysis methods for evaluating the potential impact of selective publication are available for the normal-normal random-effects model. We propose a sensitivity analysis method by extending the likelihood-based sensitivity analysis with the $t$-statistic selection function of Copas to several generalized linear mixed-effects models. In applications to several real-world meta-analyses and in simulation studies, the proposed method outperformed the likelihood-based sensitivity analysis based on the normal-normal model. The proposed method provides useful guidance for addressing publication bias in the meta-analysis of sparse data.


Subject(s)
Computer Simulation , Meta-Analysis as Topic , Publication Bias , Publication Bias/statistics & numerical data , Humans , Likelihood Functions , Linear Models , Data Interpretation, Statistical , Models, Statistical , Sensitivity and Specificity , Biometry/methods
4.
J Refract Surg ; 40(9): e635-e644, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39254245

ABSTRACT

PURPOSE: To investigate the impact of back-to-front corneal radius ratio (B/F ratio) and posterior keratometry (PK) on the accuracy of intraocular lens power calculation formulas in eyes after myopic laser in situ keratomileusis (LASIK)/photorefractive keratectomy (PRK) surgery. METHODS: A retrospective, consecutive case series study included 101 patients (132 eyes) with cataract after myopic LASIK/PRK. Mean prediction error (PE), mean absolute PE (MAE), median absolute error (MedAE), and the percentage of eyes within ±0.25, ±0.50, and ±1.00 diopters (D) of PE were determined. RESULTS: The Barrett True K-TK formula exhibited the lowest MAE (0.59 D) and MedAE (0.48 D) and the highest percentage of eyes within ±0.50 D of PE (54.55%) in total. In eyes with a B/F ratio of 0.70 or less and PK of -5.70 D or greater, the Potvin-Hill formula displayed the lowest MAE (0.46 to 0.67 D). CONCLUSIONS: The Barrett True-TK exhibited the highest prediction accuracy in eyes after myopic LASIK/PRK overall. However, for eyes with a low B/F ratio and flat PK, the Potvin-Hill performed best. [J Refract Surg. 2024;40(9):e635-e644.].


Subject(s)
Biometry , Cornea , Keratomileusis, Laser In Situ , Lasers, Excimer , Lens Implantation, Intraocular , Lenses, Intraocular , Myopia , Photorefractive Keratectomy , Refraction, Ocular , Visual Acuity , Humans , Myopia/surgery , Myopia/physiopathology , Keratomileusis, Laser In Situ/methods , Retrospective Studies , Photorefractive Keratectomy/methods , Female , Male , Cornea/pathology , Cornea/surgery , Refraction, Ocular/physiology , Adult , Middle Aged , Lasers, Excimer/therapeutic use , Visual Acuity/physiology , Biometry/methods , Optics and Photonics , Corneal Topography , Reproducibility of Results , Young Adult , Phacoemulsification
5.
Biometrics ; 80(3)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39248120

ABSTRACT

Prior distributions, which represent one's belief in the distributions of unknown parameters before observing the data, impact Bayesian inference in a critical and fundamental way. With the ability to incorporate external information from expert opinions or historical datasets, the priors, if specified appropriately, can improve the statistical efficiency of Bayesian inference. In survival analysis, based on the concept of unit information (UI) under parametric models, we propose the unit information Dirichlet process (UIDP) as a new class of nonparametric priors for the underlying distribution of time-to-event data. By deriving the Fisher information in terms of the differential of the cumulative hazard function, the UIDP prior is formulated to match its prior UI with the weighted average of UI in historical datasets and thus can utilize both parametric and nonparametric information provided by historical datasets. With a Markov chain Monte Carlo algorithm, simulations and real data analysis demonstrate that the UIDP prior can adaptively borrow historical information and improve statistical efficiency in survival analysis.


Subject(s)
Bayes Theorem , Computer Simulation , Markov Chains , Models, Statistical , Monte Carlo Method , Survival Analysis , Humans , Algorithms , Biometry/methods , Data Interpretation, Statistical
6.
Biometrics ; 80(3)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39248123

ABSTRACT

We present a new method for constructing valid covariance functions of Gaussian processes for spatial analysis in irregular, non-convex domains such as bodies of water. Standard covariance functions based on geodesic distances are not guaranteed to be positive definite on such domains, while existing non-Euclidean approaches fail to respect the partially Euclidean nature of these domains where the geodesic distance agrees with the Euclidean distances for some pairs of points. Using a visibility graph on the domain, we propose a class of covariance functions that preserve Euclidean-based covariances between points that are connected in the domain while incorporating the non-convex geometry of the domain via conditional independence relationships. We show that the proposed method preserves the partially Euclidean nature of the intrinsic geometry on the domain while maintaining validity (positive definiteness) and marginal stationarity of the covariance function over the entire parameter space, properties which are not always fulfilled by existing approaches to construct covariance functions on non-convex domains. We provide useful approximations to improve computational efficiency, resulting in a scalable algorithm. We compare the performance of our method with those of competing state-of-the-art methods using simulation studies on synthetic non-convex domains. The method is applied to data regarding acidity levels in the Chesapeake Bay, showing its potential for ecological monitoring in real-world spatial applications on irregular domains.


Subject(s)
Algorithms , Computer Simulation , Spatial Analysis , Models, Statistical , Normal Distribution , Biometry/methods
7.
Biometrics ; 80(3)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39222026

ABSTRACT

Testing multiple hypotheses of conditional independence with provable error rate control is a fundamental problem with various applications. To infer conditional independence with family-wise error rate (FWER) control when only summary statistics of marginal dependence are accessible, we adopt GhostKnockoff to directly generate knockoff copies of summary statistics and propose a new filter to select features conditionally dependent on the response. In addition, we develop a computationally efficient algorithm to greatly reduce the computational cost of knockoff copies generation without sacrificing power and FWER control. Experiments on simulated data and a real dataset of Alzheimer's disease genetics demonstrate the advantage of the proposed method over existing alternatives in both statistical power and computational efficiency.


Subject(s)
Algorithms , Alzheimer Disease , Computer Simulation , Humans , Alzheimer Disease/genetics , Models, Statistical , Data Interpretation, Statistical , Biometry/methods
8.
Biom J ; 66(6): e202300387, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39223907

ABSTRACT

Meta-analyses are commonly performed based on random-effects models, while in certain cases one might also argue in favor of a common-effect model. One such case may be given by the example of two "study twins" that are performed according to a common (or at least very similar) protocol. Here we investigate the particular case of meta-analysis of a pair of studies, for example, summarizing the results of two confirmatory clinical trials in phase III of a clinical development program. In doing so, we focus on the question of to what extent homogeneity or heterogeneity may be discernible, and include an empirical investigation of published ("twin") pairs of studies. A pair of estimates from two studies provides only very little evidence of homogeneity or heterogeneity of effects, and ad hoc decision criteria may often be misleading.


Subject(s)
Biometry , Biometry/methods , Humans , Meta-Analysis as Topic , Twin Studies as Topic , Models, Statistical
9.
Biometrics ; 80(3)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39282732

ABSTRACT

We develop a methodology for valid inference after variable selection in logistic regression when the responses are partially observed, that is, when one observes a set of error-prone testing outcomes instead of the true values of the responses. Aiming at selecting important covariates while accounting for missing information in the response data, we apply the expectation-maximization algorithm to compute maximum likelihood estimators subject to LASSO penalization. Subsequent to variable selection, we make inferences on the selected covariate effects by extending post-selection inference methodology based on the polyhedral lemma. Empirical evidence from our extensive simulation study suggests that our post-selection inference results are more reliable than those from naive inference methods that use the same data to perform variable selection and inference without adjusting for variable selection.
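The LASSO-penalized selection step described above (leaving aside the EM handling of error-prone responses and the polyhedral post-selection inference) can be sketched with a simple proximal-gradient (ISTA) solver. The data-generating model, tuning constants, and all names below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def lasso_logistic(X, y, lam=0.1, lr=0.1, n_iter=2000):
    """L1-penalized logistic regression via proximal gradient (ISTA).
    Soft-thresholding after each gradient step zeroes out weak covariates."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        p_hat = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (p_hat - y) / n          # gradient of average logistic loss
        beta = beta - lr * grad
        beta = np.sign(beta) * np.maximum(np.abs(beta) - lr * lam, 0.0)  # prox of L1
    return beta

rng = np.random.default_rng(2)
n, p = 400, 10
X = rng.normal(size=(n, p))
true_beta = np.zeros(p)
true_beta[:3] = [1.5, -1.2, 1.0]              # only 3 active covariates
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)

beta_hat = lasso_logistic(X, y)
selected = np.flatnonzero(np.abs(beta_hat) > 1e-6)
print(selected)
```

Selecting and then reusing the same data for naive inference is exactly what the abstract warns against; the selected set above would still need a post-selection adjustment.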


Subject(s)
Algorithms , Computer Simulation , Likelihood Functions , Humans , Logistic Models , Data Interpretation, Statistical , Biometry/methods , Models, Statistical
10.
Invest Ophthalmol Vis Sci ; 65(11): 2, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39226049

ABSTRACT

Purpose: We aimed to examine the normative profile of crystalline lens power (LP) and its associations with ocular biometric parameters, including age, axial length (AL), spherical equivalent refraction (SE), corneal radius (CR), lens thickness, anterior chamber depth, and the AL/CR ratio, in a cynomolgus monkey colony. Methods: This population-based, cross-sectional Non-human Primate Eye Study recruited middle-aged subjects in South China. All included macaques underwent a detailed ophthalmic examination. LP was calculated using the modified Bennett formula, with biometry data from an autorefractometer and A-scan. SPSS version 25.0 was used for statistical analysis. Results: A total of 301 macaques with an average age of 18.75 ± 2.95 years were included in this study. The mean LP was 25.40 ± 2.96 D. Greater LP was independently associated with younger age, longer AL, and lower SE (P = 0.028, P = 0.025, and P = 0.034, respectively). LP showed a positive correlation with age, SE, CR, AL, lens thickness, and anterior chamber depth, whereas no correlation was observed between LP and the AL/CR ratio. Conclusions: Our results describe the LP distribution in this nonhuman primate colony and indicate that AL and SE strongly influence LP. This study therefore contributes to a deeper understanding of the role of crystalline lens power in ocular optics.


Subject(s)
Axial Length, Eye , Biometry , Lens, Crystalline , Macaca fascicularis , Refraction, Ocular , Animals , Lens, Crystalline/anatomy & histology , Cross-Sectional Studies , Refraction, Ocular/physiology , Male , Female , Biometry/methods , Anterior Chamber/anatomy & histology , Cornea/anatomy & histology
11.
Food Res Int ; 195: 114973, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39277239

ABSTRACT

Beyond sensory quality, food-evoked emotions play a crucial role in consumers' acceptance and willingness to try, which are essential for product development. The link between the sensory characteristics of fermented coffee and the emotional responses it elicits from consumers is underexplored. This study aimed to evaluate consumers' acceptability of spontaneously fermented and unfermented roasted coffee through self-reported sensory evaluation and biometric assessment. Self-reported liking on a 15-cm unstructured scale, multiple choice among negative, neutral, and positive emojis, and subconscious emotional responses from 85 regular coffee consumers were analysed. Their relationship with the pattern of volatile aromatic compounds was also investigated. Fermented (F) and unfermented (UF) coffee beans with light (L), dark (D), and commercial dark (C) roasting levels were brewed and evaluated alongside gas chromatography-mass spectrometry measurements. Multivariate data analysis was conducted to explore the inner relationships among volatile compounds, self-reported liking, and biometrics. Unfermented dark-roasted coffee (UFD) had the highest overall consumer liking response ± standard error (8.68 ± 0.40), followed by the fermented dark-roasted (FD) at 7.73 ± 0.43, with no significant difference (p > 0.05). Fermented light-roasted coffee was associated with lower liking scores and negative emotional responses. In contrast, dark-roasted coffee, which was linked to positive emojis and emotional responses, exhibited smaller detected peak areas of volatile compounds contributing fruity and vegetative aromas, such as benzaldehyde, furfuryl acetate, 2-acetyl-1-methyl pyrrole, and isovaleric acid, which potentially act as negative drivers of consumer liking. Findings from this study could guide coffee manufacturers in developing specialty coffee if spontaneous fermentation is offered.


Subject(s)
Coffee , Consumer Behavior , Emotions , Fermentation , Taste , Volatile Organic Compounds , Humans , Male , Female , Adult , Volatile Organic Compounds/analysis , Young Adult , Coffee/chemistry , Coffea/chemistry , Odorants/analysis , Cooking/methods , Biometry , Seeds/chemistry , Gas Chromatography-Mass Spectrometry , Middle Aged , Food Preferences , Food Handling/methods
12.
Biometrics ; 80(3)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39271117

ABSTRACT

In randomized controlled trials, adjusting for baseline covariates is commonly used to improve the precision of treatment effect estimation. However, covariates often have missing values. Recently, Zhao and Ding studied two simple strategies, the single imputation method and missingness-indicator method (MIM), to handle missing covariates and showed that both methods can provide an efficiency gain compared to not adjusting for covariates. To better understand and compare these two strategies, we propose and investigate a novel theoretical imputation framework termed cross-world imputation (CWI). This framework includes both single imputation and MIM as special cases, facilitating the comparison of their efficiency. Through the lens of CWI, we show that MIM implicitly searches for the optimal CWI values and thus achieves optimal efficiency. We also derive conditions under which the single imputation method, by searching for the optimal single imputation values, can achieve the same efficiency as the MIM. We illustrate our findings through simulation studies and a real data analysis based on the Childhood Adenotonsillectomy Trial. We conclude by discussing the practical implications of our findings.
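The missingness-indicator method (MIM) mentioned above is simple to sketch. The toy example below is illustrative only — it is not the authors' cross-world imputation framework, and the data-generating model and all variable names are assumptions: impute a fixed value for a missing baseline covariate and add a binary missingness indicator as an extra regressor in a simulated randomized trial.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated randomized trial: treatment A, baseline covariate X with
# missing values, continuous outcome Y with true treatment effect 1.0.
A = rng.integers(0, 2, size=n)
X = rng.normal(size=n)
Y = 1.0 * A + 0.8 * X + rng.normal(size=n)
miss = rng.random(n) < 0.3                  # ~30% of X is missing
X_obs = np.where(miss, np.nan, X)

# Missingness-indicator method: impute 0 for missing X, add indicator R.
X_imp = np.where(miss, 0.0, X_obs)
R = miss.astype(float)
design = np.column_stack([np.ones(n), A, X_imp, R])

# OLS fit; the coefficient on A is the covariate-adjusted effect estimate.
beta, *_ = np.linalg.lstsq(design, Y, rcond=None)
tau_hat = beta[1]
print(round(tau_hat, 3))
```

Compared with dropping incomplete observations, this keeps all n subjects while still exploiting X where it is observed, which is the source of the efficiency gain the abstract analyzes.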


Subject(s)
Computer Simulation , Models, Statistical , Randomized Controlled Trials as Topic , Randomized Controlled Trials as Topic/statistics & numerical data , Randomized Controlled Trials as Topic/methods , Humans , Data Interpretation, Statistical , Child , Biometry/methods , Adenoidectomy/statistics & numerical data , Tonsillectomy/statistics & numerical data
13.
BMC Res Notes ; 17(1): 263, 2024 Sep 13.
Article in English | MEDLINE | ID: mdl-39272141

ABSTRACT

A biometric system is essential in improving security and authentication processes across a variety of fields. Because multiple criteria and alternatives are involved, selecting the most suitable biometric system is a complex decision. In this study, we employ a hybrid approach combining the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) with the Analytic Hierarchy Process (AHP). Biometric technologies are ranked using the TOPSIS method according to the relative weights that AHP determines. By applying neutrosophic set theory, this approach effectively handles the ambiguity and vagueness inherent in decision-making. Seven biometric technologies are incorporated in the framework: fingerprint, face, iris, voice, hand veins, hand geometry, and signature. These technologies are evaluated against seven essential characteristics: accuracy, security, acceptability, speed and efficiency, ease of collection, universality, and distinctiveness. The model seeks to determine which biometric technology is best suited for a particular application or situation by taking these factors into account. This technique may be applied in other domains in the future.
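Setting aside the neutrosophic extension, the crisp TOPSIS ranking step described above can be sketched in a few lines. The decision matrix scores and AHP-derived weights below are invented for illustration, not taken from the study:

```python
import numpy as np

# Rows = biometric technologies, columns = criteria (accuracy, security,
# acceptability, speed/efficiency, ease of collection, universality,
# distinctiveness). All scores are hypothetical, on a 1-10 scale.
techs = ["fingerprint", "face", "iris", "voice",
         "hand veins", "hand geometry", "signature"]
M = np.array([
    [8, 7, 8, 9, 9, 8, 8],
    [6, 5, 9, 8, 9, 9, 6],
    [9, 9, 5, 7, 5, 8, 9],
    [5, 4, 8, 7, 8, 7, 5],
    [8, 8, 6, 6, 5, 7, 8],
    [6, 5, 8, 8, 8, 8, 5],
    [5, 4, 9, 7, 9, 6, 5],
], dtype=float)
w = np.array([0.25, 0.20, 0.10, 0.10, 0.10, 0.10, 0.15])  # illustrative AHP weights

# TOPSIS: vector-normalize each column, apply weights, then measure
# distance to the ideal and anti-ideal points (all criteria as benefits).
V = w * M / np.linalg.norm(M, axis=0)
ideal, anti = V.max(axis=0), V.min(axis=0)
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)        # rank by descending closeness

for name, c in sorted(zip(techs, closeness), key=lambda t: -t[1]):
    print(f"{name:14s} {c:.3f}")
```

The ranking is fully determined by the weight vector, which is why pairing TOPSIS with a principled weighting method such as AHP matters.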


Subject(s)
Biometry , Humans , Biometry/methods , Biometric Identification/methods , Algorithms , Fuzzy Logic
14.
Nat Commun ; 15(1): 8003, 2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39266523

ABSTRACT

Decoupling dynamic touch signals in optical tactile sensors is highly desired for behavioral tactile applications yet challenging, because typical optical sensors mostly measure only static normal force and rely on imprecise multi-image averaging for dynamic force sensing. Here, we report a highly sensitive upconversion-nanocrystal-based behavioral biometric optical tactile sensor that instantaneously and quantitatively decomposes dynamic touch signals into individual components of vertical normal and lateral shear force from a single image in real time. By mimicking the sensory architecture of human skin, the unique luminescence signal obtained is axisymmetric for static normal forces and non-axisymmetric for dynamic shear forces. Our sensor demonstrates high spatio-temporal screening of small objects and recognizes fingerprints for authentication with high spatio-temporal resolution. Using a dynamic force discrimination machine learning framework, we realized a Braille-to-speech translation system and a next-generation dynamic biometric recognition system for handwriting.


Subject(s)
Touch , Humans , Touch/physiology , Dermatoglyphics , Biometry/methods , Biometry/instrumentation , Machine Learning , Nanoparticles/chemistry , Biometric Identification/methods , Biometric Identification/instrumentation
15.
Biometrics ; 80(3)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39253988

ABSTRACT

The US Food and Drug Administration launched Project Optimus to reform the dose optimization and dose selection paradigm in oncology drug development, calling for the paradigm shift from finding the maximum tolerated dose to the identification of optimal biological dose (OBD). Motivated by a real-world drug development program, we propose a master-protocol-based platform trial design to simultaneously identify OBDs of a new drug, combined with standards of care or other novel agents, in multiple indications. We propose a Bayesian latent subgroup model to accommodate the treatment heterogeneity across indications, and employ Bayesian hierarchical models to borrow information within subgroups. At each interim analysis, we update the subgroup membership and dose-toxicity and -efficacy estimates, as well as the estimate of the utility for risk-benefit tradeoff, based on the observed data across treatment arms to inform the arm-specific decision of dose escalation and de-escalation and identify the OBD for each arm of a combination partner and an indication. The simulation study shows that the proposed design has desirable operating characteristics, providing a highly flexible and efficient way for dose optimization. The design has great potential to shorten the drug development timeline, save costs by reducing overlapping infrastructure, and speed up regulatory approval.


Subject(s)
Antineoplastic Agents , Bayes Theorem , Computer Simulation , Dose-Response Relationship, Drug , Maximum Tolerated Dose , Humans , Antineoplastic Agents/administration & dosage , Drug Development/methods , Drug Development/statistics & numerical data , Models, Statistical , United States , United States Food and Drug Administration , Neoplasms/drug therapy , Research Design , Biometry/methods
16.
Invest Ophthalmol Vis Sci ; 65(11): 14, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39250121

ABSTRACT

Purpose: The purpose of this study was to define the normal range of peripapillary retinal nerve fiber layer (pRNFL), macular ganglion cell layer (mGCL), and macular inner plexiform layer (mIPL) thickness in cynomolgus macaques, and explore their inter-relationship and correlation with age, refractive errors, and axial length (AL). Methods: In this cross-sectional study, we measured biometric and refractive parameters, and pRNFL/mGCL/mIPL thickness in 357 healthy cynomolgus macaques. Monkeys were divided into groups by age and spherical equivalent (SE). Correlation and regression analyses were used to explore the relationship between pRNFL and mGCL/mIPL thickness, and their correlation with the above parameters. Results: The mean age, SE, and AL were 14.46 ± 6.70 years, -0.96 ± 3.23 diopters (D), and 18.39 ± 1.02 mm, respectively. The mean global pRNFL thickness was 95.06 ± 9.42 µm (range = 54-116 µm), with highest values in the inferior quadrant, followed by the superior, temporal, and nasal quadrants (P < 0.001). Temporal pRNFL thickness correlated positively with age (r = 0.218, P < 0.001) and AL (r = 0.364, P < 0.001), and negatively with SE (r = -0.270, P < 0.001). In other quadrants, pRNFL thickness correlated negatively with age and AL, but positively with SE. In the multivariable linear regression model, adjusted for sex and AL, age (ß = -0.350, P < 0.001), and SE (ß = 0.206, P < 0.001) showed significant associations with global pRNFL thickness. After adjusting for age, sex, SE, and AL, pRNFL thickness positively correlated with mGCL (ß = 0.433, P < 0.001) and mIPL thickness (ß = 0.465, P < 0.001). Conclusions: The pRNFL/mGCL/mIPL thickness distribution and relationship with age, AL, and SE in cynomolgus macaques were highly comparable to those in humans, suggesting that cynomolgus monkeys are valuable animal models in ophthalmic research.


Subject(s)
Macaca fascicularis , Nerve Fibers , Retinal Ganglion Cells , Tomography, Optical Coherence , Animals , Retinal Ganglion Cells/cytology , Male , Cross-Sectional Studies , Tomography, Optical Coherence/methods , Female , Optic Disk/anatomy & histology , Optic Disk/diagnostic imaging , Axial Length, Eye/anatomy & histology , Reference Values , Biometry , Refractive Errors/physiopathology
17.
Biometrics ; 80(3)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39248122

ABSTRACT

The geometric median, which is applicable to high-dimensional data, can be viewed as a generalization of the univariate median used in 1-dimensional data. It can be used as a robust estimator for identifying the location of multi-dimensional data and has a wide range of applications in real-world scenarios. This paper explores the problem of high-dimensional multivariate analysis of variance (MANOVA) using the geometric median. A maximum-type statistic that relies on the differences between the geometric medians among various groups is introduced. The distribution of the new test statistic is derived under the null hypothesis using Gaussian approximations, and its consistency under the alternative hypothesis is established. To approximate the distribution of the new statistic in high dimensions, a wild bootstrap algorithm is proposed and theoretically justified. Through simulation studies conducted across a variety of dimensions, sample sizes, and data-generating models, we demonstrate the finite-sample performance of our geometric median-based MANOVA method. Additionally, we implement the proposed approach to analyze a breast cancer gene expression dataset.
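For readers unfamiliar with the geometric median, a standard way to compute it is Weiszfeld's algorithm, an iteratively reweighted mean. The sketch below is a generic illustration on simulated data (not the paper's MANOVA test statistic) and also shows the robustness to outliers that motivates its use:

```python
import numpy as np

def geometric_median(X, tol=1e-8, max_iter=1000):
    """Weiszfeld's algorithm: iteratively reweighted mean converging to the
    point minimizing the sum of Euclidean distances to the rows of X."""
    y = X.mean(axis=0)
    for _ in range(max_iter):
        d = np.linalg.norm(X - y, axis=1)
        d = np.where(d < 1e-12, 1e-12, d)   # guard against division by zero
        y_new = (X / d[:, None]).sum(axis=0) / (1.0 / d).sum()
        if np.linalg.norm(y_new - y) < tol:
            return y_new
        y = y_new
    return y

rng = np.random.default_rng(1)
# The geometric median is far less affected by gross outliers than the
# coordinate-wise mean: contaminate 5% of a standard-normal sample.
X = rng.normal(size=(200, 5))
X[:10] += 50.0
gm = geometric_median(X)
print(np.round(gm, 2), np.round(X.mean(axis=0), 2))
```

Here the contaminated mean is pulled far from the origin while the geometric median stays near the bulk of the data, which is the robustness property the abstract exploits for group-location comparisons.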


Subject(s)
Algorithms , Breast Neoplasms , Computer Simulation , Humans , Multivariate Analysis , Breast Neoplasms/genetics , Models, Statistical , Female , Data Interpretation, Statistical , Gene Expression Profiling/statistics & numerical data , Sample Size , Biometry/methods
18.
Biometrics ; 80(3)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39248121

ABSTRACT

Recent years have witnessed a rise in the popularity of information integration without sharing of raw data. By leveraging and incorporating summary information from external sources, internal studies can achieve enhanced estimation efficiency and prediction accuracy. However, a noteworthy challenge in utilizing summary-level information is accommodating the inherent heterogeneity across diverse data sources. In this study, we delve into the issue of prior probability shift between two cohorts, wherein the difference between the two data distributions depends on the outcome. We introduce a novel semi-parametric constrained optimization-based approach to integrate information within this framework, which has not been extensively explored in the existing literature. Our proposed method tackles the prior probability shift by introducing an outcome-dependent selection function and effectively addresses the estimation uncertainty associated with summary information from the external source. Our approach facilitates valid inference even in the absence of a known variance-covariance estimate from the external source. Through extensive simulation studies, we observe the superiority of our method over existing ones, showcasing minimal estimation bias and reduced variance for both binary and continuous outcomes. We further demonstrate the utility of our method through its application in investigating risk factors related to essential hypertension, where reduced estimation variability is observed after integrating summary information from an external data source.


Subject(s)
Computer Simulation , Essential Hypertension , Probability , Humans , Models, Statistical , Risk Factors , Hypertension , Data Interpretation, Statistical , Biometry/methods
19.
JAMA ; 332(8): 649-657, 2024 08 27.
Article in English | MEDLINE | ID: mdl-39088200

ABSTRACT

Importance: Accurate assessment of gestational age (GA) is essential to good pregnancy care but often requires ultrasonography, which may not be available in low-resource settings. This study developed a deep learning artificial intelligence (AI) model to estimate GA from blind ultrasonography sweeps and incorporated it into the software of a low-cost, battery-powered device. Objective: To evaluate GA estimation accuracy of an AI-enabled ultrasonography tool when used by novice users with no prior training in sonography. Design, Setting, and Participants: This prospective diagnostic accuracy study enrolled 400 individuals with viable, single, nonanomalous, first-trimester pregnancies in Lusaka, Zambia, and Chapel Hill, North Carolina. Credentialed sonographers established the "ground truth" GA via transvaginal crown-rump length measurement. At random follow-up visits throughout gestation, including a primary evaluation window from 14 0/7 weeks' to 27 6/7 weeks' gestation, novice users obtained blind sweeps of the maternal abdomen using the AI-enabled device (index test) and credentialed sonographers performed fetal biometry with a high-specification machine (study standard). Main Outcomes and Measures: The primary outcome was the mean absolute error (MAE) of the index test and study standard, which was calculated by comparing each method's estimate to the previously established GA and considered equivalent if the difference fell within a prespecified margin of ±2 days. Results: In the primary evaluation window, the AI-enabled device met criteria for equivalence to the study standard, with an MAE (SE) of 3.2 (0.1) days vs 3.0 (0.1) days (difference, 0.2 days [95% CI, -0.1 to 0.5]). Additionally, the percentage of assessments within 7 days of the ground truth GA was comparable (90.7% for the index test vs 92.5% for the study standard). Performance was consistent in prespecified subgroups, including the Zambia and North Carolina cohorts and those with high body mass index. Conclusions and Relevance: Between 14 and 27 weeks' gestation, novice users with no prior training in ultrasonography estimated GA as accurately with the low-cost, point-of-care AI tool as credentialed sonographers performing standard biometry on high-specification machines. These findings have immediate implications for obstetrical care in low-resource settings, advancing the World Health Organization goal of ultrasonography estimation of GA for all pregnant people. Trial Registration: ClinicalTrials.gov Identifier: NCT05433519.


Subject(s)
Artificial Intelligence , Gestational Age , Ultrasonography, Prenatal , Adult , Female , Humans , Pregnancy , Biometry/methods , Crown-Rump Length , Point-of-Care Systems/economics , Pregnancy Trimester, First , Prospective Studies , Software , Ultrasonography, Prenatal/economics , Ultrasonography, Prenatal/instrumentation , Ultrasonography, Prenatal/methods , Zambia
20.
Biometrics ; 80(3)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39101548

ABSTRACT

We consider the setting where (1) an internal study builds a linear regression model for prediction based on individual-level data, (2) some external studies have fitted similar linear regression models that use only subsets of the covariates and provide coefficient estimates for the reduced models without individual-level data, and (3) there is heterogeneity across these study populations. The goal is to integrate the external model summary information into fitting the internal model to improve prediction accuracy. We adapt the James-Stein shrinkage method to propose estimators that are no worse and are oftentimes better in the prediction mean squared error after information integration, regardless of the degree of study population heterogeneity. We conduct comprehensive simulation studies to investigate the numerical performance of the proposed estimators. We also apply the method to enhance a prediction model for patella bone lead level in terms of blood lead level and other covariates by integrating summary information from published literature.
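A minimal sketch of positive-part James-Stein shrinkage toward an external estimate is given below. It is not the paper's full estimator (which accommodates external models fitted on covariate subsets); the simulated population shift, dimensions, and all names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 4

# Internal study: individual-level data with true coefficients beta_true.
beta_true = np.array([1.0, -0.5, 0.8, 0.3])
X = rng.normal(size=(n, p))
y = X @ beta_true + rng.normal(size=n)

# Internal OLS estimate and its estimated covariance.
XtX_inv = np.linalg.inv(X.T @ X)
beta_int = XtX_inv @ X.T @ y
resid = y - X @ beta_int
sigma2 = resid @ resid / (n - p)
cov_int = sigma2 * XtX_inv

# External summary estimate from a (possibly heterogeneous) population,
# simulated here as the truth plus a population shift.
beta_ext = beta_true + np.array([0.3, 0.0, -0.2, 0.1])

# Positive-part James-Stein shrinkage of the internal estimate toward the
# external one: the data-driven weight on the internal-vs-external
# difference grows as the two estimates disagree more, so a badly
# mismatched external source is automatically down-weighted.
diff = beta_int - beta_ext
shrink = max(0.0, 1.0 - np.trace(cov_int) / (diff @ diff))
beta_js = beta_ext + shrink * diff
print(np.round(beta_js, 3), round(shrink, 3))
```

When the external population matches the internal one, `diff` is small, `shrink` drops toward zero, and the combined estimate leans on the external information; under strong heterogeneity it falls back to the internal OLS fit, mirroring the "no worse, often better" guarantee described in the abstract.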


Subject(s)
Computer Simulation , Humans , Linear Models , Biometry/methods , Lead/blood , Patella , Models, Statistical , Data Interpretation, Statistical