Results 1 - 20 of 1,013
1.
Int J Legal Med ; 2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39227492

ABSTRACT

Certificates of medical evidence are often used to aid the court in assessing the cause and severity of a victim's injuries. In cases with significant blood loss, the question sometimes arises whether the bleeding itself was life-threatening. To answer this, the volume classification of hypovolemic shock described in ATLS® is commonly used as an aid, where a relative blood loss > 30% is considered life-threatening. In a recent study of deaths due to internal haemorrhage, many cases had a relative blood loss < 30%. However, many included cases had injuries that could presumably cause death via other mechanisms, making the interpretation uncertain. To resolve the remaining ambiguity, we studied whether deaths due to isolated liver lacerations had a relative blood loss < 30%, a cause of death where the mechanism of death is presumably exsanguination only. Using the National Board of Forensic Medicine autopsy database, we identified all adult decedents who had undergone a medico-legal autopsy 2001-2021 (n = 105,952) and for whom liver laceration was registered as the underlying cause of death (n = 102). Cases in which death resulted from a combination with other injuries (n = 79) and cases that had received hospital care (n = 4) were excluded, leaving 19 cases. The proportion of internal haemorrhage to calculated total blood volume in these fatal pure exsanguinations ranged from 12% to 52%, with 63% of cases having a proportion < 30%. Our results lend further support to the claim that the volume classification of hypovolemic shock described in ATLS® is inappropriate for assessing the degree of life-threatening haemorrhage in medico-legal cases.
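As a rough illustration of the relative-blood-loss calculation described in this abstract, the sketch below divides the measured internal haemorrhage by an estimated total blood volume. The abstract does not state which equation was used for the calculated total blood volume; Nadler's equations are assumed here purely for illustration.

```python
def nadler_total_blood_volume_l(sex: str, height_m: float, weight_kg: float) -> float:
    """Estimated total blood volume in litres (Nadler's equations; assumed here)."""
    if sex == "male":
        return 0.3669 * height_m ** 3 + 0.03219 * weight_kg + 0.6041
    return 0.3561 * height_m ** 3 + 0.03308 * weight_kg + 0.1833


def relative_blood_loss(haemorrhage_ml: float, sex: str, height_m: float, weight_kg: float) -> float:
    """Internal haemorrhage volume as a fraction of the estimated total blood volume."""
    tbv_ml = nadler_total_blood_volume_l(sex, height_m, weight_kg) * 1000.0
    return haemorrhage_ml / tbv_ml


# e.g. 1500 ml of blood found at autopsy in a 1.75 m, 80 kg male: roughly 29 %
print(f"{relative_blood_loss(1500, 'male', 1.75, 80):.0%}")
```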

2.
Am J Epidemiol ; 2024 Sep 11.
Article in English | MEDLINE | ID: mdl-39270679

ABSTRACT

During infectious disease outbreaks, estimates for the instantaneous reproduction number, R(t), are essential for understanding transmission dynamics. This study develops and analyzes new methodology to improve estimation of R(t) when observed case counts are subject to reporting patterns and available serial interval estimates are subject to uncertainty and non-representativeness. Specifically, we developed a Bayesian time-since-infection model with layers to adjust for reporting measurement error, integrate multiple candidate serial interval estimates, and estimate transmission with an autoregressive time-series model incorporating factors relevant to transmission. Additionally, we provide practical tools to identify reporting patterns and determine when to smooth case counts for more usable R(t) estimates. We evaluated model performance relative to widely adopted methodology by simulating outbreak data, finding improved R(t) estimation with the proposed methodology. We also used 2020 COVID-19 data to analyze transmission trends and predictors, identifying strong day-of-week and social distancing effects that subsequently reduced estimate volatility. In addition to new approaches for addressing serial interval uncertainty and incorporating transmission predictor information, this study provides an alternative approach for addressing case-reporting patterns without delaying detection or smoothing over relevant transmission signals. These tools and findings may be used or built upon for current and future outbreaks.
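For orientation, the following minimal Python sketch shows the basic renewal-equation point estimate of R(t) that methods of this kind build on; it is not the paper's Bayesian time-since-infection model, and the serial-interval distribution and case counts are invented for illustration.

```python
import numpy as np

def instantaneous_r(cases: np.ndarray, si_pmf: np.ndarray) -> np.ndarray:
    """Renewal-equation point estimate: R(t) = I_t / sum_s w_s * I_{t-s}."""
    r = np.full(len(cases), np.nan)
    for t in range(1, len(cases)):
        recent = cases[max(0, t - len(si_pmf)):t][::-1]       # I_{t-1}, I_{t-2}, ...
        infectiousness = np.sum(recent * si_pmf[:len(recent)])
        if infectiousness > 0:
            r[t] = cases[t] / infectiousness
    return r

si = np.array([0.10, 0.20, 0.25, 0.20, 0.12, 0.08, 0.05])     # toy serial-interval pmf (sums to 1)
cases = np.array([5, 8, 12, 20, 28, 35, 40, 52, 60, 75], dtype=float)
print(np.round(instantaneous_r(cases, si), 2))
```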

3.
Int J Exerc Sci ; 17(4): 1134-1154, 2024.
Article in English | MEDLINE | ID: mdl-39258120

ABSTRACT

The purpose of the current study was to test the hypothesis that individual response classification for surrogate markers of cardiorespiratory fitness (CRF) would agree with response classification for VO2peak. Surrogate markers of CRF were time to fatigue on a treadmill test (TTF), time trial performance (3kTT), resting heart rate (RHR), submaximal heart rate (SubmaxHR), and submaximal ratings of perceived exertion (SubmaxRPE). Twenty-five participants were randomized into a high-intensity interval training (HIIT: n = 14) group or a non-exercise control group (CTL: n = 11). Training consisted of four weeks of HIIT: 4 × 4-minute intervals at 90-95% HRmax, three times per week. We observed poor agreement between response classification for VO2peak and surrogate markers (agreement < 60% for all outcomes). Although surrogate markers and VO2peak correlated at the pre- and post-intervention time points, change scores for VO2peak were not correlated with changes in surrogate markers of CRF. Interestingly, a significant relationship (r² = 0.36, p = 0.02) was observed when comparing improvements in estimated training performance (VO2) and change in VO2peak. Contrary to our hypothesis, we observed poor classification agreement and non-significant correlations for change scores of VO2peak and surrogate markers of CRF. Our results suggest that individuals concerned with their VO2peak response should seek direct measurements of VO2.

4.
BMC Med Res Methodol ; 24(1): 194, 2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39243025

ABSTRACT

BACKGROUND: Early identification of children at high risk of developing myopia is essential to prevent myopia progression by introducing timely interventions. However, missing data and measurement error (ME) are common challenges in risk prediction modelling that can introduce bias in myopia prediction. METHODS: We explore four imputation methods to address missing data and ME: single imputation (SI), multiple imputation under missing at random (MI-MAR), multiple imputation with a calibration procedure (MI-ME), and multiple imputation under missing not at random (MI-MNAR). We compare four machine-learning models (Decision Tree, Naive Bayes, Random Forest, and XGBoost) and three statistical models (logistic regression, stepwise logistic regression, and least absolute shrinkage and selection operator logistic regression) in myopia risk prediction. We apply these models to the Shanghai Jinshan Myopia Cohort Study and also conduct a simulation study to investigate the impact of missing-data mechanisms, the degree of ME, and the importance of predictors on model performance. Model performance is evaluated using the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC). RESULTS: Our findings indicate that in scenarios with missing data and ME, using MI-ME in combination with logistic regression yields the best prediction results. In scenarios without ME, employing MI-MAR to handle missing data outperforms SI regardless of the missing-data mechanism. When ME has a greater impact on prediction than missing data, the relative advantage of MI-MAR diminishes and MI-ME becomes the better choice. Furthermore, our results demonstrate that the statistical models exhibit better prediction performance than the machine-learning models. CONCLUSION: MI-ME emerges as a reliable method for handling missing data and ME in important predictors for early-onset myopia risk prediction.
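As a hedged sketch of the general workflow (multiple imputation followed by logistic regression and AUROC/AUPRC evaluation), the Python below uses scikit-learn's IterativeImputer on simulated data. It illustrates only the MI-MAR-plus-logistic-regression piece, not the paper's MI-ME calibration procedure, and the simple averaging across imputations stands in for formal pooling rules.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X_full = rng.normal(size=(n, 4))                      # hypothetical predictors
p = 1 / (1 + np.exp(-(X_full[:, 0] - 0.5 * X_full[:, 1])))
y = rng.binomial(1, p)                                # simulated incident myopia
X = X_full.copy()
X[rng.random(X.shape) < 0.15] = np.nan                # 15% of values set missing at random

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

scores = []
for m in range(5):                                    # a small multiple-imputation loop
    imp = IterativeImputer(sample_posterior=True, random_state=m)
    clf = LogisticRegression(max_iter=1000).fit(imp.fit_transform(X_tr), y_tr)
    pred = clf.predict_proba(imp.transform(X_te))[:, 1]
    scores.append((roc_auc_score(y_te, pred), average_precision_score(y_te, pred)))

print("mean AUROC, AUPRC:", np.round(np.mean(scores, axis=0), 3))
```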


Subjects
Machine Learning, Myopia, Humans, Myopia/diagnosis, Myopia/epidemiology, Female, Child, Male, Logistic Models, Models, Statistical, Risk Assessment/methods, Risk Assessment/statistics & numerical data, Risk Factors, ROC Curve, Bayes Theorem, China/epidemiology, Cohort Studies, Age of Onset
5.
Int J Psychophysiol ; 205: 112441, 2024 Sep 17.
Article in English | MEDLINE | ID: mdl-39299302

ABSTRACT

The late positive potential (LPP) is an ERP component commonly used to study emotional processes and has been proposed as a neuroaffective biomarker for research and clinical uses. These applications, however, require standardized procedures for elicitation and ERP data processing. We evaluated the impact of different EEG preprocessing steps on the LPP's data quality and statistical power. Using a diverse sample of 158 adults, we implemented a multiverse analytical approach to compare preprocessing pipelines that progressively incorporated more steps: artifact detection and rejection, bad channel interpolation, and bad segment deletion. We assessed each pipeline's effectiveness by computing the standardized measurement error (SME) and conducting simulated experiments to estimate statistical power in detecting significant LPP differences between emotional and neutral images. Our findings highlighted that artifact rejection is crucial for enhancing data quality and statistical power. Voltage thresholds to reject trials contaminated by artifacts significantly affected SME and statistical power. Once artifact detection was optimized, further steps provided minor improvements in data quality and statistical power. Importantly, different preprocessing pipelines yielded similar outcomes. These results underscore the robustness of the LPP's affective modulation to preprocessing choices and the critical role of effective artifact management. By refining and standardizing preprocessing procedures, the LPP can become a reliable neuroaffective biomarker, supporting personalized clinical interventions for affective disorders.
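For reference, the analytic standardized measurement error (SME) of a trial-averaged mean-amplitude score is the across-trial standard deviation divided by the square root of the number of retained trials. The sketch below assumes mean-amplitude scoring (the study may instead use a bootstrapped SME), and the amplitudes are invented.

```python
import numpy as np

def analytic_sme(single_trial_amplitudes: np.ndarray) -> float:
    """SME of a trial-averaged mean-amplitude score: SD across trials / sqrt(n trials)."""
    n = len(single_trial_amplitudes)
    return np.std(single_trial_amplitudes, ddof=1) / np.sqrt(n)

# invented LPP mean amplitudes (µV) from one participant's retained trials in one condition
trials = np.array([6.1, 4.8, 7.3, 5.5, 3.9, 6.8, 5.2, 4.4])
print(f"SME = {analytic_sme(trials):.2f} µV")
```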

6.
Stat Med ; 2024 Sep 18.
Article in English | MEDLINE | ID: mdl-39291682

ABSTRACT

We consider evaluating biomarkers for treatment selection under assay modification. Survival outcome, treatment, and Affymetrix gene expression data were obtained from cancer patients. Consider migrating a gene expression biomarker to the Illumina platform. A recent novel approach allows a quick evaluation of the migrated biomarker with only a reproducibility study needed to compare the two platforms, achieved by treating the original biomarker as an error-contaminated observation of the migrated biomarker. However, its assumptions of a classical measurement error model and a linear predictor for the outcome may not hold. Ignoring such model deviations may lead to sub-optimal treatment selection or failure to identify effective biomarkers. To overcome these limitations, we adopt a nonparametric logistic regression to model the relationship between the event rate and the biomarker, so that the deduced marker-based treatment selection is optimal. We further assume a nonparametric relationship between the migrated and original biomarkers and show that the error-contaminated biomarker leads to sub-optimal treatment selection compared to the error-free biomarker. We obtain the estimates via B-spline approximation. The approach is assessed by simulation studies and demonstrated through application to lung cancer data.
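A minimal sketch of the B-spline (nonparametric) logistic regression idea mentioned above, using statsmodels with a patsy bs() basis on simulated data; this is not the authors' estimator, only an illustration of modeling an event rate as a smooth function of a biomarker.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
biomarker = rng.uniform(0, 10, n)
true_rate = 1 / (1 + np.exp(-(np.sin(biomarker) - 0.2 * biomarker + 1)))   # nonlinear truth
df = pd.DataFrame({"event": rng.binomial(1, true_rate), "biomarker": biomarker})

# logistic regression with a B-spline basis for the biomarker (patsy's bs() inside the formula)
fit = smf.glm("event ~ bs(biomarker, df=5)", data=df, family=sm.families.Binomial()).fit()
df["event_rate_hat"] = fit.predict(df)      # smooth estimate of the event rate vs. the biomarker
```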

7.
J R Stat Soc Ser C Appl Stat ; 73(1): 104-122, 2024 Jan.
Article in English | MEDLINE | ID: mdl-39280900

ABSTRACT

Cognitive impairment has been widely accepted as a disease progression measure prior to the onset of Huntington's disease. We propose a sophisticated measurement error correction method that can handle potentially correlated measurement errors in longitudinally collected exposures and multiple outcomes. The asymptotic theory for the proposed method is developed. A simulation study is conducted to demonstrate the satisfactory performance of the proposed two-stage fitting method and shows that the independent working correlation structure outperforms other alternatives. We conduct a comprehensive longitudinal analysis to assess how brain striatal atrophy affects impairment in various cognitive domains for Huntington's disease.

8.
Front Physiol ; 15: 1435103, 2024.
Article in English | MEDLINE | ID: mdl-39318360

ABSTRACT

Introduction: While maximum strength diagnostics are applied in several sports and rehabilitative settings, dynamic strength capacity has been determined via one-repetition maximum (1RM) testing for decades. Because the literature reports several limitations, such as injury risk and limited practical applicability in large populations (e.g., athletic training groups), strength prediction via the velocity profile has received increasing attention recently. By relying on relative reliability coefficients and inappropriate interpretation of agreement statistics, several previous recommendations neglected systematic and random measurement bias. Methods: This article explored the random measurement error arising from repeated testing (repeatability) and the agreement between two common sensors (vMaxPro and TENDO) within one repetition, using minimal velocity thresholds as well as the velocity = 0 m/s method. Furthermore, agreement analyses were applied to the estimated and measured 1RM in 25 young elite male soccer athletes. Results: Repeatability values showed an intraclass correlation coefficient (ICC) of 0.66-0.80, accompanied by mean absolute (percentage) errors (MAE and MAPE) of up to 0.04-0.22 m/s and ≤7.5%. Agreement between the two sensors within one repetition showed a systematically lower velocity for the vMaxPro device than for the TENDO, with ICCs ranging from 0.28 to 0.88, accompanied by an MAE/MAPE of ≤0.13 m/s (11%). Almost all estimations systematically over- or underestimated the measured 1RM, with random scattering between 4.12% and 71.6%, depending on the velocity threshold used. Discussion: In agreement with most current reviews, the presented results call for caution when using velocity profiles to estimate strength. Further approaches must be explored to minimize the random scattering in particular.
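For context, velocity-based 1RM estimation typically fits an individual load-velocity profile and extrapolates to a minimal velocity threshold (MVT). The sketch below assumes a linear profile and an illustrative MVT of 0.17 m/s; the loads and velocities are invented.

```python
import numpy as np

def estimate_1rm(loads_kg, mean_velocities, minimal_velocity_threshold=0.17):
    """Fit a linear load-velocity profile and extrapolate to the MVT to estimate 1RM."""
    slope, intercept = np.polyfit(mean_velocities, loads_kg, 1)
    return slope * minimal_velocity_threshold + intercept

# invented submaximal warm-up sets (load in kg, mean concentric velocity in m/s)
loads = np.array([40.0, 60.0, 80.0, 90.0])
velocities = np.array([0.95, 0.70, 0.45, 0.33])
print(f"Estimated 1RM ≈ {estimate_1rm(loads, velocities):.1f} kg")
```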

9.
Am J Epidemiol ; 2024 Sep 25.
Article in English | MEDLINE | ID: mdl-39323264

ABSTRACT

Negative controls are increasingly used to evaluate the presence of potential unmeasured confounding in observational studies. Beyond the use of negative controls to detect the presence of residual confounding, proximal causal inference (PCI) was recently proposed to de-bias confounded causal effect estimates by leveraging a pair of treatment and outcome negative control or confounding proxy variables. While formal methods for statistical inference have been developed for PCI, these methods can be challenging to implement, as they involve solving complex integral equations that are typically ill-posed. We develop a regression-based PCI approach, employing two-stage generalized linear regression models (GLMs) to implement PCI, which obviates the need to solve difficult integral equations. The proposed approach has merit in that (i) it is applicable to continuous, count, and binary outcomes, making it relevant to a wide range of real-world applications, and (ii) it is easy to implement using off-the-shelf software for GLMs. We establish the statistical properties of regression-based PCI and illustrate its performance in both synthetic and real-world empirical applications.
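A minimal sketch of the two-stage regression idea for proximal causal inference in the linear, continuous-outcome case (proximal two-stage least squares): stage 1 regresses the outcome proxy W on treatment, the treatment proxy Z, and covariates; stage 2 regresses the outcome on treatment, the stage-1 prediction, and covariates. This is only an illustration under assumed linear models, not the paper's full GLM methodology for count and binary outcomes.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
u = rng.normal(size=n)                                   # unmeasured confounder
x = rng.normal(size=n)                                   # measured covariate
z = u + rng.normal(size=n)                               # treatment-confounding proxy
w = u + rng.normal(size=n)                               # outcome-confounding proxy
a = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * u + 0.3 * x))))
y = 1.0 + 0.5 * a + u + 0.2 * x + rng.normal(size=n)     # true treatment effect = 0.5
df = pd.DataFrame(dict(y=y, a=a, z=z, w=w, x=x))

# Stage 1: project the outcome proxy W on treatment, the treatment proxy Z, and covariates
df["w_hat"] = smf.ols("w ~ a + z + x", data=df).fit().fittedvalues
# Stage 2: regress the outcome on treatment, the stage-1 prediction, and covariates
stage2 = smf.ols("y ~ a + w_hat + x", data=df).fit()
print(round(stage2.params["a"], 2))                      # close to 0.5; "y ~ a + x" alone is biased
```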

10.
J Surv Stat Methodol ; 12(4): 961-986, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39220584

ABSTRACT

Biosocial surveys increasingly use interviewers to collect objective physical health measures (or "biomeasures") in respondents' homes. While interviewers play an important role, their high involvement can lead to unintended interviewer effects on the collected measurements. Such interviewer effects add uncertainty to population estimates and have the potential to lead to erroneous inferences. This study examines interviewer effects on the measurement of physical performance in a cross-national and longitudinal setting using data from the Survey of Health, Ageing and Retirement in Europe. The analyzed biomeasures exhibited moderate-to-large interviewer effects on the measurements, which varied across biomeasure types and across countries. Our findings demonstrate the necessity to better understand the origin of interviewer-related measurement errors in biomeasure collection and account for these errors in statistical analyses of biomeasure data.
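Interviewer effects of the kind described here are commonly summarized by the intraclass correlation from a multilevel model with a random interviewer intercept. A hedged sketch on simulated data (not the SHARE data or the authors' exact model):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_interviewers, n_per = 50, 20
interviewer = np.repeat(np.arange(n_interviewers), n_per)
u = rng.normal(0, 1.0, n_interviewers)                            # interviewer-level effects
y = 30 + u[interviewer] + rng.normal(0, 3.0, len(interviewer))    # e.g. a grip-strength biomeasure
df = pd.DataFrame({"y": y, "interviewer": interviewer})

m = smf.mixedlm("y ~ 1", df, groups=df["interviewer"]).fit()
var_between = float(m.cov_re.iloc[0, 0])                          # interviewer variance
var_within = float(m.scale)                                       # residual variance
print(f"interviewer ICC = {var_between / (var_between + var_within):.2f}")
```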

11.
Heliyon ; 10(16): e35852, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39220900

ABSTRACT

Randomized response scrambling techniques have been in existence for over fifty years. These scrambling methods are very useful in sample surveys where researchers deal with sensitive variables. Survey researchers often need to evaluate the many available scrambling techniques to choose the best one for real-world surveys. In the current literature, only a limited number of model-evaluation metrics are available for analyzing the performance of different scrambling methods, leaving a research gap for the development of new unified evaluation measures that can quantify all aspects of a scrambling technique. We develop a novel unified metric for the evaluation of randomized response models and compare it with the existing unified measure. The proposed measure quantifies both the efficiency and the level of respondent privacy of any scrambling technique. Being less sensitive to sample size than the existing unified measure, the proposed measure can be used with small sample sizes to evaluate models.
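The paper's unified metric is not reproduced here, but the classic Warner randomized-response model illustrates the two quantities any such metric must balance: estimator efficiency and respondent privacy (the design probability p trades one against the other). A small sketch with invented numbers:

```python
def warner_estimate(yes_rate: float, p_design: float) -> float:
    """Warner's randomized-response estimator of a sensitive proportion pi."""
    return (yes_rate - (1 - p_design)) / (2 * p_design - 1)

def warner_variance(pi: float, p_design: float, n: int) -> float:
    """Sampling variance: the efficiency cost grows as p_design nears 0.5 (more privacy)."""
    return pi * (1 - pi) / n + p_design * (1 - p_design) / (n * (2 * p_design - 1) ** 2)

pi_true, n = 0.30, 1000
for p in (0.90, 0.75, 0.60):
    expected_yes = p * pi_true + (1 - p) * (1 - pi_true)
    print(p, round(warner_estimate(expected_yes, p), 3), round(warner_variance(pi_true, p, n), 5))
```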

12.
Am J Epidemiol ; 2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39198907

ABSTRACT

Higher order evidence (evidence about evidence) allows epidemiologists and other health data scientists to account for measurement error in validation data. Here, to illustrate the use of higher order evidence, we provide a minimal nontrivial example of estimating a proportion and show how higher order evidence can be used to construct sensitivity analyses. The proposed method provides a flexible approach to account for multiple levels of distortion in the results of epidemiologic studies.
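As a concrete anchor for the kind of sensitivity analysis described, the familiar first-level correction of an observed proportion for misclassification is the Rogan-Gladen estimator; higher order evidence would, roughly, treat the sensitivity and specificity of the validation measurement as uncertain as well. A hedged sketch (the grid of values is illustrative, not from the paper):

```python
def corrected_proportion(p_obs: float, sensitivity: float, specificity: float) -> float:
    """Rogan-Gladen-type correction of an observed proportion for misclassification."""
    return (p_obs + specificity - 1) / (sensitivity + specificity - 1)

# sensitivity analysis: vary the assumed accuracy of the validation measurement itself
for se, sp in [(1.00, 1.00), (0.95, 0.98), (0.90, 0.95)]:
    print(f"Se={se}, Sp={sp}: corrected proportion = {corrected_proportion(0.20, se, sp):.3f}")
```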

13.
BMC Ophthalmol ; 24(1): 326, 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39103785

ABSTRACT

PURPOSE: To assess the accuracy of intraocular lens (IOL) calculation formulas and to investigate the effect of anterior chamber depth (ACD) and lens thickness (LT) measured by a swept-source optical coherence tomography biometer (IOLMaster 700) in patients with a posterior chamber phakic IOL (PC-pIOL). METHODS: Retrospective case series. The IOLMaster 700 biometer was used to measure axial length (AL) and anterior segment parameters. The traditional formulas (SRK/T, Holladay 1, and Haigis), with or without Wang-Koch (WK) AL adjustment, and the new-generation formulas (Barrett Universal II [BUII], Emmetropia Verifying Optical [EVO] v2.0, Kane, and Pearl-DGS) were used for IOL power calculation. RESULTS: This study enrolled 24 eyes of 24 patients undergoing combined PC-pIOL removal and cataract surgery at Xiamen Eye Center of Xiamen University, Xiamen, Fujian, China. The median absolute prediction error in ascending order was EVO 2.0 (0.33), Kane (0.35), SRK/T-WKmodified (0.42), Holladay 1-WKmodified (0.44), Haigis-WKC1 (0.46), Pearl-DGS (0.47), BUII (0.58), Haigis (0.75), SRK/T (0.79), and Holladay 1 (1.32). The root-mean-square absolute error in ascending order was Haigis-WKC1 (0.591), Holladay 1-WKmodified (0.622), SRK/T-WKmodified (0.623), EVO (0.673), Kane (0.678), Pearl-DGS (0.753), BUII (0.863), Haigis (1.061), SRK/T (1.188), and Holladay 1 (1.513). A detailed analysis of ACD and LT measurement error revealed a negligible impact on refractive outcomes for BUII and EVO 2.0 whether these parameters were included in or omitted from the formula calculation. CONCLUSION: The Kane, EVO 2.0, and traditional formulas with WK AL adjustment displayed high prediction accuracy. Furthermore, ACD and LT measurement error does not exert a significant influence on the accuracy of IOL power calculation formulas in highly myopic eyes implanted with a PC-pIOL.


Subjects
Biometry, Cataract, Phakic Intraocular Lenses, Refraction, Ocular, Tomography, Optical Coherence, Humans, Retrospective Studies, Tomography, Optical Coherence/methods, Female, Male, Middle Aged, Biometry/methods, Refraction, Ocular/physiology, Cataract/complications, Adult, Optics and Photonics, Reproducibility of Results, Aged, Axial Length, Eye/diagnostic imaging, Axial Length, Eye/pathology, Anterior Chamber/diagnostic imaging, Visual Acuity/physiology, Lens Implantation, Intraocular/methods
14.
Gait Posture ; 113: 543-552, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39178597

ABSTRACT

BACKGROUND: Wearable technologies using inertial sensors are an alternative for gait assessment. However, their psychometric properties in evaluating post-stroke patients are still being determined. This systematic review aimed to evaluate the psychometric properties of wearable technologies used to assess post-stroke gait and analyze their reliability and measurement error. The review also investigated which wearable technologies have been used to assess angular changes in post-stroke gait. METHODS: The present review included studies in English with no publication date restrictions that evaluated the psychometric properties (e.g., validity, reliability, responsiveness, and measurement error) of wearable technologies used to assess post-stroke gait. Searches were conducted from February to March 2023 in the following databases: Cochrane Central Registry of Controlled Trials (CENTRAL), Medline/PubMed, EMBASE Ovid, CINAHL EBSCO, PsycINFO Ovid, IEEE Xplore Digital Library (IEEE), and Physiotherapy Evidence Database (PEDro); the gray literature was also verified. The Consensus-based Standards for the Selection of Health Measurement Instruments (COSMIN) risk-of-bias tool was used to assess the quality of the studies that analyzed reliability and measurement error. RESULTS: Forty-two studies investigating validity (37 studies), reliability (16 studies), and measurement error (6 studies) of wearable technologies were included. Devices presented good reliability in measuring gait speed and step count; however, the quality of the evidence supporting this was low. The evidence of measurement error in step counts was indeterminate. Moreover, only two studies obtained angular results using wearable technology. SIGNIFICANCE: Wearable technologies have demonstrated reliability in analyzing gait parameters (gait speed and step count) among post-stroke patients. However, higher-quality studies should be conducted to improve the quality of evidence and to address the measurement error assessment. Also, few studies used wearable technology to analyze angular changes during post-stroke gait.


Subjects
Gait Analysis, Gait Disorders, Neurologic, Psychometrics, Wearable Electronic Devices, Humans, Gait/physiology, Gait Analysis/instrumentation, Gait Disorders, Neurologic/diagnosis, Gait Disorders, Neurologic/etiology, Gait Disorders, Neurologic/physiopathology, Gait Disorders, Neurologic/rehabilitation, Psychometrics/instrumentation, Reproducibility of Results, Stroke/complications, Stroke/physiopathology, Stroke Rehabilitation/methods
15.
J Cyst Fibros ; 23(5): 943-946, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39147620

ABSTRACT

Clinical trials often demonstrate treatment efficacy through change in forced expiratory volume in one second (FEV1), comparing single FEV1 measurements from post- versus pre-treatment timepoints. Day-to-day variation in measured FEV1 is common for reasons such as diurnal variation and intermittent health changes, relative to a stable, monthly average. This variation can alter estimation of associations between change in FEV1 and baseline in predictable ways, through a phenomenon called regression to the mean. We quantify and explain day-to-day variation in percent-predicted FEV1 (ppFEV1) from 4 previous trials, and we present a statistical, data-driven explanation for potential bias in ceiling and floor effects due to commonly observed amounts of variation. We recommend accounting for variation when assessing associations between baseline value and change in CF outcomes in single-arm trials, and we consider possible impact of variation on conventional standards for study eligibility.
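The regression-to-the-mean phenomenon described here is easy to reproduce in a few lines: even with no true treatment effect, day-to-day noise in single ppFEV1 measurements induces a negative correlation between baseline and change. A simulation sketch with invented parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
stable = rng.normal(70, 15, n)                     # stable monthly-average ppFEV1
noise_sd = 5.0                                     # day-to-day variation of a single measurement
baseline = stable + rng.normal(0, noise_sd, n)     # single pre-treatment measurement
follow_up = stable + rng.normal(0, noise_sd, n)    # single post-treatment measurement, no true effect
change = follow_up - baseline

# despite no treatment effect, change correlates negatively with baseline (about -0.2 here)
print(f"corr(baseline, change) = {np.corrcoef(baseline, change)[0, 1]:.2f}")
```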


Subjects
Cystic Fibrosis, Humans, Forced Expiratory Volume, Cystic Fibrosis/physiopathology, Cystic Fibrosis/therapy
16.
Article in English | MEDLINE | ID: mdl-39113782

ABSTRACT

A biomarker is a measurable indicator of the severity or presence of a disease or medical condition in biomedical or epidemiological research. Biomarkers may help in the early diagnosis and prevention of diseases. Several biomarkers have been identified for many diseases, such as carbohydrate antigen 19-9 for pancreatic cancer. However, biomarkers may be measured with error for many reasons, such as specimen collection or day-to-day within-subject variability of the biomarker, among others. Measurement error in a biomarker leads to bias in the regression parameter estimates for the association of the biomarker with disease in epidemiological studies. In addition, measurement error in biomarkers may affect standard diagnostic measures used to evaluate the performance of biomarkers, such as the receiver operating characteristic (ROC) curve, the area under the ROC curve, sensitivity, and specificity. Measurement error may also affect how multiple cancer biomarkers are combined into a composite predictor for disease diagnosis. In follow-up studies, biomarkers are often collected intermittently at examination times, which may be sparse, and biomarkers are typically not observed at the event times. Joint modeling of longitudinal and time-to-event data is a valid approach to account for measurement error in the analysis of repeatedly measured biomarkers and time-to-event outcomes. In this article, we provide a literature review of existing methods to correct for measurement error in regression analysis, in diagnostic measures, and in joint modeling of longitudinal biomarkers and survival outcomes when the biomarkers are measured with error. This article is categorized under: Statistical and Graphical Methods of Data Analysis > Robust Methods; Statistical and Graphical Methods of Data Analysis > EM Algorithm; Statistical Models > Survival Models.
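One of the distortions reviewed here, attenuation of a regression coefficient by classical measurement error in the biomarker, can be illustrated with a regression-calibration-style correction using the reliability ratio; in practice the ratio must be estimated from replicate or validation measurements. A hedged sketch on simulated data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(0, 1, n)                            # true biomarker level
w = x + rng.normal(0, 0.7, n)                      # biomarker measured with classical error
y = 1.0 + 0.5 * x + rng.normal(0, 1, n)            # outcome; true slope = 0.5

beta_naive = np.polyfit(w, y, 1)[0]                # attenuated toward zero
reliability = np.var(x) / (np.var(x) + 0.7 ** 2)   # in practice estimated from replicates/validation data
beta_corrected = beta_naive / reliability          # regression-calibration-style correction
print(round(beta_naive, 2), round(beta_corrected, 2))
```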

17.
Am J Epidemiol ; 2024 Aug 03.
Article in English | MEDLINE | ID: mdl-39098825

ABSTRACT

Measuring age-specific, contextual exposures is crucial for lifecourse epidemiology research. Longitudinal residential data offers a "golden ticket" to cumulative exposure metrics and can enhance our understanding of health disparities. Residential history can be linked to myriad spatiotemporal databases to characterize environmental, socioeconomic, and policy contexts that a person experienced throughout life. However, obtaining accurate residential history is challenging in the United States due to the limitations of administrative registries and self-reports. Xu et al. (Am J Epidemiol. 2024; 193(2):348-359) detail an approach to linking residential history sourced from LexisNexis ® Accurint ® to a Wisconsin-based research cohort, offering insights into challenges with residential history collection. Researchers must analyze the magnitude of selection and misclassification biases inherent to ascertaining residential history from cohort data. A lifecourse framework can provide insights into why the frequency and distance of moves is patterned by age, birth cohort, racial/ethnic identity, socioeconomic status, and urbanicity. Historic and contemporary migration patterns of marginalized people seeking economic and political opportunities must guide interpretations of residential history data. We outline methodologic priorities for use of residential history in health disparities research, including contextualizing residential history data with determinants of residential moves, triangulating spatial exposure assessment methods, and transparently quantifying measurement error.

18.
J Autism Dev Disord ; 2024 Aug 03.
Article in English | MEDLINE | ID: mdl-39096462

ABSTRACT

Several autism knowledge assessments include "don't know" as a response option. The inclusion of this response option may lead to systematic error, such that participants' guessing rate affects the measurement of their autism knowledge. This study examines both predictors of the guessing rate for autism knowledge and predictors of autism knowledge, including guessing rate. School-based professionals (n = 396) completed the Autism Spectrum Knowledge Scale Professional Version-Revised (ASKSP-R; McClain et al., Journal of Autism and Developmental Disorders 50(3):998-1006, 2020) and the Autism Stigma and Knowledge Questionnaire (ASK-Q; Harrison et al., Journal of Autism and Developmental Disorders 47(10):3281-3295, 2017). Both assessments include "don't know" as a response option. Guessing rate was the strongest predictor of autism knowledge across both the ASKSP-R and the ASK-Q assessments. For the ASKSP-R, participants who were school psychologists, who had been practicing for more years, who had more autism-related clinical experience, and who personally knew an autistic person had a higher guessing rate. School psychologists and participants who worked with more autistic students scored higher in autism knowledge. For the ASK-Q, participants with greater self-perceived autism knowledge had a higher guessing rate. Participants with a doctorate degree, who personally knew an autistic person, and who worked with more autistic students scored higher in autism knowledge. Guessing rate can be a source of systematic error on autism knowledge assessments. Potential solutions to correct for guessing rate are examined and recommended for future use.
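One classic option for handling guessing, not necessarily the solution the authors recommend, is formula scoring: correct responses minus incorrect responses divided by (number of options minus one), with "don't know" responses left unpenalized. A small illustrative sketch:

```python
def guessing_corrected_score(n_correct: int, n_incorrect: int, n_options: int = 2) -> float:
    """Formula scoring: right minus wrong/(k - 1); 'don't know' responses are not penalized."""
    return n_correct - n_incorrect / (n_options - 1)

# two respondents with the same number correct but different willingness to guess
print(guessing_corrected_score(n_correct=30, n_incorrect=10))   # guessed often -> 20.0
print(guessing_corrected_score(n_correct=30, n_incorrect=2))    # said 'don't know' more -> 28.0
```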

19.
Curr Dev Nutr ; 8(8): 103774, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39157011

ABSTRACT

Dairy, especially cheese, is associated with high levels of greenhouse gas emissions. Accurate estimates of dairy consumption are therefore important for monitoring dietary transition targets. Previous studies found that disaggregating the meat out of composite foods significantly impacts estimates of meat consumption. Our objective was to determine whether disaggregating the dairy out of composite foods impacts estimates of dairy consumption in Scotland. Approximately 32% of foods in the UK Nutrient Databank contain some dairy. In the 2021 Scottish Health Survey, mean daily intakes of dairy with and without disaggregation of composite foods were 238.6 and 218.4 g, respectively. This translates into an 8% underestimation of dairy consumption when not accounting for dairy in composite foods. In particular, milk was underestimated by 7% and cheese and butter by 50%, whereas yogurt was overestimated by 15% and cream by 79%. Failing to disaggregate dairy from composite foods may underestimate dairy consumption.

20.
Sensors (Basel) ; 24(16)2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39204951

ABSTRACT

The best way to prevent error due to inhomogeneity is to use a new thermocouple design: the thermocouple with controlled temperature field (TCTF). It uses an auxiliary furnace to set and maintain the temperature field along the legs of the main thermocouple (TC). Error due to inhomogeneity of a TC cannot appear in a stable temperature field. However, the auxiliary furnace and the TCs that control the temperature field have errors of their own, so the temperature field along the main TC is maintained with some error. This leads to a residual error due to acquired inhomogeneity of the TCTF. We constructed mathematical models to fit the experimental drift-error data for the type K TC and used these models to study the error due to inhomogeneity of the TCTF and of a conventional type K TC under considerable changes in the temperature field. The main results of the modelling are as follows: (i) if the changes in the temperature field exceed 7 °C, the error due to inhomogeneity of the TCTF is smaller than that of the conventional TC; (ii) the maximum error due to inhomogeneity of the conventional type K TC is 10.75 °C; (iii) the maximum error due to inhomogeneity of the TCTF is below 0.2 °C.
