Results 1 - 20 of 194
1.
Accid Anal Prev ; 207: 107761, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39236440

ABSTRACT

Electric vehicles (EVs) differ significantly from their internal combustion engine (ICE) counterparts, with fewer mechanical parts, lithium-ion batteries, and differences in pedal and transmission control. These differences in vehicle operation, coupled with the proliferation of EVs on our roads, warrant an in-depth investigation into the divergent risk profiles and driving behaviour of EVs, hybrids (HYBs), and ICEs. In this unique study, we analyze a novel telematics dataset of 14,642 vehicles in the Netherlands accompanied by accident claims data. We train a logistic regression model to predict the occurrence of driver at-fault claims, where an at-fault claim covers first- and third-party damages for which the driver was at fault. Our results reveal that EV drivers are more likely to incur at-fault claims than ICE drivers despite their lower average mileage. Additionally, we investigate the financial implications of this increased at-fault claim likelihood and find that EVs incur a 6.7% increase in significant first-party damage costs compared with ICEs. When analyzing driver behaviour, we found that EVs and HYBs record fewer harsh acceleration, braking, cornering, and speeding events than ICEs. However, these reduced harsh events do not translate into a lower claims frequency for EVs. This research finds evidence of a higher frequency of accidents caused by electric vehicles, a burden that regulators, manufacturers, businesses, and the general public should consider explicitly when evaluating the cost of transitioning to alternative fuel vehicles.
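A minimal sketch of the claim-prediction step described above, using scikit-learn's logistic regression. The feature set, fuel-type coding, and synthetic data are illustrative assumptions, not the paper's actual telematics schema.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.choice([0, 1, 2], n),        # fuel type: 0=ICE, 1=HYB, 2=EV (assumed coding)
    rng.gamma(2.0, 6000.0, n),       # annual mileage, km (synthetic)
    rng.poisson(3.0, n),             # harsh-braking events per 1000 km (synthetic)
])
y = rng.binomial(1, 0.08, n)         # at-fault claim indicator (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
print("odds ratios:", np.exp(model.coef_))  # per-feature effect on claim odds
```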


Subject(s)
Accidents, Traffic ; Automobile Driving ; Humans ; Automobile Driving/statistics & numerical data ; Accidents, Traffic/statistics & numerical data ; Accidents, Traffic/prevention & control ; Netherlands ; Logistic Models ; Automobiles ; Electric Power Supplies
2.
Chest ; 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39232999

ABSTRACT

BACKGROUND: The diagnostic performance of the available risk assessment models for venous thromboembolism in critically ill patients receiving pharmacologic thromboprophylaxis is unclear. RESEARCH QUESTION: For critically ill patients receiving pharmacologic thromboprophylaxis, do risk assessment models predict who would develop venous thromboembolism or who could benefit from adjunctive pneumatic compression for thromboprophylaxis? STUDY DESIGN AND METHODS: In this post hoc analysis of the PREVENT trial, we evaluated different risk assessment models for venous thromboembolism (the ICU-VTE, Kucher, Intermountain, Caprini, Padua, and IMPROVE models). We constructed receiver operating characteristic curves and calculated the sensitivity, specificity, positive and negative predictive values, and positive and negative likelihood ratios. Additionally, we conducted subgroup analyses evaluating the effect of adjunctive pneumatic compression versus none on the study's primary outcome. RESULTS: Among 2003 patients receiving pharmacologic thromboprophylaxis, 198 (9.9%) developed venous thromboembolism. In multivariable logistic regression analysis, the independent predictors of venous thromboembolism were APACHE II score, prior immobilization, femoral central venous catheter, and invasive mechanical ventilation. All risk assessment models had areas under the curve <0.60 except the Caprini model (0.64; 95% confidence interval, 0.60-0.68). The Caprini, Padua, and Intermountain models had high sensitivity (>85%) but low specificity (<20%) for predicting venous thromboembolism, whereas the ICU-VTE, Kucher, and IMPROVE models had low sensitivity (<15%) but high specificity (>85%). The positive predictive value was low (<20%) for all studied cutoff scores, whereas the negative predictive value was mostly >90%. When the risk assessment models were used to stratify patients into high- versus low-risk subgroups, the effect of adjunctive pneumatic compression versus pharmacologic prophylaxis alone did not differ across the subgroups (p for interaction >0.05). INTERPRETATION: The risk assessment models for venous thromboembolism performed poorly in critically ill patients receiving pharmacologic thromboprophylaxis. None of the models identified a subgroup of patients who might benefit from adjunctive pneumatic compression.
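As a sketch of the cutoff metrics reported above (sensitivity, specificity, predictive values, likelihood ratios), the helper below evaluates one risk score at one threshold; the toy scores and the cutoff of 5 are invented for illustration.

```python
import numpy as np

def diagnostic_metrics(y_true, score, cutoff):
    """Standard 2x2-table metrics for a score dichotomized at a cutoff."""
    pred = score >= cutoff
    tp = np.sum(pred & (y_true == 1)); fp = np.sum(pred & (y_true == 0))
    fn = np.sum(~pred & (y_true == 1)); tn = np.sum(~pred & (y_true == 0))
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    return {"sensitivity": sens, "specificity": spec,
            "PPV": tp / (tp + fp), "NPV": tn / (tn + fn),
            "LR+": sens / (1 - spec), "LR-": (1 - sens) / spec}

y = np.array([1, 0, 0, 1, 0, 1, 0, 0, 1, 0])          # VTE outcomes (toy)
caprini = np.array([7, 3, 5, 8, 2, 6, 4, 3, 9, 5])    # Caprini scores (toy)
print(diagnostic_metrics(y, caprini, cutoff=5))
```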

3.
Front Mol Biosci ; 11: 1397281, 2024.
Article in English | MEDLINE | ID: mdl-39184152

ABSTRACT

Background: Mitochondria have long been considered to be closely related to the occurrence and development of malignant tumors. However, a bioinformatic analysis of mitochondria in lung adenocarcinoma (LUAD) has not yet been reported. Methods: In the present study, we constructed a novel and reliable algorithm, comprising a consensus cluster analysis and a risk assessment model, to predict survival outcomes and tumor immunity for patients with terminal LUAD. Results: Patients with LUAD were classified into three clusters, and patients in cluster 1 exhibited the best survival outcomes. Patients in cluster 3 had the highest expression of PDL1 (encoding programmed cell death 1 ligand 1) and HAVCR2 (encoding hepatitis A virus cellular receptor 2), and the highest tumor mutation burden (TMB). In the risk assessment model, patients in the low-risk group tended to have significantly better survival outcomes. Furthermore, the risk score combined with stage could act as a reliable independent prognostic indicator for patients with LUAD. The prognostic signature is also a novel and effective biomarker for selecting anti-tumor drugs: low-risk patients tended to have higher expression of CTLA4 (encoding cytotoxic T-lymphocyte associated protein 4) and HAVCR2, and patients in the high-risk group were more sensitive to cisplatin, docetaxel, erlotinib, gemcitabine, and paclitaxel, while low-risk patients would probably benefit more from gefitinib. Conclusion: We constructed a novel and reliable algorithm comprising a consensus cluster analysis and risk assessment model to predict survival outcomes, which functions as a reliable guide for anti-tumor drug treatment in patients with terminal LUAD.

4.
Clin Appl Thromb Hemost ; 30: 10760296241271406, 2024.
Article in English | MEDLINE | ID: mdl-39215513

ABSTRACT

BACKGROUND: Currently, no universally accepted, standardized VTE risk assessment model (RAM) is specifically designed for critically ill patients. Although the ICU-venous thromboembolism (ICU-VTE) RAM was initially developed in 2020, it lacks prospective external validation. OBJECTIVES: To evaluate the predictive performance of the ICU-VTE RAM for VTE occurrence in mixed medical-surgical ICU patients. METHODS: We prospectively enrolled adult patients in the ICU. The ICU-VTE score and the Caprini or Padua score were calculated at admission, and the incidence of in-hospital VTE was investigated. The performance of the ICU-VTE RAM was evaluated and compared with that of the Caprini or Padua RAM using receiver operating characteristic curves. RESULTS: We included 269 patients (median age: 70 years; 62.5% male). Eighty-three (30.9%) patients experienced inpatient VTE. The AUC of the ICU-VTE RAM was 0.743 (95% CI, 0.682-0.804, P < 0.001) for mixed medical-surgical ICU patients. The performance of the ICU-VTE RAM was superior to that of the Padua RAM (AUC: 0.727 vs 0.583, P < 0.001) in critically ill medical patients and the Caprini RAM (AUC: 0.774 vs 0.617, P = 0.128) in critically ill surgical patients, although the latter comparison was not statistically significant. CONCLUSIONS: The ICU-VTE RAM may be a practical and valuable tool for identifying and stratifying VTE risk in mixed medical-surgical critically ill patients, aiding in the management and prevention of VTE complications.


Subject(s)
Critical Illness ; Intensive Care Units ; Venous Thromboembolism ; Humans ; Male ; Female ; Aged ; Risk Assessment/methods ; Middle Aged ; Prospective Studies ; Adult ; Risk Factors
5.
Environ Sci Pollut Res Int ; 31(37): 49744-49756, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39080173

ABSTRACT

Regular groundwater quality monitoring in resource-constrained regions presents formidable challenges in terms of funding, testing facilities, and manpower, necessitating the development of easily implementable monitoring techniques. This study proposes a copula-based risk assessment model utilizing easily measurable indicators (e.g., turbidity, alkalinity, pH, total dissolved solids (TDS), conductivity) to monitor contaminants in groundwater that are otherwise difficult to measure (i.e., iron, nitrate, sulfate, fluoride, etc.). Preliminary correlations between the indicators and the target contaminants were identified using the Pearson coefficient. The best representative univariate distributions for these pairs were selected using the Akaike Information Criterion (AIC) and used to formulate the copula model. Validation against observed data showed the model's high accuracy, supported by consistent Kendall tau correlation coefficients. Through this model, conditional probabilities of the contaminants not exceeding the permissible limits set by the Bureau of Indian Standards (BIS) were calculated from indicator concentrations. Notably, an inverse correlation between iron concentration and conductivity was noted, with the likelihood of iron exceeding BIS limits decreasing from 90 to 50% as conductivity rose from 500 to 2000 micromhos/cm. TDS emerged as a pivotal indicator for nitrate and sulfate concentrations, with the probability of sulfate surpassing 10 mg/l decreasing from 75 to 25% as TDS increased from 250 to 750 mg/l. Likewise, the probability of nitrate exceeding 1 mg/l decreased from 90 to 60% with TDS levels reaching 1500 mg/l. Furthermore, a 63% probability of fluoride concentrations remaining below 1 mg/l was observed at turbidity levels of 0-10 NTU. These findings hold significant implications for policymakers and researchers, since the model can provide crucial insights into the risks of contaminants exceeding permissible limits, facilitating the development of efficient monitoring and management strategies to ensure safe drinking water access for vulnerable populations.
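A hedged sketch of how such conditional probabilities can be computed. The study selects marginals by AIC per indicator-contaminant pair; here a Gaussian copula with lognormal marginals is assumed for brevity, and the data and the 1 mg/l limit are synthetic placeholders rather than the paper's fitted values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
cond = rng.lognormal(6.5, 0.5, 300)   # conductivity, micromhos/cm (synthetic)
iron = np.exp(3.0 - 0.4 * np.log(cond)) * rng.lognormal(0, 0.3, 300)  # inverse link

fx = stats.lognorm(*stats.lognorm.fit(cond, floc=0))  # marginal for indicator
fy = stats.lognorm(*stats.lognorm.fit(iron, floc=0))  # marginal for contaminant
z1, z2 = stats.norm.ppf(fx.cdf(cond)), stats.norm.ppf(fy.cdf(iron))
rho = np.corrcoef(z1, z2)[0, 1]                       # copula correlation

def p_below_limit(x_indicator, limit):
    """P(contaminant <= limit | indicator = x) under the Gaussian copula."""
    zx = stats.norm.ppf(fx.cdf(x_indicator))
    zl = stats.norm.ppf(fy.cdf(limit))
    return stats.norm.cdf((zl - rho * zx) / np.sqrt(1 - rho**2))

for c in (500, 1000, 2000):
    print(c, round(p_below_limit(c, limit=1.0), 2))   # limit value illustrative
```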


Subject(s)
Environmental Monitoring ; Groundwater ; Water Pollutants, Chemical ; Groundwater/chemistry ; Environmental Monitoring/methods ; Risk Assessment ; Water Pollutants, Chemical/analysis ; Nitrates/analysis ; Sulfates/analysis
6.
Thromb J ; 22(1): 68, 2024 Jul 24.
Article in English | MEDLINE | ID: mdl-39049082

ABSTRACT

PURPOSE: This study aims to investigate the potential role of the Caprini risk assessment model (RAM) in predicting the risk of venous thromboembolism (VTE) in patients undergoing total hip or knee arthroplasty (THA/TKA). No national study has investigated the role of the Caprini RAM after primary THA/TKA. METHODS: Data from the 2019 National Sample of the Healthcare Cost and Utilization Project (HCUP) were utilized for this study. The dataset consisted of 229,134 patients who underwent primary THA/TKA. Deep vein thrombosis (DVT) and pulmonary embolism (PE) were considered VTE events. The incidence of thrombosis was calculated for different Caprini scores, and the association of each Caprini indicator with VTE events was evaluated using a forest plot. RESULTS: The prevalence of VTE after primary THA/TKA in the U.S. population in 2019 was 4.7 cases per 1000 patients. Age, body mass index (BMI), and Caprini score showed a positive association with the risk of VTE (P < 0.05). Receiver operating characteristic (ROC) curve analysis indicated that a Caprini score of 9.5 had a sensitivity of 47.2% and a specificity of 82.7%, with an area under the curve (AUC) of 0.693 (95% CI, 0.677-0.710). The highest Youden index was 0.299. Multivariate logistic regression analysis revealed that malignancy, varicose veins, a positive blood test for thrombophilia, history of thrombosis, COPD, hip fracture, blood transfusion, and age were significant risk factors for VTE. Based on these findings, a new risk stratification system incorporating the Caprini score was proposed. CONCLUSIONS: Although the Caprini score does not appear to be a good predictive model for VTE after primary THA/TKA, a new risk stratification for the Caprini score is proposed to increase its usefulness.
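A short sketch of the ROC/Youden analysis reported above: sweep thresholds over a score, then pick the cutoff maximizing J = sensitivity + specificity - 1. The event rate and score distributions are synthetic stand-ins, not HCUP data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)
y = rng.binomial(1, 0.005, 20000)          # VTE ~ a few per 1000 (illustrative)
score = rng.normal(6, 2, 20000) + 2.5 * y  # higher Caprini score when VTE occurs

fpr, tpr, thresholds = roc_curve(y, score)
j = tpr - fpr                              # Youden index at each threshold
best = np.argmax(j)
print(f"AUC={roc_auc_score(y, score):.3f}  cutoff={thresholds[best]:.1f}  "
      f"J={j[best]:.3f}  sens={tpr[best]:.3f}  spec={1 - fpr[best]:.3f}")
```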

7.
J Clin Med ; 13(13)2024 Jun 30.
Article in English | MEDLINE | ID: mdl-38999420

ABSTRACT

Introduction: Hospital-acquired venous thromboembolisms (HA-VTEs) place a significant health burden on patients and a financial burden on hospitals due to reimbursement penalties. VTE prophylaxis at our institute was ordered through an order set based on healthcare professionals' perceived level of risk. However, the use of standardized risk assessment models is recommended by multiple professional societies, and integrating decision support tools (DSTs) based on these models has been shown to increase the administration of appropriate deep vein thrombosis (DVT) prophylaxis. Nonetheless, such scoring systems are not inherently flawless, and their integration into the EMR as a mandatory step risks healthcare professional fatigue and burnout. We conducted a study to evaluate the incidence of HA-VTE and length of stay pre- and post-implementation of a DST. Methods: We conducted a retrospective, pre-post-implementation observational study at a tertiary medical center after implementing a mandatory DST. The DST used Padua scores for medical patients and Caprini scores for surgical patients. Patients were identified through ICD-10 codes, and outcomes were collected from electronic charts. Healthcare professionals were surveyed anonymously, and responses were stored securely. Statistical analysis was conducted using R (version 3.4.3). Results: A total of 343 patients developed HA-VTE during the study period. Of these, 170 patients developed HA-VTE in the 9 months following implementation of the DST, while 173 patients were identified in the 9 months preceding it. There was no statistically significant difference in mean HA-VTE per 1000 discharges per month pre- and post-implementation (4.4 (SD 1.6) compared with 4.6 (SD 1.2), confidence interval [CI] -1.6 to 1.2, p = 0.8). The DST was used in 73% of all HA-VTE cases over the first 6 months of implementation. The hospital length of stay (LOS) was 14.2 (SD 1.9) days prior to implementation and 14.1 (SD 1.6) days afterwards. No statistically significant change in readmission rates was noted (8.8% (SD 2.6) prior to implementation and 15.53% (SD 9.6) afterwards, CI -14.27 to 0.74, p = 0.07). Of the 56 healthcare professionals who answered the survey, 84% (n = 47) reported being dissatisfied or extremely dissatisfied with the DST, while 91% (n = 51) reported that it slowed them down. Conclusions: There were no apparent changes in the prevalence of HA-VTE, length of stay, or readmission rates when VTE prophylaxis was mandated through the DST compared with the prior model, which used order sets based on perceived risk. Further studies are needed to evaluate the current risk assessment models and improve healthcare professionals' satisfaction with DSTs.

8.
Front Neurol ; 15: 1370029, 2024.
Article in English | MEDLINE | ID: mdl-38872827

ABSTRACT

Introduction: Research indicates that individuals experiencing hemorrhagic stroke face a greater likelihood of developing lower extremity deep vein thrombosis (DVT) than those with ischemic stroke. This study aimed to assess the predictive capacity of the Caprini risk assessment model (RAM), D-dimer (D-D) levels, and fibrinogen (FIB) levels for lower extremity DVT in patients with spontaneous intracerebral hemorrhage (sICH). Methodology: This study involved a retrospective analysis of medical records from all sICH patients admitted to Shanghai General Hospital between June 2020 and June 2023. Within 48 h of admission, patients underwent routine screening via color Doppler ultrasonography (CDUS). Patients were categorized into the DVT and control groups based on the occurrence of lower extremity DVT during hospitalization. Differences in Caprini RAM scores, D-dimer levels, and FIB levels between the two groups were compared, and the sensitivity and specificity of combined Caprini RAM, peripheral blood D-dimer, and FIB levels in predicting lower extremity DVT in sICH patients were analyzed. Receiver operating characteristic (ROC) curves were used to assess overall predictive accuracy. Results: Of the 842 sICH patients studied, 225 had DVT and 617 did not. Caprini RAM scores, D-D levels, and FIB levels were significantly higher in the DVT group than in the control group (P < 0.05). Sensitivity values for Caprini RAM, D-D, and FIB levels in predicting lower extremity DVT in sICH patients were 0.920, 0.893, and 0.680, respectively, while specificities were 0.840, 0.680, and 0.747, respectively. ROC curve analysis demonstrated an area under the curve (AUC) of 0.947 for combined DVT prediction, with 97.33% sensitivity and 92.00% specificity, indicating superior predictive value compared with the individual application of Caprini RAM, D-D, or FIB levels. Conclusion: The combined use of Caprini RAM, D-D, and FIB levels holds significant clinical relevance in predicting lower extremity DVT in sICH patients.

9.
Cell Signal ; 121: 111237, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38810861

ABSTRACT

BACKGROUND: The study aimed to investigate the role of copper death-related genes (CRGs) in bladder cancer (BC) for improved prognosis assessment. METHODS: Multi-omics techniques were utilized to analyze CRG expression in BC tissues from TCGA and GEO databases. Consensus clustering categorized patients into molecular subtypes based on clinical characteristics and immune cell infiltration. RESULTS: An innovative risk assessment model identified eight critical genes associated with BC risk. In vitro and in vivo experiments validated LIPT1's significant impact on copper-induced cell death, proliferation, migration, and invasion in BC. CONCLUSION: This multi-omics analysis elucidates the pivotal role of CRGs in BC progression, suggesting enhanced risk assessment through molecular subtype categorization and identification of key genes like LIPT1. Insights into these mechanisms offer the potential for improved diagnosis and treatment strategies for BC patients.


Subject(s)
Copper ; Gene Expression Profiling ; Urinary Bladder Neoplasms ; Urinary Bladder Neoplasms/genetics ; Urinary Bladder Neoplasms/pathology ; Humans ; Gene Expression Regulation, Neoplastic ; Risk Assessment ; Animals ; Cell Line, Tumor ; Cell Proliferation/genetics ; Cell Movement/genetics ; Female ; Male ; Transcriptome ; Mice
10.
J Nepal Health Res Counc ; 21(4): 587-592, 2024 Mar 31.
Article in English | MEDLINE | ID: mdl-38616587

ABSTRACT

BACKGROUND: Although rare, deep vein thrombosis is a potentially life-threatening complication of knee arthroscopy. There is scant literature analysing deep vein thrombosis after arthroscopy in Nepal. This study aimed to identify the prevalence of deep vein thrombosis in patients undergoing knee arthroscopy without postoperative chemoprophylaxis at 2 weeks and 6 weeks, and to estimate the risk of deep vein thrombosis in these patients using the Caprini Risk Assessment Model. METHODS: This prospective observational study was conducted at the AKB center, B and B Hospital, Gwarko, Lalitpur, over a period of 16 months. All patients who underwent arthroscopic knee surgery and fulfilled the inclusion criteria were included in the study. The primary outcome measure was the prevalence of deep vein thrombosis as diagnosed by compression color-coded ultrasonography of the popliteal and calf veins at 2 weeks and 6 weeks postoperatively. The secondary outcome measure was the prevalence of deep vein thrombosis in the risk groups defined by the Caprini Risk Assessment Model. RESULTS: Of the 612 patients who underwent arthroscopic knee surgery during the study period, 2 patients (0.33%) developed deep vein thrombosis by the 6-week follow-up, as diagnosed by ultrasonography of the popliteal and calf veins. The prevalence was 0.33% (1 in 307) in the high-risk group and 5.88% (1 in 17) in the very high-risk group. CONCLUSIONS: We found a low prevalence of deep vein thrombosis without chemoprophylaxis following knee arthroscopy. The prevalence was higher in the very high-risk group, so close monitoring of such patients during follow-up is recommended.


Subject(s)
Venous Thromboembolism ; Venous Thrombosis ; Humans ; Arthroscopy/adverse effects ; Venous Thromboembolism/epidemiology ; Venous Thromboembolism/etiology ; Venous Thromboembolism/prevention & control ; Nepal/epidemiology ; Veins ; Venous Thrombosis/epidemiology ; Venous Thrombosis/etiology ; Venous Thrombosis/prevention & control
11.
Clin Appl Thromb Hemost ; 30: 10760296241238210, 2024.
Article in English | MEDLINE | ID: mdl-38562103

ABSTRACT

INTRODUCTION: Postoperative venous thromboembolism (VTE) is a frequently occurring complication among glioma patients. Several risk assessment models (RAMs), including the Caprini RAM, the IMPROVE Risk Score, the IMPROVEDD VTE Risk Score, and the Padua Prediction Score, have not been validated in the glioma patient population. The purpose of this study was to assess the predictive accuracy of established VTE risk scales in patients with glioma. MATERIALS AND METHODS: A single-center, retrospective, observational cohort study was conducted on 265 glioma patients who underwent surgery at the Almazov Medical and Research Centre between 2021 and 2022. VTE detection followed current clinical guidelines. Threshold values for the Caprini, IMPROVE, IMPROVEDD, and Padua scales were determined using ROC analysis, with cumulative weighting for sensitivity and specificity in predicting VTE development. The areas under the ROC curves (AUC) were calculated and compared using the DeLong test. RESULTS: The area under the curve for the Caprini risk assessment model was 80.41, while that of the IMPROVEDD VTE risk score was 75.38, the Padua prediction score 76.9, and the IMPROVE risk score 72.58. No significant differences were observed in the AUC values of any of the scales. The positive predictive values of all four scales were low: 50 (28-72) for Caprini, 48 (28-69) for IMPROVEDD VTE, 50 (30-70) for Padua, and 64 (35-87) for IMPROVE. No significant differences were found in PPV, NPV, positive likelihood ratio, or negative likelihood ratio among the analyzed scales. CONCLUSIONS: The Caprini Risk Assessment Model, the IMPROVE Risk Score, the IMPROVEDD VTE Risk Score, and the Padua Prediction Score exhibit acceptable specificity and sensitivity for glioma patients. However, their low positive predictive ability, coupled with the complexity of interpretation, limits their utility in neurosurgical practice.
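The paper compares AUCs with the DeLong test (implemented, e.g., in R's pROC package). As a dependency-light stand-in, the sketch below contrasts two scores on the same patients with a paired bootstrap of the AUC difference; all data are synthetic.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 265
y = rng.binomial(1, 0.2, n)                       # VTE outcomes (synthetic)
caprini = y * rng.normal(8, 2, n) + (1 - y) * rng.normal(5, 2, n)
padua   = y * rng.normal(6, 2, n) + (1 - y) * rng.normal(5, 2, n)

obs = roc_auc_score(y, caprini) - roc_auc_score(y, padua)
diffs = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                   # resample patients with replacement
    if len(np.unique(y[idx])) < 2:
        continue                                  # need both classes for an AUC
    diffs.append(roc_auc_score(y[idx], caprini[idx])
                 - roc_auc_score(y[idx], padua[idx]))
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"AUC difference {obs:.3f}, 95% bootstrap CI ({lo:.3f}, {hi:.3f})")
```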


Subject(s)
Venous Thromboembolism ; Humans ; Retrospective Studies ; Venous Thromboembolism/diagnosis ; Risk Assessment/methods ; Risk Factors ; Cohort Studies
12.
Heliyon ; 10(7): e28756, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38601665

ABSTRACT

Various health risk assessment models have been developed to evaluate occupational pesticide exposure in China. However, there has been limited investigation into the relationship between health risks and pesticide spraying in orchards. In this study, we analyzed the pesticide exposure of applicators spraying with a stretcher-mounted sprayer in orchards located in four different climatic regions. Unit exposure (UE) for all garments showed a right-skewed distribution, with gloves and shins accounting for the highest proportion of dermal pesticide exposure. We observed little difference in dermal and inhalation UE levels between apple and citrus orchards, except for pesticide exposure levels on wipes and faces. While 57% of the variance in the inhalation UE distribution was attributed to clustering and location effects, no significant differences were observed in dermal exposure levels. We evaluated the impact of different levels of protective clothing on pesticide exposure, according to applicators' working habits in China. Our findings revealed that improved levels of protection significantly reduced dermal exposure to pesticides, particularly wearing gloves during spraying with a stretcher-mounted sprayer. Based on our empirical data, we used a simple random sampling model and an intercept-only lognormal mixed model to estimate dermal and inhalation exposure levels. The estimated dermal UE was accurate to within 3-fold with 95% confidence, and half of the estimated inhalation UE values were acceptable according to the fold relative accuracy (fRA). Our established and verified dermal and inhalation UE statistics can be used to evaluate the potential pesticide exposure of applicators spraying in orchards with a stretcher-mounted sprayer.
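A brief sketch of the fold-relative-accuracy check mentioned above, under the common definition fRA = max(estimate/observed, observed/estimate), so an estimate is "within 3-fold" when fRA <= 3. The exposure values are synthetic, and this reading of the paper's fRA criterion is an assumption.

```python
import numpy as np

rng = np.random.default_rng(4)
observed = rng.lognormal(0.0, 1.0, 200)               # dermal UE, arbitrary units
estimated = observed * rng.lognormal(0.0, 0.45, 200)  # estimate with lognormal error

fra = np.maximum(estimated / observed, observed / estimated)
print("share within 3-fold:", np.mean(fra <= 3.0))
print("95th percentile fold error:", np.quantile(fra, 0.95))
```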

13.
Clin Appl Thromb Hemost ; 30: 10760296241247205, 2024.
Article in English | MEDLINE | ID: mdl-38632943

ABSTRACT

To externally validate risk assessment models (RAMs) for venous thromboembolism (VTE) in multicenter internal medicine inpatients, we prospectively enrolled 595 internal medicine inpatients (310 VTE patients, 285 non-VTE patients) from Beijing Shijitan Hospital, Beijing Chaoyang Hospital, and the respiratory department of Beijing Tsinghua Changgeng Hospital from January 2022 to December 2022. The predictive abilities of the Caprini RAM, the Padua RAM, the International Medical Prevention Registry on Venous Thromboembolism (IMPROVE) RAM, and the Shijitan (SJT) RAM were compared. This study included a total of 595 internal medicine inpatients: 242 (40.67%) in the respiratory department, 17 (2.86%) in the respiratory intensive care unit, 49 (8.24%) in the neurology department, 34 (5.71%) in the intensive care unit, 26 (4.37%) in the geriatric department, 22 (3.70%) in the emergency department, 71 (11.93%) in the nephrology department, 63 (10.59%) in the cardiology department, 24 (4.03%) in the hematology department, 6 (1.01%) in the traditional Chinese medicine department, 9 (1.51%) in the rheumatology department, 7 (1.18%) in the endocrinology department, 14 (2.35%) in the oncology department, and 11 (1.85%) in the gastroenterology department. Multivariate logistic regression analysis showed that among internal medicine inpatients, age > 60 years, heart failure, nephrotic syndrome, tumors, history of VTE, and elevated D-dimer were significantly correlated with the occurrence of VTE (P < .05). The incidence of VTE increased with increasing D-dimer. The SJT RAM (AUC = 0.80 ± 0.03) performed better than the Caprini RAM (AUC = 0.74 ± 0.03), the Padua RAM (AUC = 0.72 ± 0.03), and the IMPROVE RAM (AUC = 0.52 ± 0.03) (P < .05). The sensitivity and Youden index of the SJT RAM were higher than those of the Caprini, Padua, and IMPROVE RAMs (P < .05), but specificity did not differ significantly among the 4 models (P > .05). The SJT RAM, derived from general hospitalized Chinese patients, showed effective and better predictive ability for internal medicine inpatients at risk of VTE.


Subject(s)
Venous Thromboembolism ; Humans ; Aged ; Middle Aged ; Venous Thromboembolism/etiology ; Risk Factors ; Inpatients ; Retrospective Studies ; Risk Assessment
14.
Huan Jing Ke Xue ; 45(2): 1026-1037, 2024 Feb 08.
Article in Chinese | MEDLINE | ID: mdl-38471940

ABSTRACT

Quantifying the risk of soil heavy metal sources can identify the main pollution sources and provide a scientific basis for reducing the ecological and human health risks of soil heavy metals. Taking shallow soil in a Pb-Zn mine watershed in northern Guangxi as the study area, ecological and human health risk assessments were conducted using a potential ecological risk index (RI) and human health risk assessment (HRA), and source apportionment of soil heavy metals was performed using the absolute principal component score-multiple linear regression (APCS-MLR) receptor model and a random forest (RF) model. Then, a combined risk assessment model, consisting of RI, HRA, and APCS-MLR, was used to quantify the risk of soil heavy metal sources. The results showed that the contents of Pb, Zn, Cu, and Cd exceeded the environmental screening values for agricultural land, with mean values of 342.77, 693.34, 61.27, and 3.08 mg·kg-1, respectively, indicating a certain degree of contamination. Pb, Cr, and As were the main health risk impact factors, with higher health risks for children than for adults. Three sources were identified: mining activities (Source I), soil parent material and original formation (Source II), and unknown sources. Pb, Zn, Cu, and Cd were mainly derived from Source I, while Cr and As were controlled by the unknown sources and Source II. The source risk assessment results indicated that the potential ecological and non-carcinogenic risks were mainly from Source I and Source II, whereas the carcinogenic risk was mainly from the unknown sources. The unknown sources accounted for a high proportion of both the source apportionment and the risk assessment and should be further researched to provide a scientific basis for soil heavy metal control. The combined risk assessment model based on source analysis, focusing on the risk characteristics of different sources, can accurately identify high-risk pollution sources and is a more reasonable and reliable risk assessment method.
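A condensed sketch of the APCS-MLR step named above: principal components are extracted from standardized concentrations, converted to absolute scores by subtracting the score of an artificial zero-concentration sample, and each metal is then regressed on those scores so the fitted terms apportion source contributions. The synthetic matrix and the choice of two components are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
X = rng.lognormal(3, 0.6, (120, 6))        # 120 samples x 6 metals, mg/kg (synthetic)
mu, sd = X.mean(0), X.std(0)
Z = (X - mu) / sd                          # standardized concentrations

pca = PCA(n_components=2).fit(Z)           # two assumed sources
scores = pca.transform(Z)
z0 = (np.zeros(6) - mu) / sd               # artificial zero-concentration sample
apcs = scores - pca.transform(z0[None, :]) # absolute principal component scores

for j in range(6):
    reg = LinearRegression().fit(apcs, X[:, j])
    contrib = reg.coef_ * apcs.mean(0)     # mean contribution of each source
    print(f"metal {j}: intercept={reg.intercept_:.1f}, "
          f"source means={contrib.round(1)}")
```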


Subject(s)
Metals, Heavy ; Soil Pollutants ; Adult ; Child ; Humans ; Soil ; Environmental Monitoring ; Cadmium ; Lead ; Soil Pollutants/analysis ; China ; Risk Assessment ; Metals, Heavy/analysis
15.
Digit Health ; 10: 20552076241241381, 2024.
Article in English | MEDLINE | ID: mdl-38550266

ABSTRACT

Background: Hyperuricemia is a common complication of type 2 diabetes mellitus and can lead to serious consequences such as gout and kidney disease. Methods: Patients with type 2 diabetes mellitus from six communities in Fuzhou were recruited from June to December 2022. Questionnaires, physical examinations, and laboratory tests were used to collect data. Variable screening was performed using univariate and multivariate stepwise regression, least absolute shrinkage and selection operator (LASSO) regression, and Boruta feature selection. The dataset was divided into a training-testing set (80%) and an independent validation set (20%), and six machine learning models were built and validated. Results: A total of 8243 patients with type 2 diabetes mellitus were included in this study. According to Occam's razor, the LASSO regression algorithm was determined to be the optimal variable selection method, and nine variables were identified as parameters for the risk assessment model. The absence of diabetes medication and elevated fasting blood glucose were negatively correlated with the risk of hyperuricemia, whereas the seven other variables were positively associated with hyperuricemia risk in patients with type 2 diabetes mellitus. Among the six machine learning models, the artificial neural network (ANN) model performed best, achieving an area under the curve of 0.736, accuracy of 68.3%, sensitivity of 65.0%, specificity of 72.2%, precision of 73.6%, and F1-score of 69.0%. Conclusions: We developed an ANN model to better evaluate the risk of hyperuricemia in the type 2 diabetes population. Women with type 2 diabetes should pay particular attention to their uric acid levels, and patients with type 2 diabetes should not neglect their obesity level, blood pressure, kidney function, and lipid profile during regular medical check-ups, to avoid the risks associated with comorbid type 2 diabetes and hyperuricemia.
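A sketch of the screening-then-classifier pipeline described above: LASSO retains a subset of predictors (treating the binary label as numeric, a common screening shortcut), and a small neural network is trained on the retained variables. The synthetic features stand in for the study's questionnaire and laboratory variables.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LassoCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=8000, n_features=30, n_informative=9,
                           random_state=0)
X = StandardScaler().fit_transform(X)

lasso = LassoCV(cv=5, random_state=0).fit(X, y)   # screen variables
keep = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)
print(f"{keep.size} variables retained by LASSO")

X_tr, X_te, y_tr, y_te = train_test_split(X[:, keep], y, test_size=0.2,
                                          random_state=0)
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)
print("validation AUC:",
      round(roc_auc_score(y_te, ann.predict_proba(X_te)[:, 1]), 3))
```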

16.
Sci Total Environ ; 926: 171815, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38513859

ABSTRACT

Typhoons can cause substantial casualties and economic losses, and effective prevention strategies require comprehensive risk assessment. Nevertheless, existing studies on comprehensive typhoon risk assessment are characterized by coarse spatial scales, limited incorporation of geographic big data, and rare consideration of disaster mitigation capacity. To address these problems, this study combined multi-source geographic big data to develop the Comprehensive Risk Assessment Model (CRAM). The model integrated 17 indicators from 4 categories of factors: exposure, vulnerability, hazard, and mitigation capacity. A subjective-objective combination weighting method was introduced to generate the indicator weights, and a comprehensive risk index of typhoon disasters was calculated for 987 counties along China's coastal regions. Results revealed pronounced spatial heterogeneity in comprehensive typhoon risk, with an overall decreasing trend from the southeast coastal areas toward the northwest inland territories. Overall, 61.7% of the counties exhibited a medium-to-high level of comprehensive risk, and very-high-risk counties were predominantly concentrated in the Shandong Peninsula, Yangtze River Delta, Hokkien Golden Triangle, Greater Bay Area, Leizhou Peninsula, and Hainan Province, mainly due to high exposure and hazard factors. The correlation coefficient between the risk assessment results and typhoon-induced direct economic losses reached 0.702, indicating the effectiveness and reliability of the CRAM. Meanwhile, indicators derived from the intrinsic attributes of typhoons and from geographic big data were of pronounced importance, and regional mitigation capacity should be improved. Our proposed method can help to scientifically understand spatial patterns of comprehensive risk and mitigate the effects of typhoon disasters in China's coastal regions.
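A small sketch of one way to implement the subjective-objective combination weighting named above: entropy weights (objective) blended with expert weights (subjective) before aggregating indicators into a risk index. The indicator matrix, expert weights, and blending coefficient alpha are all illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.random((987, 4))                   # counties x indicators, min-max scaled

P = X / X.sum(axis=0)                      # each indicator as a proportion
k = 1.0 / np.log(X.shape[0])
entropy = -k * (P * np.log(P + 1e-12)).sum(axis=0)
w_obj = (1 - entropy) / (1 - entropy).sum()  # entropy (objective) weights

w_sub = np.array([0.3, 0.3, 0.25, 0.15])   # expert-assigned weights (illustrative)
alpha = 0.5                                # subjective/objective blend (assumed)
w = alpha * w_sub + (1 - alpha) * w_obj
risk_index = X @ w                         # comprehensive risk per county
print("combined weights:", w.round(3), " max county risk:", risk_index.max().round(3))
```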

17.
J Hazard Mater ; 469: 133549, 2024 May 05.
Article in English | MEDLINE | ID: mdl-38447362

ABSTRACT

Particle size is a critical factor in assessing human exposure risk, as fine particles are generally more hazardous than larger coarse particles. However, how particle composition influences human health risk is only poorly understood, as different studies have utilised different definitions and, as a consequence, there is no consensus. Here, with a new methodology that takes into account each size fraction load (%GSFload) and metal bioaccessibility, we classify which specific particle size can reliably estimate the human exposure risk of lead and other metals. We then validate these findings by correlating the metals in each size fraction with those in human blood, hair, and crop grain and with different anthropogenic sources. Although health risks are linked to metal concentrations, which increase as particle size decreases, the adjusted risk for each size fraction differs when %GSFload is introduced into the risk assessment program. When using a single size fraction (250-50 µm, 50-5 µm, 5-1 µm, and < 1 µm) for comparison, the risk may be either over- or under-estimated. However, by considering the bulk material and adjusting the risk, it is possible to obtain results closer to real scenarios, as validated through human responses and evidence from crops. Fine size fractions (< 5 µm) bearing mineral crystallites or aggregates (CaCO3, Fe3O4, Fe2O3, CaHPO4, Pb5(PO4)3Cl) alter the accumulation, chemical speciation, and fate of metals in soil/dust/sediment from the different sources. Lead loaded in the < 50 µm size fraction has a significantly higher positive association with the risk-receptor biomarkers (BLLs, hair Pb, corn Pb, and crop Pb) than the other size fractions (bulk and 50-250 µm). Thus, we conclude that the < 50 µm fraction is likely to be the most reliable fraction to include in a risk assessment program. This methodology is a valuable instrument for future research, highlighting the importance of choosing suitable size fractions and attaining improved accuracy in risk assessment results that can be effectively compared.


Subject(s)
Metals, Heavy ; Soil Pollutants ; Humans ; Lead ; Metals, Heavy/analysis ; Particle Size ; Soil/chemistry ; Dust/analysis ; Risk Assessment ; Soil Pollutants/analysis ; Environmental Monitoring
18.
J Clin Hypertens (Greenwich) ; 26(3): 274-285, 2024 03.
Article in English | MEDLINE | ID: mdl-38341620

ABSTRACT

Electrocardiography (ECG) is an accessible diagnostic tool for screening patients with hypertensive left ventricular hypertrophy (LVH). However, its diagnostic sensitivity is low, with a high probability of false negatives. Thus, this study aimed to establish a clinically useful nomogram to supplement the assessment of LVH in patients with hypertension but without ECG-LVH according to Cornell product criteria (a low-risk hypertensive population). A cross-sectional dataset was used for model construction and divided into development (n = 2906) and verification (n = 1447) datasets. A multivariable logistic regression risk model and nomogram were developed after screening for risk factors. Of the 4353 low-risk hypertensive patients, 673 (15.4%) had LVH diagnosed by echocardiography (Echo-LVH). Eleven risk factors were identified: hypertension awareness, duration of hypertension, age, sex, high waist-hip ratio, education level, tea consumption, hypochloremia, and other ECG-LVH diagnostic criteria (Sokolow-Lyon, Sokolow-Lyon product, and Peguero-Lo Presti). For the development and validation datasets, the areas under the curve were 0.724 (sensitivity = 0.606) and 0.700 (sensitivity = 0.663), respectively; after including blood pressure, they were 0.735 (sensitivity = 0.734) and 0.716 (sensitivity = 0.718), respectively. This novel nomogram had good predictive ability and may be used to assess Echo-LVH risk in patients with hypertension but without ECG-LVH according to Cornell product criteria.


Subject(s)
Hypertension ; Humans ; Hypertension/complications ; Hypertension/diagnosis ; Hypertension/epidemiology ; Hypertrophy, Left Ventricular/diagnostic imaging ; Hypertrophy, Left Ventricular/epidemiology ; Nomograms ; Cross-Sectional Studies ; Electrocardiography
19.
Jpn J Clin Oncol ; 54(6): 699-707, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38376811

ABSTRACT

OBJECTIVE: This study aimed to construct a nomogram to predict radiation-induced hepatic toxicity in patients with hepatocellular carcinoma treated with intensity-modulated radiotherapy. METHODS: We reviewed the clinical characteristics and dose-volume parameters of 196 patients with hepatocellular carcinoma. Radiation-induced hepatic toxicity was defined as progression of the Child-Pugh score caused by intensity-modulated radiotherapy. Factors relevant to radiation-induced hepatic toxicity were selected using receiver operating characteristic and univariate logistic analyses. A risk assessment model was developed, and its discrimination was validated. RESULTS: Eighty-eight (44.90%) and 28 (14.29%) patients had radiation-induced hepatic toxicity ≥ 1 (Child-Pugh ≥ 1) and ≥ 2 (Child-Pugh ≥ 2), respectively. Pre-treatment Child-Pugh score, body mass index, and dose-volume parameters were correlated with radiation-induced hepatic toxicity ≥ 1 on univariate logistic analysis. V15 had the best predictive effectiveness among the dose-volume parameters in both the training (area under the curve: 0.763, 95% confidence interval: 0.683-0.842, P < 0.001) and validation cohorts (area under the curve: 0.759, 95% confidence interval: 0.635-0.883, P < 0.001). The area under the curve values of the model constructed from pre-treatment Child-Pugh score, body mass index, and V15 for radiation-induced hepatic toxicity ≥ 1 were 0.799 (95% confidence interval: 0.719-0.878, P < 0.001) and 0.775 (95% confidence interval: 0.657-0.894, P < 0.001) in the training and validation cohorts, respectively. Patients with a body mass index ≤ 20.425, Barcelona Clinic Liver Cancer stage C, hepatitis B virus positivity, Eastern Cooperative Oncology Group performance status 1-2, and hepatic fibrosis require lower V15 dose limits. CONCLUSIONS: A risk assessment model constructed from pre-treatment Child-Pugh score, V15, and body mass index can guide individualized selection of toxicity-minimization strategies.


Subject(s)
Carcinoma, Hepatocellular ; Liver Neoplasms ; Nomograms ; Radiation Injuries ; Radiotherapy, Intensity-Modulated ; Humans ; Carcinoma, Hepatocellular/radiotherapy ; Liver Neoplasms/radiotherapy ; Male ; Female ; Retrospective Studies ; Radiotherapy, Intensity-Modulated/adverse effects ; Radiotherapy, Intensity-Modulated/methods ; Middle Aged ; Aged ; Radiation Injuries/etiology ; Adult ; Aged, 80 and over ; Liver/radiation effects
20.
J Thromb Haemost ; 22(4): 1094-1104, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38184201

ABSTRACT

BACKGROUND: Only one conventional score is available for assessing bleeding risk in patients with cancer-associated thrombosis (CAT): the CAT-BLEED score. OBJECTIVES: Our aim was to develop a machine learning-based risk assessment model for predicting bleeding in CAT and to evaluate its predictive performance against that of the CAT-BLEED score. METHODS: We collected 488 attributes (clinical data, biochemistry, and International Classification of Diseases, 10th Revision, diagnoses) from 1080 unique patients with CAT. We compared the CAT-BLEED score, Ridge and Lasso logistic regression, random forest, and Extreme Gradient Boosting (XGBoost) algorithms for predicting major bleeding or clinically relevant nonmajor bleeding occurring 1 to 90 days, 1 to 365 days, and 90 to 455 days after venous thromboembolism (VTE). RESULTS: The predictive performances of Lasso logistic regression, random forest, and XGBoost were higher than that of the CAT-BLEED score for bleeding occurring 1 to 90 days and 1 to 365 days after VTE. For predicting major bleeding or clinically relevant nonmajor bleeding 1 to 90 days after VTE, the CAT-BLEED score achieved a mean area under the receiver operating characteristic curve (AUROC) of 0.48 ± 0.13, while Lasso logistic regression and XGBoost both achieved AUROCs of 0.64 ± 0.12. For predicting bleeding 1 to 365 days after VTE, the CAT-BLEED score achieved a mean AUROC of 0.47 ± 0.08, while Lasso logistic regression and XGBoost achieved AUROCs of 0.64 ± 0.08 and 0.59 ± 0.08, respectively. CONCLUSION: This is the first machine learning-based risk model for bleeding prediction in patients with CAT receiving anticoagulation therapy. Its predictive performance was higher than that of the conventional CAT-BLEED score. With further development, this novel algorithm might enable clinicians to pursue personalized anticoagulation strategies with improved clinical outcomes.
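A sketch of the kind of model comparison described above. XGBoost is swapped for scikit-learn's histogram gradient boosting to keep the example dependency-free, and the synthetic matrix stands in for the 488 clinical/biochemical attributes; the ~10% event rate is an assumption.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1080, n_features=100, n_informative=10,
                           weights=[0.9], random_state=0)  # ~10% bleeding events

lasso_lr = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
gbm = HistGradientBoostingClassifier(random_state=0)

for name, model in [("Lasso LR", lasso_lr), ("GBM", gbm)]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUROC {auc.mean():.2f} ± {auc.std():.2f}")
```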


Subject(s)
Neoplasms ; Thrombosis ; Venous Thromboembolism ; Humans ; Venous Thromboembolism/diagnosis ; Venous Thromboembolism/drug therapy ; Venous Thromboembolism/etiology ; Hemorrhage/diagnosis ; Thrombosis/etiology ; Thrombosis/drug therapy ; Anticoagulants/adverse effects ; Machine Learning ; Neoplasms/complications ; Neoplasms/drug therapy