Results 1 - 20 of 54
1.
Diagnostics (Basel) ; 14(14)2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39061675

ABSTRACT

Background: Segmenting computed tomography (CT) is crucial in various clinical applications, such as tailoring personalized cardiac ablation for managing cardiac arrhythmias. Automating segmentation through machine learning (ML) is hindered by the necessity for large, labeled training data, which can be challenging to obtain. This article proposes a novel approach for automated, robust labeling using domain knowledge to achieve high-performance segmentation by ML from a small training set. The approach, the domain knowledge-encoding (DOKEN) algorithm, reduces the reliance on large training datasets by encoding cardiac geometry while automatically labeling the training set. The method was validated in a hold-out dataset of CT results from an atrial fibrillation (AF) ablation study. Methods: The DOKEN algorithm parses left atrial (LA) structures, extracts "anatomical knowledge" by leveraging digital LA models (available publicly), and then applies this knowledge to achieve high ML segmentation performance with a small number of training samples. The DOKEN-labeled training set was used to train an nnU-Net deep neural network (DNN) model for segmenting cardiac CT in N = 20 patients. Subsequently, the method was tested in a hold-out set of N = 100 patients (five times larger than the training set) who underwent AF ablation. Results: The DOKEN algorithm integrated with the nnU-Net model achieved high segmentation performance with few training samples, with a training-to-test ratio of 1:5. The Dice score of the DOKEN-enhanced model was 96.7% (IQR: 95.3% to 97.7%), with a median error in surface distance of boundaries of 1.51 mm (IQR: 0.72 to 3.12) and a mean centroid-boundary distance of 1.16 mm (95% CI: -4.57 to 6.89), similar to expert results (r = 0.99; p < 0.001). In digital hearts, the novel DOKEN approach segmented the LA structures with a mean difference for the centroid-boundary distances of -0.27 mm (95% CI: -3.87 to 3.33; r = 0.99; p < 0.0001).
Conclusions: The proposed novel domain knowledge-encoding algorithm was able to perform the segmentation of six substructures of the LA, reducing the need for large training data sets. The combination of domain knowledge encoding and a machine learning approach could reduce the dependence of ML on large training datasets and could potentially be applied to AF ablation procedures and extended in the future to other imaging, 3D printing, and data science applications.
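The Dice scores reported in this entry measure voxel-wise overlap between a predicted and a reference segmentation. A minimal sketch of the metric, not the authors' code, with invented toy masks:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient of two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 4x4 masks: 4 predicted voxels, 4 true voxels, 3 in common
pred = np.zeros((4, 4), dtype=int)
truth = np.zeros((4, 4), dtype=int)
pred[0, 0:4] = 1
truth[0, 1:4] = 1
truth[1, 0] = 1
score = dice_score(pred, truth)  # 2 * 3 / (4 + 4) = 0.75
```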

2.
Commun Med (Lond) ; 4(1): 137, 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38987347

ABSTRACT

BACKGROUND: The prevalence of obesity has been increasing worldwide, with substantial implications for public health. Obesity is independently associated with cardiovascular morbidity and mortality and is estimated to cost the health system over $200 billion annually. Glucagon-like peptide-1 receptor agonists (GLP-1 RAs) have emerged as a practice-changing therapy for weight loss and cardiovascular risk reduction independent of diabetes. METHODS: We used large language models to augment our previously reported artificial intelligence-enabled topic modeling pipeline to analyze over 390,000 unique GLP-1 RA-related Reddit discussions. RESULTS: We find high interest around GLP-1 RAs, with a total of 168 topics and 33 groups focused on the GLP-1 RA experience with weight loss, comparison of side effects between differing GLP-1 RAs and alternate therapies, issues with GLP-1 RA access and supply, and the positive psychological benefits of GLP-1 RAs and associated weight loss. Notably, public sentiment in these discussions was mostly neutral-to-positive. CONCLUSIONS: These findings have important implications for monitoring new side effects not captured in randomized controlled trials and understanding the public health challenge of drug shortages.


Obesity is a global public health burden that increases heart disease risk. Glucagon-like peptide-1 receptor agonists (GLP-1 RAs) are a class of medications originally developed for diabetes but are now also used to promote weight loss and extend survival in people with heart disease. To better understand how the public views this type of drug, over 390,000 discussions from the social media platform Reddit were analyzed using computer software. Topics of discussion included experiences with weight loss, side effects of different GLP-1 RAs, and concerns about drug access and supply. The results showed a mainly neutral-to-positive view of these medications. The findings may help identify new side effects not previously seen in clinical trials and highlight future directions for research and public health efforts.

3.
Eur Heart J Digit Health ; 5(4): 427-434, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39081946

ABSTRACT

Aims: Deep learning methods have recently gained success in detecting left ventricular systolic dysfunction (LVSD) from electrocardiogram (ECG) waveforms. Despite their high level of accuracy, they are difficult to interpret and deploy broadly in the clinical setting. In this study, we set out to determine whether simpler models based on standard ECG measurements could detect LVSD with similar accuracy to that of deep learning models. Methods and results: Using an observational data set of 40 994 matched 12-lead ECGs and transthoracic echocardiograms, we trained a range of models with increasing complexity to detect LVSD based on ECG waveforms and derived measurements. The training data were acquired from the Stanford University Medical Center. External validation data were acquired from the Columbia Medical Center and the UK Biobank. The Stanford data set consisted of 40 994 matched ECGs and echocardiograms, of which 9.72% had LVSD. A random forest model using 555 discrete, automated measurements achieved an area under the receiver operating characteristic curve (AUC) of 0.92 (0.91-0.93), similar to a deep learning waveform model with an AUC of 0.94 (0.93-0.94). A logistic regression model based on five measurements achieved high performance [AUC of 0.86 (0.85-0.87)], close to a deep learning model and better than N-terminal prohormone brain natriuretic peptide (NT-proBNP). Finally, we found that simpler models were more portable across sites, with experiments at two independent, external sites. Conclusion: Our study demonstrates the value of simple electrocardiographic models that perform nearly as well as deep learning models, while being much easier to implement and interpret.
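The five-measurement logistic model described above is the kind of simple baseline that fits in a few lines. The sketch below is not the study's model: the features, coefficients, and labels are synthetic stand-ins, and plain NumPy gradient descent replaces whatever fitting routine the authors used.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=500):
    """Plain gradient-descent logistic regression (weights plus intercept)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict_proba(X, w):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return 1 / (1 + np.exp(-Xb @ w))

def auroc(y_true, scores):
    """Rank-based AUROC: probability a random positive outranks a random negative."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))                # stand-ins for five automated ECG measurements
logits = 1.5 * X[:, 0] - 1.0 * X[:, 1] - 2.2  # synthetic label-generating process (~15% prevalence)
y = (rng.random(2000) < 1 / (1 + np.exp(-logits))).astype(float)

w = fit_logistic(X[:1500], y[:1500])          # train on 1500, hold out 500
test_auc = auroc(y[1500:], predict_proba(X[1500:], w))
```

Because the synthetic labels are driven by only two of the five features, the fitted model discriminates well on the hold-out split, illustrating how a tiny, interpretable model can reach a useful AUC.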

4.
Curr Atheroscler Rep ; 26(7): 263-272, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38780665

ABSTRACT

PURPOSE OF REVIEW: This review evaluates how Artificial Intelligence (AI) enhances atherosclerotic cardiovascular disease (ASCVD) risk assessment, allows for opportunistic screening, and improves adherence to guidelines through the analysis of unstructured clinical data and patient-generated data. Additionally, it discusses strategies for integrating AI into clinical practice in preventive cardiology. RECENT FINDINGS: AI models have shown superior performance in personalized ASCVD risk evaluations compared to traditional risk scores. These models now support automated detection of ASCVD risk markers, including coronary artery calcium (CAC), across various imaging modalities such as dedicated ECG-gated CT scans, chest X-rays, mammograms, coronary angiography, and non-gated chest CT scans. Moreover, large language model (LLM) pipelines are effective in identifying and addressing gaps and disparities in ASCVD preventive care, and can also enhance patient education. AI applications are proving invaluable in preventing and managing ASCVD and are primed for clinical use, provided they are implemented within well-regulated, iterative clinical pathways.


Subject(s)
Artificial Intelligence; Cardiovascular Diseases; Humans; Cardiovascular Diseases/prevention & control; Cardiovascular Diseases/diagnosis; Risk Assessment/methods
5.
NPJ Digit Med ; 7(1): 83, 2024 Mar 30.
Article in English | MEDLINE | ID: mdl-38555387

ABSTRACT

Coronary artery calcium (CAC) is a powerful tool to refine atherosclerotic cardiovascular disease (ASCVD) risk assessment. Despite growing interest, contemporary public attitudes around CAC are not well described in the literature and have important implications for shared decision-making around cardiovascular prevention. We used an artificial intelligence (AI) pipeline consisting of a semi-supervised natural language processing model and unsupervised machine learning techniques to analyze 5,606 CAC-related discussions on Reddit. A total of 91 discussion topics were identified and were classified into 14 overarching thematic groups. These included the strong impact of CAC on therapeutic decision-making, ongoing non-evidence-based use of CAC testing, and the patient-perceived downsides of CAC testing (e.g., radiation risk). Sentiment analysis also revealed that most discussions had a neutral (49.5%) or negative (48.4%) sentiment. The results of this study demonstrate the potential of an AI-based approach to analyze large, publicly available social media data to generate insights into public perceptions about CAC, which may help guide strategies to improve shared decision-making around ASCVD management and public health interventions.

7.
Curr Opin Cardiol ; 39(1): 1-5, 2024 01 01.
Article in English | MEDLINE | ID: mdl-37751365

ABSTRACT

PURPOSE OF REVIEW: The field of cardiac pacing has undergone significant evolution with the introduction and adoption of conduction system pacing (CSP) and leadless pacemakers (LLPMs). These innovations provide benefits over conventional pacing methods including avoiding lead related complications and achieving more physiological cardiac activation. This review critically assesses the latest advancements in CSP and LLPMs, including their benefits, challenges, and potential for future growth. RECENT FINDINGS: CSP, especially of the left bundle branch area, enhances ventricular depolarization and cardiac mechanics. Recent studies show CSP to be favorable over traditional pacing in various patient populations, with an increase in its global adoption. Nevertheless, challenges related to lead placement and long-term maintenance persist. Meanwhile, LLPMs have emerged in response to complications from conventional pacemaker leads. Two main types, Aveir and Micra, have demonstrated improved outcomes and adoption over time. The incorporation of new technologies allows LLPMs to cater to broader patient groups, and their integration with CSP techniques offers exciting potential. SUMMARY: The advancements in CSP and LLPMs present a transformative shift in cardiac pacing, with evidence pointing towards enhanced clinical outcomes and reduced complications. Future innovations and research are likely to further elevate the clinical impact of these technologies, ensuring improved patient care for those with conduction system disorders.


Subject(s)
Cardiac Pacing, Artificial; Pacemaker, Artificial; Humans; Cardiac Pacing, Artificial/methods; Equipment Design; Treatment Outcome
8.
Front Cardiovasc Med ; 10: 1189293, 2023.
Article in English | MEDLINE | ID: mdl-37849936

ABSTRACT

Background: Segmentation of computed tomography (CT) is important for many clinical procedures, including personalized cardiac ablation for the management of cardiac arrhythmias. While segmentation can be automated by machine learning (ML), it is limited by the need for large, labeled training data that may be difficult to obtain. We set out to combine ML of cardiac CT with domain knowledge, which reduces the need for large training datasets by encoding cardiac geometry, and we then tested this approach in independent datasets and in a prospective study of atrial fibrillation (AF) ablation. Methods: We mathematically represented atrial anatomy with simple geometric shapes and derived a model to parse cardiac structures in a small set of N = 6 digital hearts. The model, termed "virtual dissection," was used to train ML to segment cardiac CT in N = 20 patients, then tested in independent datasets and in a prospective study. Results: In independent test cohorts (N = 160) from 2 institutions with different CT scanners, atrial structures were accurately segmented with Dice scores of 96.7% in internal (IQR: 95.3%-97.7%) and 93.5% in external (IQR: 91.9%-94.7%) test data, with good agreement with experts (r = 0.99; p < 0.0001). In a prospective study of 42 patients at ablation, this approach reduced segmentation time by 85% (2.3 ± 0.8 vs. 15.0 ± 6.9 min, p < 0.0001), yet provided similar Dice scores to experts (93.9% (IQR: 93.0%-94.6%) vs. 94.4% (IQR: 92.8%-95.7%), p = NS). Conclusions: Encoding cardiac geometry using mathematical models greatly accelerated training of ML to segment CT, reducing the need for large training sets while retaining accuracy in independent test data. Combining ML with domain knowledge may have broad applications.

9.
Front Cardiovasc Med ; 10: 1251511, 2023.
Article in English | MEDLINE | ID: mdl-37711561

ABSTRACT

Introduction: Left ventricular hypertrophy (LVH) detection techniques by electrocardiogram (ECG) are cumbersome to remember and have modest performance. This study validated a rapid technique for LVH detection and measured its performance against other techniques. Methods: This was a retrospective cohort study of patients at Stanford Health Care who received ECGs and resting transthoracic echocardiograms (TTE) from 2006 through 2018. The novel technique, Witteles-Somani (WS), assesses for S- and R-wave overlap on adjacent precordial leads. The WS, Sokolow-Lyon, Cornell, and Peguero-Lo Presti techniques were algorithmically implemented on ECGs. Classification metrics, receiver-operator curves, and Pearson correlations measured performance. Age- and sex-adjusted Cox proportional hazard models evaluated associations between incident cardiovascular outcomes and each technique. Results: A total of 53,333 ECG-TTE pairs from 18,873 patients were identified. Of all ECG-TTE pairs, 21,638 (40.6%) had TTE-diagnosed LVH. The WS technique had a sensitivity of 0.46, specificity of 0.66, and AUROC of 0.56, compared to Sokolow-Lyon (AUROC 0.55), Cornell (AUROC 0.63), and Peguero-Lo Presti (AUROC 0.63). Patients meeting LVH by WS technique had a higher risk of cardiovascular mortality [HR 1.18, 95% CI (1.12, 1.24), P < 0.001] and a higher risk of developing any cardiovascular disease [HR 1.29, 95% CI (1.22, 1.36), P < 0.001], myocardial infarction [HR 1.60, 95% CI (1.44, 1.78), P < 0.005], and heart failure [HR 1.24, 95% CI (1.17, 1.32), P < 0.001]. Conclusions: The WS criterion is a rapid visual technique for LVH detection with performance comparable to other LVH detection techniques and is associated with incident cardiovascular outcomes.
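The voltage criteria named above can be implemented algorithmically from lead amplitudes. A sketch of two of them, using their commonly cited thresholds (Sokolow-Lyon: S in V1 plus the taller of R in V5/V6 ≥ 3.5 mV; Cornell: R in aVL plus S in V3 > 2.8 mV in men, > 2.0 mV in women); the example amplitudes are invented and this is not the study's implementation:

```python
def sokolow_lyon_lvh(s_v1_mv, r_v5_mv, r_v6_mv):
    """Sokolow-Lyon voltage: S(V1) + max(R(V5), R(V6)) >= 3.5 mV suggests LVH."""
    return s_v1_mv + max(r_v5_mv, r_v6_mv) >= 3.5

def cornell_lvh(r_avl_mv, s_v3_mv, male):
    """Cornell voltage: R(aVL) + S(V3) > 2.8 mV (men) or > 2.0 mV (women)."""
    return r_avl_mv + s_v3_mv > (2.8 if male else 2.0)

# Invented amplitudes: a deep S in V1 plus a tall R in V5 meets Sokolow-Lyon
meets_sl = sokolow_lyon_lvh(1.4, 2.3, 1.8)        # 1.4 + 2.3 = 3.7 mV -> True
meets_cornell = cornell_lvh(0.9, 1.0, male=True)  # 1.9 mV -> False
```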

10.
JAMA Netw Open ; 6(4): e239747, 2023 04 03.
Article in English | MEDLINE | ID: mdl-37093597

ABSTRACT

Importance: Despite compelling evidence that statins are safe, are generally well tolerated, and reduce cardiovascular events, statins are underused even in patients with the highest risk. Social media may provide contemporary insights into public perceptions about statins. Objective: To characterize and classify public perceptions about statins that were gleaned from more than a decade of statin-related discussions on Reddit, a widely used social media platform. Design, Setting, and Participants: This qualitative study analyzed all statin-related discussions on the social media platform that were dated between January 1, 2009, and July 12, 2022. Statin- and cholesterol-focused communities were identified to create a list of statin-related discussions. An artificial intelligence (AI) pipeline was developed to cluster these discussions into specific topics and overarching thematic groups. The pipeline consisted of a semisupervised natural language processing model (BERT [Bidirectional Encoder Representations from Transformers]), a dimensionality reduction technique, and a clustering algorithm. The sentiment for each discussion was labeled as positive, neutral, or negative using a pretrained BERT model. Exposures: Statin-related posts and comments containing the terms statin and cholesterol. Main Outcomes and Measures: Statin-related topics and thematic groups. Results: A total of 10 233 unique statin-related discussions (961 posts and 9272 comments) from 5188 unique authors were identified. The number of statin-related discussions increased by a mean (SD) of 32.9% (41.1%) per year. A total of 100 discussion topics were identified and were classified into 6 overarching thematic groups: (1) ketogenic diets, diabetes, supplements, and statins; (2) statin adverse effects; (3) statin hesitancy; (4) clinical trial appraisals; (5) pharmaceutical industry bias and statins; and (6) red yeast rice and statins.
The sentiment analysis revealed that most discussions had a neutral (66.6%) or negative (30.8%) sentiment. Conclusions and Relevance: Results of this study demonstrated the potential of an AI approach to analyze large, contemporary, publicly available social media data and generate insights into public perceptions about statins. This information may help guide strategies for addressing barriers to statin use and adherence.
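The pipeline described in this abstract (transformer embeddings, then dimensionality reduction, then clustering) can be sketched with lightweight stand-ins: TF-IDF vectors in place of BERT embeddings, truncated SVD in place of the reduction step, and k-means as the clusterer. The toy documents are invented; this is not the authors' code.

```python
from sklearn.cluster import KMeans
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "statin side effects and muscle pain",
    "muscle aches after starting a statin",
    "keto diet raised my ldl cholesterol",
    "low carb diet and cholesterol levels",
    "red yeast rice instead of statins",
    "red yeast rice supplement experience",
]

# Stand-ins for the abstract's pipeline: TF-IDF replaces BERT embeddings,
# truncated SVD the dimensionality-reduction step, k-means the clusterer.
embeddings = TfidfVectorizer().fit_transform(docs)
reduced = TruncatedSVD(n_components=3, random_state=0).fit_transform(embeddings)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(reduced)
```

In the actual study the clusters would then be inspected and grouped into the overarching themes reported above.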


Subject(s)
Hydroxymethylglutaryl-CoA Reductase Inhibitors; Social Media; Humans; Hydroxymethylglutaryl-CoA Reductase Inhibitors/therapeutic use; Artificial Intelligence; Cholesterol; Attitude
11.
Cardiovasc Digit Health J ; 3(5): 220-231, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36310683

ABSTRACT

Background: Electrocardiogram (ECG) deep learning (DL) has promise to improve the outcomes of patients with cardiovascular abnormalities. In ECG DL, researchers often use convolutional neural networks (CNNs) and traditionally use the full duration of raw ECG waveforms, which creates redundancies in feature learning and results in inaccurate predictions with large uncertainties. Objective: To enhance these predictions, we introduced a sub-waveform representation that leverages the rhythmic pattern of ECG waveforms (a data-centric approach) rather than changing the CNN architecture (a model-centric approach). Results: We applied the proposed representation to a population of 92,446 patients to identify left ventricular dysfunction. We found that the sub-waveform representation increases the performance metrics compared to the full-waveform representation. We observed a 2% increase in area under the receiver operating characteristic curve and a 10% increase in area under the precision-recall curve. We also carefully examined three reliability components: explainability, interpretability, and fairness. We provided an explanation for the enhancements obtained by the heartbeat alignment mechanism. By developing a new scoring system, we interpreted the clinical relevance of ECG features and showed that the sub-waveform representation further pushes the scores towards clinical predictions. Finally, we showed that the new representation significantly reduces prediction uncertainties within subgroups, which contributes to individual fairness. Conclusion: We expect that this added control over the granularity of ECG data will improve DL modeling for new artificial intelligence technologies in the cardiovascular space.
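One way to realize a sub-waveform representation of the kind described, assuming it amounts to extracting fixed-length, heartbeat-aligned windows around R-peaks (an interpretation for illustration, not the authors' exact method); the signal and peak locations below are synthetic:

```python
import numpy as np

def sub_waveforms(ecg, r_peaks, half_window=50):
    """Cut fixed-length windows centred on R-peaks, aligning heartbeats."""
    beats = [ecg[r - half_window:r + half_window]
             for r in r_peaks
             if r - half_window >= 0 and r + half_window <= len(ecg)]
    return np.stack(beats)

rng = np.random.default_rng(0)
ecg = rng.normal(size=2500)          # stand-in for a 10 s single-lead recording at 250 Hz
r_peaks = np.arange(125, 2500, 250)  # ~60 bpm; a real R-peak detector would supply these
beats = sub_waveforms(ecg, r_peaks)  # shape: (n_beats, 2 * half_window)
```

The aligned beat matrix, rather than the full 2500-sample waveform, would then be fed to the CNN.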

12.
Math Biosci Eng ; 19(7): 6795-6813, 2022 05 05.
Article in English | MEDLINE | ID: mdl-35730283

ABSTRACT

A significant amount of clinical research is observational by nature and derived from medical records, clinical trials, and large-scale registries. While there is no substitute for randomized, controlled experimentation, such experiments or trials are often costly, time consuming, and even ethically or practically impossible to execute. Combining classical regression and structural equation modeling with matching techniques can leverage the value of observational data. Nevertheless, identifying variables of greatest interest in high-dimensional data is frequently challenging, even with application of classical dimensionality reduction and/or propensity scoring techniques. Here, we demonstrate that projecting high-dimensional medical data onto a lower-dimensional manifold using deep autoencoders and post-hoc generation of treatment/control cohorts based on proximity in the lower-dimensional space results in better matching of confounding variables compared to classical propensity score matching (PSM) in the original high-dimensional space (P<0.0001) and performs similarly to PSM models constructed by experts with prior knowledge of the underlying pathology when evaluated on predicting risk ratios from real-world clinical data. Thus, in cases when the underlying problem is poorly understood and the data is high-dimensional in nature, matching in the autoencoder latent space might be of particular benefit.
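After an autoencoder has mapped records to a low-dimensional latent space, cohort generation reduces to nearest-neighbour matching in that space. A sketch of the matching step only (greedy 1:1 matching; the "latent codes" below are synthetic and the encoder itself is omitted):

```python
import numpy as np

def match_in_latent_space(z_treated, z_control):
    """Greedy 1:1 nearest-neighbour matching of treated to control units."""
    available = list(range(len(z_control)))
    pairs = []
    for i, z in enumerate(z_treated):
        dists = np.linalg.norm(z_control[available] - z, axis=1)
        j = available.pop(int(np.argmin(dists)))  # claim the closest unused control
        pairs.append((i, j))
    return pairs

rng = np.random.default_rng(1)
z_treated = rng.normal(size=(5, 3))              # pretend: autoencoder codes, treated cohort
z_control = np.vstack([
    z_treated + 0.01 * rng.normal(size=(5, 3)),  # 5 near-identical controls, same order
    rng.normal(size=(20, 3)) + 5.0,              # 20 dissimilar controls far away
])
pairs = match_in_latent_space(z_treated, z_control)
```

Here each treated unit recovers its near-twin, which is the behaviour one wants before estimating treatment effects on the matched cohorts.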


Subject(s)
Research Design; Cohort Studies; Humans; Propensity Score
13.
Eur Heart J Digit Health ; 3(1): 56-66, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35355847

ABSTRACT

Aims: Clinical scoring systems for pulmonary embolism (PE) screening have low specificity and contribute to computed tomography pulmonary angiogram (CTPA) overuse. We assessed whether deep learning models using an existing and routinely collected data modality, electrocardiogram (ECG) waveforms, can increase specificity for PE detection. Methods and results: We create a retrospective cohort of 21 183 patients at moderate to high suspicion of PE and associate 23 793 CTPAs (10.0% PE-positive) with 320 746 ECGs and encounter-level clinical data (demographics, comorbidities, vital signs, and labs). We develop three machine learning models to predict PE likelihood: an ECG model using only ECG waveform data, an EHR model using tabular clinical data, and a Fusion model integrating clinical data and an embedded representation of the ECG waveform. We find that the Fusion model [area under the receiver-operating characteristic curve (AUROC) 0.81 ± 0.01] outperforms both the ECG model (AUROC 0.59 ± 0.01) and the EHR model (AUROC 0.65 ± 0.01). On a sample of 100 patients from the test set, the Fusion model also achieves greater specificity (0.18) and performance (AUROC 0.84 ± 0.01) than four commonly evaluated clinical scores: Wells' Criteria, Revised Geneva Score, Pulmonary Embolism Rule-Out Criteria, and 4-Level Pulmonary Embolism Clinical Probability Score (AUROC 0.50-0.58, specificity 0.00-0.05). The model is superior to these scores on feature sensitivity analyses (AUROC 0.66-0.84) and achieves comparable performance across sex (AUROC 0.81) and racial/ethnic (AUROC 0.77-0.84) subgroups. Conclusion: Synergistic deep learning of ECG waveforms with traditional clinical variables can increase the specificity of PE detection in patients at least at moderate suspicion for PE.
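A late-fusion architecture of the kind described concatenates an embedded ECG representation with tabular clinical features before a final classifier. Below is a toy forward pass with random, untrained weights, shown only to make the shapes and data flow concrete; nothing here reflects the study's trained model, and the feature counts are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Inputs: a raw waveform and a tabular clinical vector (both synthetic here)
ecg_waveform = rng.normal(size=2500)  # ~10 s of one ECG lead at 250 Hz
tabular = rng.normal(size=12)         # demographics, vitals, labs (illustrative count)

# Untrained stand-in for the ECG encoder that produces the embedded representation
W_ecg = 0.01 * rng.normal(size=(64, 2500))
ecg_embedding = relu(W_ecg @ ecg_waveform)

# Late fusion: concatenate the ECG embedding with the tabular features,
# then apply a linear head with a sigmoid to get a PE probability
fused = np.concatenate([ecg_embedding, tabular])
W_head = 0.1 * rng.normal(size=fused.shape)
pe_probability = 1 / (1 + np.exp(-(W_head @ fused)))
```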

14.
JACC Cardiovasc Imaging ; 15(3): 395-410, 2022 03.
Article in English | MEDLINE | ID: mdl-34656465

ABSTRACT

OBJECTIVES: This study sought to develop DL models capable of comprehensively quantifying left and right ventricular dysfunction from ECG data in a large, diverse population. BACKGROUND: Rapid evaluation of left and right ventricular function using deep learning (DL) on electrocardiograms (ECGs) can assist diagnostic workflow. However, DL tools to estimate right ventricular (RV) function do not exist, whereas those to estimate left ventricular (LV) function are restricted to quantification of very low LV function only. METHODS: A multicenter study was conducted with data from 5 New York City hospitals: 4 for internal testing and 1 serving as external validation. We created novel DL models to classify left ventricular ejection fraction (LVEF) into categories derived from the latest universal definition of heart failure, estimate LVEF through regression, and predict a composite outcome of either RV systolic dysfunction or RV dilation. RESULTS: We obtained echocardiogram LVEF estimates for 147,636 patients paired to 715,890 ECGs. We used natural language processing (NLP) to extract RV size and systolic function information from 404,502 echocardiogram reports paired to 761,510 ECGs for 148,227 patients. For LVEF classification in internal testing, area under curve (AUC) at detection of LVEF ≤40%, 40% < LVEF ≤50%, and LVEF >50% was 0.94 (95% CI: 0.94-0.94), 0.82 (95% CI: 0.81-0.83), and 0.89 (95% CI: 0.89-0.89), respectively. For external validation, these results were 0.94 (95% CI: 0.94-0.95), 0.73 (95% CI: 0.72-0.74), and 0.87 (95% CI: 0.87-0.88). For regression, the mean absolute error was 5.84% (95% CI: 5.82%-5.85%) for internal testing and 6.14% (95% CI: 6.13%-6.16%) in external validation. For prediction of the composite RV outcome, AUC was 0.84 (95% CI: 0.84-0.84) in both internal testing and external validation. CONCLUSIONS: DL on ECG data can be used to create inexpensive screening, diagnostic, and predictive tools for both LV and RV dysfunction. 
Such tools may bridge the applicability of ECGs and echocardiography and enable prioritization of patients for further interventions for either-sided failure progressing to biventricular disease.


Subject(s)
Deep Learning; Ventricular Dysfunction, Left; Ventricular Dysfunction, Right; Electrocardiography; Humans; Predictive Value of Tests; Stroke Volume; Ventricular Dysfunction, Left/diagnostic imaging; Ventricular Dysfunction, Right/diagnostic imaging; Ventricular Function, Left; Ventricular Function, Right
15.
Patterns (N Y) ; 2(12): 100389, 2021 Dec 10.
Article in English | MEDLINE | ID: mdl-34723227

ABSTRACT

Deep learning (DL) models typically require large-scale, balanced training data to be robust, generalizable, and effective in the context of healthcare. This has been a major issue for developing DL models for the coronavirus disease 2019 (COVID-19) pandemic, where data are highly class imbalanced. Conventional approaches in DL use cross-entropy loss (CEL), which often suffers from poor margin classification. We show that contrastive loss (CL) improves the performance of CEL, especially in imbalanced electronic health records (EHR) data for COVID-19 analyses. We use a diverse EHR dataset to predict three outcomes: mortality, intubation, and intensive care unit (ICU) transfer in hospitalized COVID-19 patients over multiple time windows. To compare the performance of CEL and CL, models are tested on the full dataset and a restricted dataset. CL models consistently outperform CEL models, with differences ranging from 0.04 to 0.15 for area under the precision and recall curve (AUPRC) and 0.05 to 0.1 for area under the receiver-operating characteristic curve (AUROC).
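A supervised contrastive loss of the general kind compared here can be written directly from its definition: each anchor is pulled toward same-label embeddings and pushed away from the rest of the batch. The sketch below is not the paper's exact formulation; the temperature and toy embeddings are arbitrary.

```python
import numpy as np

def supervised_contrastive_loss(z, labels, tau=0.5):
    """Supervised contrastive loss over L2-normalised embeddings z of shape (n, d)."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity from every sum
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    total, n_anchors = 0.0, 0
    for i in range(len(z)):
        pos = np.where((labels == labels[i]) & (np.arange(len(z)) != i))[0]
        if len(pos):
            total -= log_prob[i, pos].mean()  # pull anchor toward its positives
            n_anchors += 1
    return total / n_anchors

rng = np.random.default_rng(0)
# Two well-separated clusters of embeddings standing in for two outcome classes
z = np.vstack([
    rng.normal(0.0, 0.1, size=(8, 4)) + np.array([3.0, 0, 0, 0]),
    rng.normal(0.0, 0.1, size=(8, 4)) - np.array([3.0, 0, 0, 0]),
])
labels = np.array([0] * 8 + [1] * 8)
tight = supervised_contrastive_loss(z, labels)              # labels match the clusters
wrong = supervised_contrastive_loss(z, np.tile([0, 1], 8))  # labels mix the clusters
```

The loss is lowest when embeddings of the same class are tightly grouped, which is the margin behaviour the abstract credits for the gains over cross-entropy.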

16.
JMIR Res Protoc ; 10(9): e27799, 2021 Sep 17.
Article in English | MEDLINE | ID: mdl-34533458

ABSTRACT

BACKGROUND: Though artificial intelligence (AI) has the potential to augment the patient-physician relationship in primary care, bias in intelligent health care systems has the potential to differentially impact vulnerable patient populations. OBJECTIVE: The purpose of this scoping review is to summarize the extent to which AI systems in primary care examine the inherent bias toward or against vulnerable populations and appraise how these systems have mitigated the impact of such biases during their development. METHODS: We will conduct a search update from an existing scoping review to identify studies on AI and primary care in the following databases: Medline-OVID, Embase, CINAHL, Cochrane Library, Web of Science, Scopus, IEEE Xplore, ACM Digital Library, MathSciNet, AAAI, and arXiv. Two screeners will independently review all abstracts, titles, and full-text articles. The team will extract data using a structured data extraction form and synthesize the results in accordance with PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines. RESULTS: This review will provide an assessment of the current state of health care equity within AI for primary care. Specifically, we will identify the degree to which vulnerable patients have been included, assess how bias is interpreted and documented, and understand the extent to which harmful biases are addressed. As of October 2020, the scoping review is in the title- and abstract-screening stage. The results are expected to be submitted for publication in fall 2021. CONCLUSIONS: AI applications in primary care are becoming an increasingly common tool in health care delivery and in preventative care efforts for underserved populations. This scoping review would potentially show the extent to which studies on AI in primary care employ a health equity lens and take steps to mitigate bias. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): PRR1-10.2196/27799.

17.
JAMIA Open ; 4(3): ooab068, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34423260

ABSTRACT

OBJECTIVES: Classifying hospital admissions into various acute myocardial infarction phenotypes in electronic health records (EHRs) is a challenging task with strong research implications that remains unsolved. To our knowledge, this is the first study to design and validate phenotyping algorithms using cardiac catheterizations to identify not only patients with a ST-elevation myocardial infarction (STEMI), but the specific encounter when it occurred. MATERIALS AND METHODS: We design and validate multi-modal algorithms to phenotype STEMI on a multicenter EHR containing 5.1 million patients and 115 million patient encounters by using discharge summaries, diagnosis codes, electrocardiography readings, and the presence of cardiac catheterizations on the encounter. RESULTS: We demonstrate that robustly phenotyping STEMIs by selecting discharge summaries containing "STEM" has the potential to capture the largest number of STEMIs (positive predictive value [PPV] = 0.36, N = 2110), but that addition of a STEMI-related International Classification of Disease (ICD) code and cardiac catheterizations to these summaries yields the highest precision (PPV = 0.94, N = 952). DISCUSSION AND CONCLUSION: In this study, we demonstrate that the incorporation of percutaneous coronary intervention increases the PPV for detecting STEMI-related patient encounters from the EHR.
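The trade-off described above, a broad text filter capturing more events at lower precision versus a stricter multi-modal filter, can be illustrated with toy encounter records. The records and the resulting PPVs below are invented, not the study's data:

```python
# Toy encounter records; the fields mirror the abstract's signals (invented data)
encounters = [
    {"summary": "STEMI, emergent cath performed", "icd_stemi": True,  "cath": True,  "true_stemi": True},
    {"summary": "rule out STEMI, troponin flat",  "icd_stemi": False, "cath": False, "true_stemi": False},
    {"summary": "history of STEMI in 2015",       "icd_stemi": False, "cath": False, "true_stemi": False},
    {"summary": "atypical chest pain",            "icd_stemi": False, "cath": False, "true_stemi": False},
    {"summary": "STEMI with primary PCI",         "icd_stemi": True,  "cath": True,  "true_stemi": True},
]

def ppv(flagged):
    """Positive predictive value of a phenotyping rule over flagged encounters."""
    return sum(e["true_stemi"] for e in flagged) / len(flagged) if flagged else 0.0

broad = [e for e in encounters if "STEM" in e["summary"]]    # text match alone: high capture
strict = [e for e in broad if e["icd_stemi"] and e["cath"]]  # add ICD code + cath: high PPV
```

The broad rule flags four encounters (two true), while the strict rule flags only the two true events, mirroring the abstract's capture-versus-precision trade-off at toy scale.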

18.
IEEE Trans Big Data ; 7(1): 38-44, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33768136

ABSTRACT

Traditional machine learning (ML) models have had limited success in predicting coronavirus disease 2019 (COVID-19) outcomes using electronic health record (EHR) data, partially due to not effectively capturing the inter-connectivity patterns between various data modalities. In this work, we propose a novel framework that utilizes relational learning based on a heterogeneous graph model (HGM) for predicting mortality at different time windows in COVID-19 patients within the intensive care unit (ICU). We utilize the EHRs of one of the largest and most diverse patient populations, across five hospitals in a major health system in New York City. In our model, we use an LSTM for processing time-varying patient data and apply our proposed relational learning strategy in the final output layer along with other static features. Here, we replace the traditional softmax layer with a Skip-Gram relational learning strategy to compare the similarity between a patient and outcome embedding representation. We demonstrate that the construction of a HGM can robustly learn the patterns classifying patient representations of outcomes through leveraging patterns within the embeddings of similar patients. Our experimental results show that our relational learning-based HGM model achieves a higher area under the receiver operating characteristic curve (auROC) than both comparator models in all prediction time windows, with dramatic improvements to recall.

19.
Europace ; 23(8): 1179-1191, 2021 08 06.
Article in English | MEDLINE | ID: mdl-33564873

ABSTRACT

Over the past decade, deep learning, a subset of artificial intelligence and machine learning, has been used to identify patterns in big healthcare datasets for disease phenotyping, event prediction, and complex decision making. Public datasets for electrocardiograms (ECGs) have existed since the 1980s and have been used for very specific tasks in cardiology, such as arrhythmia, ischemia, and cardiomyopathy detection. Recently, private institutions have begun curating large ECG databases that are orders of magnitude larger than the public databases for ingestion by deep learning models. These efforts have demonstrated not only improved performance and generalizability in these aforementioned tasks but also application to novel clinical scenarios. This review focuses on orienting the clinician towards fundamental tenets of deep learning, the state of the art prior to its use for ECG analysis, and current applications of deep learning on ECGs, as well as their limitations and future areas of improvement.


Subject(s)
Cardiology; Deep Learning; Artificial Intelligence; Electrocardiography; Humans; Machine Learning
20.
JMIR Med Inform ; 9(1): e24207, 2021 Jan 27.
Article in English | MEDLINE | ID: mdl-33400679

ABSTRACT

BACKGROUND: Machine learning models require large datasets that may be siloed across different health care institutions. Machine learning studies that focus on COVID-19 have been limited to single-hospital data, which limits model generalizability. OBJECTIVE: We aimed to use federated learning, a machine learning technique that avoids locally aggregating raw clinical data across multiple institutions, to predict mortality in hospitalized patients with COVID-19 within 7 days. METHODS: Patient data were collected from the electronic health records of 5 hospitals within the Mount Sinai Health System. Logistic regression with L1 regularization/least absolute shrinkage and selection operator (LASSO) and multilayer perceptron (MLP) models were trained by using local data at each site. We developed a pooled model with combined data from all 5 sites, and a federated model that only shared parameters with a central aggregator. RESULTS: The federated LASSO model outperformed the local LASSO model at 3 hospitals, and the federated MLP model performed better than the local MLP model at all 5 hospitals, as determined by the area under the receiver operating characteristic curve. The pooled LASSO model outperformed the federated LASSO model at all hospitals, and the federated MLP model outperformed the pooled MLP model at 2 hospitals. CONCLUSIONS: The federated learning of COVID-19 electronic health record data shows promise in developing robust predictive models without compromising patient privacy.
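Parameter sharing with a central aggregator is typically done by federated averaging: each site trains locally, and only its model weights, weighted by local sample count, are combined. A minimal sketch of one aggregation round (the weight vectors and site sizes below are invented, not the study's models):

```python
import numpy as np

def federated_average(site_weights, site_sizes):
    """One FedAvg round: combine site parameters weighted by local sample counts."""
    total = sum(site_sizes)
    return sum(w * (n / total) for w, n in zip(site_weights, site_sizes))

# Three hospitals share only their fitted coefficient vectors, never raw records
site_weights = [np.array([0.2, 1.0]), np.array([0.4, 0.8]), np.array([0.3, 0.9])]
site_sizes = [100, 300, 600]
global_weights = federated_average(site_weights, site_sizes)
```

The aggregator would then broadcast `global_weights` back to the sites for the next local training round.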
