1.
Nat Med. 2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39266748

ABSTRACT

With progressive digitalization of healthcare systems worldwide, large-scale collection of electronic health records (EHRs) has become commonplace. However, an extensible framework for comprehensive exploratory analysis that accounts for data heterogeneity is missing. Here we introduce ehrapy, a modular open-source Python framework designed for exploratory analysis of heterogeneous epidemiology and EHR data. ehrapy incorporates a series of analytical steps, from data extraction and quality control to the generation of low-dimensional representations. Complemented by rich statistical modules, ehrapy facilitates associating patients with disease states, differential comparison between patient clusters, survival analysis, trajectory inference, causal inference and more. Leveraging ontologies, ehrapy further enables data sharing and the training of EHR deep learning models, paving the way for foundational models in biomedical research. We demonstrate ehrapy's features in six distinct examples: we stratify patients affected by unspecified pneumonia into finer-grained phenotypes, reveal biomarkers associated with significant survival differences among these groups, quantify the effects of medication classes on length of stay in pneumonia, analyze cardiovascular risks across different data modalities, reconstruct disease-state trajectories in patients with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) from imaging data, and conduct a case study demonstrating how ehrapy can detect and mitigate biases in EHR data. ehrapy thus provides a framework that we envision will standardize analysis pipelines for EHR data and serve as a cornerstone for the community.
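To make the described workflow concrete, below is a minimal sketch of an exploratory ehrapy analysis. It follows the scanpy-style ep.pp/ep.tl/ep.pl convention of the ehrapy documentation; the exact function names, arguments, and the mimic_2 demo loader should be treated as assumptions, not a verified API.

import ehrapy as ep

# Load a demo EHR table into an AnnData object (patients x features);
# ep.dt.mimic_2 is assumed here to be the tutorial dataset loader.
adata = ep.dt.mimic_2(encoded=True)

# Preprocessing: impute missing values, then scale features.
ep.pp.knn_impute(adata)
ep.pp.scale_norm(adata)

# Low-dimensional representation and patient clustering.
ep.pp.pca(adata)
ep.pp.neighbors(adata)
ep.tl.umap(adata)
ep.tl.leiden(adata, resolution=0.5, key_added="patient_cluster")

# Visualize clusters, e.g. to stratify an unspecified-pneumonia cohort
# into finer-grained phenotypes as in the paper's first example.
ep.pl.umap(adata, color="patient_cluster")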

2.
Article in English | MEDLINE | ID: mdl-38086412

ABSTRACT

BACKGROUND: In optical coherence tomography (OCT) scans of patients with inherited retinal diseases (IRDs), the thickness of the outer nuclear layer (ONL) is a well-established surrogate marker for photoreceptor preservation. Current automatic segmentation tools fail at OCT segmentation in IRDs, and manual segmentation is time-consuming. METHODS AND MATERIALS: Patients with an IRD and an available OCT scan were screened for the present study. Additionally, OCT scans of patients without retinal disease were included to provide training data for artificial intelligence (AI). We trained a U-net-based model on the scans of healthy patients and applied a domain-adaptation technique to the scans of IRD patients. RESULTS: We established an AI-based image segmentation algorithm that reliably segments the ONL in OCT scans of IRD patients. On a test dataset, the algorithm reached a Dice score of 98.7%. Furthermore, we generated thickness maps of the full retina and of the ONL for each patient. CONCLUSION: Accurate segmentation of anatomical layers on OCT scans is crucial for predictive models linking retinal structure to visual function. Our segmentation algorithm could provide the basis for further studies on IRDs.
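The reported Dice score measures the overlap between the predicted and the manually segmented ONL masks. Below is a minimal, self-contained Python sketch of the metric itself; this is a generic illustration, not the authors' evaluation code.

import numpy as np

def dice_score(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8) -> float:
    # Dice = 2 * |pred AND ref| / (|pred| + |ref|), ranging from 0 to 1.
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum() + eps)

# Toy example: predicted vs. reference ONL masks on a 4x4 patch.
pred = np.array([[0, 1, 1, 0]] * 4)
ref = np.array([[0, 1, 1, 1]] * 4)
print(f"Dice: {dice_score(pred, ref):.3f}")  # 0.800

A score of 98.7% therefore indicates near-perfect agreement between the automatic and the manual ONL segmentation.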

3.
Biosensors (Basel). 2020 Sep 13;10(9).
Article in English | MEDLINE | ID: mdl-32933146

ABSTRACT

The risk of personal data exposure through unauthorized access has never been as imminent as today. To counter this, biometric authentication has been proposed: the use of distinctive physiological and behavioral characteristics as a form of identification and access control. One recent development is electroencephalography (EEG)-based authentication, which builds on the subject-specific nature of brain responses that are difficult to recreate artificially. We propose an authentication system based on EEG signals recorded in response to a simple motor paradigm. Authentication is achieved with a novel two-stage decoder. In the first stage, EEG signal features are extracted using an Inception-like and a VGG-like deep learning neural network (NN), both of which we compare with principal component analysis (PCA). In the second stage, a support vector machine (SVM) performs binary classification to authenticate the subject based on the extracted features. All decoders are trained on EEG motor-movement data recorded from 105 subjects. With the VGG-like NN-SVM decoder we achieved a false-acceptance rate (FAR) of 2.55% with an overall accuracy of 88.29%, a FAR of 3.33% with an accuracy of 87.47%, and a FAR of 2.89% with an accuracy of 90.68% for 8, 16, and 64 channels, respectively. With the Inception-like NN-SVM decoder we achieved a FAR of 4.08% with an overall accuracy of 87.29%, a FAR of 3.53% with an accuracy of 85.31%, and a FAR of 1.27% with an accuracy of 93.40% for 8, 16, and 64 channels, respectively. The PCA-SVM decoder achieved accuracies of 92.09%, 92.36%, and 95.64% with FARs of 2.19%, 2.17%, and 1.26% for 8, 16, and 64 channels, respectively.


Subject(s)
Biometric Identification/methods; Electroencephalography; Algorithms; Brain; Humans; Neural Networks, Computer; Principal Component Analysis; Signal Processing, Computer-Assisted; Support Vector Machine
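To illustrate the two-stage design of the decoder described above, below is a minimal sketch of its PCA-SVM variant using scikit-learn on synthetic data; the channel and sample counts, training split, and hyperparameters are illustrative assumptions, not the authors' settings.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in for EEG epochs: 8 channels, 1 s at 160 Hz, flattened.
n_trials, n_channels, n_samples = 200, 8, 160
X = rng.standard_normal((n_trials, n_channels * n_samples))
y = rng.integers(0, 2, n_trials)  # 1 = genuine subject, 0 = impostor

# Stage 1: PCA feature extraction; stage 2: binary SVM authentication.
decoder = make_pipeline(StandardScaler(), PCA(n_components=50), SVC(kernel="rbf"))
decoder.fit(X[:150], y[:150])

# Evaluate: the false-acceptance rate is the fraction of impostor
# trials that the decoder wrongly accepts as genuine.
pred = decoder.predict(X[150:])
far = np.mean(pred[y[150:] == 0] == 1)
acc = np.mean(pred == y[150:])
print(f"FAR: {far:.2%}, accuracy: {acc:.2%}")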