Results 1 - 18 of 18
1.
Stud Health Technol Inform ; 317: 40-48, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39234705

ABSTRACT

INTRODUCTION: The Local Data Hub (LDH) is a platform for FAIR sharing of medical research (meta-)data. To promote the use of LDH in different research communities, it is important to understand domain-specific needs and the solutions currently used for data organization, and to support seamless uploads to an LDH. In this work, we analyze the use case of microneurography, an electrophysiological technique for analyzing neural activity. METHODS: After performing a requirements analysis in dialogue with microneurography researchers, we propose a concept mapping and a workflow for researchers to transform and upload their metadata. Furthermore, we implemented a semi-automatic upload extension to odMLtables, a template-based tool for handling metadata in the electrophysiology community. RESULTS: The open-source implementation enables the odML-to-LDH concept mapping, allows data anonymization from within the tool, and supports the creation of custom summaries of the underlying data sets. DISCUSSION: This constitutes a first step towards integrating improved FAIR processes into the research laboratory's daily workflow. In future work, we will extend this approach to other use cases to disseminate the use of LDHs in the larger research community.
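The concept mapping described above can be pictured as a dictionary from odML-style section/property paths to LDH metadata fields, with personal data withheld during the transform. The paths, field names, and anonymization rule in this Python sketch are invented for illustration; this is not the actual odMLtables implementation.

```python
# Hypothetical odML-to-LDH concept map: odML section/property paths on the
# left, target LDH metadata fields on the right (field names invented).
CONCEPT_MAP = {
    "Recording/Date": "collection_date",
    "Subject/Species": "species",
    "Electrode/ImpedanceMOhm": "electrode_impedance",
}

def map_metadata(odml_properties, concept_map, anonymize=()):
    """Rename odML properties to LDH fields, dropping anonymized or unmapped keys."""
    mapped = {}
    for path, value in odml_properties.items():
        if path in anonymize or path not in concept_map:
            continue
        mapped[concept_map[path]] = value
    return mapped

props = {
    "Recording/Date": "2023-11-02",
    "Subject/Species": "Homo sapiens",
    "Subject/Name": "participant-07",   # personal data, withheld from upload
}
print(map_metadata(props, CONCEPT_MAP, anonymize=("Subject/Name",)))
```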


Subject(s)
Metadata , Humans , Information Dissemination/methods , Information Storage and Retrieval/methods
2.
Stud Health Technol Inform ; 317: 59-66, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39234707

ABSTRACT

INTRODUCTION: Supporting research projects that require medical data from multiple sites is one of the goals of the German Medical Informatics Initiative (MII). The data integration centers (DIC) at university medical centers in Germany provide patient data via FHIR® in compliance with the MII core data set (CDS). Data protection requirements and other legal bases for processing favor decentralized processing of the relevant data in the DICs, with subsequent exchange of aggregated results for cross-site evaluation. METHODS: Requirements from clinical experts were gathered in the context of the MII use case INTERPOLAR. A software architecture was then developed, modeled using 3LGM2, and finally implemented and published in a GitHub repository. RESULTS: With the CDS tool chain, we have created software components for decentralized processing on the basis of the MII CDS. The CDS tool chain requires access to a local FHIR endpoint and transfers the data to an SQL database. This database is accessed by the DataProcessor component, which performs calculations with the help of rules (input repo) and writes the results back to the database. The CDS tool chain also has a frontend module (REDCap), which displays the output data and calculated results and allows verification, evaluation, comments, and other responses. This feedback is also persisted in the database and remains available for further use, analysis, or data sharing. DISCUSSION: Other solutions are conceivable. Ours utilizes the advantages of an SQL database, enabling flexible and direct processing of the stored data using established analysis methods. Thanks to its modularization, the tool chain can be adapted for use in other projects. We are planning further developments to support pseudonymization and data sharing. Initial experience is being gathered; an evaluation is planned but still pending.
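The decentralized flow sketched in the abstract (FHIR endpoint → SQL database → rule-based DataProcessor → results persisted back) can be illustrated in miniature. The table layout, the LOINC code, and the threshold rule below are illustrative assumptions, not the actual CDS tool chain.

```python
import sqlite3

# Toy FHIR Observation resources, as they might arrive from a local endpoint.
observations = [
    {"resourceType": "Observation", "id": "obs-1",
     "code": {"coding": [{"code": "2160-0"}]},          # serum creatinine (LOINC)
     "valueQuantity": {"value": 2.4, "unit": "mg/dL"}},
    {"resourceType": "Observation", "id": "obs-2",
     "code": {"coding": [{"code": "2160-0"}]},
     "valueQuantity": {"value": 0.9, "unit": "mg/dL"}},
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE observation (id TEXT, code TEXT, value REAL)")
conn.execute("CREATE TABLE result (observation_id TEXT, flag TEXT)")

# Step 1: flatten the FHIR resources into SQL rows.
for obs in observations:
    conn.execute(
        "INSERT INTO observation VALUES (?, ?, ?)",
        (obs["id"], obs["code"]["coding"][0]["code"],
         obs["valueQuantity"]["value"]),
    )

# Step 2: a toy "rule" in the DataProcessor role flags elevated values
# and writes the results back to the database.
THRESHOLD = 1.2  # illustrative cutoff, not a clinical recommendation
rows = conn.execute(
    "SELECT id, value FROM observation WHERE code = '2160-0'").fetchall()
for oid, value in rows:
    flag = "review" if value > THRESHOLD else "ok"
    conn.execute("INSERT INTO result VALUES (?, ?)", (oid, flag))

flags = dict(conn.execute("SELECT observation_id, flag FROM result"))
print(flags)  # {'obs-1': 'review', 'obs-2': 'ok'}
```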


Subject(s)
Software , Germany , Electronic Health Records , Humans , Medical Informatics , Computer Security , Datasets as Topic
3.
Stud Health Technol Inform ; 317: 171-179, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39234720

ABSTRACT

INTRODUCTION: The German Medical Text Project (GeMTeX) is one of the largest infrastructure efforts targeting German-language clinical documents. Here we introduce the architecture of the GeMTeX de-identification pipeline. METHODS: The pipeline comprises the export of raw clinical documents from the local hospital information system, import into the annotation platform INCEpTION, fully automatic pre-tagging of protected health information (PHI) items by the Averbis Health Discovery pipeline, a manual curation step for these pre-annotated data, and, finally, the automatic replacement of PHI items with type-conformant substitutes. This design was implemented in a pilot study involving six annotators and two curators each at the Data Integration Centers of the University Hospitals Leipzig and Erlangen. RESULTS: As a proof of concept, the publicly available Graz Synthetic Text Clinical Corpus (GRASSCO) was enhanced with PHI annotations in an annotation campaign for which reasonable inter-annotator agreement (Krippendorff's α ≈ 0.97) can be reported. CONCLUSION: The resulting 1.4 K curated PHI annotations are released as open-source data, constituting the first publicly available German clinical language text corpus with PHI metadata.
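The final step, replacing PHI items with type-conformant substitutes, can be illustrated with a minimal sketch: annotated spans are substituted right-to-left so earlier character offsets remain valid. The PHI type names and surrogate values here are hypothetical, not those used by GeMTeX.

```python
# Hypothetical PHI types mapped to type-conformant surrogate values.
SUBSTITUTES = {
    "PATIENT": "Max Mustermann",
    "DATE": "01.01.1970",
    "HOSPITAL": "Musterklinikum",
}

def deidentify(text: str, annotations: list) -> str:
    """Replace each (start, end, phi_type) span with a surrogate of the same type."""
    # Apply replacements right-to-left so earlier offsets stay valid.
    for start, end, phi_type in sorted(annotations, reverse=True):
        text = text[:start] + SUBSTITUTES[phi_type] + text[end:]
    return text

note = "Herr Meier wurde am 03.05.2021 im Uniklinikum Leipzig aufgenommen."
spans = [(5, 10, "PATIENT"), (20, 30, "DATE"), (34, 53, "HOSPITAL")]
print(deidentify(note, spans))
# Herr Max Mustermann wurde am 01.01.1970 im Musterklinikum aufgenommen.
```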


Subject(s)
Electronic Health Records , Pilot Projects , Germany , Natural Language Processing , Confidentiality , Humans , Computer Security
4.
Stud Health Technol Inform ; 317: 115-122, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39234713

ABSTRACT

INTRODUCTION: NFDI4Health is a consortium funded by the German Research Foundation to make structured health data findable and accessible internationally in accordance with the FAIR principles. Its goal is to bring data users and Data Holding Organizations (DHOs) together, mainly DHOs conducting epidemiological and public health studies or clinical trials. METHODS: Local Data Hubs (LDH) are provided for such DHOs to connect decentralized local research data management within their organizations with the option of publishing shareable metadata via centralized NFDI4Health services such as the German central Health Study Hub. The LDH platform is based on FAIRDOM SEEK and provides a complete, flexible, and locally controlled data and information management platform for health research data. A tailored NFDI4Health metadata schema (MDS) for studies and their corresponding resources has been developed and is fully supported by the LDH software, e.g., for metadata transfer to other NFDI4Health services. RESULTS: The SEEK platform has been technically enhanced to support extended metadata structures tailored to the needs of the user communities, in addition to SEEK's existing metadata structuring. CONCLUSION: With the LDH and the MDS, NFDI4Health provides all DHOs with a standardized, free, and open-source research data management platform for the FAIR exchange of structured health data.


Subject(s)
Metadata , Germany , Humans , Data Management , Information Dissemination , Software
5.
Article in German | MEDLINE | ID: mdl-38753022

ABSTRACT

The Interoperability Working Group of the Medical Informatics Initiative (MII) is the platform for coordinating overarching procedures, data structures, and interfaces between the data integration centers (DIC) of the university hospitals and national and international interoperability committees. The goal is the joint content-related and technical design of a distributed infrastructure for the secondary use of healthcare data that can be accessed via the Research Data Portal for Health. Important general conditions are data privacy and IT security for the use of health data in biomedical research. To this end, suitable methods are applied in dedicated task forces to enable procedural, syntactic, and semantic interoperability for data use projects. The MII core dataset was developed as several modules with corresponding information models and implemented using the HL7® FHIR® standard to enable content-related and technical specifications for the interoperable provision of healthcare data by the DIC. International terminologies and consented metadata are used to describe these data in more detail. The overall architecture, including overarching interfaces, implements the methodological and legal requirements for a distributed data use infrastructure, for example by providing pseudonymized data or by federated analyses. With these results of the Interoperability Working Group, the MII presents a future-oriented solution for the exchange and use of healthcare data whose applicability goes beyond research and can play an essential role in the digital transformation of the healthcare system.


Subject(s)
Health Information Interoperability , Humans , Datasets as Topic , Electronic Health Records , Germany , Health Information Interoperability/standards , Medical Informatics , Medical Record Linkage/methods , Systems Integration
6.
Stud Health Technol Inform ; 307: 137-145, 2023 Sep 12.
Article in English | MEDLINE | ID: mdl-37697847

ABSTRACT

INTRODUCTION: Prospective data collection in clinical trials is considered the gold standard of clinical research. Validating the data entered in case report forms is essential to maintaining good data quality. Data quality checks include the conformance of individual inputs to the specification of the data element, the detection of missing values, and the plausibility of the values entered. STATE OF THE ART: Besides Libre-/OpenClinica, there are many applications for capturing clinical data. Most of them are commercial, while free and open-source solutions often lack intuitive operation. CONCEPT: Our ocRuleTool addresses the specific use case of writing validation rules for Open-/LibreClinica, a clinical study management software for designing case report forms and managing medical data in clinical trials. It covers parts of all three categories of data quality checks mentioned above. IMPLEMENTATION: The required rules and error messages are entered in the normative Excel specification and then converted to an XML document that can be uploaded to Open-/LibreClinica. The advantage of this intermediate step is better readability, as the complex XML elements are broken down into easy-to-fill-out columns in Excel. The tool then generates the ready-to-use XML file by itself. LESSONS LEARNED: This approach saves time, is less error-prone, and allows collaboration with clinicians on improving data quality. CONCLUSION: Our ocRuleTool has proven useful in over a dozen studies. We hope to increase the user base by releasing it as open source on GitHub.
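The spreadsheet-to-XML step can be sketched as follows: each row holds a rule target, an expression, and an error message, and the rows are assembled into a rule document. The column names and XML element names below are simplified placeholders, not the actual Open-/LibreClinica rule schema.

```python
import xml.etree.ElementTree as ET

# Illustrative rule rows, as they might appear as columns in the Excel
# specification (target item, validation expression, error message).
rows = [
    {"target": "I_DEMO_AGE", "expression": "I_DEMO_AGE gte 18",
     "message": "Participants must be 18 or older."},
    {"target": "I_VIT_SBP", "expression": "I_VIT_SBP lt 300",
     "message": "Systolic blood pressure is implausibly high."},
]

# Assemble a simplified rule document from the rows.
root = ET.Element("RuleImport")
for i, row in enumerate(rows, start=1):
    rule = ET.SubElement(root, "RuleDef", OID=f"RULE_{i}", Target=row["target"])
    ET.SubElement(rule, "Expression").text = row["expression"]
    ET.SubElement(rule, "ErrorMessage").text = row["message"]

xml_doc = ET.tostring(root, encoding="unicode")
print(xml_doc)
```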


Subject(s)
Data Accuracy , Data Management , Humans , Writing , Data Collection , Records
7.
Stud Health Technol Inform ; 307: 146-151, 2023 Sep 12.
Article in English | MEDLINE | ID: mdl-37697848

ABSTRACT

The German Medical Informatics Initiative has agreed on a HL7 FHIR-based core data set as the common data model that all 37 university hospitals use for their patient's data. These data are stored locally at the site but are centrally queryable for researchers and accessible upon request. This infrastructure is currently under construction, and its functionality is being tested by so-called Projectathons. In the 6th Projectathon, a clinical hypothesis was formulated, executed in a multicenter scenario, and its results were analyzed. A number of oddities emerged in the analysis of data from different sites. Biometricians, who had previously performed analyses in prospective data collection settings such as clinical trials or cohorts, were not consistently aware of these idiosyncrasies. This field report describes data quality problems that have occurred, although not all are genuine errors. The aim is to point out such circumstances of data generation that may affect statistical analysis.


Subject(s)
Awareness , Medical Informatics , Humans , Hospitals, University , Data Accuracy , Data Collection
8.
Stud Health Technol Inform ; 302: 835-836, 2023 May 18.
Article in English | MEDLINE | ID: mdl-37203512

ABSTRACT

The largest publicly funded project to generate a German-language medical text corpus will start in mid-2023. GeMTeX comprises clinical texts from the information systems of six university hospitals, which will be made accessible for NLP through the annotation of entities and relations and enhanced with additional meta-information. Strong governance provides a stable legal framework for the use of the corpus. State-of-the-art NLP methods are used to build, pre-annotate, and annotate the corpus and to train language models. A community will be built around GeMTeX to ensure its sustainable maintenance, use, and dissemination.


Subject(s)
Language , Natural Language Processing , Humans
9.
Appl Clin Inform ; 14(1): 54-64, 2023 01.
Article in English | MEDLINE | ID: mdl-36696915

ABSTRACT

BACKGROUND: The growing interest in the secondary use of electronic health record (EHR) data has increased the number of new data integration and data sharing infrastructures. The present work was developed in the context of the German Medical Informatics Initiative, where 29 university hospitals agreed on the usage of the Health Level Seven Fast Healthcare Interoperability Resources (FHIR) standard for their newly established data integration centers. This standard is optimized to describe and exchange medical data but is less suitable for standard statistical analysis, which mostly requires tabular data formats. OBJECTIVES: The objective of this work is to establish a tool that makes FHIR data accessible for standard statistical analysis by providing means to retrieve and transform data from a FHIR server. The tool should be implemented in a programming environment known to most data analysts and offer functions with variable degrees of flexibility and automation, catering to users with different levels of FHIR expertise. METHODS: We propose the fhircrackr framework, which allows downloading and flattening FHIR resources for data analysis. The framework supports different download and authentication protocols and gives the user full control over the data that is extracted from the FHIR resources and transformed into tables. We implemented it in the programming language R and published it under the GPL-3 open-source license. RESULTS: The framework was successfully applied to both publicly available test data and real-world data from several ongoing studies. While the processing of larger real-world data sets puts a considerable burden on computation time and memory consumption, those challenges can be attenuated with suitable measures such as parallelization and temporary storage mechanisms. CONCLUSION: The fhircrackr R package provides an open-source solution within an environment that is familiar to most data scientists and helps overcome the practical challenges that still hamper the usage of EHR data for research.
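fhircrackr itself is an R package; the Python sketch below only illustrates the underlying idea of "cracking" nested FHIR resources into a flat table by mapping column names to paths within each resource. The path syntax and helper names are invented for this sketch, not the fhircrackr API.

```python
def get_path(resource, path):
    """Walk a '/'-separated path through nested dicts/lists; return None if absent."""
    node = resource
    for key in path.split("/"):
        if isinstance(node, list):          # descend into the first list element
            node = node[0] if node else None
        if not isinstance(node, dict):
            return None
        node = node.get(key)
    return node

def crack(bundle, columns):
    """Flatten each Bundle entry into a row keyed by the given column->path map."""
    rows = []
    for entry in bundle.get("entry", []):
        resource = entry["resource"]
        rows.append({col: get_path(resource, path) for col, path in columns.items()})
    return rows

# A toy FHIR Bundle of Patient resources.
bundle = {"entry": [
    {"resource": {"resourceType": "Patient", "id": "p1",
                  "name": [{"family": "Smith"}], "gender": "female"}},
    {"resource": {"resourceType": "Patient", "id": "p2",
                  "name": [{"family": "Jones"}], "gender": "male"}},
]}
columns = {"id": "id", "family": "name/family", "gender": "gender"}
print(crack(bundle, columns))
# [{'id': 'p1', 'family': 'Smith', 'gender': 'female'},
#  {'id': 'p2', 'family': 'Jones', 'gender': 'male'}]
```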


Subject(s)
Electronic Health Records , Medical Informatics , Humans , Programming Languages , Information Dissemination , Health Level Seven , Delivery of Health Care
10.
Methods Inf Med ; 61(S 02): e103-e115, 2022 12.
Article in English | MEDLINE | ID: mdl-35915977

ABSTRACT

BACKGROUND: Clinical trials, epidemiological studies, clinical registries, and other prospective research projects, together with patient care services, are the main sources of data in the medical research domain. They often serve as a basis for secondary research in evidence-based medicine and for prediction models of disease and its progression. These data are often neither sufficiently described nor accessible, and related models are often not available as functional program tools for interested users from the healthcare and biomedical domains. OBJECTIVE: The interdisciplinary project Leipzig Health Atlas (LHA) was developed to close this gap. LHA is an online platform that serves as a sustainable archive providing medical data, metadata, models, and novel phenotypes from clinical trials, epidemiological studies, and other medical research projects. METHODS: Data, models, and phenotypes are described by semantically rich metadata. The platform prefers to share data and models presented in original publications but is also open to unpublished data. LHA provides and associates unique permanent identifiers for each dataset and model. Hence, the platform can be used to share prepared, quality-assured datasets and models while they are referenced in publications. All data, models, and phenotypes managed in LHA follow the FAIR principles, with public availability or restricted access for specific user groups. RESULTS: The LHA platform is in productive mode (https://www.health-atlas.de/). It is already used by a variety of clinical trial and research groups and is becoming increasingly popular in the biomedical community. LHA is an integral part of the forthcoming initiative building a national research data infrastructure for health in Germany.


Subject(s)
Prospective Studies , Germany
11.
Stud Health Technol Inform ; 278: 66-74, 2021 May 24.
Article in English | MEDLINE | ID: mdl-34042877

ABSTRACT

Sharing data is of great importance for research in the medical sciences. It is the basis for reproducibility and for the reuse of already generated outcomes in new projects and new contexts, and the FAIR data principles are the basis for such sharing. The Leipzig Health Atlas (LHA) platform follows these principles and provides data, describing metadata, and models that have been implemented in novel software tools and are available as demonstrators. LHA reuses and extends three major components previously developed by other projects. The SEEK management platform is the foundation, providing a repository for archiving, presenting, and securely sharing a wide range of publication results, such as published reports and (bio)medical data as well as interactive models and tools. The LHA Data Portal manages study metadata and data, allowing users to search for data of interest. Finally, PhenoMan is an ontological framework for phenotype modelling. This paper describes the interrelation of these three components. In particular, we use PhenoMan, first, to model and represent phenotypes within the LHA platform. Second, the ontological phenotype representation can be used to generate search queries that are executed by the LHA Data Portal. PhenoMan generates the queries in a novel domain-specific query language (SDQL), designed for data management systems based on the CDISC ODM standard, such as the LHA Data Portal. Our approach was successfully applied to represent phenotypes in the Leipzig Health Atlas, with the possibility to execute corresponding queries within the LHA Data Portal.


Subject(s)
Metadata , Software , Archives , Phenotype , Reproducibility of Results
12.
J Biomed Semantics ; 11(1): 15, 2020 12 21.
Article in English | MEDLINE | ID: mdl-33349245

ABSTRACT

BACKGROUND: The successful determination and analysis of phenotypes plays a key role in the diagnostic process, the evaluation of risk factors, and the recruitment of participants for clinical and epidemiological studies. The development of computable phenotype algorithms to solve these tasks is challenging for several reasons. First, the term 'phenotype' has no generally agreed definition, and its meaning depends on context. Second, phenotypes are most commonly specified as non-computable descriptive documents. Recent attempts have shown that ontologies are a suitable way to handle phenotypes and can support clinical research and decision making. The SMITH Consortium is dedicated to rapidly establishing an integrative medical informatics framework that provides physicians with the best available data and knowledge and enables innovative use of healthcare data for research and treatment optimisation. In the context of the methodological use case 'phenotype pipeline' (PheP), a technology is being developed to automatically generate phenotype classifications and annotations based on electronic health records (EHR). A large series of phenotype algorithms will be implemented, which implies that for each algorithm a classification scheme and its input variables have to be defined. Furthermore, a phenotype engine is required to evaluate and execute the developed algorithms. RESULTS: In this article, we present a Core Ontology of Phenotypes (COP) and the software Phenotype Manager (PhenoMan), which implements a novel ontology-based method to model, classify, and compute phenotypes from already available data. Our solution includes an enhanced iterative reasoning process combining classification tasks with mathematical calculations at runtime. The ontology as well as the reasoning method were successfully evaluated with selected phenotypes, including the SOFA score, socio-economic status, body surface area, and the WHO BMI classification, based on available medical data. CONCLUSIONS: We developed a novel ontology-based method to model phenotypes of living beings with the aim of automated phenotype reasoning based on available data. This new approach can be used in clinical contexts, e.g., for supporting the diagnostic process, evaluating risk factors, and recruiting appropriate participants for clinical and epidemiological studies.
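One of the evaluated phenotypes, the WHO BMI classification, illustrates what such a computable phenotype combines: a mathematical calculation followed by a classification step. The plain-Python version below is a hand-written illustration of that combination, not the ontology-based COP/PhenoMan implementation.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Calculation step: body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def who_bmi_class(weight_kg: float, height_m: float) -> str:
    """Classification step: map the calculated BMI onto the WHO categories."""
    value = bmi(weight_kg, height_m)
    if value < 18.5:
        return "underweight"
    if value < 25.0:
        return "normal weight"
    if value < 30.0:
        return "overweight"
    return "obese"

print(who_bmi_class(85.0, 1.80))  # 85 / 1.80**2 ≈ 26.2 -> overweight
```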


Subject(s)
Biological Ontologies , Medical Informatics/statistics & numerical data , Medical Records Systems, Computerized/statistics & numerical data , Semantics , Algorithms , Humans , Medical Informatics/methods , Models, Theoretical , Phenotype
13.
Stud Health Technol Inform ; 270: 392-396, 2020 Jun 16.
Article in English | MEDLINE | ID: mdl-32570413

ABSTRACT

Despite their young age, the FAIR principles are recognised as important guidelines for research data management. Their generic design, however, leaves much room for interpretation in domain-specific application. Based on practical experience in the operation of a data repository, this article addresses problems in FAIR provisioning of medical data for research purposes in the use case of the Leipzig Health Atlas project and shows necessary future developments.


Subject(s)
Databases, Factual
14.
Stud Health Technol Inform ; 267: 164-172, 2019 Sep 03.
Article in English | MEDLINE | ID: mdl-31483269

ABSTRACT

Phenotyping means the determination of clinically relevant phenotypes, e.g., by classification or calculation based on EHR data. Within the German Medical Informatics Initiative, the SMITH consortium is working on the implementation of a phenotyping pipeline: to extract, structure, and normalize information from the EHR data of the hospital information systems of the participating sites; to automatically apply complex algorithms and models; and to enrich the data within the research data warehouses of the distributed data integration centers with the computed results. Here we present the overall picture and the essential building blocks and workflows of this concept.


Subject(s)
Electronic Health Records , Medical Informatics , Algorithms , Phenotype
15.
Stud Health Technol Inform ; 264: 1528-1529, 2019 Aug 21.
Article in English | MEDLINE | ID: mdl-31438215

ABSTRACT

Secondary use of electronic health record (EHR) data requires a detailed description of metadata, especially when data collection and data re-use are organizationally and technically far apart. This paper describes the concept of the SMITH consortium that includes conventions, processes, and tools for describing and managing metadata using common standards for semantic interoperability. It deals in particular with the chain of processing steps of data from existing information systems and provides an overview of the planned use of metadata, medical terminologies, and semantic services in the consortium.


Subject(s)
Electronic Health Records , Metadata , Data Collection , Germany , Information Systems , Semantics
16.
Stud Health Technol Inform ; 258: 211-215, 2019.
Article in English | MEDLINE | ID: mdl-30942748

ABSTRACT

Clinical Data Management Systems (CDMS) are used to electronically capture and store data about study participants in clinical trials. CDMS tend to be superior to paper-based data capture with respect to data quality, consistency, completeness, and traceability. Nevertheless, their use is not yet the default, especially in small-scale academic clinical studies. While clinical researchers can choose from many different software vendors, the extensive requirements of data management and the growing need for integration with other systems make it hard to select the most suitable one. Additionally, the financial and personnel costs of purchasing, deploying, and maintaining a commercial solution can easily exceed the limits of a research project's resources. The aim of this paper is to assess the suitability of the web-based open-source software OpenClinica for academic clinical trials with regard to the functionalities required in a large research network.


Subject(s)
Clinical Trials as Topic , Information Management , Software
17.
Stud Health Technol Inform ; 247: 426-430, 2018.
Article in English | MEDLINE | ID: mdl-29677996

ABSTRACT

Medical research is an active field in which a wide range of information is collected, collated, combined, and analyzed. Essential results are reported in publications, but the data (raw and processed), algorithms, and tools associated with a publication are often not available. The Leipzig Health Atlas (LHA) project has therefore set itself the goal of providing a repository for this purpose and enabling controlled access to it via a web-based portal. A data sharing concept in accordance with FAIR and OAIS is the basis for the processing and provision of data in the LHA, and an IT architecture has been designed for this purpose. The paper presents essential aspects of the data sharing concept, the IT architecture, and the methods used.


Subject(s)
Algorithms , Statistics as Topic , Humans , Research
18.
Stud Health Technol Inform ; 205: 1115-9, 2014.
Article in English | MEDLINE | ID: mdl-25160362

ABSTRACT

We present a working approach for a clinical research database as part of an archival information system. The CDISC ODM standard is the target format for clinical study data and research-relevant routine data, thus decoupling the data ingest process from the access layer. The presented research database is comprehensive in that it covers the annotation, mapping, and curation of poorly annotated source data. Besides a conventional relational database, the medical data warehouse i2b2 serves as the main frontend for end users. The system we developed is suitable for supporting patient recruitment, cohort identification, and quality assurance in daily routine.


Subject(s)
Biomedical Research/organization & administration , Data Curation/methods , Databases, Factual , Electronic Health Records/organization & administration , Health Information Systems/organization & administration , Information Storage and Retrieval/methods , Medical Record Linkage/methods , Database Management Systems , Germany