Results 1 - 12 of 12
1.
Stud Health Technol Inform ; 316: 237-241, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176718

ABSTRACT

As the reliance on clinical epidemiological information from human specimens grows, so does the need for effective clinical information management systems, particularly for biobanks. Our study focuses on enhancing the Korea Biobank Network's (KBN) system with data quality verification features. By comparing the quality of data collected before and after these enhancements, we observed a notable improvement in data accuracy, with the error rate decreasing from 0.1198% to 0.0492%. This advancement underscores the importance of robust data quality management in supporting high-quality clinical research and sets a precedent for the development of clinical information management systems.
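
As a rough illustration of the field-level error rate reported above, the following Python sketch computes the share of erroneous fields across a small set of biobank records. The validation rules and field names are hypothetical assumptions for illustration, not the KBN system's actual verification features.

# Hypothetical sketch: compute a field-level error rate for biobank records.
# The rules and field names below are illustrative, not the actual KBN checks.

ALLOWED_SPECIMEN_TYPES = {"serum", "plasma", "urine", "tissue", "DNA"}

def field_errors(record: dict) -> int:
    """Count fields in one record that fail a basic validation rule."""
    errors = 0
    if record.get("specimen_type") not in ALLOWED_SPECIMEN_TYPES:
        errors += 1
    if not (0 <= record.get("age_at_collection", -1) <= 120):
        errors += 1
    if record.get("collection_date") is None:
        errors += 1
    return errors

def error_rate(records: list[dict], fields_per_record: int = 3) -> float:
    """Erroneous fields divided by total checked fields, as a percentage."""
    total_errors = sum(field_errors(r) for r in records)
    total_fields = len(records) * fields_per_record
    return 100.0 * total_errors / total_fields if total_fields else 0.0

records = [
    {"specimen_type": "serum", "age_at_collection": 54, "collection_date": "2023-04-02"},
    {"specimen_type": "saliva", "age_at_collection": 47, "collection_date": None},
]
print(f"error rate: {error_rate(records):.4f}%")  # 33.3333% for this toy sample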


Subject(s)
Biological Specimen Banks , Data Accuracy , Republic of Korea , Humans
2.
Stud Health Technol Inform ; 310: 349-353, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38269823

ABSTRACT

The amount of research on the gathering and handling of healthcare data keeps growing. To support multi-center research, numerous institutions have sought to create a common data model (CDM). However, data quality issues continue to be a major obstacle in the development of CDM. To address these limitations, a data quality assessment system was created based on the representative data model OMOP CDM v5.3.1. Additionally, 2,433 advanced evaluation rules were created and incorporated into the system by mapping the rules of existing OMOP CDM quality assessment systems. The data quality of six hospitals was verified using the developed system and an overall error rate of 0.197% was confirmed. Finally, we proposed a plan for high-quality data generation and the evaluation of multi-center CDM quality.
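
For orientation only, the Python sketch below shows how rule-based checks of the kind described here can be run against an OMOP CDM table and rolled up into an overall error rate. The two rules and the toy person table are illustrative assumptions; the 2,433 rules developed in the study are not reproduced.

# Minimal sketch of rule-based quality checks against an OMOP CDM table.
# The rules below are illustrative examples, not the study's 2,433 rules.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (
    person_id INTEGER PRIMARY KEY,
    gender_concept_id INTEGER,
    year_of_birth INTEGER
);
INSERT INTO person VALUES (1, 8507, 1980), (2, NULL, 1975), (3, 8532, 2090);
""")

# Each rule counts rows that violate one constraint.
rules = {
    "gender_concept_id is not null":
        "SELECT COUNT(*) FROM person WHERE gender_concept_id IS NULL",
    "year_of_birth is plausible":
        "SELECT COUNT(*) FROM person WHERE year_of_birth NOT BETWEEN 1900 AND 2024",
}

total_rows = conn.execute("SELECT COUNT(*) FROM person").fetchone()[0]
violations = {name: conn.execute(sql).fetchone()[0] for name, sql in rules.items()}
checked = total_rows * len(rules)
error_rate = 100.0 * sum(violations.values()) / checked
print(violations)                        # {'gender_concept_id is not null': 1, ...}
print(f"overall error rate: {error_rate:.3f}%")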


Subject(s)
Data Accuracy , Data Management , Health Facilities , Hospitals
3.
Chinese Hospital Management ; (12): 72-74, 2024.
Article in Chinese | WPRIM (Western Pacific) | ID: wpr-1026614

ABSTRACT

Objective: To construct a list of quality scoring criteria for the sheet attached to the inpatient case summary page, in order to enable quantitative evaluation of its data quality. Methods: Using the Data Quality Management model of the American Health Information Management Association (AHIMA) as the evaluation framework, a list of data quality scoring criteria for the attached sheet was developed, and the Attached Sheet to the Summary Page of Inpatient Cases issued by the Hubei Provincial Health Commission was scored as a demonstration. Results: The average score of the 40 items in the Attached Sheet to the Summary Page of Inpatient Cases was 6.725 out of 10. The main quality defects were that no item specifies the person responsible for filling it in or the time limit for doing so; in addition, some items duplicate the summary page (35%) or are not of a summary nature (40%). Conclusion: There is significant room for improving the data quality of the attached sheet, especially by defining the person responsible and the time limit for filling in each item, ensuring that the items supplement and extend the summary page, and applying effective quality control methods to the items.

4.
Int J Med Inform ; 180: 105262, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37871445

ABSTRACT

OBJECTIVES: The medical field faces many challenges, including the high cost of data collection and processing, difficult standardization issues, and complex preprocessing techniques. An objective, systematic data quality management system is needed that ensures data reliability, mitigates risks caused by incorrect data, reduces data management costs, and increases data utilization. We introduce the concept of SMART data within a data quality management system and conduct a case study using real-world data on colorectal cancer. METHODS: We defined the data quality management system from three aspects (Construction - Operation - Utilization) based on the life cycle of medical data. On this basis, we proposed the "SMART DATA" concept and tested it on colorectal cancer data drawn from actual real-world practice. RESULTS: We define "SMART DATA" as systematized, high-quality data collected across the life cycle of data construction, operation, and utilization through quality control activities for medical data. In this study, we selected a scenario using data on colorectal cancer patients from a single medical institution provided by the Clinical Oncology Network (CONNECT). As SMART DATA, we curated 1,724 training data records and 27 Clinically Critical Set (CCS) records for colorectal cancer prediction. These datasets supported the development and fine-tuning of the colorectal cancer prediction model, and CCS cases were found to have unique characteristics and patterns warranting additional clinical review and consideration in the context of colorectal cancer prediction. CONCLUSIONS: This study constitutes primary research toward a medical data quality management system. It will help standardize medical data extraction and quality control methods and increase the utilization of medical data. Ultimately, we aim to provide an opportunity to develop a medical data quality management methodology and contribute to the establishment of a medical data quality management system.
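
The curation step described in the results could, in principle, look like the following Python sketch, which separates quality-controlled records into training data and a Clinically Critical Set (CCS) held out for clinical review. The flagging rule and field names are hypothetical assumptions, not the criteria actually applied to the CONNECT data.

# Illustrative sketch of one curation step implied by the abstract: separating
# quality-controlled records into training data and a Clinically Critical Set
# (CCS) reserved for additional clinical review. The flagging rule is hypothetical.

def is_clinically_critical(record: dict) -> bool:
    """Hypothetical rule: flag records with atypical patterns for review."""
    return record["stage"] == "IV" and record["age"] < 40

records = [
    {"patient_id": 1, "age": 35, "stage": "IV"},
    {"patient_id": 2, "age": 67, "stage": "II"},
    {"patient_id": 3, "age": 52, "stage": "III"},
]

ccs = [r for r in records if is_clinically_critical(r)]
training = [r for r in records if not is_clinically_critical(r)]
print(len(training), "training records;", len(ccs), "CCS record(s) for clinical review")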


Subject(s)
Colorectal Neoplasms , Data Accuracy , Humans , Reproducibility of Results , Data Management , Electronic Health Records , Colorectal Neoplasms/therapy
5.
Stud Health Technol Inform ; 302: 322-326, 2023 May 18.
Article in English | MEDLINE | ID: mdl-37203671

ABSTRACT

The amount of research on the gathering and handling of healthcare data keeps growing. To support multi-center research, numerous institutions have sought to create a common data model (CDM). However, data quality issues continue to be a major obstacle in the development of CDM. To address these limitations, a data quality assessment system was created based on the representative data model OMOP CDM v5.3.1. Additionally, 2,433 advanced evaluation rules were created and incorporated into the system by mapping the rules of existing OMOP CDM quality assessment systems. The data quality of six hospitals was verified using the developed system and an overall error rate of 0.197% was confirmed. Finally, we proposed a plan for high-quality data generation and the evaluation of multi-center CDM quality.


Subject(s)
Data Accuracy , Hospitals , Databases, Factual , Delivery of Health Care , Electronic Health Records
6.
JMIR Mhealth Uhealth ; 11: e35917, 2023 02 24.
Article in English | MEDLINE | ID: mdl-36826986

ABSTRACT

BACKGROUND: Patient-generated health data (PGHD) collected from innovative wearables are enabling health care to shift to outside clinical settings through remote patient monitoring (RPM) initiatives. However, PGHD are collected continuously under the patient's responsibility in rapidly changing circumstances during the patient's daily life. This poses risks to the quality of PGHD and, in turn, reduces their trustworthiness and fitness for use in clinical practice. OBJECTIVE: Using a sociotechnical health informatics lens, we developed a data quality management (DQM) guideline for PGHD captured from wearable devices used in RPM with the objective of investigating how DQM principles can be applied to ensure that PGHD can reliably inform clinical decision-making in RPM. METHODS: First, clinicians, health information specialists, and MedTech industry representatives with experience in RPM were interviewed to identify DQM challenges. Second, these stakeholder groups were joined by patient representatives in a workshop to co-design potential solutions to meet the expectations of all the stakeholders. Third, the findings, along with the literature and policy review results, were interpreted to construct a guideline. Finally, we validated the guideline through a Delphi survey of international health informatics and health information management experts. RESULTS: The guideline constructed in this study comprised 19 recommendations across 7 aspects of DQM. It explicitly addressed the needs of patients and clinicians but implied that there must be collaboration among all stakeholders to meet these needs. CONCLUSIONS: The increasing proliferation of PGHD from wearables in RPM requires a systematic approach to DQM so that these data can be reliably used in clinical care. The developed guideline is an important next step toward safe RPM.


Subject(s)
Medical Informatics , Wearable Electronic Devices , Humans , Medical Informatics/methods , Delivery of Health Care , Monitoring, Physiologic
7.
Sensors (Basel) ; 21(17)2021 Aug 30.
Article in English | MEDLINE | ID: mdl-34502723

ABSTRACT

IoT is now used in a growing number of application areas, and the importance of IoT data quality is widely recognized by practitioners and researchers. Requirements for data and data quality vary from application to application and from organization to organization, depending on context. Many methodologies and frameworks include techniques for defining, assessing, and improving data quality; however, given this diversity of requirements, choosing the appropriate technique for an IoT system can be challenging. This paper surveys data quality frameworks and methodologies for IoT data, and related international standards, comparing them in terms of data types, data quality definitions, dimensions and metrics, and the choice of assessment dimensions. The survey is intended to help narrow down the possible choices of IoT data quality management techniques.

8.
Health Inf Manag ; 50(1-2): 88-92, 2021.
Article in English | MEDLINE | ID: mdl-31805788

ABSTRACT

Data quality (DQ) is the degree to which a given dataset meets a user's requirements. In the primary healthcare setting, poor quality data can lead to poor patient care, negatively affect the validity and reproducibility of research results and limit the value that such data may have for public health surveillance. To extract reliable and useful information from a large quantity of data and to make more effective and informed decisions, data should be as clean and free of errors as possible. Moreover, because DQ is defined within the context of different user requirements that often change, DQ should be considered to be an emergent construct. As such, we cannot expect that a sufficient level of DQ will last forever. Therefore, the quality of clinical data should be constantly assessed and reassessed in an iterative fashion to ensure that appropriate levels of quality are sustained in an acceptable and transparent manner. This document is based on our hands-on experiences dealing with DQ improvement for the Canadian Primary Care Sentinel Surveillance Network database. The DQ dimensions that are discussed here are accuracy and precision, completeness and comprehensiveness, consistency, timeliness, uniqueness, data cleaning and coherence.
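
As a hedged illustration of how some of the listed dimensions can be quantified in practice, the following Python (pandas) sketch scores completeness, uniqueness, consistency, and timeliness on a toy primary-care extract. Field names and thresholds are assumptions for illustration, not those of the Canadian Primary Care Sentinel Surveillance Network database.

# Sketch of assessing a few of the dimensions listed above on a toy extract.
# Field names and thresholds are illustrative only.
import pandas as pd

df = pd.DataFrame({
    "patient_id":  [101, 102, 102, 104],
    "birth_date":  pd.to_datetime(["1970-01-01", None, "1985-06-30", "1990-03-12"]),
    "visit_date":  pd.to_datetime(["2020-05-01", "2021-07-15", "1984-01-01", "2022-02-02"]),
    "last_update": pd.to_datetime(["2023-01-01", "2019-01-01", "2023-01-01", "2023-01-01"]),
})

completeness = 1 - df["birth_date"].isna().mean()            # share of non-missing values
uniqueness   = 1 - df["patient_id"].duplicated().mean()      # share of non-duplicate ids
consistency  = (df["visit_date"] >= df["birth_date"]).mean() # visits not before birth
timeliness   = (df["last_update"] >= "2022-01-01").mean()    # updated since 2022-01-01

print(f"completeness={completeness:.2f} uniqueness={uniqueness:.2f} "
      f"consistency={consistency:.2f} timeliness={timeliness:.2f}")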


Subject(s)
Data Accuracy , Databases, Factual/standards , Primary Health Care , Sentinel Surveillance , Canada , Public Health Surveillance , Quality Improvement
9.
BMC Med Res Methodol ; 19(1): 98, 2019 05 10.
Article in English | MEDLINE | ID: mdl-31077148

ABSTRACT

BACKGROUND: A dataset is indispensable for answering the research questions of clinical research studies. Inaccurate data lead to ambiguous results, and removing errors increases costs. The aim of this Quality Improvement Project (QIP) was to improve Data Quality (DQ) by enhancing conformance and minimizing data entry errors. METHODS: The QIP was conducted in the Department of Biostatistics using historical datasets submitted for statistical data analysis from the department's knowledge base system. Forty-five datasets received for statistical data analysis were included at baseline. A 12-item checklist based on six DQ domains, (i) completeness, (ii) uniqueness, (iii) timeliness, (iv) accuracy, (v) validity, and (vi) consistency, was developed to assess DQ. The 12 items were: missing values, un-coded values, miscoded values, embedded values, implausible values, unformatted values, missing codebook, inconsistencies with the codebook, inaccurate format, unanalyzable data structure, missing outcome variables, and missing analytic variables. The outcome was the number of defects per dataset. The quality improvement DMAIC (Define, Measure, Analyze, Improve, Control) framework and sigma improvement tools were used. A pre-post design was implemented using various modes of intervention, and the pre-post change in defects (zero, one, two or more defects) was compared using the chi-square test. RESULTS: At baseline, of the forty-five datasets, six (13.3%) had zero defects, eight (17.8%) had one defect, and 31 (69%) had two or more defects. The association between the nature of data capture (single vs. multiple data points) and defective data was statistically significant (p = 0.008). Twenty-one datasets were received for statistical data analysis during the post-intervention period: seventeen (81%) had zero defects, two (9.5%) had one defect, and two (9.5%) had two or more defects. The proportion of datasets with zero defects increased from 13.3% to 81%, whereas the proportion with two or more defects decreased from 69% to 9.5% (p < 0.001). CONCLUSION: Clinical research study teams often have limited knowledge of data structuring. Given the need for good-quality data, we recommend training programs, consultation with data experts prior to data structuring, and use of electronic data capture methods.
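
The pre/post comparison can be reproduced approximately from the counts reported in the abstract (zero / one / two-or-more defects: 6, 8, 31 at baseline versus 17, 2, 2 post-intervention). The Python sketch below runs a chi-square test of independence on that 2x3 table; the authors' exact statistical settings may differ.

# Chi-square comparison of pre/post defect counts reported in the abstract.
# The authors' exact test configuration is not stated; this is one plausible run.
from scipy.stats import chi2_contingency

table = [
    [6, 8, 31],   # baseline: zero, one, two-or-more defects
    [17, 2, 2],   # post-intervention
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4g}")   # p well below 0.001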


Subject(s)
Data Accuracy , Datasets as Topic , Humans , Quality Control , Research Design
10.
Sensors (Basel) ; 18(9)2018 Sep 14.
Article in English | MEDLINE | ID: mdl-30223516

ABSTRACT

The Internet-of-Things (IoT) introduces several technical and managerial challenges when it comes to the use of data generated and exchanged by and between the various Smart, Connected Products (SCPs) that make up an IoT system (i.e., physical, intelligent devices with sensors and actuators). Given the volume of data and the heterogeneity of its exchange and consumption, it is paramount to ensure that data quality levels are maintained at every step of the data chain/lifecycle; otherwise, the system may fail to meet its expected function. While Data Quality (DQ) is a mature field, existing solutions are highly heterogeneous. We therefore propose that companies, developers, and vendors align their data quality management mechanisms and artefacts with well-known best practices and standards, such as those provided by ISO 8000-61. This standard enables a process approach to data quality management, overcoming the difficulties of isolated data quality activities. This paper introduces DAQUA-MASS, a methodology based on ISO 8000-61 for data quality management in sensor networks. The methodology consists of four steps following Deming's Plan-Do-Check-Act cycle.
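
The paper does not reduce its four PDCA-based steps to code; purely as an assumed illustration, the Python sketch below shows where automated checks might sit in one Check-Act pass over sensor readings. The thresholds and actions are hypothetical and are not part of DAQUA-MASS or ISO 8000-61.

# Minimal sketch of one Check-Act iteration for sensor data quality, in the
# spirit of a PDCA cycle; thresholds and actions are illustrative only.

PLAN = {"temperature_c": (-40.0, 85.0)}   # Plan: acceptable range per measurement

readings = [                              # Do: data collected from devices
    {"sensor_id": "s1", "temperature_c": 21.5},
    {"sensor_id": "s2", "temperature_c": 140.0},
    {"sensor_id": "s3", "temperature_c": None},
]

def check(reading: dict) -> list[str]:
    """Check: return the quality issues found for one reading."""
    issues = []
    value = reading.get("temperature_c")
    low, high = PLAN["temperature_c"]
    if value is None:
        issues.append("missing value")
    elif not (low <= value <= high):
        issues.append("out of planned range")
    return issues

for r in readings:                        # Act: flag devices needing attention
    issues = check(r)
    if issues:
        print(f"{r['sensor_id']}: {', '.join(issues)} -> schedule inspection")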

11.
Article in English | MEDLINE | ID: mdl-30040674

ABSTRACT

To be considered fit for use in clinical care, health data must meet quality requirements, and data generated in remote patient monitoring settings are no exception. Since patients share responsibility for ensuring the quality of these data as they flow from the patient to the clinical setting, managing their quality is of great importance. This study aims to systematically review the literature to understand the current state of quality management of patient-generated data in remote patient monitoring, particularly where wearable medical devices are used.


Subject(s)
Patient Generated Health Data , Wearable Electronic Devices , Humans , Monitoring, Physiologic
12.
Int J Life Cycle Assess ; 23(4): 759-772, 2018 Apr 01.
Article in English | MEDLINE | ID: mdl-29713113

ABSTRACT

PURPOSE: Despite growing access to data, questions of "best fit" data and the appropriate use of results in supporting decision making still plague the life cycle assessment (LCA) community. This discussion paper addresses revisions to data quality assessment captured in a new US Environmental Protection Agency guidance document, as well as additional recommendations on data quality creation, management, and use in LCA databases and studies. APPROACH: Existing data quality systems and approaches in LCA were reviewed and tested. The evaluations resulted in a revision of a commonly used pedigree matrix, for which flow- and process-level data quality indicators are described, scoring criteria are clarified, and further guidance on interpretation is given. DISCUSSION: Increased training for practitioners on data quality application and its limits is recommended, along with a multi-faceted approach to data quality assessment that uses the pedigree method alongside uncertainty analysis in result interpretation. A method of data quality score aggregation is proposed, and recommendations are made for using data quality scores in existing data so that they can better inform the interpretation of LCA results. Roles for data generators, data repositories, and data users in LCA data quality management are described. Guidance is provided on using data with data quality scores from other systems alongside data with scores from the new system. The new pedigree matrix and recommended data quality aggregation procedure can now be implemented in openLCA software. FUTURE WORK: Additional ways in which data quality assessment might be improved and expanded are described. Interoperability efforts in LCA data should focus on descriptors that enable users to score data quality rather than on translating existing scores. Also needed are data quality indicators for additional dimensions of LCA data and automation of data quality scoring through metadata extraction and comparison with the goal and scope.
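
The abstract proposes a data quality score aggregation method without detailing it here; the Python sketch below shows one common assumption, a contribution-weighted average of pedigree indicator scores across flows. This is an illustrative guess, not necessarily the procedure adopted in the guidance document or implemented in openLCA.

# Hedged sketch of aggregating pedigree-style data quality scores across flows.
# The weighting scheme (contribution-weighted average per indicator) is an
# assumption for illustration.

flows = [
    # (contribution to result, {indicator: score 1 (best) .. 5 (worst)})
    (0.70, {"reliability": 2, "temporal": 3, "geographical": 1}),
    (0.25, {"reliability": 4, "temporal": 2, "geographical": 3}),
    (0.05, {"reliability": 5, "temporal": 5, "geographical": 4}),
]

def aggregate(flows):
    """Contribution-weighted average score per data quality indicator."""
    total_weight = sum(w for w, _ in flows)
    indicators = flows[0][1].keys()
    return {
        ind: sum(w * scores[ind] for w, scores in flows) / total_weight
        for ind in indicators
    }

print(aggregate(flows))   # {'reliability': 2.65, 'temporal': 2.85, 'geographical': 1.65}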
