1.
J Clin Microbiol ; 62(9): e0060524, 2024 Sep 11.
Article in English | MEDLINE | ID: mdl-39162437

ABSTRACT

Given the cost and unclear clinical impact of metagenomic next-generation sequencing (mNGS), laboratory stewardship may improve utilization. This retrospective observational study examines mNGS results from two academic medical centers employing different stewardship approaches. Eighty mNGS orders [54 cerebrospinal fluid (CSF) and 26 plasma] were identified from 2019 to 2021 at the University of Washington (UW), which requires director-level approval for mNGS orders, and the University of Utah (Utah), which does not restrict ordering. The impact of mNGS results and the relationship to traditional microbiology orders were evaluated. Nineteen percent (10/54) of CSF and 65% (17/26) of plasma studies detected at least one organism. Compared to CSF results, plasma results more frequently identified clinically significant organisms (31% vs 7%) and pathogens not detected by traditional methods (12% vs 0%). Antibiotic management was more frequently impacted by plasma versus CSF results (31% vs 4%). These outcome measures were not statistically different between study sites. The number and cumulative cost of traditional microbiology tests at UW were greater than Utah for CSF mNGS testing (UW: 46 tests, $6,237; Utah: 26 tests, $2,812; P < 0.05) but similar for plasma mNGS (UW: 31 tests, $3,975; Utah: 21 tests, $2,715; P = 0.14). mNGS testing accounted for 30%-50% of the total microbiology costs. Improving the diagnostic performance of mNGS by stewardship remains challenging due to low positivity rates and difficulties assessing clinical impact. From a fiscal perspective, stewardship efforts should focus on reducing testing in low-yield populations given the high costs of mNGS relative to overall microbiology testing expenditures. IMPORTANCE: Metagenomic next-generation sequencing (mNGS) stewardship practices remain poorly standardized. This study aims to provide actionable insights for institutions that seek to reduce the unnecessary usage of mNGS. Importantly, we highlight that clinical impact remains challenging to measure without standardized guidelines, and we provide an actual cost estimate of microbiology expenditures on individuals undergoing mNGS.


Subject(s)
Academic Medical Centers , High-Throughput Nucleotide Sequencing , Humans , Retrospective Studies , Metagenomics/methods , Antimicrobial Stewardship , Utah , Anti-Bacterial Agents/therapeutic use , Anti-Bacterial Agents/pharmacology
2.
BMC Prim Care ; 25(1): 270, 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39054449

ABSTRACT

BACKGROUND: Clinical laboratory testing, essential for medical diagnostics, represents a significant part of healthcare activity, influencing around 70% of critical clinical decisions. The automation of laboratory equipment has expanded test menus and increased efficiency to meet the growing demands for clinical testing. However, concerns about misutilization remain prevalent. In Belgium, primary care has seen a dramatic increase in lab test usage, but recent utilization data are lacking. METHODS: We conducted a comprehensive retrospective analysis of laboratory test utilization trends within the primary care settings of Belgium over a ten-year period, spanning from 2012 to 2021, incorporating a vast dataset of 189 million test records for almost 1.5 million persons. This was the first study to integrate the metadata from both the INTEGO & THIN databases, which are derived from the two major electronic medical record (EMR) systems used in primary care in Belgium, providing a comprehensive national perspective. This research provides crucial insights into patient-level patterns and test-level utilization and offers international perspectives through comparative analysis. RESULTS: We found a subtle annual increase in the average number of laboratory tests per patient (ranging from approximately 0.5-1%), indicating a deceleration in the growth of laboratory test ordering compared with previous decades. We also observed stability and consistency in the most frequently ordered laboratory tests across diverse patient populations and healthcare contexts over the years. CONCLUSIONS: These findings emphasize the need for continued efforts to optimize test utilization, focusing not only on tackling overutilization but also on enhancing the diagnostic relevance of tests ordered. The most frequently ordered tests should be prioritized in these initiatives to ensure their continued effectiveness in patient care. By consolidating extensive datasets, employing rigorous statistical analysis, and incorporating international perspectives, this study provides a solid foundation for evidence-based strategies aimed at refining laboratory test utilization practices. These strategies can potentially improve the quality of healthcare delivery while simultaneously addressing cost-effectiveness concerns in healthcare.


Subject(s)
Primary Health Care , Belgium , Humans , Primary Health Care/statistics & numerical data , Primary Health Care/trends , Retrospective Studies , Electronic Health Records/trends , Electronic Health Records/statistics & numerical data , Clinical Laboratory Techniques/trends , Clinical Laboratory Techniques/statistics & numerical data , Female , Male , Middle Aged , Adult , Aged
3.
Open Forum Infect Dis ; 11(7): ofae358, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39035574

ABSTRACT

Background: (1,3)-β-D-glucan (BDG) testing is one of the noninvasive tests to aid diagnosis of invasive fungal infections (IFIs). Study results have been heterogeneous, and diagnostic performance varies depending on the risk of IFI. Thus, it is important to select appropriate patients for BDG testing to prevent false-positive results. An algorithmic diagnostic stewardship intervention was instituted at a single academic medical center to improve BDG test utilization. Methods: The BDG test order in the electronic health record was replaced with a BDG test request order, which required approval to process the actual test order. The approval criteria were (1) an immunocompromised or intensive care unit patient and (2) receipt of empiric antifungal therapy or inability to undergo invasive diagnostic procedures. A retrospective observational study was conducted to evaluate the efficacy of the intervention by comparing the number of BDG tests performed between 1 year pre- and post-intervention. Safety was assessed by chart review of the patients for whom BDG test requests were deemed inappropriate and rejected. Results: The number of BDG tests performed per year decreased by 85%, from 156 in the pre-intervention period to 24 in the post-intervention period. The average monthly number of BDG tests performed was significantly lower in the post-intervention period (P = .002). There was no delay in IFI diagnosis or IFI-related deaths in the patients whose BDG test requests were rejected. The sustained effectiveness of the intervention was observed for 5 years. Conclusions: Institution of the diagnostic stewardship intervention successfully and safely improved BDG test utilization.
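The approval rule stated in this abstract maps directly onto a small piece of decision logic. The sketch below is a minimal, hypothetical illustration of those criteria; the function and field names are assumptions, not the institution's actual order-entry implementation.

```python
# Minimal sketch of the BDG approval criteria described in the abstract.
# Function and field names are hypothetical, not the actual middleware logic.

def approve_bdg_request(immunocompromised: bool,
                        in_icu: bool,
                        on_empiric_antifungals: bool,
                        invasive_diagnostics_feasible: bool) -> bool:
    """Return True if the BDG test request meets the stated approval criteria."""
    criterion_1 = immunocompromised or in_icu
    criterion_2 = on_empiric_antifungals or not invasive_diagnostics_feasible
    return criterion_1 and criterion_2

# Example: an ICU patient on empiric antifungal therapy would be approved.
print(approve_bdg_request(immunocompromised=False, in_icu=True,
                          on_empiric_antifungals=True,
                          invasive_diagnostics_feasible=True))  # True
```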

4.
Lab Med ; 55(5): 627-632, 2024 Sep 04.
Article in English | MEDLINE | ID: mdl-38619036

ABSTRACT

OBJECTIVE: To assess the appropriateness of laboratory testing intervals and antiphospholipid syndrome (APS) incidence. METHODS: Between January 2010 and August 2022, insurance claims data of patients with disease codes for other thrombophilia (D68.6) and APS (V253) were retrieved in South Korea. Patients who received antiphospholipid antibody tests more than twice were classified as having suspected APS. The interval between the first 2 antiphospholipid antibody tests was evaluated in the patients with suspected APS. Patients with suspected APS who received anticoagulants for >180 days were classified as having APS. RESULTS: Overall, 8656 patients were classified as having suspected APS. The testing interval for the first 2 tests in patients with suspected APS was <6 and <12 weeks in 11.1% and 20.6% of cases, respectively, in 2010, gradually increasing to 21.0% and 35.4%, respectively, in 2021. Subsequently, 4344 patients were classified as having APS, with 65.0% being female. Only 330 patients were diagnosed with APS in 2021, down from 436 in 2020. CONCLUSION: This study showed a gradual increase in patients receiving antiphospholipid antibody testing with an inappropriate short-term interval, underscoring the need for laboratory stewardship to ensure an appropriate interval for APS testing.
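The classification and interval logic described here can be sketched with a simple claims aggregation; the thresholds mirror the abstract's wording, but the data layout and column names below are hypothetical.

```python
# Sketch of the suspected-APS classification and testing-interval check.
# DataFrame layout and column names are hypothetical.
import pandas as pd

claims = pd.DataFrame({
    "patient_id": [1, 1, 2, 2, 2],
    "test_date": pd.to_datetime(
        ["2021-01-05", "2021-02-01", "2021-03-01", "2021-06-20", "2021-09-01"]),
})

# Suspected APS: patients with repeated antiphospholipid antibody testing.
counts = claims.groupby("patient_id")["test_date"].count()
suspected = counts[counts >= 2].index

# Interval (in weeks) between the first two tests, flagged against 6- and 12-week cut-offs.
first_two = (claims[claims["patient_id"].isin(suspected)]
             .sort_values("test_date")
             .groupby("patient_id")["test_date"]
             .apply(lambda s: (s.iloc[1] - s.iloc[0]).days / 7))
print(first_two.lt(6))   # interval shorter than 6 weeks
print(first_two.lt(12))  # interval shorter than 12 weeks
```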


Subject(s)
Antiphospholipid Syndrome , Humans , Antiphospholipid Syndrome/diagnosis , Antiphospholipid Syndrome/epidemiology , Antiphospholipid Syndrome/blood , Republic of Korea/epidemiology , Female , Male , Retrospective Studies , Adult , Middle Aged , Antiphospholipid Antibodies/blood , Aged , Young Adult , Time Factors , Adolescent , Incidence
5.
Clin Biochem ; 127-128: 110764, 2024 May.
Article in English | MEDLINE | ID: mdl-38636695

ABSTRACT

Quality in laboratory medicine encompasses multiple components related to total quality management, including quality control (QC), quality assurance (QA), quality indicators, and quality improvement (QI). Together, they contribute to minimizing errors (pre-analytical, analytical, or post-analytical) in clinical service delivery and improving process appropriateness and efficiency. In contrast to static quality benchmarks (QC, QA, quality indicators), the QI paradigm is a continuous approach to systemic process improvement for optimizing patient safety, timeliness, effectiveness, and efficiency. Healthcare institutions have placed emphasis on applying the QI framework to identify and improve healthcare delivery. Despite QI's increasing importance, there is a lack of guidance on preparing, executing, and sustaining QI initiatives in the field of laboratory medicine. This has presented a significant barrier for clinical laboratorians to participate in and lead QI initiatives. This three-part primer series will bridge this knowledge gap by providing a guide for clinical laboratories to implement a QI project that is successful and sustainable. In the first article, we introduce the steps needed to prepare a QI project with a focus on relevant methodology and tools related to problem identification, stakeholder engagement, root cause analysis (e.g., fishbone diagrams, Pareto charts, and process mapping), and SMART aim establishment. Throughout, we describe a clinical vignette of a real QI project completed at our institution focused on serum protein electrophoresis (SPEP) utilization. This primer series is the first of its kind in laboratory medicine and will serve as a useful resource for future engagement of clinical laboratory leaders in QI initiatives.
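One of the root-cause tools named in this primer, the Pareto chart, ranks contributing causes by frequency alongside their cumulative percentage. The sketch below shows how such a chart could be built; the categories and counts are entirely hypothetical and are not data from the SPEP project.

```python
# Minimal Pareto chart sketch (one of the root-cause tools named in the primer).
# Categories and counts are entirely hypothetical, for illustration only.
import matplotlib.pyplot as plt

causes = {"Duplicate orders": 42, "No clinical indication": 30,
          "Panel habit": 15, "Order-set default": 8, "Other": 5}
labels, counts = zip(*sorted(causes.items(), key=lambda kv: kv[1], reverse=True))
cumulative = [sum(counts[:i + 1]) / sum(counts) * 100 for i in range(len(counts))]

fig, ax1 = plt.subplots()
ax1.bar(labels, counts)                   # frequency of each contributing cause
ax2 = ax1.twinx()
ax2.plot(labels, cumulative, marker="o")  # cumulative percentage line
ax2.set_ylim(0, 110)
plt.tight_layout()
plt.show()
```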


Subject(s)
Clinical Laboratories , Quality Improvement , Humans , Quality Control , Quality Assurance, Health Care
6.
J Med Virol ; 96(3): e29505, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38465748

ABSTRACT

SARS-CoV-2 antibody levels may serve as a correlate for immunity and could inform optimal booster timing. The relationship between antibody levels and protection from infection was evaluated in vaccinated individuals from the US National Basketball Association who had antibody levels measured at a single time point from September 12, 2021, to December 31, 2021. Cox proportional hazards models were used to estimate the risk of infection within 90 days of serologic testing by antibody level (<250, 250-800, and >800 AU/mL), adjusting for age, time since last vaccine dose, and history of SARS-CoV-2 infection. Individuals were censored on the date of booster receipt. The analytic cohort comprised 2323 individuals and was 78.2% male, 68.1% aged ≤40 years, and 56.4% vaccinated (primary series) with the Pfizer-BioNTech mRNA vaccine. Among the 2248 (96.8%) individuals not yet boosted at antibody testing, 77% completed their primary vaccine series 4-6 months before testing, and the median antibody level was 293.5 AU/mL (interquartile range: 121.0-740.5). Those with levels <250 AU/mL (adjusted hazard ratio [HR]: 2.4; 95% confidence interval [CI]: 1.5-3.7) and 250-800 AU/mL (adjusted HR: 1.5; 95% CI: 0.98-2.4) had greater infection risk compared with those with levels >800 AU/mL. Antibody levels could inform individual COVID-19 risk and booster scheduling.
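As a rough sketch of the modelling approach described (Cox proportional hazards with follow-up censored at booster receipt), something like the following could be used. The file, column names, and covariate coding are assumptions for illustration, not the study's actual code or dataset.

```python
# Sketch of a Cox proportional hazards model of infection risk by antibody level,
# censoring follow-up at booster receipt. Variable names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("cohort.csv")  # hypothetical file: one row per individual

# Follow-up ends at infection, booster receipt, or 90 days, whichever comes first.
df["duration"] = df[["days_to_infection", "days_to_booster"]].min(axis=1).clip(upper=90)
df["event"] = (df["days_to_infection"] <= df["duration"]).astype(int)

# Encode antibody category and adjustment covariates as indicator variables.
X = pd.get_dummies(df[["duration", "event", "ab_category", "age_group",
                       "months_since_last_dose", "prior_infection"]],
                   columns=["ab_category", "age_group"], drop_first=True)

cph = CoxPHFitter()
cph.fit(X, duration_col="duration", event_col="event")
cph.print_summary()  # adjusted hazard ratios with 95% confidence intervals
```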


Subject(s)
Basketball , COVID-19 , Vaccines , Humans , Male , Female , COVID-19/prevention & control , SARS-CoV-2 , Antibodies, Viral
7.
Gynecol Oncol ; 184: 96-102, 2024 05.
Article in English | MEDLINE | ID: mdl-38301312

ABSTRACT

BACKGROUND: Little is known about cervical cancer screening strategy utilization (cytology alone, cytology plus high-risk human papillomavirus [HPV] testing [cotesting], primary HPV testing) and test results in the United States. METHODS: Data from the Centers for Disease Control and Prevention's National Breast and Cervical Cancer Early Detection Program were analyzed for 199,578 persons aged 21-65 years screened from 2019 to 2020. Screening test utilization and results were stratified by demographic characteristics and geographic region. Age-standardized pooled HPV test positivity and genotyping test positivity were estimated within cytology result categories. RESULTS: Primary HPV testing was performed in 592 persons (0.3%). Among the remaining 176,290 persons aged 30-65 years, cotesting was utilized in 72.1% (95% confidence interval [CI] 71.9-72.3%), and cytology alone was utilized in 27.9% (95% CI 27.7-28.1%). Utilization of cytology alone varied by geographic region, ranging from 18.3% (95% CI 17.4-19.1%) to 49.0% (95% CI 48.4-49.6%). HPV genotyping test utilization among those with positive pooled HPV test results was 33.9%. In persons aged ≥30 years, variations in age-adjusted test results by region were observed for pooled HPV-positive test results and for HPV genotyping-positive test results. CONCLUSIONS: Cervical cancer screening strategy utilization and test results vary substantially by geographic region within a national screening program. Variation in utilization may be due to regional differences in screening test availability or the preferences of healthcare systems, screened persons and/or clinicians. Test result variations may reflect differing risk factors for HPV infections by geographic region.


Subject(s)
Early Detection of Cancer , Papillomavirus Infections , Uterine Cervical Neoplasms , Humans , Female , Uterine Cervical Neoplasms/diagnosis , Uterine Cervical Neoplasms/virology , Uterine Cervical Neoplasms/epidemiology , Middle Aged , Adult , Early Detection of Cancer/statistics & numerical data , Early Detection of Cancer/methods , United States/epidemiology , Aged , Papillomavirus Infections/diagnosis , Papillomavirus Infections/virology , Papillomavirus Infections/epidemiology , Young Adult , Vaginal Smears/statistics & numerical data , Papillomaviridae/isolation & purification , Papillomaviridae/genetics
8.
Clin Chim Acta ; 552: 117686, 2024 Jan 01.
Article in English | MEDLINE | ID: mdl-38042461

ABSTRACT

BACKGROUND: During the COVID-19 pandemic, concerns arose about disparate access to health care and laboratory testing. There is limited information about the pandemic's impact on the frequency of diabetic laboratory testing across demographic subgroups (e.g., sex, age over 65 y, and race). METHODS: This retrospective study examined outpatient hemoglobin A1c (HbA1c) testing in a large academic medical center in Upstate New York between March 2019 and March 2021. Multivariate Poisson regression models were used to evaluate the pandemic's effects on HbA1c utilization. RESULTS: Over 190,000 HbA1c results from predominantly white (76.1%) and older (mean age, 60.6 y) outpatients were analyzed. Compared with the pre-pandemic period, the average number of HbA1c tests per patient during the COVID period showed a small though significant drop (1.3 to 1.2; p < 0.001) in aggregate and in outpatients, males, females, and seniors. The modest reduction was not significant by race except for white seniors (p < 0.001). However, the testing frequency remained within recommendations from the American Diabetes Association for monitoring prediabetic patients and patients with stable glycemic control. CONCLUSION: Given the propensity for healthcare disruptions to widen disparities, it is reassuring that we did not observe a worsening of disparities in rates of HbA1c testing during the COVID-19 pandemic.
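A hedged sketch of the modelling step described (per-patient HbA1c test counts regressed on pandemic period and demographic factors with Poisson regression) is shown below; the file and variable names are assumptions, not the study's analysis code.

```python
# Sketch of a Poisson regression of per-patient HbA1c test counts on pandemic
# period and demographic covariates. File and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("hba1c_counts.csv")  # hypothetical: one row per patient-period

model = smf.glm("n_hba1c_tests ~ covid_period + sex + senior + race",
                data=df, family=sm.families.Poisson()).fit()
print(model.summary())
print(np.exp(model.params).round(2))  # incidence rate ratios
```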


Subject(s)
COVID-19 , Outpatients , Male , Female , Humans , Middle Aged , Glycated Hemoglobin , Pandemics , Retrospective Studies
9.
J Gen Intern Med ; 2023 Nov 22.
Article in English | MEDLINE | ID: mdl-37993739

ABSTRACT

BACKGROUND: Guidelines recommend high-sensitivity cardiac troponin (hs-cTn) for diagnosis of myocardial infarction. Use of hs-cTn is increasing across the U.S., but questions remain regarding clinical and operational impact. Prior studies have had methodologic limitations and yielded conflicting results. OBJECTIVE: To evaluate the impact of transitioning from conventional cardiac troponin (cTn) to hs-cTn on test and resource utilization, operational efficiency, and patient safety. DESIGN: Retrospective cohort study in two New York City hospitals during the months before and after transition from conventional cTn to hs-cTn at Hospital 1. Hospital 2 served as a control. PARTICIPANTS: Consecutive emergency department (ED) patients with at least one cTn test resulted. INTERVENTION: Multifaceted hs-cTn intervention bundle, including a 0/2-h diagnostic algorithm for non-ST-elevation myocardial infarction, an educational bundle, enhancements to the electronic medical record, and nursing interventions to facilitate timed sample collection. MAIN MEASURES: Primary outcomes included serial cTn test utilization, probability of hospital admission, ED length of stay (LOS), and among discharged patients, probability of ED revisit within 72 h resulting in hospital admission. Multivariable regression models adjusted for age, sex, temporal trends, and interhospital differences. KEY RESULTS: The intervention was associated with increased use of serial cTn testing (adjusted risk difference: 48 percentage points, 95% CI: 45-50, P < 0.001) and ED LOS (adjusted geometric mean difference: 50 min, 95% CI: 50-51, P < 0.001). There was no significant association between the intervention and probability of admission (adjusted relative risk [aRR]: 0.99, 95% CI: 0.89-1.1, P = 0.81) or probability of ED revisit within 72 h resulting in admission (aRR: 1.1, 95% CI: 0.44-2.9, P = 0.81). CONCLUSIONS: Implementation of a hs-cTn intervention bundle was associated with an improvement in serial cTn testing, a neutral effect on probability of hospital admission, and a modest increase in ED LOS.

10.
Heliyon ; 9(5): e15334, 2023 May.
Article in English | MEDLINE | ID: mdl-37131426

ABSTRACT

Background and objectives: Overutilization of phlebotomy tubes at healthcare facilities leads to iatrogenic anemia, patient dissatisfaction, and increased operational costs. In this study, we analyzed phlebotomy tube usage data at Zhongshan Hospital, Fudan University, to identify potential inefficiencies in phlebotomy tube usage. Methods: Data on 984,078 patients with 1,408,175 orders and 4,622,349 phlebotomy tubes were collected during 2018-2021. Data from different patient types were compared. Furthermore, we assessed the data at the subspecialty and test levels to explore the factors influencing the increase in phlebotomy tube usage. Results: We observed an overall 8% increase in both the mean number of tubes used and blood loss per order over the past 4 years. The mean blood loss per day for intensive care unit (ICU) patients was 18.7 ml (maximum 121.6 ml), which was well under the 200 ml/day threshold. However, the maximum number of tubes used reached more than 30 tubes/day. Conclusions: The 8% increase in phlebotomy tube usage over 4 years should concern laboratory management, as the number of tests offered is expected to increase in the future. Importantly, the whole healthcare community needs to work together to solve this problem with more creative solutions.
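The daily blood-loss metric compared against the 200 ml/day threshold amounts to summing per-tube volumes over each patient-day. The aggregation below is a minimal sketch; the tube volumes, columns, and example rows are assumptions for illustration only.

```python
# Sketch of aggregating phlebotomy draws into blood loss per patient-day and
# flagging against a 200 ml/day threshold. Volumes and columns are hypothetical.
import pandas as pd

draws = pd.DataFrame({
    "patient_id": [1, 1, 1, 2],
    "draw_date": ["2021-03-01", "2021-03-01", "2021-03-02", "2021-03-01"],
    "tube_type": ["EDTA", "serum", "EDTA", "citrate"],
})
tube_volume_ml = {"EDTA": 2.0, "serum": 5.0, "citrate": 2.7}  # illustrative volumes

draws["volume_ml"] = draws["tube_type"].map(tube_volume_ml)
daily = (draws.groupby(["patient_id", "draw_date"])
         .agg(tubes=("tube_type", "count"), blood_loss_ml=("volume_ml", "sum")))
daily["over_200ml"] = daily["blood_loss_ml"] > 200
print(daily)
```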

11.
Acad Pathol ; 9(1): 100039, 2022.
Article in English | MEDLINE | ID: mdl-35983307

ABSTRACT

Appropriate laboratory test utilization is of growing interest in the face of rising healthcare costs and documented evidence of over- and under-utilization. Building from published literature, laboratory organizations have recently published guidelines for establishing laboratory utilization management programs. However, systematic reviews and meta-analyses have consistently struggled to define rigorous evidence-based best practice recommendations due to the paucity of published data or the heterogeneity of available data. We sought to gain information about utilization practices and programs currently in use and which factors contribute to their success by distributing a survey among laboratory professionals. The survey received seventy-four eligible respondents. We observed a wide range in the duration of laboratory utilization programs and the number of stewardship initiatives. In addition, there was great variety in the utilization practices used and the tests or processes targeted by programs. There was similarity in how initiatives are evaluated and who is involved with utilization programs. Finally, respondents often credited a multidisciplinary committee, support from leadership, and strong IT support/data access as important factors for their program's perceived success. Many of these factors agree with previously published literature.

12.
Clin Chem Lab Med ; 60(11): 1706-1718, 2022 10 26.
Article in English | MEDLINE | ID: mdl-35998662

ABSTRACT

Appropriateness in Laboratory Medicine has been the object of various types of interventions. From published experience, it is now clear that effective management of laboratory test demand requires evidence-based preventative strategies that stop inappropriate requests before they reach the laboratory. To guarantee appropriate laboratory test utilization, healthcare institutions should implement and optimize a computerized provider order entry (CPOE) system, exploiting the potential of electronic requesting as an "enabling factor" for reinforcing appropriateness and sustaining its effects over time. In our academic institution, over the last 15 years, our medical laboratory has implemented various interventions to improve test appropriateness, all directly or indirectly based on CPOE use. The following types of intervention were implemented: (1) applying specific recommendations supported by CPOE-based monitoring and continuous consultation with clinicians (tumour markers); (2) removing outdated tests and avoiding redundant duplications (cardiac markers, pancreatic enzymes); (3) order restraints to selected wards and gating policies (procalcitonin, B-type natriuretic peptide, homocysteine); (4) reflex testing (bilirubin fractions, free prostate-specific antigen, aminotransferases, magnesium in hypocalcemia); and (5) minimum retesting intervals (D-dimer, vitamin B12, C-reactive protein, γ-glutamyltranspeptidase). In this paper, we review these interventions and summarize their outcomes, primarily changes in total test volumes and cost savings, without neglecting patient safety. Our experience confirmed that laboratory professionals have an irreplaceable role as "stewards" in designing, implementing, evaluating, and maintaining interventions focused on improving test appropriateness.
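One of the listed interventions, the minimum retesting interval, amounts to rejecting a new order when the same analyte was resulted within a defined window. A minimal sketch is shown below; the interval values, function, and names are assumptions, not the institution's actual CPOE rules.

```python
# Sketch of a minimum retesting interval (MRI) check at order entry.
# Intervals and names are illustrative, not the institution's actual rules.
from datetime import datetime, timedelta
from typing import Optional

MIN_RETEST_INTERVAL = {   # hypothetical windows, in days
    "D-dimer": 1,
    "Vitamin B12": 180,
    "CRP": 1,
    "GGT": 30,
}

def allow_order(test: str, last_resulted: Optional[datetime], now: datetime) -> bool:
    """Reject the order if the same test was resulted within its MRI window."""
    interval = MIN_RETEST_INTERVAL.get(test)
    if interval is None or last_resulted is None:
        return True
    return now - last_resulted >= timedelta(days=interval)

# Example: a repeat vitamin B12 ordered 51 days after the last result is blocked.
print(allow_order("Vitamin B12", datetime(2024, 1, 10), datetime(2024, 3, 1)))  # False
```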


Subject(s)
Diagnostic Tests, Routine , Unnecessary Procedures , Academic Medical Centers , Bilirubin , C-Reactive Protein , Homocysteine , Humans , Magnesium , Natriuretic Peptide, Brain , Procalcitonin , Prostate-Specific Antigen , Transaminases , Vitamins
13.
Diagnostics (Basel) ; 12(7)2022 Jul 05.
Article in English | MEDLINE | ID: mdl-35885541

ABSTRACT

We evaluated the utilization and characteristics of thyroid function test (TFT) results, including serum thyroid stimulating hormone (TSH), free thyroxine (free T4), and total triiodothyronine (total T3) in Korean adults who visited local clinics and hospitals between 2018 and 2020. We obtained TFT results for 69,575 specimens from 47,685 adult Korean patients (4878 men and 42,807 women) with a mean age of 42.7 (standard deviation of 13.2) years. Among them, 23,581 specimens were tested for TSH only, 38,447 specimens were tested for TSH and free T4 (including 17,978 specimens without total T3), and 20,469 specimens were tested for all three, i.e., TSH, free T4, and total T3. The proportion of euthyroid was 80.0% among all 69,575 specimens, 71.2% among specimens with TSH and free T4, and 64.2% among specimens with all three TFTs. TFT patterns that were difficult to interpret and needed more clinical information were observed for about 6.9% of the 20,469 specimens with all three TFTs. Among the 20,469 specimens with all three TFTs, no specimen had increases in all three. Information on the prevalence of specimen results of TFTs would be helpful to expand our knowledge of patient population characteristics and to improve test utilization.

14.
J Multidiscip Healthc ; 15: 413-429, 2022.
Article in English | MEDLINE | ID: mdl-35264855

ABSTRACT

Background: The use of diagnostic laboratory tests is increasing worldwide. Improper test utilization (ITU) is a common problem for all healthcare systems, as it imposes substantial costs on health systems and compromises optimal patient care. Purpose: This small-scale survey aims to highlight current ITU practices among laboratories and physicians, investigate the actions of diagnostic laboratories against ITU, and identify the factors affecting physicians' test-ordering decisions. Methods: A cross-sectional study based on two different surveys was developed and distributed from March 2017 to April 2017 to laboratory supervisors and physicians (clinicians) at Hamad Medical Corporation (HMC), Qatar. Fourteen laboratory supervisors and eighty-nine physicians were surveyed about improper test utilization practices. The overall results are reported descriptively. Results: The overall proportions of improperly utilized tests detected by the laboratory supervisors were 50.0%, 35.7%, and 14.3% for overused, misused, and underused lab tests, respectively. Among the physicians, 91% used the electronic ordering template to select appropriate tests. Moreover, 78.7% of the physicians used clinical guidelines, while 73% did not use the laboratory handbook. Furthermore, 95.5% of the physicians preferred to receive feedback about inappropriate tests, while 51.1% were not receiving any and 40.9% received it only rarely. Finally, 67.4% of surveyed physicians were unaware of test costs, and 63.6% were willing to reduce their orders if a test was costly and unnecessary. Conclusion: Communication between physicians and laboratories was inadequate and unsystematic, contributing to ITU. Improvement strategies should focus on communication between clinical laboratories and physicians and should support physicians in ordering appropriate laboratory tests. This could be achieved through sound educational approaches, continuous feedback, ongoing audits, health information technology tools, laboratory practice guidelines, and demand management and testing algorithms.

15.
J Clin Med ; 11(3)2022 Feb 04.
Article in English | MEDLINE | ID: mdl-35160283

ABSTRACT

Limited data are available on test utilization and intraindividual changes in rheumatoid factor (RF) and anti-cyclic citrullinated peptide antibody (anti-CCP) in Korean patients that visit local clinics and hospitals. We retrospectively reviewed longitudinally measured RF and anti-CCP data in Korean patients to investigate the utilization and changes in test results through a laboratory information system. During the 10-year study period, 256,259 specimens were tested for RF. Among them, 32,567 (12.7%) specimens from 31,110 Korean adults had simultaneously measured anti-CCP results. Among them, 1110 (3.6%) subjects had follow-up test results. Among 351 patients with initial positive RF results, 290 (82.6%) had no qualitative change in RF from positive to negative values during follow-up. About 3.8% (29/759) of patients with initial negative results experienced qualitative changes in RF that were positive on follow-up. Among 182 patients with an anti-CCP-positive result at initial measurement, 174 (95.6%) had no qualitative change in anti-CCP from positive to negative or equivocal results during follow-up. About 0.5% (5/928) of patients with initial negative values experienced qualitative changes in anti-CCP to positive values on follow-up. The agreement of qualitative results between RF and anti-CCP was 80.8% (95% confidence interval 78.4-83.1%) at initial measurement and 80.6% (95% confidence interval 79.0-82.1%) overall. The results of this study can help inform utilization of RF and anti-CCP testing for Korean patients visiting local clinics and hospitals.

16.
Crit Rev Clin Lab Sci ; 59(4): 278-296, 2022 06.
Article in English | MEDLINE | ID: mdl-35076343

ABSTRACT

Preterm labor (PTL) is a serious issue in neonatal healthcare because the preterm birth (PTB) to which it leads is the leading cause of neonatal mortality, and PTL is the most common reason for antenatal hospitalizations. The PTB rate is about 11% globally, and the rate in the United States is similar. PTB poses a significant economic burden on the healthcare system. Early diagnosis of PTL is the key to reducing the PTB rate, neonatal mortality, and long-term neurological impairment in children. The diagnosis of PTL is usually based on clinical criteria, but the accuracy of the diagnosis is poor. To predict the risk of PTL more accurately, biomarker tests with variable clinical diagnostic performance have been developed, and some have been applied clinically. In this article, we analyze the performance characteristics of these biomarkers, such as sensitivity, specificity, positive predictive value, and negative predictive value, as well as the clinical utility of current biomarkers, so that clinical laboratorians and clinicians can better understand the limitations of these tests and utilize them wisely. We also summarize the current recommendations on clinical utilization of PTL biomarkers. Finally, we explore the prospects of omics-based novel biomarkers, which may improve the prediction of PTL in the future.
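The performance characteristics discussed translate into positive and negative predictive values through Bayes' rule, which is why the same biomarker can perform very differently in high- and low-prevalence populations. The helper below is a generic illustration; the numbers in the example call are illustrative, not values for any specific PTL biomarker.

```python
# Generic computation of PPV and NPV from sensitivity, specificity, and prevalence
# (Bayes' rule). Numbers in the example call are illustrative only.

def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    ppv = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
    npv = (specificity * (1 - prevalence)) / (
        specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)
    return ppv, npv

ppv, npv = predictive_values(sensitivity=0.90, specificity=0.85, prevalence=0.11)
print(f"PPV={ppv:.2f}, NPV={npv:.2f}")
```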


Subject(s)
Obstetric Labor, Premature , Premature Birth , Biomarkers , Child , Female , Humans , Infant, Newborn , Obstetric Labor, Premature/diagnosis , Predictive Value of Tests , Pregnancy , Premature Birth/diagnosis
17.
Am J Clin Pathol ; 157(4): 561-565, 2022 04 01.
Article in English | MEDLINE | ID: mdl-34617986

ABSTRACT

OBJECTIVES: A CBC with WBC differential is often ordered when a CBC alone would be sufficient for patient care. Performing unnecessary WBC differentials adds to costs in the laboratory. Our objective was to implement a laboratory middleware algorithm to cancel repeat, same-day WBC differentials to achieve lasting improvements in laboratory resource allocation. METHODS: Repeat same-day WBC differentials were first canceled only on intensive care unit samples; after a successful trial period, the algorithm was applied hospital-wide. We retrospectively reviewed CBC with differential orders from pre- and postimplementation periods to estimate the reduction in WBC differentials and potential cost savings. RESULTS: The algorithm led to a monthly WBC differential cancellation rate of 5.40%, for a total of 10,195 canceled WBC differentials during the cumulative postimplementation period (September 25, 2019, to December 31, 2020). Nearly all (99.94%) differentials remained canceled. Most patients had only one WBC differential canceled (range, 1-38). Savings were estimated at CAD $0.99 per canceled differential, along with 1,060 minutes (17.7 hours) of technologist time. CONCLUSIONS: A middleware algorithm to cancel repeat, same-day WBC differentials is a simple and sustainable way to achieve lasting improvements in laboratory utilization.
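The rule described, canceling a WBC differential when another differential has already been resulted for the same patient on the same calendar day, can be sketched as a small stateful check. The function and field names below are hypothetical, not the published middleware configuration.

```python
# Sketch of the middleware rule: cancel a repeat WBC differential if one has
# already been resulted for the same patient on the same calendar day.
# Function and field names are hypothetical.
from datetime import date
from typing import Set, Tuple

resulted_today: Set[Tuple[str, date]] = set()  # (patient_id, collection_date) pairs already resulted

def handle_diff_order(patient_id: str, collection_date: date) -> str:
    """Return the action the middleware should take for an incoming differential."""
    key = (patient_id, collection_date)
    if key in resulted_today:
        return "cancel"    # repeat same-day differential: report the CBC only
    resulted_today.add(key)
    return "perform"

print(handle_diff_order("MRN001", date(2020, 5, 4)))  # perform
print(handle_diff_order("MRN001", date(2020, 5, 4)))  # cancel
```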


Subject(s)
Intensive Care Units , Laboratories , Cost Savings , Humans , Intensive Care Units/economics , Laboratories/economics , Leukocyte Count/economics , Retrospective Studies , Tertiary Care Centers/economics
18.
Clin Chem ; 68(3): 402-412, 2022 03 04.
Article in English | MEDLINE | ID: mdl-34871351

ABSTRACT

BACKGROUND: As technology enables new and increasingly complex laboratory tests, test utilization presents a growing challenge for healthcare systems. Clinical decision support (CDS) refers to digital tools that present providers with clinically relevant information and recommendations, which have been shown to improve test utilization. Nevertheless, individual CDS applications often fail, and implementation remains challenging. CONTENT: We review common classes of CDS tools grounded in examples from the literature as well as our own institutional experience. In addition, we present a practical framework and specific recommendations for effective CDS implementation. SUMMARY: CDS encompasses a rich set of tools that have the potential to drive significant improvements in laboratory testing, especially with respect to test utilization. Deploying CDS effectively requires thoughtful design and careful maintenance, and structured processes focused on quality improvement and change management play an important role in achieving these goals.


Subject(s)
Decision Support Systems, Clinical , Delivery of Health Care , Humans , Palliative Care
19.
J Clin Lab Anal ; 35(12): e23913, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34689365

ABSTRACT

BACKGROUND: Test utilization for the diagnosis of celiac disease may affect the observed prevalence and incidence of the disease in Korea. We aimed to investigate the test utilization of serological biomarkers for celiac disease in Korea. METHODS: We retrospectively investigated the test utilization of tissue transglutaminase IgA, gliadin IgA and IgG, and endomysial IgA antibody (Ab) assays between January 2011 and June 2020. RESULTS: During the nine-year-and-six-month study period, a total of 307,322,606 clinical tests were requested from different clinical settings, such as local clinics, hospitals, university hospitals, and tertiary medical centers. Among them, only 58 tissue transglutaminase IgA, 22 gliadin IgA, 12 gliadin IgG, and 16 endomysial IgA Ab tests were performed, on 79 Korean patients. Of these patients, one had a positive tissue transglutaminase IgA Ab result (1.3%). CONCLUSION: The low prevalence and incidence of celiac disease in Korea may be due to underutilization of diagnostic assays.


Subject(s)
Celiac Disease/diagnosis , Serologic Tests/statistics & numerical data , Celiac Disease/epidemiology , Diagnostic Tests, Routine , Gliadin/immunology , Humans , Immunoglobulin A/blood , Immunoglobulin G/blood , Protein Glutamine Gamma Glutamyltransferase 2/immunology , Republic of Korea/epidemiology , Retrospective Studies