Measuring Implicit Bias in ICU Notes Using Word-Embedding Neural Network Models.
Cobert, Julien; Mills, Hunter; Lee, Albert; Gologorskaya, Oksana; Espejo, Edie; Jeon, Sun Young; Boscardin, W John; Heintz, Timothy A; Kennedy, Christopher J; Ashana, Deepshikha C; Chapman, Allyson Cook; Raghunathan, Karthik; Smith, Alex K; Lee, Sei J.
Affiliations
  • Cobert J; Anesthesia Service, San Francisco VA Health Care System, University of California, San Francisco, San Francisco, CA; Department of Anesthesia and Perioperative Care, University of California, San Francisco, San Francisco, CA. Electronic address: Julien.cobert@ucsf.edu.
  • Mills H; Bakar Computational Health Sciences Institute, University of California, San Francisco, San Francisco, CA.
  • Lee A; Bakar Computational Health Sciences Institute, University of California, San Francisco, San Francisco, CA.
  • Gologorskaya O; Bakar Computational Health Sciences Institute, University of California, San Francisco, San Francisco, CA.
  • Espejo E; Division of Geriatrics, University of California, San Francisco, San Francisco, CA.
  • Jeon SY; Division of Geriatrics, University of California, San Francisco, San Francisco, CA.
  • Boscardin WJ; Division of Geriatrics, University of California, San Francisco, San Francisco, CA.
  • Heintz TA; School of Medicine, University of California, San Diego, San Diego, CA.
  • Kennedy CJ; Department of Psychiatry, Harvard Medical School, Boston, MA; Center for Precision Psychiatry, Massachusetts General Hospital, Boston, MA.
  • Ashana DC; Division of Pulmonary, Allergy, and Critical Care Medicine, Duke University, Durham, NC.
  • Chapman AC; Department of Medicine, the Division of Critical Care and Palliative Medicine, University of California, San Francisco, San Francisco, CA; Department of Surgery, University of California, San Francisco, San Francisco, CA.
  • Raghunathan K; Department of Anesthesia and Perioperative Care, Duke University, Durham, NC.
  • Smith AK; Department of Geriatrics, Palliative, and Extended Care, Veterans Affairs Medical Center, University of California, San Francisco, San Francisco, CA; Division of Geriatrics, University of California, San Francisco, San Francisco, CA.
  • Lee SJ; Division of Geriatrics, University of California, San Francisco, San Francisco, CA.
Chest ; 165(6): 1481-1490, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38199323
ABSTRACT

BACKGROUND:

Language in nonmedical data sets is known to transmit human-like biases when used in natural language processing (NLP) algorithms, and these biases can reinforce disparities. It is unclear whether NLP algorithms trained on medical notes transmit similar biases.

RESEARCH QUESTION:

Can we identify implicit bias in clinical notes, and are those biases stable across time and geography?

STUDY DESIGN AND METHODS:

To determine whether different racial and ethnic descriptors are contextually similar to stigmatizing language in ICU notes, and whether these relationships are stable across time and geography, we identified notes on critically ill adults admitted to the University of California, San Francisco (UCSF), from 2012 through 2022 and to Beth Israel Deaconess Medical Center (BIDMC) from 2001 through 2012. Because word meaning is derived largely from context, we trained unsupervised word-embedding algorithms to quantitatively measure the contextual similarity (cosine similarity) between a racial or ethnic descriptor (eg, African-American) and a stigmatizing target word (eg, noncooperative) or group of words (violence, passivity, noncompliance, nonadherence).
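The following is a minimal sketch of the embedding-and-similarity approach described above, assuming gensim's Word2Vec as the implementation (the record does not specify the exact toolkit); the notes, descriptor, and target words are invented placeholders, not the study's actual corpus or lexicons.

    from gensim.models import Word2Vec
    from gensim.utils import simple_preprocess
    import numpy as np

    # Hypothetical stand-ins for deidentified, tokenized ICU notes.
    notes = [
        "patient is noncooperative and refusing medications",
        "african american male admitted to the icu with sepsis",
        "white female patient adherent with treatment plan",
    ]
    sentences = [simple_preprocess(n) for n in notes]

    # Train an unsupervised skip-gram word-embedding model on the notes.
    model = Word2Vec(sentences, vector_size=100, window=5,
                     min_count=1, sg=1, epochs=50)

    def cosine(a, b):
        # Cosine similarity between two embedding vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Contextual similarity between a descriptor and a single target word
    # (equivalently: model.wv.similarity("african", "noncooperative")).
    print(cosine(model.wv["african"], model.wv["noncooperative"]))

    # Similarity between a descriptor and the mean vector of a word group
    # (eg, a noncompliance lexicon), as used for the grouped comparisons.
    group_vec = np.mean([model.wv[w] for w in ["noncooperative", "refusing"]], axis=0)
    print(cosine(model.wv["african"], group_vec))

Averaging the group's vectors before computing cosine similarity, as in the last lines, gives a single contextual-similarity score between a descriptor and an entire category of stigmatizing words.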

RESULTS:

In UCSF notes, Black descriptors were less likely to be contextually similar to violent words than White descriptors. In contrast, in BIDMC notes, Black descriptors were more likely to be contextually similar to violent words than White descriptors. The UCSF data set also showed that Black descriptors were more contextually similar to passivity and noncompliance words than Latinx descriptors.

INTERPRETATION:

Implicit bias is identifiable in ICU notes. Racial and ethnic group descriptors carry different contextual relationships to stigmatizing words, depending on when and where notes were written. Because NLP models seem able to transmit implicit bias from training data, use of NLP algorithms in clinical prediction could reinforce disparities. Active debiasing strategies may be necessary to achieve algorithmic fairness when using language models in clinical research.
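One family of active debiasing strategies projects a bias direction out of the embedding space (hard debiasing, in the spirit of Bolukbasi et al., 2016). The sketch below is illustrative only, not the method evaluated in this study, and the descriptor words are placeholders.

    import numpy as np

    def bias_direction(emb_a, emb_b):
        # Unit vector separating two group descriptors in embedding space.
        d = emb_a - emb_b
        return d / np.linalg.norm(d)

    def neutralize(vec, direction):
        # Remove a word vector's component along the bias direction, so the
        # debiased vector no longer leans toward either descriptor on that axis.
        return vec - np.dot(vec, direction) * direction

    # Usage with a trained model (see the earlier sketch):
    # d = bias_direction(model.wv["black"], model.wv["white"])
    # debiased = neutralize(model.wv["noncooperative"], d)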

Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Natural Language Processing / Neural Networks, Computer / Intensive Care Units Study type: Prognostic_studies Limits: Female / Humans / Male Language: English Journal: Chest Year: 2024 Document type: Article Country of publication: United States
