A Call to Action on Assessing and Mitigating Bias in Artificial Intelligence Applications for Mental Health.
Timmons, Adela C; Duong, Jacqueline B; Simo Fiallo, Natalia; Lee, Theodore; Vo, Huong Phuc Quynh; Ahle, Matthew W; Comer, Jonathan S; Brewer, LaPrincess C; Frazier, Stacy L; Chaspari, Theodora.
Affiliations
  • Timmons AC; Department of Psychology, University of Texas at Austin Institute for Mental Health Research.
  • Duong JB; Colliga Apps Corporation, Austin, Texas.
  • Simo Fiallo N; Department of Psychology, University of Texas at Austin Institute for Mental Health Research.
  • Lee T; Department of Psychology, Florida International University.
  • Vo HPQ; Department of Psychology, Florida International University.
  • Ahle MW; Department of Computer Science & Engineering, Texas A&M University.
  • Comer JS; Colliga Apps Corporation, Austin, Texas.
  • Brewer LC; Department of Psychology, Florida International University.
  • Frazier SL; Department of Cardiovascular Medicine, Mayo Clinic College of Medicine.
  • Chaspari T; Center for Health Equity and Community Engagement Research, Mayo Clinic, Rochester, Minnesota.
Perspect Psychol Sci; 18(5): 1062-1096, 2023 Sep.
Article in English | MEDLINE | ID: mdl-36490369
Advances in computer science and data-analytic methods are driving a new era in mental health research and application. Artificial intelligence (AI) technologies hold the potential to enhance the assessment, diagnosis, and treatment of people experiencing mental health problems and to increase the reach and impact of mental health care. However, AI applications will not mitigate mental health disparities if they are built from historical data that reflect underlying social biases and inequities. AI models biased against sensitive classes could reinforce and even perpetuate existing inequities if these models create legacies that differentially impact who is diagnosed and treated, and how effectively. The current article reviews the health-equity implications of applying AI to mental health problems, outlines state-of-the-art methods for assessing and mitigating algorithmic bias, and presents a call to action to guide the development of fair-aware AI in psychological science.
Full text: Yes | Collection: 01-international | Database: MEDLINE | Main subject: Artificial Intelligence / Mental Health | Study type: Prognostic study | Limits: Humans | Language: English | Journal: Perspect Psychol Sci | Year: 2023 | Document type: Article | Country of publication: United States