A Call to Action on Assessing and Mitigating Bias in Artificial Intelligence Applications for Mental Health.
Perspect Psychol Sci. 2023 Sep; 18(5): 1062-1096. Article in English. MEDLINE ID: mdl-36490369.
Advances in computer science and data-analytic methods are driving a new era in mental health research and application. Artificial intelligence (AI) technologies hold the potential to enhance the assessment, diagnosis, and treatment of people experiencing mental health problems and to increase the reach and impact of mental health care. However, AI applications will not mitigate mental health disparities if they are built from historical data that reflect underlying social biases and inequities. AI models biased against sensitive classes could reinforce and even perpetuate existing inequities if these models create legacies that differentially impact who is diagnosed and treated, and how effectively. The current article reviews the health-equity implications of applying AI to mental health problems, outlines state-of-the-art methods for assessing and mitigating algorithmic bias, and presents a call to action to guide the development of fairness-aware AI in psychological science.
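The bias-assessment methods the article reviews can be illustrated with a small group-fairness check. The sketch below is not drawn from the article itself; it is a minimal, hypothetical Python example (all names and data are invented) that compares selection rates and true-positive rates between two demographic groups to report demographic-parity and equal-opportunity gaps, two commonly used measures of algorithmic bias.

import numpy as np

def group_rates(y_true, y_pred, group):
    # Selection rate P(yhat=1 | group) and true-positive rate P(yhat=1 | y=1, group).
    # Note: returns NaN if the subgroup has no members or no positive labels.
    sel_rate = y_pred[group].mean()
    tpr = y_pred[group & (y_true == 1)].mean()
    return sel_rate, tpr

def bias_report(y_true, y_pred, sensitive):
    # Demographic-parity and equal-opportunity gaps between two groups (coded 0 and 1).
    y_true, y_pred, sensitive = map(np.asarray, (y_true, y_pred, sensitive))
    group_a, group_b = (sensitive == 0), (sensitive == 1)
    sel_a, tpr_a = group_rates(y_true, y_pred, group_a)
    sel_b, tpr_b = group_rates(y_true, y_pred, group_b)
    return {
        "demographic_parity_diff": abs(sel_a - sel_b),
        "equal_opportunity_diff": abs(tpr_a - tpr_b),
    }

# Hypothetical binary diagnostic labels, model predictions, and a sensitive attribute.
y_true    = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred    = [1, 0, 1, 0, 0, 1, 1, 0]
sensitive = [0, 0, 0, 0, 1, 1, 1, 1]
print(bias_report(y_true, y_pred, sensitive))

In practice, gaps such as these would be computed on held-out data and interpreted alongside the mitigation strategies the article discusses, rather than used as a standalone pass/fail test.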
Full text: 1
Collection: 01-internacional
Database: MEDLINE
Main subject: Artificial Intelligence / Mental Health
Study type: Prognostic_studies
Limit: Humans
Language: English
Journal: Perspect Psychol Sci
Year: 2023
Document type: Article
Country of publication: United States