Results 1 - 2 of 2
1.
Stud Health Technol Inform ; 316: 565-569, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176805

ABSTRACT

This paper establishes requirements for assessing the usability of Explainable Artificial Intelligence (XAI) methods, focusing on non-AI experts like healthcare professionals. Through a synthesis of literature and empirical findings, it emphasizes achieving optimal cognitive load, task performance, and task time in XAI explanations. Key components include tailoring explanations to user expertise, integrating domain knowledge, and using non-propositional representations for comprehension. The paper highlights the critical role of relevance, accuracy, and truthfulness in fostering user trust. Practical guidelines are provided for designing transparent and user-friendly XAI explanations, especially in high-stakes contexts like healthcare. Overall, the paper's primary contribution lies in delineating clear requirements for effective XAI explanations, facilitating human-AI collaboration across diverse domains.


Subject(s)
Artificial Intelligence, Humans, Comprehension
2.
Stud Health Technol Inform ; 305: 32-35, 2023 Jun 29.
Article in English | MEDLINE | ID: mdl-37386950

ABSTRACT

The YOLO series of object detection algorithms, including YOLOv4 and YOLOv5, have shown superior performance in various medical diagnostic tasks, surpassing human ability in some cases. However, their black-box nature has limited their adoption in medical applications that require trust and explainability of model decisions. To address this issue, visual explanations for AI models, known as visual XAI, have been proposed in the form of heatmaps that highlight regions in the input that contributed most to a particular decision. Gradient-based approaches, such as Grad-CAM [1], and non-gradient-based approaches, such as Eigen-CAM [2], are applicable to YOLO models and do not require new layer implementation. This paper evaluates the performance of Grad-CAM and Eigen-CAM on the VinDr-CXR Chest X-ray Abnormalities Detection dataset [3] and discusses the limitations of these methods for explaining model decisions to data scientists.
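The Grad-CAM heatmaps mentioned in this abstract follow a simple recipe: average the gradients of the target score over each feature map's spatial dimensions to get per-channel weights, then take a ReLU-rectified weighted sum of the activation maps. A minimal NumPy sketch of that weighting step is below; it assumes the activations and gradients have already been extracted from a convolutional layer (e.g. via framework hooks), which is not shown here.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Compute a Grad-CAM heatmap from one conv layer's outputs.

    activations: array of shape (K, H, W) -- K feature maps.
    gradients:   array of shape (K, H, W) -- d(score)/d(activations).
    Returns a (H, W) heatmap normalized to [0, 1].
    """
    # Per-channel weights: global-average-pool the gradients (alpha_k).
    weights = gradients.mean(axis=(1, 2))           # shape (K,)
    # Weighted sum of activation maps over channels, then ReLU.
    cam = np.einsum("k,khw->hw", weights, activations)
    cam = np.maximum(cam, 0.0)
    # Scale to [0, 1] so the result can be rendered as a heatmap overlay.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example: 2 channels of 4x4 feature maps (hypothetical values).
A = np.stack([np.ones((4, 4)), np.zeros((4, 4))])
G = np.stack([np.full((4, 4), 0.5), np.full((4, 4), -1.0)])
heatmap = grad_cam(A, G)
```

For a YOLO detector, the "score" whose gradients feed this computation would typically be a chosen box's objectness or class confidence, which is one reason applying Grad-CAM to detection models requires more care than for plain classifiers.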


Subject(s)
Algorithms, Physicians, Humans, Reproducibility of Results, X-Rays, Trust