Towards reconciling usability and usefulness of policy explanations for sequential decision-making systems.
Tambwekar, Pradyumna; Gombolay, Matthew.
Affiliation
  • Tambwekar P; School of Interactive Computing, Georgia Institute of Technology, Atlanta, GA, United States.
  • Gombolay M; School of Interactive Computing, Georgia Institute of Technology, Atlanta, GA, United States.
Front Robot AI ; 11: 1375490, 2024.
Article in English | MEDLINE | ID: mdl-39104806
ABSTRACT
Safety-critical domains often employ autonomous agents that follow a sequential decision-making setup, in which the agent follows a policy dictating the appropriate action at each step. AI practitioners often employ reinforcement learning algorithms to allow an agent to find the best policy. However, sequential systems often lack clear and immediate signs of wrong actions, with consequences visible only in hindsight, making it difficult for humans to understand system failure. In reinforcement learning, this is referred to as the credit assignment problem. To effectively collaborate with an autonomous system, particularly in a safety-critical setting, explanations should enable a user to better understand the policy of the agent and predict system behavior, so that users are cognizant of potential failures and these failures can be diagnosed and mitigated. However, humans are diverse and have innate biases or preferences which may enhance or impair the utility of a policy explanation of a sequential agent. Therefore, in this paper, we designed and conducted a human-subjects experiment to identify the factors which influence the perceived usability and the objective usefulness of policy explanations for reinforcement learning agents in a sequential setting. Our study had two factors: the modality of policy explanation shown to the user (Tree, Text, Modified Text, and Programs) and the "first impression" of the agent, i.e., whether the user saw the agent succeed or fail in the introductory calibration video. Our findings characterize a preference-performance tradeoff: participants perceived language-based policy explanations to be significantly more usable, yet participants were better able to objectively predict the agent's behavior when provided an explanation in the form of a decision tree. Our results demonstrate that user-specific factors, such as computer science experience (p < 0.05), and situational factors, such as watching the agent crash (p < 0.05), can significantly impact the perception and usefulness of the explanation. This research provides key insights for alleviating prevalent issues regarding inappropriate compliance and reliance, which are exponentially more detrimental in safety-critical settings, and offers a path forward for XAI developers in future work on policy explanations.
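For readers unfamiliar with the "Tree" explanation modality referenced above, the following is a minimal illustrative sketch, not taken from the paper, of one common way such explanations are produced: distilling an agent's policy into a shallow decision tree whose rules can be shown to users. The trained_policy function, state features, and action labels here are all hypothetical assumptions for illustration.

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained RL policy: maps a 2-D state
# (distance to obstacle, speed) to an action (0 = cruise, 1 = brake).
def trained_policy(state):
    distance, speed = state
    return int(distance < 20 and speed > 5)

# Sample states the agent might visit and record the policy's actions.
states = rng.uniform(low=[0, 0], high=[100, 30], size=(1000, 2))
actions = np.array([trained_policy(s) for s in states])

# Fit a shallow tree that mimics the policy; its printed rules act as
# the human-readable policy explanation.
tree = DecisionTreeClassifier(max_depth=3).fit(states, actions)
print(export_text(tree, feature_names=["distance", "speed"]))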
Keywords

Full text: 1 Collection: 01-international Database: MEDLINE Language: En Journal: Front Robot AI Year: 2024 Document type: Article Country of affiliation: United States Country of publication: Switzerland
