Results 1 - 3 of 3
1.
Int J Soc Robot ; 14(5): 1323-1338, 2022.
Article in English | MEDLINE | ID: mdl-35432627

ABSTRACT

Autonomous agents (AA) will increasingly be deployed as teammates instead of tools. In many operational situations, flawless performance from AA cannot be guaranteed. This may lead to a breach in the human's trust, which can compromise collaboration. This highlights the importance of thinking about how to deal with error and trust violations when designing AA. The aim of this study was to explore the influence of uncertainty communication and apology on the development of trust in a Human-Agent Team (HAT) when a trust violation occurs. Two experimental studies following the same method were performed with (I) a civilian group and (II) a military group of participants. The online task environment resembled a house search in which the participant was accompanied and advised by an AA as their artificial team member. Halfway through the task, incorrect advice evoked a trust violation. Uncertainty communication was manipulated within-subjects, apology between-subjects. Our results showed that (a) communicating uncertainty led to higher levels of trust in both studies, (b) incorrect advice from the agent led to a less severe decline in trust when that advice included a measure of uncertainty, and (c) after a trust violation, trust recovered significantly more when the agent offered an apology. The latter two effects were found only in the civilian study. We conclude that tailored agent communication is a key factor in minimizing trust reduction in the face of agent failure, and thus in maintaining effective long-term relationships in HATs. The difference in findings between the participant groups emphasizes the importance of considering (organizational) culture when designing artificial team members.

2.
Sci Justice ; 58(4): 258-263, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29895457

ABSTRACT

In 2015 and 2016, the Central Unit of the Dutch National Police created 21 cartridge case comparison tests and submitted them as real cases to the Netherlands Forensic Institute (NFI), under the supervision of the University of Twente (UT). A total of 53 conclusions were drawn in these 21 tests. For 31 conclusions the underlying ground truth was "positive", in the sense that the conclusion addressed a cluster of cartridge cases fired from the same firearm. For 22 conclusions the ground truth was "negative", in the sense that the cartridge cases were fired from different firearms. In none of the conclusions, which resulted from examinations under casework conditions, was misleading evidence reported. All conclusions supported the hypothesis reflecting the ground truth. This article discusses the design and results of the tests in more detail.

3.
Ergonomics ; 43(9): 1371-89, 2000 Sep.
Article in English | MEDLINE | ID: mdl-11014759

ABSTRACT

Automation has changed the role of human operators from direct manual control to supervision. Their main task is to monitor whether system performance remains within pre-specified ranges; intervention is required only in unusual situations. One consequence is a loss of situation awareness, which significantly affects performance in abnormal, time-critical situations. The present study reports two experiments, both dealing with fault management in a maritime supervisory control task. The first experiment investigated to what extent false alarms would affect performance and diagnosis behaviour when multiple disturbances occurred. Thirty-nine students from maritime curricula diagnosed disturbances that could either be real or turn out to be false alarms. The presence of false alarms not only affected the rate at which the subsystems under control were sampled, but also increased problem-solving time. One reason for suboptimal performance in dealing with fault propagation was tunnel vision: participants tended to deal with disturbances sequentially. The second experiment investigated the effect of support on performance and diagnosis behaviour. Two types of support were distinguished: interactive support, which required participants to provide the symptom values, and automatic (non-interactive) support, which directly provided the correct action. Thirty students from maritime curricula diagnosed disturbances with the help of either the interactive or the non-interactive support tool. The results indicated that, even though both support tools gave the same advice on how to act, more incorrect actions were taken in the non-interactive support condition. Although no differences in performance were found after the tool had been removed, participants who were used to interactive support applied a more structured problem-solving strategy than participants used to the non-interactive support. Consequences for system design are discussed.


Subject(s)
Decision Making, Computer-Assisted; Man-Machine Systems; Problem Solving; Safety; Adult; Analysis of Variance; Equipment Failure; Female; Humans; Male; Reaction Time