Evaluating Effects of Enhanced Autonomy Transparency on Trust, Dependence, and Human-Autonomy Team Performance over Time

Bibliographic Details
Published in: International Journal of Human-Computer Interaction, 2022-12, Vol. 38 (18-20), p. 1962-1971
Main Authors: Luo, Ruikun; Du, Na; Yang, X. Jessie
Format: Article
Language: English
Summary: As autonomous systems become more complex, humans may have difficulty deciphering autonomy-generated solutions and increasingly perceive autonomy as a mysterious black box. This lack of transparency contributes to a lack of trust in autonomy and to suboptimal team performance. In response to this concern, researchers have proposed various methods to enhance autonomy transparency and have evaluated how enhanced transparency affects people's trust and human-autonomy team performance. However, the majority of prior studies measured trust only at the end of the experiment and averaged behavioral and performance measures across all trials, overlooking the temporal dynamics of these variables. We therefore have little understanding of how autonomy transparency affects trust, dependence, and performance over time. The present study aims to fill this gap and examine such temporal dynamics. We develop a game, Treasure Hunter, in which a human uncovers a map to find treasures with help from an intelligent assistant. The intelligent assistant recommends where the human should go next. The rationale behind each recommendation can be conveyed in a display that explicitly lists the option space (i.e., all possible actions) and the reason why a particular action is the most appropriate in the given context. Results from a human-in-the-loop experiment with 28 participants indicate that conveying the intelligent assistant's decision-making rationale via the display significantly increases participants' trust and makes it better calibrated over time. Using the display also leads to higher acceptance of the intelligent assistant's recommendations.
ISSN: 1044-7318
1532-7590
DOI: 10.1080/10447318.2022.2097602