
Trusting an Algorithm Can Be a Tricky and Sticky Thing

Bibliographic Details
Published in: Decision (Washington, D.C.), 2024-07, Vol. 11 (3), p. 404-419
Main Authors: Liang, Garston, Li, Amy X., Newell, Ben R.
Format: Article
Language:English
Description
Summary: What information guides individuals to trust an algorithm? We examine this question across four experiments that consistently found explanations and relative performance information increased ratings of trust in an algorithm relative to a human expert. When participants learn of the algorithm's shortcomings, we find that trust can be broken but, importantly, restored. Strikingly, despite these increases and restorations of trust, few individuals changed their overall preferred agent for each commonplace task (e.g., driving a car), suggesting a conceptual ceiling to the extent to which people will trust algorithmic decision aids. Thus, initial preferences for an algorithm were "sticky" and largely resistant to change, despite large numeric shifts in trust ratings. We discuss theoretical and practical implications of this work for researching trust in algorithms and identify important contributions to understanding when information can improve people's willingness to trust decision aid algorithms.
ISSN: 2325-9965, 2325-9973
DOI: 10.1037/dec0000229