Algorithm appreciation: People prefer algorithmic to human judgment

Bibliographic Details
Published in: Organizational Behavior and Human Decision Processes, 2019-03, Vol. 151, p. 90–103
Main Authors: Logg, Jennifer M., Minson, Julia A., Moore, Don A.
Format: Article
Language: English
Description
Summary:
• We challenge the prevailing idea that people prefer human to algorithmic judgment.
• In head-to-head comparisons, people use algorithmic advice more than human advice.
• We compare usage of advice using the continuous weighting of advice (WOA) measure.
• People appreciate algorithmic advice despite blindness to the algorithm's process.
• Algorithm appreciation holds even as people underweight advice more generally.

Even though computational algorithms often outperform human judgment, received wisdom suggests that people may be skeptical of relying on them (Dawes, 1979). Counter to this notion, results from six experiments show that lay people adhere more to advice when they think it comes from an algorithm than from a person. People showed this effect, which we call algorithm appreciation, when making numeric estimates about a visual stimulus (Experiment 1A) and forecasts about the popularity of songs and romantic attraction (Experiments 1B and 1C). Yet researchers predicted the opposite result (Experiment 1D). Algorithm appreciation persisted whether advice appeared jointly or separately (Experiment 2). However, algorithm appreciation waned when people chose between an algorithm's estimate and their own (versus an external advisor's; Experiment 3) and when they had expertise in forecasting (Experiment 4). Paradoxically, experienced professionals, who make forecasts on a regular basis, relied less on algorithmic advice than lay people did, which hurt their accuracy. These results shed light on the important question of when people rely on algorithmic advice over advice from people, and have implications for the use of "big data" and the algorithmic advice it generates.
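The weighting of advice (WOA) measure named in the highlights is, in the standard advice-taking literature, the fraction of the distance from a judge's initial estimate toward the advisor's estimate that the revised estimate covers. A minimal sketch of that standard formula follows; the function name and the convention of clipping to [0, 1] are illustrative assumptions, and the paper's exact operationalization may differ:

```python
def weight_of_advice(initial, advice, final):
    """Weight of Advice (WOA), as commonly defined in the advice-taking
    literature: (final - initial) / (advice - initial).

    0.0 means the advice was ignored; 1.0 means it was fully adopted.
    Values are clipped to [0, 1] here (one common convention; an
    assumption, not necessarily the paper's choice)."""
    if advice == initial:
        return None  # WOA is undefined when advice equals the initial estimate
    woa = (final - initial) / (advice - initial)
    return max(0.0, min(1.0, woa))

# Example: initial guess 100, advice 200, revised estimate 150 —
# the judge moved halfway toward the advice.
print(weight_of_advice(100, 200, 150))  # 0.5
```

Because WOA is continuous, it lets the authors compare how much weight participants put on the same numeric advice depending on whether it was labeled as coming from an algorithm or a person.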
ISSN: 0749-5978
eISSN: 1095-9920
DOI: 10.1016/j.obhdp.2018.12.005