
How consumers respond to service failures caused by algorithmic mistakes: The role of algorithmic interpretability

Bibliographic Details
Published in: Journal of Business Research, 2024-04, Vol. 176, p. 114610, Article 114610
Main Author: Chen, Changdong
Format: Article
Language: English
Description
Summary: Despite the advancement of algorithm-based AI transforming business and society, there is growing evidence of service failures caused by algorithmic mistakes. Due to the “black box” nature of algorithmic decisions, consumers are frustrated not only by the mistakes themselves but also by the lack of interpretability of algorithmic decisions. Thus, the current research focuses on the impact of enhanced algorithmic interpretability through Explainable Artificial Intelligence (XAI) approaches (e.g., post-hoc explanations) on consumer reactions to service failures resulting from algorithmic mistakes. Across four experimental studies, the authors demonstrate that consumers react less negatively to service failures caused by algorithmic (rather than human) mistakes when algorithmic interpretability is enhanced. This effect is primarily due to reduced blame assigned to algorithms. Furthermore, they show that the beneficial effect disappears when algorithms are employed for an objective (vs. a subjective) task and when algorithms are at a weak (vs. strong) intelligence stage.
ISSN: 0148-2963; 1873-7978
DOI: 10.1016/j.jbusres.2024.114610