Explainability pitfalls: Beyond dark patterns in explainable AI
Published in: Patterns (New York, N.Y.), 2024-06, Vol. 5 (6), Article 100971
Main Authors: ,
Format: Article
Language: English
Summary: To make explainable artificial intelligence (XAI) systems trustworthy, understanding harmful effects is important. In this paper, we address an important yet unarticulated type of negative effect in XAI. We introduce explainability pitfalls (EPs), unanticipated negative downstream effects from AI explanations manifesting even when there is no intention to manipulate users. EPs are different from dark patterns, which are intentionally deceptive practices. We articulate the concept of EPs by demarcating it from dark patterns and highlighting the challenges arising from uncertainties around pitfalls. We situate and operationalize the concept using a case study that showcases how, despite best intentions, unsuspected negative effects, such as unwarranted trust in numerical explanations, can emerge. We propose proactive and preventative strategies to address EPs at three interconnected levels: research, design, and organizational. We discuss design and societal implications around reframing AI adoption, recalibrating stakeholder empowerment, and resisting the “move fast and break things” mindset.
Explainability pitfalls (EPs) are negative effects of AI systems that arise without the intention to deceive end users. Defining EPs and differentiating them from dark patterns, which, in contrast, are negative features intentionally designed to manipulate and deceive users, sets the stage for designing and implementing strategies to safely adopt explainable AI technologies.
The authors define the concept of explainability pitfalls (EPs), include a case study that qualitatively analyzes users' perceptions of AI explanations, and present several strategies to address the adverse effects of EPs.
ISSN: 2666-3899
DOI: 10.1016/j.patter.2024.100971