Explanation as a Social Practice: Toward a Conceptual Framework for the Social Design of AI Systems

Bibliographic Details
Published in: IEEE Transactions on Cognitive and Developmental Systems, 2021-09, Vol. 13 (3), pp. 717-728
Main Authors: Rohlfing, Katharina J., Cimiano, Philipp, Scharlau, Ingrid, Matzner, Tobias, Buhl, Heike M., Buschmeier, Hendrik, Esposito, Elena, Grimminger, Angela, Hammer, Barbara, Haeb-Umbach, Reinhold, Horwath, Ilona, Hüllermeier, Eyke, Kern, Friederike, Kopp, Stefan, Thommes, Kirsten, Ngonga Ngomo, Axel-Cyrille, Schulte, Carsten, Wachsmuth, Henning, Wagner, Petra, Wrede, Britta
Format: Article
Language: English
Description
Summary: The recent surge of interest in explainability in artificial intelligence (XAI) is propelled not only by technological advancements in machine learning but also by regulatory initiatives to foster transparency in algorithmic decision making. In this article, we revise the current concept of explainability and identify three limitations: the passive explainee, the narrow view on the social process, and the undifferentiated assessment of the explainee's understanding. In order to overcome these limitations, we present explanation as a social practice in which explainer and explainee co-construct understanding on the microlevel. We view this co-construction on the microlevel as embedded in a macrolevel that yields expectations concerning, e.g., social roles or partner models: typically, the role of the explainer is to provide an explanation and to adapt it to the current level of the explainee's understanding; the explainee, in turn, is expected to provide cues that direct the explainer. Building on the view of explanation as a social practice, we present a conceptual framework that aims to guide future research in XAI. The framework relies on the key concepts of monitoring and scaffolding to capture the development of the interaction. We relate our conceptual framework and our new perspective on explaining to transparency and autonomy as objectives considered for XAI.
ISSN: 2379-8920 (print)
EISSN: 2379-8939 (electronic)
DOI: 10.1109/TCDS.2020.3044366