
Towards Human-centered Explainable AI: A Survey of User Studies for Model Explanations

Explainable AI (XAI) is widely viewed as a sine qua non for ever-expanding AI research. A better understanding of the needs of XAI users, as well as human-centered evaluations of explainable models, are both a necessity and a challenge. In this paper, we explore how HCI and AI researchers conduct user studies in XAI applications based on a systematic literature review. After identifying and thoroughly analyzing 97 core papers with human-based XAI evaluations over the past five years, we categorize them along the measured characteristics of explanatory methods, namely trust, understanding, usability, and human-AI collaboration performance. Our research shows that XAI is spreading more rapidly in certain application domains, such as recommender systems, than in others, but that user evaluations are still rather sparse and incorporate hardly any insights from cognitive or social sciences. Based on a comprehensive discussion of best practices, i.e., common models, design choices, and measures in user studies, we propose practical guidelines on designing and conducting user studies for XAI researchers and practitioners. Lastly, this survey also highlights several open research directions, particularly linking psychological science and human-centered XAI.

Bibliographic Details
Published in: arXiv.org, 2023-12-19 (Cornell University Library, Ithaca)
Main Authors: Yao Rong; Leemann, Tobias; Thai-trang Nguyen; Fiedler, Lisa; Qian, Peizhu; Unhelkar, Vaibhav; Seidel, Tina; Kasneci, Gjergji; Kasneci, Enkelejda
Format: Article
Language: English
Subjects: Best practice; Human performance; Literature reviews; Recommender systems
Online Access: Get full text
identifier EISSN: 2331-8422
source Publicly Available Content Database
subjects Best practice
Human performance
Literature reviews
Recommender systems