Towards User-Focused Research in Training Data Attribution for Human-Centered Explainable AI
Published in: | arXiv.org, 2024-09 |
---|---|
Main Authors: | Nguyen, Elisa; Bertram, Johannes; Kortukov, Evgenii; Song, Jean Y; Oh, Seong Joon |
Format: | Article |
Language: | English |
Subjects: | Explainable artificial intelligence |
Field | Value |
---|---|
container_title | arXiv.org |
creator | Nguyen, Elisa; Bertram, Johannes; Kortukov, Evgenii; Song, Jean Y; Oh, Seong Joon |
description | While Explainable AI (XAI) aims to make AI understandable and useful to humans, it has been criticised for relying too much on formalism and solutionism, focusing more on mathematical soundness than user needs. We propose an alternative to this bottom-up approach inspired by design thinking: the XAI research community should adopt a top-down, user-focused perspective to ensure user relevance. We illustrate this with a relatively young subfield of XAI, Training Data Attribution (TDA). With the surge in TDA research and growing competition, the field risks repeating the same patterns of solutionism. We conducted a needfinding study with a diverse group of AI practitioners to identify potential user needs related to TDA. Through interviews (N=10) and a systematic survey (N=31), we uncovered new TDA tasks that are currently largely overlooked. We invite the TDA and XAI communities to consider these novel tasks and improve the user relevance of their research outcomes. |
format | article |
publication_date | 2024-09-25 |
publisher | Cornell University Library, arXiv.org |
rights | 2024. This work is published under http://creativecommons.org/licenses/by/4.0/ (the "License"). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-09 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_3110115786 |
source | Publicly Available Content (ProQuest) |
subjects | Explainable artificial intelligence |
title | Towards User-Focused Research in Training Data Attribution for Human-Centered Explainable AI |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-15T23%3A34%3A25IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Towards%20User-Focused%20Research%20in%20Training%20Data%20Attribution%20for%20Human-Centered%20Explainable%20AI&rft.jtitle=arXiv.org&rft.au=Nguyen,%20Elisa&rft.date=2024-09-25&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E3110115786%3C/proquest%3E%3Cgrp_id%3Ecdi_FETCH-proquest_journals_31101157863%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_pqid=3110115786&rft_id=info:pmid/&rfr_iscdi=true |