
Explanation Strategies as an Empirical-Analytical Lens for Socio-Technical Contextualization of Machine Learning Interpretability

During a research project in which we developed a machine learning (ML) driven visualization system for non-ML experts, we reflected on interpretability research in ML, computer-supported collaborative work and human-computer interaction. We found that while there are manifold technical approaches, these often focus on ML experts and are evaluated in decontextualized empirical studies. We hypothesized that participatory design research may support the understanding of stakeholders' situated sense-making in our project, yet, found guidance regarding ML interpretability inexhaustive. Building on philosophy of technology, we formulated explanation strategies as an empirical-analytical lens explicating how technical explanations mediate the contextual preferences concerning people's interpretations. In this paper, we contribute a report of our proof-of-concept use of explanation strategies to analyze a co-design workshop with non-ML experts, methodological implications for participatory design research, design implications for explanations for non-ML experts and suggest further investigation of technological mediation theories in the ML interpretability space.

Bibliographic Details
Published in: Proceedings of the ACM on Human-Computer Interaction, 2022-01, Vol. 6 (GROUP), pp. 1-25, Article 39
Main Authors: Benjamin, Jesse Josua; Kinkeldey, Christoph; Müller-Birn, Claudia; Korjakow, Tim; Herbst, Eva-Maria
Format: Article
Language: English
Subjects: Computing methodologies; Human-centered computing; Human-centered computing / Human computer interaction (HCI)
Publisher: New York, NY, USA: ACM
DOI: 10.1145/3492858
ISSN: 2573-0142