
Explainable Predictive Process Monitoring: A User Evaluation

Bibliographic Details
Published in: arXiv.org, 2022-02
Main Authors: Rizzi, Williams; Comuzzi, Marco; Di Francescomarino, Chiara; Ghidini, Chiara; Lee, Suhwan; Maggi, Fabrizio Maria; Nolte, Alexander
Format: Article
Language: English
Subjects:
container_title arXiv.org
creator Rizzi, Williams
Comuzzi, Marco
Di Francescomarino, Chiara
Ghidini, Chiara
Lee, Suhwan
Maggi, Fabrizio Maria
Nolte, Alexander
description Explainability is motivated by the lack of transparency of black-box Machine Learning approaches, which do not foster trust and acceptance of Machine Learning algorithms. This also happens in the Predictive Process Monitoring field, where predictions, obtained by applying Machine Learning techniques, need to be explained to users so as to gain their trust and acceptance. In this work, we carry out a user evaluation of explanation approaches for Predictive Process Monitoring, aiming to investigate whether and how the explanations provided (i) are understandable; (ii) are useful in decision-making tasks; and (iii) can be further improved for process analysts with different levels of Machine Learning expertise. The results of the user evaluation show that, although explanation plots are overall understandable and useful for decision-making tasks for Business Process Management users -- with and without experience in Machine Learning -- differences exist in the comprehension and usage of different plots, as well as in the way users with different Machine Learning expertise understand and use them.
format article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2022-02
issn 2331-8422
language eng
recordid cdi_proquest_journals_2629520362
source Publicly Available Content (ProQuest)
subjects Acceptance
Algorithms
Business process management
Decision analysis
Decision making
Machine learning
Monitoring
Predictions
title Explainable Predictive Process Monitoring: A User Evaluation