Explainable Artificial Intelligence for Human Decision Support System in the Medical Domain
Published in: | Machine learning and knowledge extraction, 2021-09, Vol. 3 (3), p. 740-770 |
---|---|
Main Authors: | Knapič, Samanta; Malhi, Avleen; Saluja, Rohit; Främling, Kary |
Format: | Article |
Language: | English |
Subjects: | Explainable artificial intelligence; human decision support; medical image analyses; image recognition |
container_end_page | 770 |
container_issue | 3 |
container_start_page | 740 |
container_title | Machine learning and knowledge extraction |
container_volume | 3 |
creator | Knapič, Samanta; Malhi, Avleen; Saluja, Rohit; Främling, Kary |
description | In this paper, we present the potential of Explainable Artificial Intelligence methods for decision support in medical image analysis scenarios. Using three types of explainable methods applied to the same medical image data set, we aimed to improve the comprehensibility of the decisions provided by the Convolutional Neural Network (CNN). In vivo gastral images obtained by a video capsule endoscopy (VCE) were the subject of visual explanations, with the goal of increasing health professionals’ trust in black-box predictions. We implemented two post hoc interpretable machine learning methods, called Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), and an alternative explanation approach, the Contextual Importance and Utility (CIU) method. The produced explanations were assessed by human evaluation. We conducted three user studies based on explanations provided by LIME, SHAP and CIU. Users from different non-medical backgrounds carried out a series of tests in a web-based survey setting and stated their experience and understanding of the given explanations. Three user groups (n = 20, 20, 20) with three distinct forms of explanations were quantitatively analyzed. We found that, as hypothesized, the CIU-explainable method performed better than both LIME and SHAP methods in terms of improving support for human decision-making and being more transparent and thus understandable to users. Additionally, CIU outperformed LIME and SHAP by generating explanations more rapidly. Our findings suggest that there are notable differences in human decision-making between various explanation support settings. In line with that, we present three potential explainable methods that, with future improvements in implementation, can be generalized to different medical data sets and can provide effective decision support to medical experts. |
doi_str_mv | 10.3390/make3030037 |
format | article |
fulltext | fulltext |
identifier | ISSN: 2504-4990 |
ispartof | Machine learning and knowledge extraction, 2021-09, Vol.3 (3), p.740-770 |
issn | 2504-4990 (ISSN); 2504-4990 (EISSN) |
language | eng |
source | Publicly Available Content (ProQuest); Coronavirus Research Database |
subjects | Algorithms; Artificial intelligence; Artificial neural networks; Automation; computer and systems sciences (data- och systemvetenskap); Datasets; Decision analysis; Decision making; Decision support systems; Deep learning; Explainable artificial intelligence; human decision support; Image analysis; image recognition; In vivo methods and tests; Lime; Machine learning; medical image analyses; Medical imaging; Neural networks; Trust; User groups; Vision systems |
title | Explainable Artificial Intelligence for Human Decision Support System in the Medical Domain |
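The post hoc explanation workflow summarized in the abstract (a trained CNN image classifier whose individual predictions are explained with LIME, SHAP, or CIU) can be illustrated with a short sketch. The example below is a minimal, hypothetical LIME image explanation in Python; the tiny untrained CNN, the 224x224 RGB input size, and the two-class output are illustrative assumptions only, not details taken from the paper.

```python
# Minimal sketch of a post hoc LIME explanation for an image classifier,
# loosely following the workflow described in the abstract. The tiny
# untrained CNN, the 224x224 RGB input size, and the two-class output
# are illustrative assumptions only.
import numpy as np
from tensorflow import keras
from lime import lime_image
from skimage.segmentation import mark_boundaries

# Stand-in for the trained CNN whose predictions are to be explained.
model = keras.Sequential([
    keras.Input(shape=(224, 224, 3)),
    keras.layers.Conv2D(8, 3, activation="relu"),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(2, activation="softmax"),
])

def predict_fn(images):
    # LIME passes a batch of perturbed copies of the image and expects
    # one row of class probabilities per copy.
    return model.predict(np.asarray(images), verbose=0)

image = np.random.rand(224, 224, 3)  # stand-in for one in vivo VCE frame

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,
    predict_fn,
    top_labels=1,     # explain only the top predicted class
    num_samples=200,  # perturbed samples; more gives smoother explanations
)

# Overlay the superpixels that contributed most to the predicted class.
label = explanation.top_labels[0]
overlay, mask = explanation.get_image_and_mask(
    label, positive_only=True, num_features=5, hide_rest=False
)
highlighted = mark_boundaries(overlay, mask)
print("Explained class:", label, "- overlay shape:", highlighted.shape)
```

SHAP and CIU explanations are produced analogously, by querying the same trained model with perturbed or masked inputs; the user studies reported in the paper compared how well the resulting visual explanations supported human decision-making.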