
SoK: Modeling Explainability in Security Analytics for Interpretability, Trustworthiness, and Usability

Interpretability, trustworthiness, and usability are key considerations in high-stakes security applications, especially when utilizing deep learning models. While these models are known for their high accuracy, they behave as black boxes in which identifying the features and factors that led to a classification or prediction is difficult. This can lead to uncertainty and distrust, especially when an incorrect prediction has severe consequences. Explanation methods therefore aim to provide insight into the inner workings of deep learning models. However, most explanation methods provide inconsistent explanations, have low fidelity, and are susceptible to adversarial manipulation, all of which can reduce model trustworthiness. This paper provides a comprehensive analysis of explanation methods and demonstrates their efficacy in three distinct security applications: anomaly detection using system logs, malware prediction, and detection of adversarial images. Our quantitative and qualitative analysis reveals serious limitations and concerns with state-of-the-art explanation methods in all three applications. We show that explanation methods for security applications require distinct characteristics, such as stability, fidelity, robustness, and usability, which we outline as the prerequisites for trustworthy explanation methods.
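As context for the abstract's claims, here is a minimal sketch of one explanation-method family the paper evaluates: gradient-based saliency, written in PyTorch. The toy classifier and random input below are illustrative assumptions, not code from the paper.

# Minimal sketch of a gradient-based saliency explanation (vanilla gradients).
# The toy classifier and random "image" are stand-ins for illustration only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
model.eval()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in input image
logits = model(x)
target = logits.argmax(dim=1).item()  # predicted class index

# Gradient of the predicted-class score w.r.t. the input: high-magnitude
# entries mark the input features the prediction is most sensitive to.
logits[0, target].backward()
saliency = x.grad.abs().squeeze()
print(saliency.shape)  # torch.Size([28, 28])

High-saliency pixels are read as the ones "responsible" for the prediction; the paper's concern is that such attributions can be unstable, low-fidelity, and adversarially manipulable.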

Bibliographic Details
Published in: arXiv.org, 2023-06
Main Authors: Bhusal, Dipkamal; Shin, Rosalyn; Shewale, Ajay Ashok; Veerabhadran, Monish Kumar Manikya; Clifford, Michael; Rampazzi, Sara; Rastogi, Nidhi
Format: Article
Language: English
Subjects: Anomalies; Decision making; Deep learning; Machine learning; Monitoring; Pipeline design; Privacy; Security
EISSN: 2331-8422
DOI: 10.48550/arxiv.2210.17376
Publisher: Ithaca: Cornell University Library, arXiv.org
Online Access: Get full text