Towards Causal Models for Adversary Distractions
Automated adversary emulation is becoming an indispensable tool of network security operators in testing and evaluating their cyber defenses. At the same time, it has exposed how quickly adversaries can propagate through the network. While research has greatly progressed on quality decoy generation to fool human adversaries, we may need different strategies to slow computer agents. In this paper, we show that decoy generation can slow an automated agent's decision process, but that the degree to which it is inhibited is greatly dependent on the types of objects used. This points to the need to explicitly evaluate decoy generation and placement strategies against fast moving, automated adversaries.
Published in: | arXiv.org 2021-04 |
---|---|
Main Authors: | Alford, Ron; Applebaum, Andy |
Format: | Article |
Language: | English |
Subjects: | Automation |
Online Access: | Get full text |
container_title | arXiv.org |
---|---|
creator | Alford, Ron; Applebaum, Andy |
description | Automated adversary emulation is becoming an indispensable tool of network security operators in testing and evaluating their cyber defenses. At the same time, it has exposed how quickly adversaries can propagate through the network. While research has greatly progressed on quality decoy generation to fool human adversaries, we may need different strategies to slow computer agents. In this paper, we show that decoy generation can slow an automated agent's decision process, but that the degree to which it is inhibited is greatly dependent on the types of objects used. This points to the need to explicitly evaluate decoy generation and placement strategies against fast moving, automated adversaries. |
format | article |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2021-04 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2516596540 |
source | Publicly Available Content (ProQuest) |
subjects | Automation |
title | Towards Causal Models for Adversary Distractions |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-07T16%3A07%3A00IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Towards%20Causal%20Models%20for%20Adversary%20Distractions&rft.jtitle=arXiv.org&rft.au=Alford,%20Ron&rft.date=2021-04-21&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2516596540%3C/proquest%3E%3Cgrp_id%3Ecdi_FETCH-proquest_journals_25165965403%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_pqid=2516596540&rft_id=info:pmid/&rfr_iscdi=true |