
An Empirical Evaluation of Adversarial Robustness under Transfer Learning

In this work, we evaluate adversarial robustness in the context of transfer learning from a source trained on CIFAR 100 to a target network trained on CIFAR 10. Specifically, we study the effects of using robust optimisation in the source and target networks. This allows us to identify transfer learning strategies under which adversarial defences are successfully retained, in addition to revealing potential vulnerabilities. We study the extent to which features learnt by a fast gradient sign method (FGSM) and its iterative alternative (PGD) can preserve their defence properties against black and white-box attacks under three different transfer learning strategies. We find that using PGD examples during training on the source task leads to more general robust features that are easier to transfer. Furthermore, under successful transfer, it achieves 5.2% more accuracy against white-box PGD attacks than suitable baselines. Overall, our empirical evaluations give insights on how well adversarial robustness under transfer learning can generalise.
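
The abstract contrasts the single-step fast gradient sign method (FGSM) with its iterative counterpart (PGD). As a rough, generic sketch of that distinction only, not code from the paper, the two attacks can be written in PyTorch roughly as follows; the model, eps, alpha and steps arguments are placeholder assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps):
    """Single-step FGSM: move x by eps in the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def pgd_attack(model, x, y, eps, alpha, steps):
    """PGD: repeated small FGSM-style steps, projected back into the eps-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        (grad,) = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # stay within the L-infinity ball around x
        x_adv = x_adv.clamp(0, 1)                 # stay a valid image
    return x_adv.detach()
```

L-infinity budgets around 8/255 are common for CIFAR-scale images, but this record does not state the perturbation budgets or step counts the paper actually uses.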

Bibliographic Details
Published in: arXiv.org, 2019-06
Main Authors: Davchev, Todor; Korres, Timos; Fotiadis, Stathi; Antonopoulos, Nick; Ramamoorthy, Subramanian
Format: Article
Language: English
Subjects: Learning; Optimization; Robustness
Online Access: Get full text
container_title arXiv.org
creator Davchev, Todor; Korres, Timos; Fotiadis, Stathi; Antonopoulos, Nick; Ramamoorthy, Subramanian
description In this work, we evaluate adversarial robustness in the context of transfer learning from a source trained on CIFAR 100 to a target network trained on CIFAR 10. Specifically, we study the effects of using robust optimisation in the source and target networks. This allows us to identify transfer learning strategies under which adversarial defences are successfully retained, in addition to revealing potential vulnerabilities. We study the extent to which features learnt by a fast gradient sign method (FGSM) and its iterative alternative (PGD) can preserve their defence properties against black and white-box attacks under three different transfer learning strategies. We find that using PGD examples during training on the source task leads to more general robust features that are easier to transfer. Furthermore, under successful transfer, it achieves 5.2% more accuracy against white-box PGD attacks than suitable baselines. Overall, our empirical evaluations give insights on how well adversarial robustness under transfer learning can generalise.
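
The description states that the source network is trained on PGD examples and then transferred to the target task under three strategies, but the strategies themselves are not spelled out in this record. The following is therefore only an illustrative sketch: it reuses the pgd_attack sketch above and assumes a torchvision-style model whose final layer is named fc; the default epsilon/step settings and the choice to freeze all feature layers are assumptions, not the paper's setup.

```python
import torch.nn as nn
import torch.nn.functional as F

def adversarial_train_step(model, optimizer, x, y, eps=8 / 255, alpha=2 / 255, steps=7):
    """One adversarial-training step on the source task: fit PGD examples rather than clean ones."""
    model.eval()                                      # keep batch-norm statistics fixed while crafting
    x_adv = pgd_attack(model, x, y, eps, alpha, steps)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

def as_feature_extractor(source_model, num_target_classes=10):
    """One possible transfer strategy: freeze the robust source features (trained on CIFAR-100)
    and retrain only a fresh linear head on the target task (CIFAR-10)."""
    for p in source_model.parameters():
        p.requires_grad = False
    # 'fc' is a placeholder attribute name for the final layer; real architectures differ.
    source_model.fc = nn.Linear(source_model.fc.in_features, num_target_classes)
    return source_model
```

Fine-tuning the whole network, or only its later blocks, are other common transfer strategies; the record does not say which variants the paper compares or which one yields the reported 5.2% white-box PGD gain.
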
format article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2019-06
issn 2331-8422
language eng
recordid cdi_proquest_journals_2221587962
source ProQuest - Publicly Available Content Database
subjects Learning; Optimization; Robustness
title An Empirical Evaluation of Adversarial Robustness under Transfer Learning