
Adversarial Coreset Selection for Efficient Robust Training


Bibliographic Details
Published in: International Journal of Computer Vision, 2023-12, Vol. 131 (12), pp. 3307-3331
Main Authors: Dolatabadi, Hadi M.; Erfani, Sarah M.; Leckie, Christopher
Format: Article
Language: English
Subjects: Analysis; Artificial Intelligence; Computer Imaging; Computer Science; Convergence; Image Processing and Computer Vision; Iterative methods; Neural networks; Pattern Recognition; Pattern Recognition and Graphics; Perturbation; Robustness; Training; Vision
DOI: 10.1007/s11263-023-01860-4
ISSN: 0920-5691
EISSN: 1573-1405
Source: ABI/INFORM Global (ProQuest); Springer Nature
Abstract: It has been shown that neural networks are vulnerable to adversarial attacks: adding well-crafted, imperceptible perturbations to their input can modify their output. Adversarial training is one of the most effective approaches to training robust models against such attacks. Unfortunately, this method is much slower than vanilla training of neural networks since it needs to construct adversarial examples for the entire training data at every iteration. By leveraging the theory of coreset selection, we show how selecting a small subset of training data provides a principled approach to reducing the time complexity of robust training. To this end, we first provide convergence guarantees for adversarial coreset selection. In particular, we show that the convergence bound is directly related to how well our coresets can approximate the gradient computed over the entire training data. Motivated by our theoretical analysis, we propose using this gradient approximation error as our adversarial coreset selection objective to reduce the training set size effectively. Once built, we run adversarial training over this subset of the training data. Unlike existing methods, our approach can be adapted to a wide variety of training objectives, including TRADES, ℓp-PGD, and Perceptual Adversarial Training. We conduct extensive experiments to demonstrate that our approach speeds up adversarial training by 2–3 times while experiencing a slight degradation in the clean and robust accuracy.
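Note: To make the cost the abstract refers to concrete, here is a minimal sketch of ℓ∞ PGD adversarial training in PyTorch. It is illustrative only, not the authors' code; the function names and hyperparameters (eps, alpha, steps) are assumptions chosen to match common practice. It shows why robust training is slow: every optimizer step is preceded by `steps` extra forward/backward passes to craft the adversarial batch.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Projected gradient descent within an l_inf ball of radius eps (sketch)."""
    # Start from a random point inside the perturbation ball.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()        # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project back into the ball
            x_adv = x_adv.clamp(0, 1)                  # keep pixel values valid
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One robust training step: attack the batch, then train on the result."""
    x_adv = pgd_attack(model, x, y)  # `steps` extra forward/backward passes
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    optimizer.step()
```

With steps=10, each training iteration does roughly 11 forward/backward passes instead of one, which is the overhead the coreset approach amortizes by attacking only a selected subset.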
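Note: The selection step itself can be pictured as gradient matching. Below is a hedged, CRAIG-style greedy sketch, an assumption rather than the paper's exact algorithm: given per-sample gradient approximations, it grows a subset whose weighted gradient sum tracks the full-data gradient, which is the approximation error the convergence bound in the abstract depends on.

```python
import numpy as np

def greedy_coreset(per_sample_grads, budget):
    """per_sample_grads: (n, d) array of per-sample gradient approximations.
    Returns (indices, weights) so that the weighted sum of selected gradients
    approximates the full-data gradient (matching-pursuit style sketch)."""
    target = per_sample_grads.sum(axis=0)         # full gradient to match
    residual = target.copy()
    selected = []
    for _ in range(budget):
        scores = per_sample_grads @ residual      # alignment with what is left
        if selected:
            scores[selected] = -np.inf            # never pick a sample twice
        i = int(np.argmax(scores))
        selected.append(i)
        g = per_sample_grads[i]
        w = (g @ residual) / (g @ g + 1e-12)      # matching-pursuit step size
        residual = residual - w * g
    # Refit all weights jointly by least squares over the chosen gradients.
    G = per_sample_grads[selected].T              # shape (d, k)
    weights, *_ = np.linalg.lstsq(G, target, rcond=None)
    return np.array(selected), weights
```

In practice, methods in this family approximate per-sample gradients with last-layer gradients to keep selection cheap; adversarial training then runs only on the selected indices, weighted accordingly, which is where the reported 2–3x speedup comes from.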