
Fragility, Robustness and Antifragility in Deep Learning

We propose a systematic analysis of deep neural networks (DNNs) based on a signal processing technique for network parameter removal, in the form of synaptic filters, that identifies the fragility, robustness and antifragility characteristics of DNN parameters. Our proposed analysis investigates whether DNN performance is impacted negatively, invariantly, or positively on both clean and adversarially perturbed test datasets when the DNN undergoes synaptic filtering. We define three filtering scores for quantifying the fragility, robustness and antifragility characteristics of DNN parameters based on the performances for (i) the clean dataset, (ii) the adversarial dataset, and (iii) the difference in performances between the clean and adversarial datasets. We validate the proposed systematic analysis on the ResNet-18, ResNet-50, SqueezeNet-v1.1 and ShuffleNet V2 x1.0 network architectures for the MNIST, CIFAR10 and Tiny ImageNet datasets. For a given network architecture, the filtering scores identify network parameters whose characteristics are invariant across different datasets over learning epochs; conversely, for a given dataset, they identify the parameters whose characteristics are invariant across different network architectures. We show that our synaptic filtering method improves the test accuracy of ResNet and ShuffleNet models on adversarial datasets when only the robust and antifragile parameters are selectively retrained at any given epoch, demonstrating applications of the proposed strategy in improving model robustness.
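The record's abstract describes the method only at a high level, so the following PyTorch sketch is illustrative rather than a reconstruction of the authors' implementation: the magnitude-based filter, the helper names (`accuracy`, `synaptic_filter`, `filtering_scores`, `characterise`), the 10% filtering fraction, and the 0.01 tolerance are all assumptions made here. It shows one plausible way to compute the three filtering scores for a single layer from clean and adversarially perturbed test loaders.

```python
import copy
import torch

@torch.no_grad()
def accuracy(model, loader, device="cpu"):
    """Top-1 accuracy of `model` over an (inputs, labels) loader."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

@torch.no_grad()
def synaptic_filter(model, layer_name, fraction):
    """Return a copy of `model` with the smallest-magnitude `fraction` of
    weights in `layer_name` zeroed out. This is one plausible form of
    parameter removal; the paper's actual filter construction may differ."""
    filtered = copy.deepcopy(model)
    param = dict(filtered.named_parameters())[layer_name]
    k = int(fraction * param.numel())
    if k > 0:
        threshold = param.abs().flatten().kthvalue(k).values
        param.mul_((param.abs() > threshold).to(param.dtype))
    return filtered

def filtering_scores(model, layer_name, clean_loader, adv_loader, fraction=0.1):
    """The three scores from the abstract, read here as accuracy changes
    under filtering on (i) the clean set, (ii) the adversarial set, and
    (iii) the clean-adversarial gap."""
    base_clean = accuracy(model, clean_loader)
    base_adv = accuracy(model, adv_loader)
    filtered = synaptic_filter(model, layer_name, fraction)
    s_clean = accuracy(filtered, clean_loader) - base_clean
    s_adv = accuracy(filtered, adv_loader) - base_adv
    s_gap = s_clean - s_adv
    return s_clean, s_adv, s_gap

def characterise(score, tol=0.01):
    """Map a score onto the three categories; the tolerance is an assumption."""
    if score < -tol:
        return "fragile"       # parameter removal degrades performance
    if score > tol:
        return "antifragile"   # parameter removal improves performance
    return "robust"            # performance is invariant to removal
```

Read this way, the selective-retraining strategy from the abstract would score each layer and then fine-tune only the layers characterised as robust or antifragile, leaving the fragile ones frozen.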

Bibliographic Details
Published in: arXiv.org, 2023-12
Main Authors: Chandresh Pravin; Martino, Ivan; Nicosia, Giuseppe; Ojha, Varun
Format: Article
Language: English
Subjects: Artificial neural networks; Datasets; Deep learning; Filtration; Fragility; Invariants; Machine learning; Mathematical models; Parameter identification; Parameter robustness; Robustness
Online Access: https://doi.org/10.48550/arxiv.2312.09821
container_title arXiv.org
creator Chandresh Pravin; Martino, Ivan; Nicosia, Giuseppe; Ojha, Varun
doi_str_mv 10.48550/arxiv.2312.09821
format article
identifier EISSN: 2331-8422
ispartof arXiv.org, 2023-12
issn 2331-8422
language eng
source ProQuest - Publicly Available Content Database
subjects Artificial neural networks
Datasets
Deep learning
Filtration
Fragility
Invariants
Machine learning
Mathematical models
Parameter identification
Parameter robustness
Robustness
title Fragility, Robustness and Antifragility in Deep Learning