Secure Convolutional Neural Network-based Internet-of-Healthcare Applications
Convolutional neural networks (CNNs) have gained popularity for Internet-of-Healthcare (IoH) applications such as medical diagnostics. However, new research shows that adversarial attacks with slight, imperceptible changes can undermine deep neural network techniques in healthcare. This raises questions about the safety of deploying these IoH devices in clinical settings. In this paper, we review the techniques used to fight cyber-attacks. We then study the robustness of well-known CNN architectures belonging to the sequential, parallel, and residual families (LeNet5, MobileNetV1, VGG16, ResNet50, and InceptionV3) against fast gradient sign method (FGSM) and projected gradient descent (PGD) attacks, in the context of chest radiograph (X-ray) classification for the IoH application. Finally, we propose to improve the security of these CNN structures by studying standard and adversarial training. The results show that, among these models, smaller models with lower computational complexity are more secure against hostile threats than the larger models frequently used in IoH applications. In contrast, we reveal that when these networks are trained adversarially, they can outperform standard-trained networks. The experimental results demonstrate that the model performance breakpoint is γ = 0.3, with a maximum tolerated loss of accuracy of 2%.
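The FGSM attack named in the abstract can be sketched in a few lines. The sketch below is a minimal NumPy illustration, not the paper's implementation: the weights `w`, input `x`, and the toy logistic loss are assumptions chosen only so the input gradient has a closed form.

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """Fast gradient sign method: one step of size eps along the sign
    of the loss gradient with respect to the input."""
    return x + eps * np.sign(grad)

# Toy logistic "classifier" so the input gradient is analytic:
# loss = -log(sigmoid(y * w.x))  =>  d(loss)/dx = -y * sigmoid(-y * w.x) * w
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def input_grad(w, x, y):
    return -y * sigmoid(-y * np.dot(w, x)) * w

w = np.array([1.0, -2.0, 0.5])   # illustrative weights (hypothetical)
x = np.array([0.2, 0.1, -0.3])   # clean "input" (hypothetical)
y = 1.0                          # true label in {-1, +1}
eps = 0.3                        # matches the gamma = 0.3 breakpoint in the abstract

x_adv = fgsm_perturb(x, input_grad(w, x, y), eps)
# every coordinate moves by exactly +/- eps, so the change stays small and bounded
assert np.allclose(np.abs(x_adv - x), eps)
```

The key property is that the perturbation is bounded coordinate-wise by eps, which is why such changes can remain imperceptible in an X-ray image while still flipping the classifier's decision.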
Published in: | IEEE access 2023-01, Vol.11, p.1-1 |
---|---|
Main Authors: | Khriji, Lazhar; Bouaafia, Soulef; Messaoud, Seifeddine; Ammari, Ahmed Chiheb; Machhout, Mohsen |
Format: | Article |
Language: | English |
Subjects: | Adversarial attacks; Artificial neural networks; Biological system modeling; Computational modeling; Convolutional neural networks; COVID-19; Cybersecurity; Health care; Internet; Internet of Healthcare; Medical Data; Medical services; Neural networks; Security; Security and Privacy; Training |
container_end_page | 1 |
container_issue | |
container_start_page | 1 |
container_title | IEEE access |
container_volume | 11 |
creator | Khriji, Lazhar; Bouaafia, Soulef; Messaoud, Seifeddine; Ammari, Ahmed Chiheb; Machhout, Mohsen |
description | Convolutional neural networks (CNNs) have gained popularity for Internet-of-Healthcare (IoH) applications such as medical diagnostics. However, new research shows that adversarial attacks with slight imperceptible changes can undermine deep neural network techniques in healthcare. This raises questions regarding the safety of deploying these IoH devices in clinical situations. In this paper, we review the techniques used in fighting against cyber-attacks. Then, we propose to study the robustness of some well-known CNN architectures belonging to sequential, parallel, and residual families, such as LeNet5, MobileNetV1, VGG16, ResNet50, and InceptionV3, against fast gradient sign method (FGSM) and projected gradient descent (PGD) attacks, in the context of classification of chest radiographs (X-rays) based on the IoH application. Finally, we propose to improve the security of these CNN structures by studying standard and adversarial training. The results show that, among these models, smaller models with lower computational complexity are more secure against hostile threats than larger models that are frequently used in IoH applications. In contrast, we reveal that when these networks are trained adversarially, they can outperform standard-trained networks. The experimental results demonstrate that the model performance breakpoint is represented by γ = 0.3 with a maximum loss of accuracy tolerated at 2%. |
doi_str_mv | 10.1109/ACCESS.2023.3266586 |
format | article |
fulltext | fulltext |
identifier | ISSN: 2169-3536 |
ispartof | IEEE access, 2023-01, Vol.11, p.1-1 |
issn | 2169-3536 (ISSN and EISSN) |
language | eng |
recordid | cdi_ieee_primary_10100924 |
source | IEEE Xplore Open Access Journals |
subjects | Adversarial attacks; Artificial neural networks; Biological system modeling; Computational modeling; Convolutional neural networks; COVID-19; Cybersecurity; Health care; Internet; Internet of Healthcare; Medical Data; Medical services; Neural networks; Security; Security and Privacy; Training |
title | Secure Convolutional Neural Network-based Internet-of-Healthcare Applications |
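The PGD attack described in the record's abstract is essentially iterated FGSM with a projection step; adversarial training then feeds such perturbed inputs back into the training loop. The sketch below is a hypothetical NumPy illustration, not the paper's implementation: `pgd_attack`, the toy linear gradient `grad_fn`, and the weights `w` are illustrative assumptions.

```python
import numpy as np

def pgd_attack(x0, grad_fn, eps, alpha, steps):
    """Projected gradient descent attack: repeated signed gradient steps,
    each projected back into the L-infinity ball of radius eps around x0."""
    x = x0.copy()
    for _ in range(steps):
        x = x + alpha * np.sign(grad_fn(x))   # FGSM-style ascent step on the loss
        x = np.clip(x, x0 - eps, x0 + eps)    # projection keeps the change bounded
    return x

# Toy stand-in gradient (a fixed linear loss) just to exercise the loop.
w = np.array([0.5, -1.0, 2.0])               # illustrative weights (hypothetical)
grad_fn = lambda x: w                        # gradient of loss(x) = w . x
x0 = np.zeros(3)                             # clean "input"
x_adv = pgd_attack(x0, grad_fn, eps=0.3, alpha=0.1, steps=10)
# the perturbation never leaves the eps-ball, regardless of step count
assert np.max(np.abs(x_adv - x0)) <= 0.3 + 1e-12
```

Adversarial training, in the form the abstract studies, would generate such `x_adv` examples during each training step and minimize the loss on them instead of (or alongside) the clean inputs, which is why the adversarially trained networks in the study tolerate attacks up to the γ = 0.3 breakpoint.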