
Augment CAPTCHA Security Using Adversarial Examples with Neural Style Transfer

To counter the rise of bots, many CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart) have been developed over the years. Automated attacks [1] employing powerful deep learning techniques, however, have achieved high success rates against common CAPTCHAs, both image-based and text-based. Encouragingly, adversarial examples, which introduce imperceptible noise, have recently been shown to be particularly effective against deep neural networks (DNNs). The authors strengthen the CAPTCHA security architecture by increasing the resilience of adversarial examples through their combination with neural style transfer. The findings demonstrate that the proposed approach considerably improves the security of ordinary CAPTCHAs.
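The mechanism the abstract relies on, imperceptible adversarial noise, can be illustrated with a minimal FGSM-style sketch. This is a hypothetical toy example: a linear scorer stands in for the attacking DNN, and the paper's neural-style-transfer component is not reproduced here.

```python
import numpy as np

# Toy FGSM-style perturbation (hypothetical illustration, not the paper's code).
# A linear scorer w @ x stands in for a DNN CAPTCHA solver; for such a model
# the gradient of the score with respect to the input x is simply w.

def fgsm_perturb(x, grad, eps):
    """Shift every pixel by eps in the sign direction of the gradient."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

rng = np.random.default_rng(0)
w = rng.normal(size=64)             # toy scorer weights
x = 0.5 + 0.01 * np.sign(w)         # toy "CAPTCHA image", pixels in [0, 1]

score = w @ x
adv = fgsm_perturb(x, -w, eps=0.1)  # step against the gradient to lower the score
adv_score = w @ adv

print("max pixel change:", np.max(np.abs(adv - x)))  # bounded by eps
print("score dropped:", bool(adv_score < score))     # → score dropped: True
```

In the paper's setting, perturbations of this kind are layered onto CAPTCHA images (and further blended via neural style transfer) so that automated solvers misread them while the images remain legible to humans.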


Bibliographic Details
Published in: IEEE Access, 2023-01, Vol. 11, p. 1-1
Main Authors: Dinh, Nghia; Tran-Trung, Kiet; Hoang, Vinh Truong
Format: Article
Language:English
doi_str_mv 10.1109/ACCESS.2023.3298442
format article
identifier ISSN: 2169-3536
ispartof IEEE access, 2023-01, Vol.11, p.1-1
issn 2169-3536
2169-3536
language eng
recordid cdi_proquest_journals_2851359166
source IEEE Xplore Open Access Journals
subjects adversarial examples
Adversarial machine learning
Artificial neural networks
CAPTCHA
CAPTCHAs
CNN
cognitive
Computation
Deep learning
DNN
Image recognition
Logic
Machine learning
Perturbation methods
Resilience
Security
Training
title Augment CAPTCHA Security Using Adversarial Examples with Neural Style Transfer