Augment CAPTCHA Security Using Adversarial Examples with Neural Style Transfer
Published in: IEEE Access, 2023-01, Vol. 11, p. 1-1
Main Authors: , ,
Format: Article
Language: English
Summary: To counter the rise of bots, many CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart) have been developed over the years. Automated attacks [1] employing powerful deep learning techniques, however, have achieved high success rates against common CAPTCHAs, including image-based and text-based ones. Promisingly, adversarial examples, which introduce imperceptible noise into an input, have recently been shown to significantly degrade the performance of deep neural networks (DNNs). The authors strengthen the CAPTCHA security architecture by increasing the resilience of adversarial examples through combination with neural style transfer. The findings demonstrate that the proposed approach considerably improves the security of ordinary CAPTCHAs.
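The imperceptible-noise idea referenced in the summary can be illustrated with a minimal fast-gradient-sign-method (FGSM) sketch. This is not the paper's method; it is a toy, self-contained example on a hypothetical logistic classifier, showing how an epsilon-bounded perturbation in the direction of the loss gradient's sign shifts a model's prediction. All names and values below are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, epsilon=0.05):
    """Return x plus an epsilon-bounded adversarial perturbation (FGSM).

    For binary cross-entropy loss on p = sigmoid(w.x + b), the gradient
    with respect to the input is (p - y_true) * w; FGSM steps by
    epsilon * sign(gradient), keeping the perturbation imperceptibly small.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y_true) * w
    return x + epsilon * np.sign(grad_x)

# Hypothetical classifier and input (random, for illustration only).
rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.0
x = rng.normal(size=8)
y = 1.0  # assume the clean input belongs to class 1

x_adv = fgsm_perturb(x, w, b, y)
clean_score = sigmoid(w @ x + b)
adv_score = sigmoid(w @ x_adv + b)
# The adversarial score is pushed away from the true class 1,
# even though each input component moved by at most epsilon.
print(clean_score, adv_score)
```

In the CAPTCHA setting, the same principle is applied to challenge images so that automated solvers misclassify them while humans still read them easily; the paper's contribution is making such perturbations more resilient by combining them with neural style transfer.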
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3298442