Using machine learning to identify common flaws in CAPTCHA design: FunCAPTCHA case analysis
Published in: Computers & Security, 2017-09, Vol. 70, pp. 744-756
Main Authors: , , ,
Format: Article
Language: English
Subjects:
Summary: Human Interactive Proofs (HIPs, also written Human Interaction Proofs) or CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart) have become a first-level security measure on the Internet to avoid automatic attacks or minimize their effects. All of the most widespread, successful or interesting CAPTCHA designs put under scrutiny have been successfully broken, and many of these attacks have been side-channel attacks. New designs are proposed to tackle these security problems while improving the human interface. FunCAPTCHA is the first commercial implementation of a gender classification CAPTCHA, with reported improvements in conversion rates. This article finds weaknesses in the security of FunCAPTCHA and uses simple machine learning (ML) analysis to test them. It shows a side-channel attack that leverages these flaws and successfully solves FunCAPTCHA in 90% of cases without meaningful image analysis. This simple yet effective security analysis can be applied, with minor modifications, to other HIP proposals, to check whether they leak enough information to allow for simple side-channel attacks.
ISSN: 0167-4048, 1872-6208
DOI: 10.1016/j.cose.2017.05.005
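The summary describes defeating FunCAPTCHA through a side-channel attack driven by simple machine-learning analysis of information the challenge leaks, rather than by recognizing image content. The sketch below shows what such a leak check could look like in practice; the CSV file, the feature names (file size, image index, response delay), and the use of a scikit-learn decision tree are illustrative assumptions, not the setup reported in the paper.

```python
# Minimal sketch of a side-channel leak check on CAPTCHA challenges:
# train a simple classifier on non-image metadata only and see whether
# it predicts the correct answer well above chance.
import csv

from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Hypothetical log of previously solved challenges, one row per candidate image.
rows = list(csv.DictReader(open("funcaptcha_challenges.csv")))

# Hypothetical side-channel features: none of these look at pixel content.
X = [
    [float(r["file_size_bytes"]), float(r["image_index"]), float(r["response_delay_ms"])]
    for r in rows
]
y = [r["is_correct_answer"] for r in rows]  # ground-truth labels from solved challenges

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=4).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
# Accuracy far above the base rate would mean the metadata alone leaks the
# answer, i.e. the design is open to a simple side-channel attack.
```

If a classifier trained only on such metadata identifies the correct answers reliably, the challenge leaks enough information for a side-channel attack; this is the kind of check the summary says can be applied, with minor modifications, to other HIP proposals.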