An OCaNet Model Based on Octave Convolution and Attention Mechanism for Iris Recognition
Published in: Mathematical Problems in Engineering, 2021-10, Vol. 2021, p. 1-10
Main Authors:
Format: Article
Language: English
Subjects:
Summary: Iris recognition identifies individuals from their iris patterns and is widely used in security systems, such as subway screening and access-control attendance, because each person's iris pattern is unique. In this study, we propose an OCaNet model for the iris recognition task. First, binarized threshold segmentation is used to locate the pupil and obtain the pupil boundary; the Hough transform is then applied to locate the outer edge of the iris; from the located pupil and iris boundaries, the iris region is extracted by image segmentation; finally, the iris image is normalized so that every original image is mapped to the same size and corresponding position, eliminating the influence of translation, scaling, and rotation on recognition. Second, the normalized iris image is fed into both an octave convolution module and an attention module. The octave convolution module extracts the shape and contour features of the iris by decomposing the feature map into high- and low-frequency components, while the attention module extracts the color and texture characteristics of the iris. Finally, the two feature maps are concatenated to produce a distribution over output classes. Experimental results show that the proposed OCaNet model is significantly more accurate.
ISSN: 1024-123X, 1563-5147
DOI: 10.1155/2021/3412060
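
The localization step in the summary (binarized threshold segmentation for the pupil, a circular Hough transform for the iris outer edge) could be sketched as follows in Python with OpenCV. The threshold value, Hough parameters, and radius band below are illustrative assumptions; the record does not report the paper's actual settings.

```python
import cv2
import numpy as np

def localize_iris(gray):
    """Locate the pupil by binary thresholding, then the iris outer
    boundary with a circular Hough transform."""
    # Pupil: the darkest region of the eye image. The threshold (50)
    # is a hypothetical value, not taken from the paper.
    _, mask = cv2.threshold(gray, 50, 255, cv2.THRESH_BINARY_INV)
    cnts, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
    c = max(cnts, key=cv2.contourArea)        # largest dark blob = pupil
    (px, py), pr = cv2.minEnclosingCircle(c)  # pupil centre and radius

    # Iris outer edge: search for one circle in a band outside the pupil.
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT, dp=2, minDist=gray.shape[0],
        param1=100, param2=30,
        minRadius=int(pr * 1.5), maxRadius=int(pr * 4.0))
    if circles is None:                       # fall back to a guessed radius
        return (px, py, pr), (px, py, pr * 2.5)
    ix, iy, ir = circles[0, 0]
    return (px, py, pr), (float(ix), float(iy), float(ir))
```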
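
The normalization step maps the segmented annulus to a fixed size and position, removing the effects of translation, scaling, and rotation. The record does not name the method; a Daugman-style rubber-sheet unwrapping, sketched below, is the standard way this is done.

```python
import cv2
import numpy as np

def rubber_sheet(gray, pupil, iris, h=64, w=256):
    """Unwrap the annular iris region between the pupil and iris circles
    into an h-by-w rectangle (Daugman-style rubber-sheet model; the
    paper's exact normalization may differ)."""
    (px, py, pr), (ix, iy, ir) = pupil, iris
    theta = np.linspace(0, 2 * np.pi, w, endpoint=False)
    r = np.linspace(0, 1, h)
    # Interpolate along rays from the pupil boundary to the iris boundary.
    xs = (px + pr * np.cos(theta))[None, :] * (1 - r)[:, None] \
       + (ix + ir * np.cos(theta))[None, :] * r[:, None]
    ys = (py + pr * np.sin(theta))[None, :] * (1 - r)[:, None] \
       + (iy + ir * np.sin(theta))[None, :] * r[:, None]
    return cv2.remap(gray, xs.astype(np.float32), ys.astype(np.float32),
                     cv2.INTER_LINEAR)
```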
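
The octave convolution module decomposes feature maps into high- and low-frequency components. A minimal PyTorch sketch of a plain octave convolution layer (after Chen et al., 2019), on which OCaNet's module presumably builds:

```python
import torch.nn as nn
import torch.nn.functional as F

class OctConv(nn.Module):
    """Octave convolution: channels are split into a full-resolution
    high-frequency part and a half-resolution low-frequency part, mixed
    by four convolution paths (H->H, H->L, L->H, L->L). alpha is the
    fraction of channels assigned to the low-frequency branch."""
    def __init__(self, in_ch, out_ch, alpha=0.5, k=3):
        super().__init__()
        lo_in, lo_out = int(alpha * in_ch), int(alpha * out_ch)
        hi_in, hi_out = in_ch - lo_in, out_ch - lo_out
        p = k // 2
        self.hh = nn.Conv2d(hi_in, hi_out, k, padding=p)
        self.hl = nn.Conv2d(hi_in, lo_out, k, padding=p)
        self.lh = nn.Conv2d(lo_in, hi_out, k, padding=p)
        self.ll = nn.Conv2d(lo_in, lo_out, k, padding=p)

    def forward(self, x_h, x_l):
        # High-frequency output: H->H plus upsampled L->H.
        y_h = self.hh(x_h) + F.interpolate(self.lh(x_l), scale_factor=2,
                                           mode='nearest')
        # Low-frequency output: L->L plus pooled H->L.
        y_l = self.ll(x_l) + self.hl(F.avg_pool2d(x_h, 2))
        return y_h, y_l
```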
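
The record does not say which attention variant OCaNet uses for the color and texture features; a squeeze-and-excitation-style channel attention is a common choice and is shown here purely as an illustration, not as the paper's module.

```python
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention (illustrative
    stand-in; the abstract does not specify OCaNet's attention design)."""
    def __init__(self, ch, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(ch, ch // reduction), nn.ReLU(inplace=True),
            nn.Linear(ch // reduction, ch), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))   # squeeze: global average pool
        return x * w[:, :, None, None]    # excite: rescale each channel
```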
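
Finally, the two branch outputs are concatenated channel-wise and reduced to a distribution over identity classes. A hypothetical fusion head, assuming both branches produce maps of the same spatial size (channel counts and pooling are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionHead(nn.Module):
    """Concatenate the octave-convolution and attention feature maps and
    map them to a class distribution (illustrative, not the paper's head)."""
    def __init__(self, ch_oct, ch_att, n_classes):
        super().__init__()
        self.fc = nn.Linear(ch_oct + ch_att, n_classes)

    def forward(self, f_oct, f_att):
        # Both inputs: (batch, channels, H, W) at the same spatial size.
        f = torch.cat([f_oct, f_att], dim=1)   # channel-wise concatenation
        f = f.mean(dim=(2, 3))                 # global average pooling
        return F.softmax(self.fc(f), dim=1)    # distribution over classes
```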