LSFM: Light Style and Feature Matching for Efficient Cross-Domain Palmprint Recognition
The exceptional feature extraction capabilities of deep neural networks (DNNs) have significantly advanced palmprint recognition. However, DNNs typically require that training and testing data originate from the same distribution, which limits their practical applications. Moreover, existing unsupervised domain adaptation methods struggle to achieve both high accuracy and efficiency. To address these challenges, we propose LSFM, an efficient Light Style and Feature Matching method that enhances palmprint recognition performance in cross-domain scenarios with fewer resources. Specifically, we develop an efficient style transfer model to mitigate domain shifts at the pixel level. We then align features across multiple task-specific layers in high-dimensional space to reduce domain discrepancies, further improving cross-domain performance. Finally, we evaluate the effectiveness of the proposed LSFM through extensive experiments on two public multi-domain palmprint databases. The experimental results demonstrate that LSFM achieves superior performance with significantly reduced resource consumption, improving average accuracy to 94.87% and lowering the average equal error rate to 1.46%, while saving over 80% of resources.
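The abstract describes two mechanisms: style transfer to reduce domain shift at the pixel level, and feature matching across multiple task-specific layers. The sketch below illustrates the general idea only, under stated assumptions; it is not the authors' LSFM implementation. The AdaIN-style re-styling step, the toy two-layer head, the Gaussian-kernel MMD criterion, and all names (`adain`, `gaussian_mmd`, `TwoLayerHead`, `multilayer_alignment_loss`) are illustrative assumptions, since the paper's actual architecture and losses would need to be taken from the full text.

```python
# Illustrative sketch only -- NOT the authors' LSFM code. The AdaIN-style
# re-styling, the toy two-layer head, and the Gaussian-kernel MMD penalty
# are assumptions chosen to mirror the abstract's two components:
# (1) style transfer against pixel-level domain shift and
# (2) feature matching across multiple task-specific layers.
import torch
import torch.nn as nn


def adain(content: torch.Tensor, style: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Re-style a content batch with the channel-wise statistics of a
    style batch (adaptive instance normalization); tensors are (B, C, H, W)."""
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content - c_mean) / c_std + s_mean


def gaussian_mmd(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased squared maximum mean discrepancy between two (B, D) feature
    batches, using a Gaussian kernel."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()


class TwoLayerHead(nn.Module):
    """Toy task-specific head that exposes per-layer features for matching."""
    def __init__(self, in_dim: int = 512, hidden: int = 256, n_classes: int = 100):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.cls = nn.Linear(hidden, n_classes)

    def forward(self, x):
        h1 = torch.relu(self.fc1(x))
        h2 = torch.relu(self.fc2(h1))
        return self.cls(h2), (h1, h2)  # logits plus per-layer features


def multilayer_alignment_loss(model, src_feats, tgt_feats, weight: float = 0.5):
    """Sum an MMD penalty over every matched task-specific layer."""
    _, layers_s = model(src_feats)
    _, layers_t = model(tgt_feats)
    return weight * sum(gaussian_mmd(s, t) for s, t in zip(layers_s, layers_t))
```

In training, one would minimize the usual classification loss on labeled source samples plus the alignment term computed between source and unlabeled target batches; the style-transfer step would be applied upstream to narrow the pixel-level gap before features are extracted.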
Published in: | IEEE Transactions on Information Forensics and Security, 2024, Vol.19, p.9598-9612 |
---|---|
Main Authors: | Ruan, Song; Li, Yantao; Qin, Huafeng |
Format: | Article |
Language: | English |
Subjects: | Palmprint recognition; unsupervised domain adaptation; feature matching; light style; deep learning |
Citations: | Items that this one cites |
Online Access: | Get full text |
container_end_page | 9612 |
---|---|
container_issue | |
container_start_page | 9598 |
container_title | IEEE transactions on information forensics and security |
container_volume | 19 |
creator | Ruan, Song ; Li, Yantao ; Qin, Huafeng |
description | The exceptional feature extraction capabilities of deep neural networks (DNNs) have significantly advanced palmprint recognition. However, DNNs typically require that training and testing data originate from the same distribution, which limits their practical applications. Moreover, existing unsupervised domain adaptation methods struggle to achieve both high accuracy and efficiency. To address these challenges, we propose LSFM, an efficient Light Style and Feature Matching method that enhances palmprint recognition performance in cross-domain scenarios with fewer resources. Specifically, we develop an efficient style transfer model to mitigate domain shifts at the pixel level. We then align features across multiple task-specific layers in high-dimensional space to reduce domain discrepancies, further improving cross-domain performance. Finally, we evaluate the effectiveness of the proposed LSFM through extensive experiments on two public multi-domain palmprint databases. The experimental results demonstrate that LSFM achieves superior performance with significantly reduced resource consumption, improving average accuracy to 94.87% and lowering the average equal error rate to 1.46%, while saving over 80% of resources. |
doi_str_mv | 10.1109/TIFS.2024.3476978 |
format | article |
fulltext | fulltext |
identifier | ISSN: 1556-6013 |
ispartof | IEEE transactions on information forensics and security, 2024, Vol.19, p.9598-9612 |
issn | 1556-6013 ; 1556-6021 |
language | eng |
recordid | cdi_ieee_primary_10711952 |
source | IEEE Xplore (Online service) |
subjects | Adaptation models ; Adversarial machine learning ; Convolutional neural networks ; Deep learning ; Feature extraction ; feature matching ; Generators ; light style ; Limiting ; Palmprint recognition ; Three-dimensional displays ; Training ; unsupervised domain adaptation |
title | LSFM: Light Style and Feature Matching for Efficient Cross-Domain Palmprint Recognition |