FedSIS: Federated Split Learning with Intermediate Representation Sampling for Privacy-preserving Generalized Face Presentation Attack Detection
Field | Value |
---|---|
cited_by | |
cites | |
container_end_page | 11 |
container_issue | |
container_start_page | 1 |
container_title | 2023 IEEE International Joint Conference on Biometrics (IJCB) |
container_volume | |
creator | Alkhunaizi, Naif; Srivatsan, Koushik; Almalik, Faris; Almakky, Ibrahim; Nandakumar, Karthik |
description | Lack of generalization to unseen domains/attacks is the Achilles heel of most face presentation attack detection (FacePAD) algorithms. Existing attempts to enhance the generalizability of FacePAD solutions assume that data from multiple source domains are available with a single entity to enable centralized training. In practice, data from different source domains may be collected by diverse entities, who are often unable to share their data due to legal and privacy constraints. While collaborative learning paradigms such as federated learning (FL) can overcome this problem, standard FL methods are ill-suited for domain generalization because they struggle to surmount the twin challenges of handling non-iid client data distributions during training and generalizing to unseen domains during inference. In this work, a novel framework called Federated Split learning with Intermediate representation Sampling (FedSIS) is introduced for privacy-preserving domain generalization. In FedSIS, a hybrid Vision Transformer (ViT) architecture is learned using a combination of FL and split learning to achieve robustness against statistical heterogeneity in the client data distributions without any sharing of raw data (thereby preserving privacy). To further improve generalization to unseen domains, a novel feature augmentation strategy called intermediate representation sampling is employed, and discriminative information from intermediate blocks of a ViT is distilled using a shared adapter network. The FedSIS approach has been evaluated on two well-known benchmarks for cross-domain FacePAD to demonstrate that it is possible to achieve state-of-the-art generalization performance without data sharing. Code: https://github.com/Naiftt/FedSIS |
doi_str_mv | 10.1109/IJCB57857.2023.10448785 |
format | conference_proceeding |
fulltext | fulltext_linktorsrc |
identifier | EISSN: 2474-9699 |
ispartof | 2023 IEEE International Joint Conference on Biometrics (IJCB), 2023, p.1-11 |
issn | 2474-9699 |
language | eng |
recordid | cdi_ieee_primary_10448785 |
source | IEEE Xplore All Conference Series |
subjects | Data privacy; Faces; Federated learning; Task analysis; Training; Transformers |
title | FedSIS: Federated Split Learning with Intermediate Representation Sampling for Privacy-preserving Generalized Face Presentation Attack Detection |
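The description field above outlines how FedSIS samples representations from intermediate ViT blocks and distils them through a shared adapter network. The snippet below is only a minimal illustrative sketch of that idea, written in plain PyTorch from the abstract alone: the class names, dimensions, pooling, and uniform sampling rule are assumptions, not the authors' implementation. The linked repository (https://github.com/Naiftt/FedSIS) contains the actual code.

```python
# Illustrative sketch only: "intermediate representation sampling" with a shared
# adapter, reconstructed from the abstract. All names, shapes, and the uniform
# sampling rule are assumptions; see https://github.com/Naiftt/FedSIS for the
# authors' actual implementation.
import random

import torch
import torch.nn as nn


class TinyViTTrunk(nn.Module):
    """Stand-in for a ViT trunk: a stack of transformer encoder blocks."""

    def __init__(self, dim: int = 192, depth: int = 8, heads: int = 3):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
             for _ in range(depth)]
        )

    def forward_with_intermediates(self, tokens: torch.Tensor) -> list[torch.Tensor]:
        # Keep the token sequence produced by every block so one can be sampled later.
        feats = []
        for blk in self.blocks:
            tokens = blk(tokens)
            feats.append(tokens)
        return feats


class SharedAdapter(nn.Module):
    """Small shared head that scores a sampled intermediate representation."""

    def __init__(self, dim: int = 192, num_classes: int = 2):
        super().__init__()
        self.head = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, num_classes))

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        return self.head(tokens.mean(dim=1))  # mean-pool tokens, then classify


def forward_step(trunk: TinyViTTrunk, adapter: SharedAdapter,
                 patch_tokens: torch.Tensor) -> torch.Tensor:
    """One forward pass: sample a random intermediate block output and score it."""
    feats = trunk.forward_with_intermediates(patch_tokens)
    sampled = feats[random.randrange(len(feats))]  # intermediate representation sampling
    return adapter(sampled)


if __name__ == "__main__":
    trunk, adapter = TinyViTTrunk(), SharedAdapter()
    tokens = torch.randn(4, 196, 192)                  # (batch, patches, embed_dim)
    print(forward_step(trunk, adapter, tokens).shape)  # torch.Size([4, 2])
```

How the ViT blocks and the adapter are partitioned between clients and the server under the paper's combination of federated and split learning is not specified in the abstract, so this sketch treats everything as a single local model.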