FBN: Federated Bert Network with client-server architecture for cross-lingual signature verification
Published in: Pattern Recognition, 2023-10, Vol. 142, p. 109681, Article 109681
Main Authors: , , ,
Format: Article
Language: English
Summary:
- Bert is embedded into the federated learning framework with client-server architecture.
- Federated Bert Network integrates an improved local-model aggregation algorithm.
- A Length Alignment Algorithm adapts multi-attribute time series for batch computation.
- Our model is superior in cross-lingual verification and privacy data protection.

Online signature verification faces a great challenge: deep learning techniques perform poorly on cross-lingual datasets under privacy constraints. In this paper, we propose a novel Federated Bert Network (FBN) that embeds Bidirectional Encoder Representations from Transformers (Bert) into a Federated Learning (FL) framework with a client-server architecture. A new Length Alignment Algorithm unifies the sequence length of signature pairs, and the resulting input representations are fed to the different clients, which train local models independently. The server (coordinator) then uses an improved Federated Average Algorithm with a Reward-Punishment Mechanism (FedAvgRP) to aggregate these local models into a global model. After multiple iterations, the optimal model is obtained and cross-tested on four datasets (SVC 2004, MCYT-330, BiosecurID, and ours) with skilled-forgery (random-forgery) EERs of 7.65% (4.76%), 10.73% (8.46%), 10.09% (7.13%), and 8.28% (5.74%), respectively, far surpassing the independently trained state-of-the-art methods. Compared with domain-adaptation and improved FL models, our FBN model performs best in both random- and skilled-forgery scenarios. Moreover, the FedAvgRP algorithm helps the model maintain high performance in the face of data attacks.
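The abstract describes two computational steps: aligning signature time series to a common length, and aggregating client models with a reward-punishment weighting. The sketch below illustrates one plausible reading of those steps; the function names (`length_align`, `reward_punish_weights`, `aggregate`) and the accuracy-relative-to-mean weighting rule are illustrative assumptions, not the paper's actual algorithms.

```python
def length_align(seq, target_len, pad_value=0.0):
    """Pad or truncate a 1-D time series to target_len.
    (Assumed behavior of a length-alignment step; the paper's
    Length Alignment Algorithm may differ.)"""
    if len(seq) >= target_len:
        return seq[:target_len]
    return list(seq) + [pad_value] * (target_len - len(seq))

def reward_punish_weights(val_accuracies):
    """Weight clients by validation accuracy relative to the mean:
    above-average clients are rewarded with larger aggregation
    weights, below-average clients are punished with smaller ones."""
    mean_acc = sum(val_accuracies) / len(val_accuracies)
    raw = [max(acc - mean_acc + 1.0, 1e-6) for acc in val_accuracies]
    total = sum(raw)
    return [r / total for r in raw]

def aggregate(local_models, weights):
    """Weighted average of flat parameter vectors, one per client,
    producing the global-model parameters."""
    n_params = len(local_models[0])
    return [sum(w * m[i] for w, m in zip(weights, local_models))
            for i in range(n_params)]
```

With equal weights this reduces to plain FedAvg; for example, `aggregate([[1.0, 1.0], [3.0, 3.0]], [0.5, 0.5])` yields `[2.0, 2.0]`, while unequal validation accuracies shift the average toward the better-performing client.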
ISSN: 0031-3203, 1873-5142
DOI: 10.1016/j.patcog.2023.109681