RI-L1Approx: A novel Resnet-Inception-based Fast L1-approximation method for face recognition
Published in: Neurocomputing (Amsterdam), 2025-01, Vol. 613, Article 128708
Format: Article
Language: English
Summary: Performance of deep learning methods for face recognition often relies on abundant data, posing challenges in surveillance and security, where data availability is limited and environments are unconstrained. To address this challenge, we propose a novel few-shot learning approach, termed ResNet-Inception-based Fast L1 approximation (RI-L1Approx), for face recognition with a limited number of image samples. The method relies on L1-norm approximation of a test sample by known class samples. Initially, facial features are extracted by leveraging a ResNet-Inception hybrid network's ability to learn rich hierarchical representations from facial images. The extracted features are subsequently employed for L1-norm approximation over known features, referred to as approximation samples. The L1-norm approximation promotes sparsity by encouraging a subset of approximation samples to take zero coefficients. This process helps select the most discriminative and informative approximation samples, leading to improved classification. The proposed method is evaluated on benchmark facial recognition datasets, demonstrating its effectiveness. Comparative experiments with state-of-the-art techniques highlight its superior recognition accuracy. Remarkably, the RI-L1Approx model achieved accuracy rates of 84.86% with just one sample per class and 96.144% with thirteen samples per class. This is significantly better than existing deep learning approaches, which require a large amount of data to reach similar performance.
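The summary describes a sparse-representation-style classifier: a test feature vector is approximated as an L1-regularized combination of known-class feature vectors, and the class whose samples best reconstruct it wins. As a minimal illustrative sketch (not the authors' implementation; the dictionary `A`, the ISTA solver, and the regularization weight `lam` are assumptions standing in for features already extracted by the ResNet-Inception backbone):

```python
import numpy as np

def ista_l1(A, y, lam=0.1, n_iter=500):
    """Solve min_x 0.5*||A x - y||^2 + lam*||x||_1 via ISTA
    (iterative soft-thresholding); the L1 penalty drives many
    coefficients of x exactly to zero, i.e. sparsity."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)              # gradient of the smooth term
        z = x - g / L                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

def classify_l1(A, labels, y, lam=0.1):
    """Assign y to the class whose approximation samples (columns of A)
    best reconstruct it under the shared sparse code."""
    x = ista_l1(A, y, lam)
    classes = np.unique(labels)
    residuals = [np.linalg.norm(y - A[:, labels == c] @ x[labels == c])
                 for c in classes]
    return classes[int(np.argmin(residuals))]

# Toy example: two classes, three (unit-normalized) feature columns each.
rng = np.random.default_rng(0)
d = 20
proto0, proto1 = rng.standard_normal(d), rng.standard_normal(d)
cols = [proto0 + 0.05 * rng.standard_normal(d) for _ in range(3)] + \
       [proto1 + 0.05 * rng.standard_normal(d) for _ in range(3)]
A = np.column_stack(cols)
A /= np.linalg.norm(A, axis=0)             # normalize dictionary columns
labels = np.array([0, 0, 0, 1, 1, 1])
y = proto0 / np.linalg.norm(proto0)        # query near class 0
pred = classify_l1(A, labels, y, lam=0.05)
```

The per-class residual rule is the standard design choice here: because the L1 penalty zeroes out coefficients on uninformative columns, most of the reconstruction weight concentrates on the correct class's samples, so its partial reconstruction has the smallest residual.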
ISSN: 0925-2312
DOI: 10.1016/j.neucom.2024.128708