Sample-Level and Class-Level Adaptive Training for Face Recognition
Format: Conference Proceeding
Language: English
Summary: The marginal softmax loss function has been widely used for face recognition, where a universal angular margin is added between weight prototypes. However, this method neglects the differences between classes and samples. At the class level, an imbalanced real-world training dataset requires different margins for the head and tail classes in order to squeeze each class's feature space equally. At the sample level, it is also necessary to assign larger importance to the hard samples during training. In this paper, we address these two issues by combining two strategies: (1) explicitly assign an adaptive margin according to the image quantity, so that the margin is enlarged for the tail classes; (2) semantically identify the 'hard positive' samples and misclassified samples [1] and attach adaptive weights to them, increasing the training emphasis on these samples. Extensive experiments on LFW/CFP/AgeDB and IJB-B/IJB-C show our method's effectiveness.
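The two strategies in the summary can be sketched together. The record does not include the paper's actual formulas, so everything below is an illustrative assumption: the margin rule (linear in the min-max-normalized image count, so tail classes get larger margins), the CosFace-style additive cosine margin, and the misclassification-based sample weights are stand-ins, not the authors' method.

```python
import numpy as np

def class_adaptive_margins(class_counts, base_margin=0.35, scale=0.15):
    """Strategy (1), hypothetical rule: margins grow linearly as the
    per-class image count shrinks, so the rarest class gets base + scale."""
    counts = np.asarray(class_counts, dtype=float)
    norm = (counts - counts.min()) / max(counts.max() - counts.min(), 1e-12)
    return base_margin + scale * (1.0 - norm)

def hard_sample_weights(cos_theta, labels, alpha=1.0):
    """Strategy (2), hypothetical rule: samples whose target class is not
    the argmax (misclassified) get weight 1 + alpha, others weight 1."""
    miscls = cos_theta.argmax(axis=1) != labels
    return 1.0 + alpha * miscls.astype(float)

def adaptive_margin_softmax_loss(cos_theta, labels, margins, weights=None, s=64.0):
    """CosFace-style additive cosine margin with per-class margins and
    optional per-sample weights.

    cos_theta: (N, C) cosine similarities between features and prototypes.
    """
    rows = np.arange(len(labels))
    logits = s * cos_theta.astype(float)
    # Subtract the per-class margin only on each sample's target logit.
    logits[rows, labels] = s * (cos_theta[rows, labels] - margins[labels])
    # Numerically stable log-softmax cross-entropy.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    nll = -log_probs[rows, labels]
    if weights is None:
        return nll.mean()
    return (weights * nll).sum() / weights.sum()
```

Under this sketch, the tail class's feature space is squeezed harder (larger margin), and any sample the model currently misclassifies contributes more to the loss, mirroring the paper's stated goals even though the exact functional forms may differ.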
ISSN: 1945-788X
DOI: 10.1109/ICME52920.2022.9859920