Uniform misclassification loss for unbiased model prediction
Published in: Pattern Recognition, 2023-12, Vol. 144, Article 109689
Main Authors: , ,
Format: Article
Language: English
Summary:
• Propose a novel loss function, termed Uniform Misclassification Loss (UML), for unbiased model prediction by equalizing and minimizing the model's misclassification rate across different demographic subgroups.
• Introduce a novel metric, the Joint Performance Disparity Measure (JPD), for joint estimation of bias in model prediction and of overall model performance.
• Perform extensive experiments to mitigate bias across a single demographic group (e.g., subgroups of gender) and multiple demographic groups (e.g., intersectional subgroups of gender and age) in balanced and imbalanced training settings.
• Compare against existing bias mitigation algorithms.
Deep learning algorithms have achieved tremendous success over the past few years. However, the biased behavior of deep models, where a model favors or disfavors certain demographic subgroups, is a major concern in the deep learning community. Several adverse consequences of biased predictions have been observed in the past. One solution to alleviate the problem is to train deep models for fair outcomes. Therefore, in this research, we propose a novel loss function, termed Uniform Misclassification Loss (UML), to train deep models for unbiased outcomes. The proposed UML function penalizes the model for the worst-performing subgroup, mitigating bias while enhancing overall model performance. The proposed loss function also remains effective when training with imbalanced data. Further, a metric, the Joint Performance Disparity Measure (JPD), is introduced to jointly measure overall model performance and the bias in model prediction. Multiple experiments have been performed on four publicly available datasets for facial attribute prediction, and comparisons are made with existing bias mitigation algorithms. Experimental results are reported using performance and bias evaluation metrics. The proposed loss function outperforms existing bias mitigation algorithms, which showcases its effectiveness in obtaining unbiased outcomes and improved performance.
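The abstract describes UML as a loss that penalizes the worst-performing demographic subgroup while preserving overall performance. The exact formulation is not given here, so the following is only a minimal NumPy sketch of that idea, assuming a per-subgroup cross-entropy combined as worst-subgroup term plus average term; the function name, the `max + mean` combination, and all arguments are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def uniform_misclassification_loss(probs, labels, groups):
    """Illustrative UML-style objective (assumption, not the paper's exact loss).

    probs  : (N, C) array of predicted class probabilities
    labels : (N,) array of ground-truth class indices
    groups : (N,) array of demographic subgroup ids
    """
    subgroup_losses = []
    for g in np.unique(groups):
        mask = groups == g
        # Mean negative log-likelihood (cross-entropy) for this subgroup;
        # a small epsilon guards against log(0).
        nll = -np.log(probs[mask, labels[mask]] + 1e-12).mean()
        subgroup_losses.append(nll)
    subgroup_losses = np.array(subgroup_losses)
    # Emphasize the worst-performing subgroup (fairness pressure) while
    # keeping the average term (overall performance pressure).
    return subgroup_losses.max() + subgroup_losses.mean()
```

In a real training setup this would be written as a differentiable loss (e.g., in PyTorch) over logits rather than as a NumPy function over probabilities; the sketch only conveys the equalize-and-minimize structure the highlights describe.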
ISSN: 0031-3203
DOI: 10.1016/j.patcog.2023.109689