A Hierarchically Discriminative Loss with Group Regularization for Fine-Grained Image Classification
Published in: ACM Transactions on Multimedia Computing, Communications and Applications, 2024-10
Main Authors: , , ,
Format: Article
Language: English
Summary: Fine-grained visual classification targets the discrimination of subordinate categories within broader classes, such as avian species or aircraft models, and, notably, in medical diagnostics such as breast cancer. In breast cancer classification, numerous fine-grained models leveraging pathological images have emerged. Improving the interpretability of a model's discriminative process among these nuanced categories promises to enhance the transparency of AI decision-making. However, the effective utilization of hierarchical label structures within medical data remains a crucial consideration. In this work, we explore how the label hierarchy can be used to better learn subtle feature embeddings. We observe that the semantic relationships between fine-grained categories can help analyze misclassified samples. To this end, using the label hierarchy, we introduce two novel losses that cultivate subtle feature representations, coordinate with feature learning, and align with the principles of Explainable AI (XAI). First, we propose a hierarchically discriminative loss to enhance the interactions between fine- and coarse-level features, which reinforces the discriminability of fine-grained features and reduces false predictions across the out-group relation. Second, we introduce an in-group regularization loss to establish interactions between the target class and the confusing classes within the in-group relation. Treating a confusing class as a distraction regularizes feature learning of the target class, allowing the network to discover more discriminative features and reducing in-group false predictions. We extensively evaluate on five commonly used fine-grained classification datasets. Our experimental results validate the effectiveness of the proposed losses against state-of-the-art methods that utilize hierarchical multi-granularity labels.
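The abstract describes two hierarchy-aware loss terms but gives no formulas. The sketch below is an illustrative NumPy toy, not the paper's actual formulation: it assumes a hypothetical fine-to-coarse label mapping, up-weights probability mass placed on other coarse groups (echoing the hierarchically discriminative idea), and lightly penalizes the most confusing same-group class as a distraction (echoing in-group regularization). All names and weights here are invented for illustration.

```python
import numpy as np

# Hypothetical hierarchy: 4 fine classes grouped into 2 coarse groups.
FINE_TO_COARSE = np.array([0, 0, 1, 1])

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def hierarchical_loss(logits, fine_label, out_group_weight=2.0, in_group_reg=0.1):
    """Toy hierarchy-aware loss (NOT the paper's exact losses).

    - cross-entropy on the fine label,
    - an out-group term: probability mass on other coarse groups, up-weighted
      so out-group confusions cost more,
    - an in-group term: the top confusing class within the same coarse group
      is treated as a distraction whose probability is penalized.
    """
    p = softmax(np.asarray(logits, dtype=float))
    ce = -np.log(p[fine_label] + 1e-12)          # standard cross-entropy term

    group = FINE_TO_COARSE[fine_label]
    out_mask = FINE_TO_COARSE != group
    out_prob = p[out_mask].sum()                 # mass leaked to other groups

    in_mask = FINE_TO_COARSE == group
    in_mask[fine_label] = False                  # exclude the target itself
    distract = p[in_mask].max() if in_mask.any() else 0.0

    return ce + out_group_weight * out_prob + in_group_reg * distract
```

With these toy weights, confusing the target with an out-group class is penalized more heavily than confusing it with an in-group sibling, which mirrors the qualitative behavior the abstract attributes to the two losses.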
ISSN: 1551-6857, 1551-6865
DOI: 10.1145/3698398