Learning representative features via constrictive annular loss for image classification
Published in: Applied Intelligence (Dordrecht, Netherlands), 2019-08, Vol. 49 (8), p. 3082-3092
Main Authors:
Format: Article
Language: English
Subjects:
Summary: Deep convolutional neural networks (DCNNs) have achieved significant performance on image classification tasks. Designing more powerful loss functions to train robust DCNNs for image classification has become a recent trend in the community. In this paper, we present an elegant yet effective loss function, Constrictive Annular Loss (CA-Loss), to boost the classification performance of DCNNs. CA-Loss adaptively constricts the features to a suitable scale, yielding more representative features even on imbalanced datasets. It can be easily combined with the softmax loss to jointly supervise the DCNNs. Furthermore, CA-Loss requires no additional supervisory information and can be optimized with classical algorithms such as stochastic gradient descent. We conduct extensive experiments on two large-scale classification benchmarks and three artificially imbalanced datasets. CA-Loss achieves state-of-the-art accuracy on these datasets, which strongly demonstrates the effectiveness of the proposed loss function.
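The summary describes joint supervision: a softmax classification loss combined with a weighted feature-constriction term. The exact CA-Loss formulation is not given in the abstract, so the sketch below is only an illustrative assumption: an annular (ring-shaped) penalty that pulls each feature vector's L2 norm toward a target radius, added to the standard softmax cross-entropy. The function names, the `radius` target, and the weight `lam` are all hypothetical, not the authors' definitions.

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    # Standard softmax loss: mean negative log-likelihood of the true class.
    shifted = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def annular_penalty(features, radius):
    # Hypothetical ring-shaped constraint: penalize the deviation of each
    # feature vector's L2 norm from a target radius, so features are
    # "constricted" onto an annulus of that scale.
    norms = np.linalg.norm(features, axis=1)
    return np.mean((norms - radius) ** 2)

def joint_loss(logits, labels, features, radius=1.0, lam=0.1):
    # Joint supervision as described in the abstract: softmax loss plus a
    # weighted constriction term; both are differentiable, so the sum can be
    # minimized with ordinary SGD.
    return softmax_cross_entropy(logits, labels) + lam * annular_penalty(features, radius)
```

Because the penalty is a smooth function of the feature norms, it adds no extra labels or supervisory signals, which matches the abstract's claim that the loss needs no additional supervision and trains with standard optimizers.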
ISSN: 0924-669X, 1573-7497
DOI: 10.1007/s10489-019-01434-3