Hierarchical Prototypes Polynomial Softmax Loss Function for Visual Classification

Bibliographic Details
Published in:Applied sciences 2022-10, Vol.12 (20), p.10336
Main Authors: Xiao, Chengcheng, Liu, Xiaowen, Sun, Chi, Liu, Zhongyu, Ding, Enjie
Format: Article
Language:English
Summary:A well-designed loss function can improve the representational power of network features without adding any computation at the inference stage, and has therefore become a focus of recent research. Because existing lightweight networks attach a loss only to the last layer, the gradient is severely attenuated during backpropagation. To address this, we propose a hierarchical polynomial kernel prototype loss function. Attaching a polynomial kernel loss to multiple stages of the deep neural network improves the efficiency of gradient flow, and these multi-layer prototype losses are used only during training, so they add no computation at inference. In addition, the strong non-linear expressive power of the polynomial kernel improves the feature representation of the network. Experiments on several public datasets show that lightweight networks trained with the proposed hierarchical polynomial kernel loss function achieve higher accuracy than those trained with other loss functions.
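The abstract describes the mechanism only at a high level. The following is a minimal PyTorch sketch of the general idea rather than the authors' implementation: it assumes learnable per-class prototypes at each selected network stage, a polynomial kernel (gamma·x·p + coef0)^degree used as the logit for softmax cross-entropy, and per-stage auxiliary losses that are summed and applied only during training. The class names, the global-average-pooling step, and all hyperparameters (degree, gamma, coef0, stage weights) are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PolyPrototypeLoss(nn.Module):
    """Softmax cross-entropy over polynomial-kernel similarities to class prototypes."""

    def __init__(self, feat_dim, num_classes, degree=2, gamma=1.0, coef0=1.0):
        super().__init__()
        # One learnable prototype vector per class for this stage.
        self.prototypes = nn.Parameter(torch.randn(num_classes, feat_dim) * 0.01)
        self.degree, self.gamma, self.coef0 = degree, gamma, coef0

    def forward(self, features, labels):
        # features: (B, feat_dim) pooled features from one network stage.
        sim = features @ self.prototypes.t()                      # (B, num_classes)
        logits = (self.gamma * sim + self.coef0) ** self.degree   # polynomial kernel
        return F.cross_entropy(logits, labels)


class HierarchicalPolyPrototypeLoss(nn.Module):
    """Weighted sum of per-stage prototype losses; applied only during training."""

    def __init__(self, stage_dims, num_classes, weights=None, **kernel_kwargs):
        super().__init__()
        self.stage_losses = nn.ModuleList(
            [PolyPrototypeLoss(d, num_classes, **kernel_kwargs) for d in stage_dims]
        )
        self.weights = weights if weights is not None else [1.0] * len(stage_dims)

    def forward(self, stage_feature_maps, labels):
        total = torch.zeros((), device=labels.device, dtype=torch.float32)
        for w, fmap, loss_fn in zip(self.weights, stage_feature_maps, self.stage_losses):
            # Global-average-pool each intermediate feature map to a vector.
            pooled = F.adaptive_avg_pool2d(fmap, 1).flatten(1)
            total = total + w * loss_fn(pooled, labels)
        return total


if __name__ == "__main__":
    # Fake intermediate feature maps from three stages of a hypothetical backbone.
    labels = torch.randint(0, 10, (8,))
    feats = [torch.randn(8, c, s, s) for c, s in [(64, 28), (128, 14), (256, 7)]]
    criterion = HierarchicalPolyPrototypeLoss([64, 128, 256], num_classes=10)
    aux_loss = criterion(feats, labels)  # add to the main classification loss
    print(aux_loss.item())
```

Because the prototype heads live only inside this auxiliary criterion, they can simply be discarded after training, which is consistent with the abstract's claim that the method adds no computation at the inference stage.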
ISSN:2076-3417
DOI:10.3390/app122010336