Comparison of state-of-the-art deep learning APIs for image multi-label classification using semantic metrics
Published in: Expert Systems with Applications, 2020-12, Vol. 161, Article 113656
Main Authors: , , , , ,
Format: Article
Language: English
Summary:
Highlights:
• Performance comparison of state-of-the-art image multi-label classification APIs.
• Propose semantic metrics to face the challenge of different APIs’ object-class sets.
• Show the merits of semantic metrics in comparing APIs trained on various datasets.
Image understanding heavily relies on accurate multi-label classification. In recent years, deep learning algorithms have become very successful at such tasks, and various commercial and open-source APIs have been released for public use. However, these APIs are often trained on different datasets, which, besides affecting their performance, can complicate their performance evaluation: the APIs’ training datasets and the benchmark dataset use different object-class dictionaries, so predicted labels that are semantically similar to benchmark labels are counted as wrong simply because they are worded differently. To address this challenge, we propose semantic similarity metrics that yield a richer understanding of the APIs’ predicted labels and thus of their performance. In this study, we evaluate and compare the performance of 13 of the most prominent commercial and open-source APIs in a best-of-breed challenge on the Visual Genome and Open Images benchmark datasets. Our findings demonstrate that, under traditional metrics, the Microsoft Computer Vision, Imagga, and IBM APIs performed better than the others. However, applying semantic metrics reveals the InceptionResNet-v2, Inception-v3, and ResNet50 APIs, which are trained only on the ImageNet dataset, as challengers for the top semantic performers.
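The core idea in the abstract, that a predicted label should count as correct when it is semantically close to a benchmark label rather than only when the wording matches, can be sketched as a thresholded variant of precision and recall. The sketch below is illustrative only: the toy similarity table stands in for a real semantic measure (e.g., WordNet- or embedding-based similarity), and the exact metric definitions in the paper may differ.

```python
# Toy stand-in for a semantic similarity measure between labels.
# In practice this would be WordNet path similarity, embedding cosine
# similarity, or similar; values here are illustrative assumptions.
SIM = {
    ("automobile", "car"): 0.95,
    ("puppy", "dog"): 0.90,
    ("tree", "car"): 0.05,
}

def similarity(a, b):
    """Symmetric lookup: 1.0 for identical labels, table value otherwise."""
    if a == b:
        return 1.0
    return SIM.get((a, b), SIM.get((b, a), 0.0))

def semantic_precision_recall(predicted, truth, threshold=0.8):
    """A predicted label is a hit if some ground-truth label is
    similar enough; a ground-truth label is covered if some predicted
    label is similar enough."""
    hits = sum(
        1 for p in predicted
        if max((similarity(p, t) for t in truth), default=0.0) >= threshold
    )
    covered = sum(
        1 for t in truth
        if max((similarity(p, t) for p in predicted), default=0.0) >= threshold
    )
    precision = hits / len(predicted) if predicted else 0.0
    recall = covered / len(truth) if truth else 0.0
    return precision, recall

# Exact-match scoring would mark "automobile" vs "car" as a miss;
# the semantic version credits it.
p, r = semantic_precision_recall(["automobile", "puppy", "tree"], ["car", "dog"])
```

With the toy table above, "automobile" and "puppy" match "car" and "dog" semantically while "tree" matches nothing, illustrating how a dictionary mismatch between an API's label set and the benchmark's label set stops penalizing near-synonyms.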
ISSN: 0957-4174, 1873-6793
DOI: 10.1016/j.eswa.2020.113656