
Enhancing semantic image retrieval with limited labeled examples via deep learning

Bibliographic Details
Published in: Knowledge-Based Systems, 2019-01, Vol. 163, pp. 252-266
Main Authors: Xu, Haijiao; Huang, Changqin; Wang, Dianhui
Format: Article
Language:English
Description
Summary: With the rapid growth of the Internet, a large number of multi-modal objects such as images and their social tags can easily be downloaded from the Web. Such objects can improve the training process when only a few labeled images are available. To leverage these unlabeled and labeled multi-modal Web objects for enhancing unimodal image retrieval, this paper proposes a novel approach to semantic image retrieval called Semi-supervised Multi-concept Retrieval via Deep Learning (SMRDL). Unlike conventional methods, which treat a semantic multi-concept query as multiple independent concepts, our approach regards the concepts as a holistic scene for multi-concept scene learning in unimodal retrieval. In particular, we first train a multi-modal Convolutional Neural Network (CNN) as a concept classifier for images and texts, and then use it to annotate unlabeled Web images. For each unlabeled image, we obtain its most relevant concept annotations using a new annotation-promotion strategy. Finally, we train a concept classifier in the visual modality with a unimodal visual CNN, using both unlabeled and labeled examples for concept learning in unimodal retrieval. Comprehensive experiments on the MIR Flickr 2011 and NUS-WIDE datasets show that the proposed approach outperforms several state-of-the-art methods.
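The summary describes a three-step pipeline: a multi-modal classifier pseudo-labels unlabeled Web images, an annotation-promotion step keeps only the most confident concepts, and a unimodal visual classifier is then trained on labeled plus pseudo-labeled data. The PyTorch sketch below illustrates that flow under stated assumptions; the feature sizes, the top-k/threshold promotion rule, and the linear layers standing in for the CNNs are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

NUM_CONCEPTS = 10  # assumed size of the concept vocabulary

# Stand-in for the trained multi-modal CNN: any module mapping concatenated
# visual (512-d) and tag (300-d) features to per-concept confidence scores.
multimodal_clf = nn.Sequential(nn.Linear(512 + 300, NUM_CONCEPTS), nn.Sigmoid())

def promote_annotations(scores, top_k=3, thresh=0.5):
    """Assumed annotation-promotion rule: keep each unlabeled image's top-k
    concepts, but only those whose confidence clears a threshold."""
    labels = torch.zeros_like(scores)
    top = scores.topk(top_k, dim=1)
    labels.scatter_(1, top.indices, (top.values >= thresh).float())
    return labels

# Step 1: annotate unlabeled Web images (synthetic features keep this runnable).
unlabeled = torch.randn(100, 512 + 300)
with torch.no_grad():
    pseudo_labels = promote_annotations(multimodal_clf(unlabeled))

# Step 2: train the unimodal visual classifier on labeled + pseudo-labeled data.
visual_clf = nn.Linear(512, NUM_CONCEPTS)  # stands in for the visual CNN head
labeled_x = torch.randn(20, 512)
labeled_y = torch.randint(0, 2, (20, NUM_CONCEPTS)).float()
x = torch.cat([labeled_x, unlabeled[:, :512]])  # visual features only
y = torch.cat([labeled_y, pseudo_labels])

opt = torch.optim.Adam(visual_clf.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()  # multi-label concept learning
for _ in range(5):
    opt.zero_grad()
    loss_fn(visual_clf(x), y).backward()
    opt.step()
```

BCEWithLogitsLoss is used because multi-concept annotation is a multi-label problem: each concept is an independent binary target rather than one mutually exclusive class.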
ISSN: 0950-7051, 1872-7409
DOI: 10.1016/j.knosys.2018.08.032