Transferring Knowledge From Texts to Images by Combining Deep Semantic Feature Descriptors
Main Authors:
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Request full text
Summary: Deep learning techniques have been successfully applied to image processing tasks. Nevertheless, these techniques can be sensitive to small training sets, a common limitation when model training requires labelled instances. Transfer learning has been adopted to overcome this limitation by leveraging richer information in an auxiliary domain to enhance the learning process in a target domain. In this paper, we propose a new transfer-learning solution for labelling images (the target domain) by reusing labelled textual data (the auxiliary domain). A convolutional encoder is used to find latent features for images, while a probabilistic generative model is used to find semantic topics (traits) for texts. An ensemble of classifiers is then used to assign semantic topics to new input images according to their latent features. Experiments were performed to evaluate whether the latent features of the two domains can actually be related, and to verify that the predicted semantic topics can be used to classify images. Promising results were achieved in comparison with several baselines.
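As a concrete illustration of the pipeline the summary describes, the sketch below wires the three pieces together: a convolutional encoder producing image latents, a probabilistic topic model producing semantic topics for texts, and an ensemble classifier bridging the two. This is a minimal sketch under stated assumptions, not the authors' implementation: LDA stands in for the probabilistic generative model, a small PyTorch encoder for the convolutional encoder, a random forest for the ensemble, and the data are synthetic placeholders.

```python
# Hedged sketch of the text-to-image transfer pipeline from the abstract.
# Assumptions (not specified in this record): LDA as the generative topic
# model, a small convolutional encoder, and a random forest ensemble.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.ensemble import RandomForestClassifier

# --- Auxiliary domain: semantic topics from labelled texts (bag-of-words).
texts_bow = np.random.randint(0, 5, size=(200, 1000))   # placeholder corpus
lda = LatentDirichletAllocation(n_components=10)
doc_topics = lda.fit_transform(texts_bow)                # per-document topic mix
dominant_topic = doc_topics.argmax(axis=1)               # one topic label per doc

# --- Target domain: latent features for images via a convolutional encoder.
class ConvEncoder(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 28 -> 14
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 14 -> 7
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

encoder = ConvEncoder()
images = torch.randn(200, 1, 28, 28)                     # placeholder images
with torch.no_grad():
    latent = encoder(images).numpy()                     # image latent features

# --- Bridge: an ensemble classifier maps image latents to text-derived topics.
# Here each training image is assumed to be paired with a text document, so
# the document's dominant topic plays the role of the transfer label.
clf = RandomForestClassifier(n_estimators=100)
clf.fit(latent, dominant_topic)
predicted_topics = clf.predict(latent[:5])               # topics for new images
print(predicted_topics)
```

In the paper's setting, the pairing between images and texts would come from the data themselves, and the predicted topics would then feed the final image-classification step; the synthetic pairing above only shows where that coupling sits in the pipeline.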
ISSN: 2161-4407
DOI: 10.1109/IJCNN.2018.8489058