Fine-tuning deep convolutional neural networks for distinguishing illustrations from photographs
Published in: Expert Systems with Applications, 2016-12, Vol. 66, p. 295-301
Main Authors:
Format: Article
Language: English
Summary:
•Automatically detecting illustrations is needed for the target system.
•Deep Convolutional Neural Networks have been successful in computer vision tasks.
•DCNN with fine-tuning outperformed the other models, including handcrafted features.

Systems for aggregating illustrations require a function for automatically distinguishing illustrations from photographs as they crawl the network to collect images. A previous attempt to implement this functionality by hand-designing basic features deemed useful for classification achieved an accuracy of only about 58%. Deep neural networks, on the other hand, have been successful in computer vision tasks, and convolutional neural networks (CNNs) have performed well at extracting such useful image features automatically. We evaluated alternative methods for implementing this classification functionality, with a focus on deep neural networks. In our experiments, the fine-tuned deep convolutional neural network (DCNN) achieved 96.8% accuracy, outperforming the other models, including custom CNN models trained from scratch. We conclude that a DCNN with fine-tuning is the best method for implementing a function that automatically distinguishes illustrations from photographs.
ISSN: 0957-4174, 1873-6793
DOI: 10.1016/j.eswa.2016.08.057