
A Generic Semi-Supervised Deep Learning-Based Approach for Automated Surface Inspection

Bibliographic Details
Published in: IEEE Access, 2020, Vol. 8, pp. 114088-114099
Main Authors: Zheng, Xiaoqing, Wang, Hongcheng, Chen, Jie, Kong, Yaguang, Zheng, Song
Format: Article
Language:English
Description
Summary: Automated surface inspection (ASI) is critical to quality control in industrial manufacturing processes. Recent advances in deep learning have produced ASI methods that automatically learn high-level features from training samples, remain robust to changes, and can detect different types of surfaces and defects. However, they usually rely heavily on manual effort to collect and label training samples. In this paper, a generic semi-supervised deep learning-based approach for ASI that requires only a small quantity of labeled training data is proposed. The approach follows the MixMatch rules to perform sophisticated data augmentation, and it introduces a new loss-function calculation method and a new convolutional neural network based on a residual structure to achieve accurate defect detection. Experiments are carried out on two public datasets (DAGM and NEU) and one industrial dataset (CCL). For the public datasets, the results are compared against the best benchmarks in the literature; for the industrial dataset, they are compared against deep learning methods based on benchmark neural networks. The proposed method achieves the best performance in all comparisons. In addition, a comparative experiment on model performance with different numbers of labeled samples demonstrates that the proposed method achieves good performance with few labeled training samples.
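
The summary states that training follows the MixMatch rules for data augmentation. The sketch below illustrates only the generic MixMatch steps (label guessing over several augmentations, temperature sharpening, and MixUp); it is not the authors' code. The augment and model_predict functions and the hyperparameters K, T, and alpha are hypothetical placeholders, and the paper's own loss-function calculation and residual network are not reproduced here.

# Illustrative MixMatch-style sketch (Berthelot et al., 2019), NOT the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)

def augment(x):
    # Hypothetical augmentation: add small Gaussian noise to a flattened image batch.
    return x + rng.normal(scale=0.05, size=x.shape)

def model_predict(x, num_classes=2):
    # Hypothetical stand-in for the CNN's softmax output (random logits for illustration).
    logits = rng.normal(size=(x.shape[0], num_classes))
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def sharpen(p, T=0.5):
    # Temperature sharpening: lower T pushes the guessed label toward one-hot.
    p = p ** (1.0 / T)
    return p / p.sum(axis=1, keepdims=True)

def guess_labels(x_unlabeled, K=2):
    # Average predictions over K augmentations of the unlabeled batch, then sharpen.
    preds = np.mean([model_predict(augment(x_unlabeled)) for _ in range(K)], axis=0)
    return sharpen(preds)

def mixup(x1, y1, x2, y2, alpha=0.75):
    # MixUp with lambda' = max(lambda, 1 - lambda), as in MixMatch.
    lam = rng.beta(alpha, alpha)
    lam = max(lam, 1.0 - lam)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

# Toy batch: 4 labeled and 4 unlabeled flattened "images" with 2 classes.
x_l = rng.normal(size=(4, 16))
y_l = np.eye(2)[rng.integers(0, 2, size=4)]
x_u = rng.normal(size=(4, 16))

q_u = guess_labels(x_u)                   # guessed soft labels for unlabeled data
x_mix, y_mix = mixup(x_l, y_l, x_u, q_u)  # mixed batch fed to the network

# In standard MixMatch, the labeled portion is trained with cross-entropy and the
# unlabeled portion with a consistency loss; the paper replaces this loss
# computation with its own method, which is not shown here.
print(x_mix.shape, y_mix.shape)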
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2020.3003588