Moving Object Detection Method via ResNet-18 With Encoder-Decoder Structure in Complex Scenes

Bibliographic Details
Published in: IEEE Access, 2019, Vol. 7, pp. 108152-108160
Main Authors: Ou, Xianfeng; Yan, Pengcheng; Zhang, Yiming; Tu, Bing; Zhang, Guoyun; Wu, Jianhui; Li, Wujing
Format: Article
Language: English
Description
Summary: In complex scenes, dynamic background, illumination variation, and shadow are important factors that cause conventional moving object detection algorithms to perform poorly. To solve this problem, a moving object detection method via ResNet-18 with an encoder-decoder structure is proposed to segment moving objects from complex scenes. ResNet-18 with an encoder-decoder structure possesses pixel-level classification capability to divide pixels into foreground and background, and it performs well in feature extraction because its layers are shallow enough that many more low-scale features are retained. First, the object frames and their corresponding artificial labels are input to the network. Second, feature vectors are generated by the encoder and converted into segmentation maps by the decoder through deconvolution. Third, a rough matching of the moving object regions is obtained; finally, the Euclidean distance is used to match the moving object regions accurately. The proposed method is suitable for scenes where dynamic background, illumination variation, and shadow exist. Experimental results on the public standard CDnet2014 and I2R datasets, in both qualitative and quantitative comparisons, demonstrate that the proposed method significantly outperforms state-of-the-art algorithms, improving the mean F-measure by 1.99% to 29.17%.
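
The abstract describes an encoder that compresses object frames into feature vectors and a deconvolution decoder that expands them back into per-pixel foreground/background segmentation maps. The following is a minimal PyTorch sketch of that kind of architecture, assuming torchvision's resnet18 as the encoder backbone and a five-stage transposed-convolution decoder; the channel widths, layer choices, and class name are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of a ResNet-18 encoder-decoder for foreground segmentation.
# Assumption: torchvision's resnet18 stands in for the paper's encoder;
# the decoder below is a generic deconvolution stack, not the published one.
import torch
import torch.nn as nn
from torchvision.models import resnet18


class ResNet18EncoderDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)
        # Encoder: ResNet-18 through its last residual stage
        # (overall stride 32, 512 output channels).
        self.encoder = nn.Sequential(
            backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool,
            backbone.layer1, backbone.layer2, backbone.layer3, backbone.layer4,
        )

        # Decoder: five transposed convolutions, each doubling the spatial
        # size, recover the input resolution; the final 1-channel map scores
        # each pixel as foreground vs. background.
        def up(cin, cout):
            return nn.Sequential(
                nn.ConvTranspose2d(cin, cout, kernel_size=4, stride=2, padding=1),
                nn.BatchNorm2d(cout),
                nn.ReLU(inplace=True),
            )

        self.decoder = nn.Sequential(
            up(512, 256), up(256, 128), up(128, 64), up(64, 32),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):
        # Returns per-pixel logits; apply sigmoid for a foreground mask.
        return self.decoder(self.encoder(x))


if __name__ == "__main__":
    model = ResNet18EncoderDecoder()
    frames = torch.randn(2, 3, 224, 224)   # a batch of object frames
    masks = torch.sigmoid(model(frames))   # per-pixel foreground probability
    print(masks.shape)                     # torch.Size([2, 1, 224, 224])
```

The abstract's final step, accurate region matching via Euclidean distance, would operate downstream on regions extracted from these masks and is not shown here.
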
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2019.2931922