Object recognition and detection with deep learning for autonomous driving applications

Bibliographic Details
Published in: Simulation (San Diego, Calif.), 2017-09, Vol. 93 (9), pp. 759-769
Main Authors: Uçar, Ayşegül, Demir, Yakup, Güzeliş, Cüneyt
Format: Article
Language:English
Description
Summary: Autonomous driving requires reliable and accurate detection and recognition of surrounding objects in real drivable environments. Although many object detection algorithms have been proposed, not all are robust enough to detect and recognize occluded or truncated objects. In this paper, we propose a novel hybrid Local Multiple system (LM-CNN-SVM) based on Convolutional Neural Networks (CNNs) and Support Vector Machines (SVMs), chosen for their powerful feature extraction capability and robust classification property, respectively. In the proposed system, we first divide the whole image into local regions and employ multiple CNNs to learn local object features. Second, we select discriminative features using Principal Component Analysis. We then feed the selected features into multiple SVMs, which apply both empirical and structural risk minimization, rather than classifying with the CNN directly, to increase the generalization ability of the classifier system. Finally, we fuse the SVM outputs. In addition, we use both the pre-trained AlexNet and a new CNN architecture. We carry out object recognition and pedestrian detection experiments on the Caltech-101 and Caltech Pedestrian datasets. Comparisons to the best state-of-the-art methods show that the proposed system achieves better results.
ISSN: 0037-5497, 1741-3133
DOI: 10.1177/0037549717709932
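
The summary above outlines a multi-stage pipeline: split each image into local regions, extract per-region CNN features, reduce them with PCA, classify with per-region SVMs, and fuse the SVM outputs. The Python sketch below illustrates one plausible reading of that pipeline using torchvision's pretrained AlexNet and scikit-learn; the 2x2 region grid, the PCA dimensionality, the RBF-kernel SVMs, and the majority-vote fusion are illustrative assumptions, not the exact configuration from the paper.

# A minimal sketch of an LM-CNN-SVM-style pipeline. Region layout,
# PCA size, SVM kernel, and the fusion rule are assumptions for
# illustration only.
import numpy as np
import torch
from torchvision import models, transforms
from sklearn.decomposition import PCA
from sklearn.svm import SVC

# Pretrained AlexNet; we take the 4096-d output of its penultimate layer.
alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()
feature_net = torch.nn.Sequential(
    alexnet.features, alexnet.avgpool, torch.nn.Flatten(),
    *list(alexnet.classifier.children())[:-1],  # drop the final FC layer
)
preprocess = transforms.Compose([
    transforms.ToTensor(),                      # HWC uint8 -> CHW float in [0, 1]
    transforms.Resize((224, 224)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def local_regions(image, grid=2):
    """Split an image (H x W x 3 uint8 array) into a grid x grid set of regions."""
    h, w = image.shape[:2]
    return [image[i * h // grid:(i + 1) * h // grid,
                  j * w // grid:(j + 1) * w // grid]
            for i in range(grid) for j in range(grid)]

def region_features(images, region_idx, grid=2):
    """CNN features for one local region across a batch of images."""
    feats = []
    with torch.no_grad():
        for img in images:
            patch = local_regions(img, grid)[region_idx]
            feats.append(feature_net(preprocess(patch).unsqueeze(0)).squeeze(0).numpy())
    return np.stack(feats)

def train_lm_cnn_svm(train_images, labels, grid=2, n_components=128):
    """Fit one PCA + SVM per local region; returns the per-region models."""
    models_per_region = []
    for r in range(grid * grid):
        feats = region_features(train_images, r, grid)
        pca = PCA(n_components=min(n_components, len(feats))).fit(feats)
        svm = SVC(kernel="rbf").fit(pca.transform(feats), labels)
        models_per_region.append((pca, svm))
    return models_per_region

def predict(models_per_region, test_images, grid=2):
    """Fuse per-region SVM decisions by majority vote (assumes integer labels)."""
    votes = []
    for r, (pca, svm) in enumerate(models_per_region):
        feats = region_features(test_images, r, grid)
        votes.append(svm.predict(pca.transform(feats)))
    votes = np.stack(votes)  # shape: (n_regions, n_images)
    return np.array([np.bincount(col).argmax() for col in votes.T])

Majority voting is only one possible fusion rule; the abstract says the SVM outputs are fused but does not specify how, so a weighted or decision-value-based combination would be an equally valid reading.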