Camera and Radar Sensor Fusion for Robust Vehicle Localization via Vehicle Part Localization
Published in: IEEE Access, 2020, Vol. 8, pp. 75223-75236
Format: Article
Language: English
Summary: Many production vehicles are now equipped with both cameras and radar to provide various driver-assistance systems (DAS) with position information about surrounding objects. These sensors, however, cannot provide position information accurate enough to realize highly automated driving functions and other advanced driver-assistance systems (ADAS). Sensor fusion methods have been proposed to overcome these limitations, but they tend to show limited gains in detection accuracy and robustness. In this study, we propose a camera-radar sensor fusion framework for robust vehicle localization based on vehicle part (rear corner) detection and localization. The main idea of the proposed method is to reinforce the azimuth-angle accuracy of the radar information by detecting and localizing the rear corner of the target vehicle in an image. This part-based fusion approach enables accurate vehicle localization as well as robust performance with respect to occlusions. For efficient part detection, several candidate points are generated around the initial radar point. A widely adopted deep-learning approach is then used to detect and localize the left and right rear corners of target vehicles. The corner detection network outputs a reliability score based on the localization uncertainty of the center point of each corner part. Using these position reliability scores together with a particle filter, the most probable rear-corner positions are estimated. The estimated positions (in pixel coordinates) are translated into angular data, and the surrounding vehicle is localized with respect to the ego-vehicle by combining the angular data of the rear corner with the radar's range data in the lateral and longitudinal directions. Experimental results show that the proposed method provides significantly better localization performance in the lateral direction, with greatly reduced maximum errors (radar: 3.02 m; proposed method: 0.66 m) and root-mean-squared errors (radar: 0.57 m; proposed method: 0.18 m).
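To make the final fusion step in the abstract concrete, the sketch below shows how a camera-derived rear-corner azimuth could be combined with the radar's range to localize the target in the ego-vehicle frame. This is a minimal illustration assuming a calibrated pinhole camera and an ego-centered, flat-ground coordinate convention (x lateral, y longitudinal); the function names, symbols, and numeric values are illustrative assumptions, not the authors' implementation.

```python
import math

def pixel_to_azimuth(u_px: float, cx_px: float, focal_px: float) -> float:
    """Convert an image column to an azimuth angle via a pinhole-camera model.

    u_px: column of the detected rear-corner center point,
    cx_px: principal-point column, focal_px: focal length in pixels.
    All parameter names are illustrative.
    """
    return math.atan2(u_px - cx_px, focal_px)

def localize_target(radar_range_m: float, corner_azimuth_rad: float):
    """Fuse the radar's accurate range with the camera's accurate azimuth.

    Returns (lateral, longitudinal) offsets of the rear corner in the
    ego-vehicle frame, following the lateral/longitudinal decomposition
    described in the abstract.
    """
    lateral = radar_range_m * math.sin(corner_azimuth_rad)
    longitudinal = radar_range_m * math.cos(corner_azimuth_rad)
    return lateral, longitudinal

# Example: corner detected 40 px right of the principal point with a
# 1200 px focal length, and a radar range of 20 m.
azimuth = pixel_to_azimuth(u_px=640 + 40, cx_px=640, focal_px=1200)
x, y = localize_target(20.0, azimuth)
print(f"lateral: {x:.2f} m, longitudinal: {y:.2f} m")
# ≈ lateral: 0.67 m, longitudinal: 19.99 m
```

The point of this decomposition is that radar range error is small, so lateral error is dominated by the azimuth term; replacing the radar's coarse azimuth with the image-derived corner angle is what drives the reported lateral-error reduction.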
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2020.2985075