Improving Radar-Camera Fusion-based 3D Object Detection for Autonomous Vehicles
Format: Conference Proceeding
Language: English
Summary: Object detection is essential for autonomous driving as it provides knowledge of the state of surrounding objects. Beyond the objects' 3D bounding boxes, 3D object detection also estimates their velocity and attributes. Radar-camera fusion-based object detection methods have the potential to deliver accurate and robust results even in harsh conditions while being more affordable than lidar-based solutions. However, current radar-camera fusion-based methods are still significantly outperformed by their lidar-based counterparts, even though radar provides direct velocity measurements, which lidar lacks. In this work, we propose a feature-level radar-camera fusion-based object detection architecture. Our architecture associates preliminary detections produced by an image-based pipeline with radar detection clusters using a frustum-based association mechanism. By using radar detection clusters instead of individual points, we can extract features that allow the model to use the geometry of the clusters to refine the preliminary detections. We evaluated our architecture on the nuScenes dataset and achieved a nuScenes detection score (NDS) of 0.465. We also find that our architecture achieves significantly better orientation, velocity, and attribute estimation compared to other radar-camera fusion-based methods.
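The frustum-based association described in the abstract can be illustrated with a minimal sketch: a 2D image detection box, back-projected through the camera intrinsics, defines a viewing frustum, and radar points that project inside the box are candidates for association with that detection. The function name, arguments, and depth threshold below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of frustum-based radar-camera association.
# Names and thresholds are assumptions, not the paper's implementation.
import numpy as np

def in_frustum(box_2d, points_3d, K, max_depth=60.0):
    """Return a boolean mask of radar points whose image projection
    falls inside the 2D detection box (i.e., inside its frustum).

    box_2d   : (x1, y1, x2, y2) pixel corners of the detection.
    points_3d: (N, 3) radar points in the camera frame
               (x right, y down, z forward).
    K        : (3, 3) camera intrinsic matrix.
    """
    x1, y1, x2, y2 = box_2d
    z = points_3d[:, 2]
    valid = (z > 0) & (z < max_depth)      # in front of the camera, in range
    proj = (K @ points_3d.T).T             # project onto the image plane
    u = proj[:, 0] / proj[:, 2]
    v = proj[:, 1] / proj[:, 2]
    inside = (u >= x1) & (u <= x2) & (v >= y1) & (v <= y2)
    return valid & inside
```

Points selected this way would then be grouped into clusters, so that cluster-level geometric features, rather than a single radar return, refine the preliminary detection.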
ISSN: 2470-640X
DOI: 10.1109/ICSET57543.2022.10011030