Feature Aware Re-weighting (FAR) in Bird’s Eye View for LiDAR-based 3D object detection in autonomous driving applications
Published in: | Robotics and Autonomous Systems, 2024-05, Vol. 175, p. 104664, Article 104664 |
---|---|
Main Authors: | , , , |
Format: | Article |
Language: | English |
Summary: | 3D object detection is a key element for the perception of autonomous vehicles. LiDAR sensors are commonly used to perceive the surrounding area, producing a sparse representation of the scene in the form of a point cloud. The current trend is to use deep learning neural network architectures that predict 3D bounding boxes. The vast majority of architectures process the LiDAR point cloud directly but, due to computation and memory constraints, at some point they compress the input to a 2D Bird’s Eye View (BEV) representation. In this work, we propose a novel 2D neural network architecture, namely the Feature Aware Re-weighting Network, for feature extraction in BEV using local context via an attention mechanism, to improve the 3D detection performance of LiDAR-based detectors. Extensive experiments on five state-of-the-art detectors and three benchmarking datasets, namely KITTI, Waymo and nuScenes, demonstrate the effectiveness of the proposed method in terms of both detection performance and minimal added computational burden. We release our code at https://github.com/grgzam/FAR.
Highlights:
- Feature Aware Re-weighting (FAR) in BEV for LiDAR-based 3D object detection in autonomous driving applications.
- Local context extracted in BEV via attention is essential for improved performance.
- The modularity of the FAR Network allows adaptation into existing 3D object detectors.
- The FAR Network is evaluated on five SOTA detectors and three datasets: KITTI, Waymo and nuScenes.
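The record does not reproduce the FAR Network's exact architecture, but the core idea it describes, re-weighting cells of a 2D BEV feature map using attention over local context, can be sketched generically. Everything below is illustrative and not taken from the paper: the function name `far_reweight`, the k×k mean-pooled context, and the sigmoid gate are all assumptions standing in for the actual attention mechanism.

```python
import numpy as np

def far_reweight(bev: np.ndarray, k: int = 3) -> np.ndarray:
    """Illustrative feature re-weighting over a BEV pseudo-image.

    bev: (C, H, W) array of LiDAR features compressed to Bird's Eye View.
    Each cell's feature vector is scaled by a gate in (0, 1) computed
    from its k x k local context (a stand-in for learned attention).
    """
    C, H, W = bev.shape
    pad = k // 2
    padded = np.pad(bev, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    # Local context: per-channel mean over the k x k neighbourhood.
    ctx = np.zeros_like(bev)
    for dy in range(k):
        for dx in range(k):
            ctx += padded[:, dy:dy + H, dx:dx + W]
    ctx /= k * k
    # Attention-style gate: cells whose features agree with their local
    # context get weights near 1, others are suppressed toward 0.
    gate = 1.0 / (1.0 + np.exp(-(bev * ctx).sum(axis=0, keepdims=True)))
    return bev * gate  # same (C, H, W) shape, re-weighted per cell

rng = np.random.default_rng(0)
bev = rng.standard_normal((8, 16, 16))
out = far_reweight(bev)
print(out.shape)  # (8, 16, 16)
```

Because the gate lies strictly in (0, 1), the module only modulates feature magnitudes and preserves the map's shape, which is consistent with the highlighted modularity claim: a block like this can be dropped between the BEV compression step and the detection head of an existing detector.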
ISSN: | 0921-8890, 1872-793X |
DOI: | 10.1016/j.robot.2024.104664 |