V2VFusion: Multimodal Fusion for Enhanced Vehicle-to-Vehicle Cooperative Perception
Main Authors:
Format: Conference Proceeding
Language: English
Subjects:
Summary: Current vehicle-to-vehicle (V2V) research mainly centers on either LiDAR-based or camera-based perception. Yet, combining data from multiple sensors offers a more complete and precise understanding of the environment. This paper presents V2VFusion, a multimodal perception framework that fuses LiDAR and camera sensor inputs to improve the performance of V2V systems. Firstly, we implement a baseline system for multimodal fusion in V2V scenarios, effectively integrating data from LiDAR and camera sensors; this baseline provides a comparable benchmark for subsequent research. Secondly, we explore different fusion strategies, including concatenation, element-wise summation, and transformer-based methods, to investigate their impact on fusion performance. Lastly, we conduct experiments and evaluation on the OPV2V dataset. The experimental results demonstrate that the multimodal perception method achieves better performance and robustness in V2V tasks, providing more accurate object detection results and thereby improving the safety and reliability of autonomous driving systems.
ISSN: 2688-0938
DOI: 10.1109/CAC59555.2023.10450676
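
The abstract compares three feature-level fusion strategies (concatenation, element-wise summation, and transformer-based fusion) for combining LiDAR and camera features. Below is a minimal PyTorch sketch of how such fusion modules are commonly structured; the class names, tensor shapes, channel counts, and attention configuration are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of the three fusion strategies named in the abstract,
# applied to LiDAR and camera bird's-eye-view feature maps of shape (B, C, H, W).
# All names and hyperparameters here are assumptions for illustration only.
import torch
import torch.nn as nn


class ConcatFusion(nn.Module):
    """Concatenate along the channel axis, then reduce back with a 1x1 convolution."""
    def __init__(self, channels: int = 256):
        super().__init__()
        self.reduce = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, lidar_feat: torch.Tensor, cam_feat: torch.Tensor) -> torch.Tensor:
        return self.reduce(torch.cat([lidar_feat, cam_feat], dim=1))


class SumFusion(nn.Module):
    """Element-wise summation; assumes the two feature maps share the same shape."""
    def forward(self, lidar_feat: torch.Tensor, cam_feat: torch.Tensor) -> torch.Tensor:
        return lidar_feat + cam_feat


class TransformerFusion(nn.Module):
    """Cross-attention fusion: LiDAR feature tokens attend to camera feature tokens."""
    def __init__(self, channels: int = 256, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, lidar_feat: torch.Tensor, cam_feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = lidar_feat.shape
        # Flatten the spatial grid into a token sequence of shape (B, H*W, C).
        q = lidar_feat.flatten(2).transpose(1, 2)
        kv = cam_feat.flatten(2).transpose(1, 2)
        fused, _ = self.attn(q, kv, kv)
        fused = self.norm(q + fused)  # residual connection around the attention block
        return fused.transpose(1, 2).reshape(b, c, h, w)


if __name__ == "__main__":
    # Toy BEV feature maps; real spatial resolution and channels will differ.
    lidar = torch.randn(2, 256, 64, 64)
    camera = torch.randn(2, 256, 64, 64)
    for fusion in (ConcatFusion(), SumFusion(), TransformerFusion()):
        print(type(fusion).__name__, fusion(lidar, camera).shape)
```

The trade-offs these variants expose are the kind the paper compares: concatenation doubles the channel dimension and needs an extra reduction layer, element-wise summation is cheapest but requires aligned feature maps, and the transformer variant lets one modality selectively attend to the other at additional computational cost.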