2025
Conference Paper
Title
Improving Object Detection Through Multi-Perspective LiDAR Fusion
Abstract
Detecting relevant objects in the driving environment is crucial for autonomous driving. Using neural-network-based detection on LiDAR scans and images for this task is one possibility and is already well researched. With advances in the V2N communication stack, object detection can be shifted towards the edge cloud, enabling collaborative data collection and the consideration of multiple perspectives prior to detection. In this paper, we present an initial analysis of this idea by using the Eclipse MOSAIC co-simulation framework to develop and test the fusion of multi-perspective LiDAR frames and subsequent object detection. We generate synthetic LiDAR data from the views of multiple vehicles for detection training and use it to assess the robustness of our approach with regard to positioning and latency requirements. We find that fusing data from multiple perspectives primarily improves the detection of largely or fully occluded objects, which could improve situation recognition and, consequently, decision making.
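The core operation the abstract describes is merging LiDAR frames captured from different vehicle perspectives into one point cloud before detection. A minimal sketch of that idea, under simplifying assumptions (planar 2D points, known vehicle poses, hypothetical helper names; the paper's actual pipeline works on 3D LiDAR frames inside Eclipse MOSAIC):

```python
import numpy as np

def pose_matrix(x, y, yaw):
    """Homogeneous 2D transform from a vehicle's local frame to the world frame."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

def fuse_scans(scans, poses):
    """Transform each vehicle-local scan into the shared frame and concatenate.

    scans: list of (N_i, 2) arrays of local LiDAR points
    poses: list of (x, y, yaw) vehicle poses in the shared frame
    """
    fused = []
    for pts, pose in zip(scans, poses):
        T = pose_matrix(*pose)
        homog = np.hstack([pts, np.ones((len(pts), 1))])  # append homogeneous 1s
        fused.append((homog @ T.T)[:, :2])                # rotate + translate
    return np.vstack(fused)

# Two vehicles see points in their own frames; fusion yields one shared cloud.
scan_a = np.array([[1.0, 0.0]])          # point ahead of vehicle A
scan_b = np.array([[1.0, 0.0]])          # point ahead of vehicle B
cloud = fuse_scans([scan_a, scan_b],
                   [(0.0, 0.0, 0.0),     # A at origin, facing +x
                    (2.0, 0.0, np.pi / 2)])  # B at (2, 0), facing +y
```

Accurate poses matter here: any positioning error shifts the transformed points directly, which is why the paper assesses robustness with regard to positioning (and latency, since stale poses have the same effect).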