
Development of a Method for the Fusion of Environment Perception Sensor Data for an Automated Vehicle

Author: Uecker, Marc
Supervisors: Linnhoff, Clemens; Kuijper, Arjan

Darmstadt, 2021, 112 pp.
Darmstadt, TU, Master Thesis, 2021
Master Thesis
Fraunhofer IGD
Lead Topic: Smart City; Research Line: Computer vision (CV); Research Line: Machine Learning (ML); machine learning; range image fusion; sensor fusion; semantic data modeling; real-time sensor visualization

Autonomous driving is currently one of the most anticipated future technologies of the automotive world, and researchers from all over the world are dedicated to this task. In the same pursuit, the aDDa project at TU Darmstadt is a collaboration of researchers and students focused on jointly engineering a car into a fully autonomous vehicle. As such, the aDDa research vehicle is outfitted with a wide array of sensors for environment perception. Within the scope of the aDDa project, this thesis covers the fusion of data from LIDAR, RADAR, and camera sensors into a unified environment model. Specifically, this work focuses on providing real-time environment perception, including fusion and interpretation of data from different sensors, using only on-board hardware resources. The developed method is a software pipeline consisting of an analytical low-level sensor fusion stage, a 3D semantic segmentation model based on deep learning, and analytical clustering and tracking methods, as well as a proof of concept for estimating drivable space. The method was designed to maximize robustness by minimizing the influence of the machine learning component on the reliability of obstacle detection. The sensor fusion pipeline runs in real time with an output frequency of 10 Hz and a pipeline delay of 120 to 190 milliseconds in the situations encountered on public roads. An evaluation of several scenarios shows that the developed system can reliably detect a target vehicle in a variety of real-world situations. The contributions of this work include not only the development of a sensor fusion pipeline, but also methods for sensor calibration and a novel method for generating training data for the machine learning approach used. In contrast to existing manual methods of data annotation, this work presents a scalable solution for annotating real-world sensor recordings to generate training data for 3D machine perception approaches for autonomous driving.
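The abstract and keywords mention range image fusion as part of the low-level fusion stage. As a rough illustration of what such a representation involves (not the thesis's actual implementation; the function name, resolution, and field-of-view parameters below are purely illustrative assumptions), a LIDAR point cloud can be projected into a 2D range image via spherical projection:

```python
import numpy as np

def lidar_to_range_image(points, h=64, w=1024, fov_up=15.0, fov_down=-15.0):
    """Project an (N, 3) LIDAR point cloud into an (h, w) range image
    using a spherical projection. fov_up/fov_down give the vertical
    field of view in degrees; all parameters are illustrative."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)      # range of each point
    yaw = np.arctan2(y, x)                  # azimuth angle in [-pi, pi]
    pitch = np.arcsin(z / r)                # elevation angle

    fov_up_rad = np.radians(fov_up)
    fov_rad = np.radians(fov_up - fov_down)

    # Map angles to pixel coordinates (column from azimuth, row from elevation).
    u = 0.5 * (1.0 - yaw / np.pi) * w
    v = (fov_up_rad - pitch) / fov_rad * h
    u = np.clip(np.floor(u), 0, w - 1).astype(int)
    v = np.clip(np.floor(v), 0, h - 1).astype(int)

    # Write far points first so the closest return survives per pixel.
    order = np.argsort(-r)
    image = np.full((h, w), -1.0)           # -1.0 marks empty pixels
    image[v[order], u[order]] = r[order]
    return image
```

A pixel-aligned image like this allows LIDAR ranges and camera-derived data to be combined with simple 2D operations, and also serves as a natural input format for 2D network architectures applied to 3D semantic segmentation.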