License: CC BY-SA 4.0
Author: Dürr, Fabian
Contributor: Beyerer, Jürgen
Date: 2023-10-30
Year: 2023
ISBN: 978-3-7315-1314-8
URI: https://publica.fraunhofer.de/handle/publica/452381
DOI: https://doi.org/10.24406/publica-2089
Further DOI: 10.5445/KSP/1000161158

Abstract: The understanding and interpretation of complex 3D environments is a key challenge of autonomous driving. Lidar sensors and their recorded point clouds are particularly interesting for this challenge since they provide accurate 3D information about the environment. This work presents a multimodal approach based on deep learning for panoptic segmentation of 3D point clouds. It builds upon and combines three key aspects: a multi-view architecture, temporal feature fusion, and deep sensor fusion.

Language: English
Keywords: Temporal Fusion; Sensor Fusion; Semantic Segmentation; Panoptic Segmentation; Deep Learning
Title: Multimodal Panoptic Segmentation of 3D Point Clouds
Type: Doctoral thesis
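As a rough illustration of what panoptic segmentation of a point cloud outputs (not the thesis's actual architecture or fusion scheme), the following minimal NumPy sketch combines hypothetical per-point semantic scores with instance IDs into a joint panoptic labeling; the function name fuse_panoptic, the label encoding, and all inputs are assumptions made for this example.

```python
import numpy as np

def fuse_panoptic(semantic_logits, instance_ids, stuff_classes):
    """Illustrative sketch (not the thesis's method): merge per-point
    semantic predictions and instance IDs into a panoptic labeling.
    'Stuff' points keep only their class; 'thing' points also carry
    an instance ID.

    semantic_logits: (N, C) per-point class scores
    instance_ids:    (N,) per-point instance IDs (0 = no instance)
    stuff_classes:   set of class indices treated as 'stuff'
    """
    semantic = semantic_logits.argmax(axis=1)          # predicted class per point
    panoptic = semantic.astype(np.int64) * 1000        # encode class in the thousands
    thing_mask = ~np.isin(semantic, list(stuff_classes))
    panoptic[thing_mask] += instance_ids[thing_mask]   # append instance ID for 'things'
    return semantic, panoptic

# Toy usage: 5 points, 3 classes (0 = road as 'stuff', 1 = car, 2 = pedestrian)
rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 3))
instances = np.array([0, 1, 1, 2, 0])
sem, pan = fuse_panoptic(logits, instances, stuff_classes={0})
print(sem, pan)
```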