2020
Conference Paper
Title
Iterative Deep Fusion for 3D Semantic Segmentation
Abstract
Understanding and interpreting a scene is a key task of environment perception for autonomous driving, which is why autonomous vehicles are equipped with a wide range of different sensors. Semantic segmentation of sensor data provides valuable information for this task and is often seen as a key enabler. In this paper, we present a deep learning approach for 3D semantic segmentation of lidar point clouds. The proposed architecture uses a range view representation of 3D point clouds and additionally exploits camera features to increase accuracy and robustness. In contrast to other approaches, which fuse lidar and camera feature maps only once, we fuse them iteratively and at different scales inside our network architecture. We demonstrate the benefits of the presented iterative deep fusion approach over single-fusion approaches on a large benchmark dataset. Our evaluation shows considerable improvements resulting from the additional use of camera features. Furthermore, our fusion strategy outperforms the current state-of-the-art strategy by a considerable margin. Despite the use of camera features, the presented approach is also trainable solely with point cloud labels.
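The abstract's central idea, fusing camera features into the lidar branch repeatedly at several encoder scales rather than once, can be illustrated with a short sketch. The following PyTorch code is a minimal, hypothetical rendition of that pattern, not the paper's actual architecture: module names (`FusionBlock`, `IterativeFusionNet`), channel counts, the five-channel range-image input, and the assumption that camera features are already warped into the range-view grid are all illustrative choices; the real network would also include a decoder and a projection step that this sketch omits.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FusionBlock(nn.Module):
    """Fuses camera features into lidar range-view features at one scale.

    Assumes cam_feat has already been projected into the range-view grid.
    """

    def __init__(self, lidar_ch, cam_ch):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(lidar_ch + cam_ch, lidar_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(lidar_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, lidar_feat, cam_feat):
        # Resample camera features to the current lidar feature resolution,
        # then fuse by concatenation + convolution.
        cam_feat = F.interpolate(
            cam_feat, size=lidar_feat.shape[-2:],
            mode="bilinear", align_corners=False,
        )
        return self.fuse(torch.cat([lidar_feat, cam_feat], dim=1))


class IterativeFusionNet(nn.Module):
    """Hypothetical encoder that repeats the camera fusion at every scale,
    in contrast to single-fusion designs that fuse only once."""

    def __init__(self, scales=(32, 64, 128), cam_ch=64, num_classes=20):
        super().__init__()
        in_ch = 5  # e.g. range, x, y, z, intensity channels of the range image
        self.encoders = nn.ModuleList()
        self.fusions = nn.ModuleList()
        for out_ch in scales:
            self.encoders.append(nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            ))
            self.fusions.append(FusionBlock(out_ch, cam_ch))
            in_ch = out_ch
        self.head = nn.Conv2d(scales[-1], num_classes, kernel_size=1)

    def forward(self, range_img, cam_feat):
        x = range_img
        for encode, fuse in zip(self.encoders, self.fusions):
            x = encode(x)
            x = fuse(x, cam_feat)  # fusion happens at each scale, not just once
        return self.head(x)


# Usage with dummy tensors: a 64x2048 range image and pre-projected camera features.
net = IterativeFusionNet()
logits = net(torch.randn(1, 5, 64, 2048), torch.randn(1, 64, 64, 2048))
print(logits.shape)  # per-class logits at the coarsest encoder scale
```

Because the segmentation loss is computed only on the lidar branch's output, a design like this can be trained with point cloud labels alone, which matches the abstract's final claim.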
Author(s)