Duerr, Fabian; Weigel, Hendrik; Beyerer, Jürgen
2022-08-18
2022
https://publica.fraunhofer.de/handle/publica/419882
DOI: 10.1109/icra46639.2022.9811998

Panoptic segmentation of point clouds is one of the key challenges of 3D scene understanding, requiring the simultaneous prediction of semantics and object instances. Tasks like autonomous driving strongly depend on this information to gain a holistic understanding of the 3D environment. This work presents a novel proposal-free framework for lidar-based panoptic segmentation that exploits three different point cloud representations, leveraging their strengths and compensating for their weaknesses. The efficient projection-based range view and bird's eye view are combined and further extended by a point-based network with a novel attention-based neighborhood aggregation for improved semantic features. Cluster-based object recognition in bird's eye view enables efficient, high-quality instance segmentation. Semantic and instance segmentation are fused and further refined by a novel instance classification to obtain the final panoptic segmentation. Results on two challenging large-scale datasets, nuScenes and SemanticKITTI, show the success of the proposed framework, which outperforms all existing approaches on nuScenes and achieves state-of-the-art results on SemanticKITTI.

en
RangeBird: Multi View Panoptic Segmentation of 3D Point Clouds with Neighborhood Attention
conference paper
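
The attention-based neighborhood aggregation mentioned in the abstract can be illustrated with a minimal sketch: for each point, the features of its k nearest neighbors are weighted by learned attention scores and summed. The PyTorch module below is a hypothetical illustration only, not the authors' implementation; the layer sizes, the choice of k, and the scoring MLP are assumptions.

```python
# Hypothetical sketch of attention-based neighborhood aggregation for point features.
# NOT the paper's implementation; feature dimension, k, and the scoring MLP are assumptions.
import torch
import torch.nn as nn


class NeighborhoodAttention(nn.Module):
    """Aggregates each point's k-nearest-neighbor features with learned attention weights."""

    def __init__(self, feat_dim: int = 64, k: int = 16):
        super().__init__()
        self.k = k
        # Scores a (center feature, neighbor feature, relative position) triple -> one logit.
        self.score_mlp = nn.Sequential(
            nn.Linear(2 * feat_dim + 3, feat_dim),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim, 1),
        )
        self.value_proj = nn.Linear(feat_dim, feat_dim)

    def forward(self, xyz: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3) point coordinates, feats: (N, C) per-point features.
        # Brute-force kNN over pairwise distances (fine for a sketch, not for full lidar scans).
        dists = torch.cdist(xyz, xyz)                         # (N, N)
        knn_idx = dists.topk(self.k, largest=False).indices   # (N, k), includes the point itself

        nbr_feats = feats[knn_idx]                             # (N, k, C)
        nbr_xyz = xyz[knn_idx]                                 # (N, k, 3)
        center_feats = feats.unsqueeze(1).expand(-1, self.k, -1)
        rel_pos = nbr_xyz - xyz.unsqueeze(1)                   # relative neighbor positions

        # Attention weights over each point's neighborhood.
        logits = self.score_mlp(torch.cat([center_feats, nbr_feats, rel_pos], dim=-1))
        attn = torch.softmax(logits, dim=1)                    # (N, k, 1)

        # Weighted sum of projected neighbor features.
        return (attn * self.value_proj(nbr_feats)).sum(dim=1)  # (N, C)


if __name__ == "__main__":
    pts = torch.randn(1024, 3)
    fts = torch.randn(1024, 64)
    out = NeighborhoodAttention(feat_dim=64, k=16)(pts, fts)
    print(out.shape)  # torch.Size([1024, 64])
```

In such a design the relative neighbor positions let the attention scores depend on local geometry as well as on feature similarity, which is one plausible way a point-based branch could refine the semantic features coming from the projection-based range and bird's eye views.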