Extraction and matching of 3D features for LiDAR-based self-localization in an urban environment
Geolocation of vehicles, objects, or people is commonly performed using global navigation satellite system (GNSSS) receivers. Such a receiver for GNSS-based positioning is either built into the vehicle, or a separate handheld device such as a smartphone is used. Self-localization in this way is simple and accurate to within a few meters. Environments where no GNSS service is available require other strategies for self-localization. Especially in the military domain, it is necessary to be prepared for such GNSS-denied scenarios. Awareness of one's own position relative to other units is crucial in military operations, especially when joint operations have to be coordinated geographically and temporally. However, even if a common map-like representation of the terrain is available, precise self-localization relative to this map is not necessarily easy. In this paper, we propose an approach for LiDAR-based localization of a vehicle-mounted sensor platform in an urban environment. Our approach uses 360° scanning LiDAR sensors to generate short-duration point clouds of the local environment. In these point clouds, we detect pole-like 3D features such as traffic sign poles, lampposts, or tree trunks. The relative distances and orientations of these features to one another are highly distinctive, and the matrix of these individual distances and orientations can be used to determine the position of the sensor relative to a current map. This map can either be created in advance for the entire area, or generated by a cooperative preceding vehicle with an equivalent sensor setup. By matching the detected LiDAR-based 3D features with those of the map, not only the position of the sensor platform but also its orientation can be determined. We provide first experimental results of the proposed method, obtained with measurements by Fraunhofer IOSB's sensor-equipped vehicle MODISSA.
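To illustrate the core idea of matching pole-like features via their mutual distances, the following is a minimal sketch, not the authors' actual implementation. It assumes poles are reduced to 2D ground-plane positions, uses a greedy comparison of each pole's sorted pairwise-distance signature (which is invariant under rigid motion) to find correspondences, and recovers the sensor pose with the standard Kabsch/Procrustes alignment. All function names and the tolerance parameter are illustrative assumptions.

```python
import numpy as np

def pairwise_distances(points):
    """N x N matrix of Euclidean distances between all pole positions."""
    diff = points[:, None, :] - points[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def match_poles(scan_poles, map_poles, tol=0.2):
    """Greedily match poles by comparing sorted pairwise-distance signatures.

    Each pole's 'signature' is the sorted vector of its distances to all
    other poles in the same set; a scan pole is matched to the map pole
    whose signature agrees within `tol` meters (hypothetical threshold).
    """
    d_scan = np.sort(pairwise_distances(scan_poles), axis=1)
    d_map = np.sort(pairwise_distances(map_poles), axis=1)
    matches = []
    for i, sig in enumerate(d_scan):
        errs = np.abs(d_map - sig).max(axis=1)  # assumes equal set sizes
        j = int(np.argmin(errs))
        if errs[j] < tol:
            matches.append((i, j))
    return matches

def estimate_pose(scan_poles, map_poles, matches):
    """2D rigid transform (R, t) with map = R @ scan + t, via Kabsch."""
    src = scan_poles[[i for i, _ in matches]]
    dst = map_poles[[j for _, j in matches]]
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t
```

Because inter-pole distances are preserved under rotation and translation, the signatures let the scan be matched to the map without knowing the sensor pose in advance; the pose then falls out of the alignment step, yielding both position and orientation as described above.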