  • Publication
    Efficient Tour Planning for a Measurement Vehicle by Combining Next Best View and Traveling Salesman
Path planning for a measuring vehicle requires solving two popular problems from computer science, namely the search for the optimal tour and the search for the optimal viewpoint. Combining both problems results in a new variation of the Traveling Salesman Problem, which we refer to as the Explorational Traveling Salesman Problem. The solution to this problem is the optimal tour with a minimum of observations. In this paper, we formulate the basic problem, discuss it in the context of the existing literature and present an iterative solution algorithm. We demonstrate how the method can be applied directly to LiDAR data using an occupancy grid. The ability of our algorithm to generate suitably efficient tours is verified on two synthetic benchmark datasets, using a ground truth determined by an exhaustive search.
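The coupling of viewpoint selection and tour optimization described in the abstract can be illustrated with a minimal sketch. This is not the paper's iterative algorithm, only a toy baseline under assumed inputs: candidate viewpoints with known positions, and a precomputed coverage set (which grid cells each viewpoint observes). It greedily picks viewpoints until all cells are covered, then orders them with a nearest-neighbour tour.

```python
import math

def greedy_etsp(viewpoints, coverage, start=(0.0, 0.0)):
    """Toy ETSP baseline: greedy set cover over viewpoints,
    then a nearest-neighbour ordering of the chosen viewpoints.
    viewpoints: {name: (x, y)}, coverage: {name: set of cell ids}."""
    remaining = set().union(*coverage.values())
    chosen = []
    while remaining:
        # Viewpoint observing the most still-unseen cells.
        best = max(coverage, key=lambda v: len(coverage[v] & remaining))
        if not coverage[best] & remaining:
            break  # no viewpoint can observe the remaining cells
        chosen.append(best)
        remaining -= coverage[best]
    # Nearest-neighbour tour over the chosen viewpoints.
    tour, pos, todo = [], start, set(chosen)
    while todo:
        nxt = min(todo, key=lambda v: math.dist(pos, viewpoints[v]))
        tour.append(nxt)
        todo.remove(nxt)
        pos = viewpoints[nxt]
    return tour

viewpoints = {"A": (0, 1), "B": (2, 0), "C": (4, 1)}
coverage = {"A": {1, 2}, "B": {2, 3}, "C": {3, 4}}
tour = greedy_etsp(viewpoints, coverage)
```

A real solver must additionally trade off tour length against the number of observations, which is what makes the combined problem harder than either subproblem alone.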
  • Publication
    Information Acquisition on Pedestrian Movements in Urban Traffic with a Mobile Multi-Sensor System
    This paper presents an approach which combines LiDAR sensors and cameras of a mobile multi-sensor system to obtain information about pedestrians in the vicinity of the sensor platform. Such information can be used, for example, in the context of driver assistance systems. In the first step, our approach starts by using LiDAR sensor data to detect and track pedestrians, benefiting from LiDAR's capability to directly provide accurate 3D data. After LiDAR-based detection, the approach leverages the typically higher data density provided by 2D cameras to determine the body pose of the detected pedestrians. The approach combines several state-of-the-art machine learning techniques: it uses a neural network and a subsequent voting process to detect pedestrians in LiDAR sensor data. Based on the known geometric constellation of the different sensors and the knowledge of the intrinsic parameters of the cameras, image sections are generated with the respective regions of interest showing only the detected pedestrians. These image sections are then processed with a method for image-based human pose estimation to determine keypoints for different body parts. These keypoints are finally projected from 2D image coordinates to 3D world coordinates using the assignment of the original LiDAR points to a particular pedestrian.
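The final projection step, lifting 2D keypoints to 3D via the LiDAR points assigned to a pedestrian, can be sketched under simplifying assumptions (a pinhole camera model and LiDAR points already transformed into the camera frame; function and variable names are illustrative, not from the paper):

```python
import numpy as np

def keypoints_to_3d(keypoints_2d, lidar_points, K):
    """Lift 2D keypoints to 3D by borrowing the depth of the nearest
    projected LiDAR point belonging to the same pedestrian.
    K: 3x3 camera intrinsics; lidar_points: (N, 3) in the camera frame."""
    # Project the pedestrian's LiDAR points into the image.
    proj = (K @ lidar_points.T).T           # (N, 3) homogeneous
    pix = proj[:, :2] / proj[:, 2:3]        # pixel coordinates
    out = []
    for u, v in keypoints_2d:
        i = np.argmin(np.linalg.norm(pix - [u, v], axis=1))
        z = lidar_points[i, 2]              # borrowed depth
        # Back-project the keypoint ray and scale it to that depth.
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
        out.append(ray * (z / ray[2]))
    return np.array(out)

K = np.array([[100.0, 0.0, 50.0], [0.0, 100.0, 50.0], [0.0, 0.0, 1.0]])
lidar = np.array([[0.0, 0.0, 2.0], [0.1, 0.0, 2.0]])
kps3d = keypoints_to_3d([(50.0, 50.0)], lidar, K)
```

In practice a robust depth estimate (e.g. the median over several nearby LiDAR points) would be preferable to a single nearest neighbour.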
  • Publication
    Extrinsic self-calibration of an operational mobile LiDAR system
In this paper, we describe a method for automatic extrinsic self-calibration of an operational mobile LiDAR sensing system (MLS) that is additionally equipped with a position and orientation subsystem (POS), e.g., GNSS/IMU, odometry. While commercial mobile mapping systems or civil LiDAR-equipped cars can be calibrated on a regular basis using a dedicated calibration setup, we aim at a method for automatic in-field (re-)calibration of such sensor systems, which is even suitable for future military combat vehicles. Part of the intended use of a mobile LiDAR or laser scanning system is 3D mapping of the terrain by POS-based direct georeferencing of the range measurements, resulting in 3D point clouds of the terrain. The basic concept of our calibration approach is to minimize the average scatter of the 3D points, assuming a certain occurrence of smooth surfaces in the scene which are scanned multiple times. The point scatter is measured by local principal component analysis (PCA). Parameters describing the sensor installation are adjusted to reach a minimal value of the PCA's average smallest eigenvalue. While sensor displacements (lever arms) are still difficult to correct in this way, our approach succeeds in eliminating misalignments of the 3D sensors (boresight alignment). The focus of this paper is on quantifying the influence of driving maneuvers and, particularly, scene characteristics on the calibration method and its results. One finding is that a curvy driving style in an urban environment provides the best conditions for the calibration of the MLS system, but other structured environments may still be acceptable.
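The calibration cost described above, the average smallest eigenvalue of local PCA, can be sketched directly. This is a minimal illustration of the cost function only, not the full parameter adjustment; neighbourhood selection is assumed to be given as index lists:

```python
import numpy as np

def mean_smallest_eigenvalue(points, neighborhoods):
    """Average smallest eigenvalue of the local 3D covariance (PCA)
    over all neighbourhoods. A well-calibrated system produces thin,
    planar neighbourhoods on smooth surfaces, hence a small value."""
    vals = []
    for idx in neighborhoods:
        nb = points[idx] - points[idx].mean(axis=0)
        cov = nb.T @ nb / len(idx)
        vals.append(np.linalg.eigvalsh(cov)[0])  # eigvalsh sorts ascending
    return float(np.mean(vals))

# A perfectly planar neighbourhood has zero out-of-plane scatter.
flat = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                 [1, 1, 0], [0.5, 0.5, 0]], dtype=float)
flat_cost = mean_smallest_eigenvalue(flat, [list(range(5))])
```

Boresight calibration then amounts to searching over the rotation parameters for the minimum of this cost.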
  • Publication
    Using neural networks to detect objects in MLS point clouds based on local point neighborhoods
This paper presents an approach which uses a PointNet-like neural network to detect objects of certain types in MLS point clouds. In our case, it is used for the detection of pedestrians, but the approach can easily be adapted to other object classes. In the first step, we process local point neighborhoods with the neural network to determine a descriptive feature. This is then further processed to generate two outputs of the network. The first output classifies the neighborhood and determines if it is part of an object of interest. If this is the case, the second output determines where it is located in relation to the object center. This regression output allows us to use a voting process for the actual object detection. This processing step is inspired by approaches based on implicit shape models (ISM). It is able to deal with a certain amount of incorrectly classified neighborhoods, since it combines the results of multiple neighborhoods for the detection of an object. A benefit of our approach compared to other machine learning methods is its low demand for training data. In our experiments, we achieved a promising detection performance even with fewer than 1000 training examples.
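The ISM-style voting step can be sketched as follows. This is an illustration under assumed inputs (per-neighbourhood classification flags, regressed offsets to the object centre, and neighbourhood positions); the grid cell size and vote threshold are hypothetical parameters, not values from the paper:

```python
from collections import Counter

def vote_for_centers(classifications, offsets, positions,
                     cell=0.5, min_votes=3):
    """Each neighbourhood classified as 'object' votes for a centre
    (its position plus the regressed offset). Votes are pooled on a
    coarse grid; cells with enough support become detections."""
    votes = Counter()
    for is_obj, off, pos in zip(classifications, offsets, positions):
        if not is_obj:
            continue  # background neighbourhoods cast no vote
        key = tuple(round((p + o) / cell) for p, o in zip(pos, off))
        votes[key] += 1
    return [tuple(k * cell for k in key)
            for key, n in votes.items() if n >= min_votes]

# Four neighbourhoods around (2, 2) all vote for the same centre.
positions = [(1, 2), (3, 2), (2, 1), (2, 3)]
offsets = [(1, 0), (-1, 0), (0, 1), (0, -1)]
detections = vote_for_centers([True] * 4, offsets, positions)
```

Because each detection aggregates many votes, a few misclassified or badly regressed neighbourhoods do not change the outcome, which is the robustness property the abstract refers to.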
  • Publication
A representation of MLS data as a basis for terrain navigability analysis and sensor deployment planning
Recording an ever-changing urban environment in a structured manner requires sensor deployment planning. In the case of mobile sensor platforms, this also includes verifying the terrain navigability. Solving both tasks would usually require different application-specific data structures and tools. In this work, we propose a theoretical framework that provides a uniform representation for spatial information as well as the tools required to combine, manipulate and visualize it. We provide an efficient implementation of the framework utilizing octree-based evidence grids. Our approach can be used to solve complex tasks by combining simple spatial information sources, which we demonstrate by providing simple solutions to the aforementioned applications. Despite the use of a volumetric approach, our runtimes are within the range of minutes.
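The idea of combining simple spatial information sources in one evidence-grid representation can be sketched with a common fusion rule, summing per-cell log-odds. This is a generic evidence-grid update, not the paper's octree implementation; the flat-dictionary grid is an assumption for brevity:

```python
def combine_log_odds(grids):
    """Fuse several spatial information sources into one evidence grid
    by summing per-cell log-odds. Each grid maps a cell index to the
    log-odds of that cell being occupied/relevant."""
    fused = {}
    for grid in grids:
        for cell, lo in grid.items():
            fused[cell] = fused.get(cell, 0.0) + lo
    return fused

# Two sources agreeing and disagreeing about cell (0, 0).
fused = combine_log_odds([{(0, 0): 1.0},
                          {(0, 0): -0.5, (1, 0): 2.0}])
```

An octree-backed implementation stores the same per-cell evidence hierarchically, which keeps memory bounded in large, mostly empty volumes.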
  • Publication
    Fast and adaptive surface reconstruction from mobile laser scanning data of urban areas
(2015)
    Gordon, Marvin
    The availability of 3D environment models enables many applications such as visualization, planning or simulation. With the use of current mobile laser scanners it is possible to map large areas in relatively short time. One of the emerging problems is to handle the resulting huge amount of data. We present a fast and adaptive approach to represent connected 3D points by surface patches while keeping fine structures untouched. Our approach results in a reasonable reduction of the data and, on the other hand, it preserves details of the captured scene. At all times during data acquisition and processing, the 3D points are organized in an octree with adaptive cell size for fast handling of the data. Cells of the octree are filled with points and split into subcells, if the points do not lie on one plane or are not evenly distributed on the plane. In order to generate a polygon model, each octree cell and its corresponding plane are intersected. As a main result, our approach allows the online generation of an expandable 3D model of controllable granularity. Experiments have been carried out using a sensor vehicle with two laser scanners at an urban test site. The results of the experiments show that the demanded compromise between data reduction and preservation of details can be reached.
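The cell-split criterion described above, splitting an octree cell when its points do not lie on one plane, can be sketched with a PCA-based planarity test. This is an illustrative check under an assumed tolerance; the paper's criterion additionally tests whether the points are evenly distributed on the plane, which is omitted here:

```python
import numpy as np

def should_split(points, plane_tol=0.05):
    """Split an octree cell when the out-of-plane scatter of its points
    exceeds a tolerance. The smallest eigenvalue of the covariance is
    the variance normal to the best-fit plane (PCA plane fit)."""
    if len(points) < 3:
        return False
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    smallest = np.linalg.eigvalsh(cov)[0]  # ascending order
    return float(np.sqrt(max(smallest, 0.0))) > plane_tol

flat = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], float)
corner = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
```

Cells that pass the test are replaced by a single plane patch, which yields the data reduction, while cells that fail are subdivided, which preserves fine structures.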