  • Publication
    The MODISSA testbed: A multi-purpose platform for the prototypical realization of vehicle-related applications using optical sensors
    We present the current state of development of the sensor-equipped car MODISSA, with which Fraunhofer IOSB provides a configurable experimental platform for hardware evaluation and software development in the context of mobile mapping and vehicle-related safety and protection. MODISSA is based on a van that has been successively equipped with a variety of optical sensors over the past few years, and it contains hardware for complete raw data acquisition, georeferencing, real-time data analysis, and immediate visualization on in-car displays. We demonstrate the capabilities of MODISSA through experiments with specific sensor configurations in three different applications. Other research groups can benefit from these experiences when setting up their own mobile sensor systems, especially regarding the selection of hardware and software, awareness of possible sources of error, and the handling of the acquired sensor data.
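    Georeferencing, as mentioned in the abstract, means mapping raw LiDAR returns from the sensor frame into world coordinates via the fixed sensor mounting and the time-stamped GNSS/INS vehicle pose. Below is a minimal sketch of that transform chain; the function names, the Z-Y-X Euler convention, and all numbers are illustrative assumptions, not MODISSA's actual software.

```python
import numpy as np

def rotation_zyx(yaw, pitch, roll):
    """Rotation matrix from intrinsic Z-Y-X (yaw-pitch-roll) Euler angles in radians."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def georeference(points_sensor, R_mount, t_mount, R_ins, t_ins):
    """Map raw LiDAR points (N x 3, sensor frame) into world coordinates:
    first the fixed sensor-to-vehicle mounting, then the GNSS/INS pose."""
    points_vehicle = points_sensor @ R_mount.T + t_mount
    return points_vehicle @ R_ins.T + t_ins

# Hypothetical example: one LiDAR return 10 m ahead of the sensor.
p = np.array([[10.0, 0.0, 0.0]])
R_mount = rotation_zyx(0.0, 0.0, 0.0)          # sensor aligned with vehicle axes
t_mount = np.array([1.2, 0.0, 2.0])            # sensor 1.2 m forward, 2 m up
R_ins = rotation_zyx(np.deg2rad(45.0), 0, 0)   # vehicle heading 45 degrees
t_ins = np.array([500.0, 300.0, 110.0])        # vehicle position (local ENU)
print(georeference(p, R_mount, t_mount, R_ins, t_ins))
```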
  • Publication
    LiDAR-based localization and automatic control of UAVs for mobile remote reconnaissance
    Sensor-based monitoring of the surroundings of civilian vehicles primarily serves driver assistance in road traffic, whereas for military vehicles, far-reaching reconnaissance of the environment is crucial for accomplishing the respective mission. Modern military vehicles are typically equipped with electro-optical sensor systems for such observation or surveillance purposes. However, especially when the line of sight to the onward route is obscured or visibility conditions are generally limited, more advanced methods for reconnaissance are needed. The benefit of micro-drones (UAVs) for remote reconnaissance is well known: the spatial mobility of UAVs can provide information that cannot be obtained from the vehicle itself. For example, a UAV could hold a fixed position ahead of and above the vehicle to gather information about the area ahead, or it could fly above or around obstacles to clear hidden areas. In a military context, this is usually referred to as manned-unmanned teaming (MUM-T). In this paper, we propose the use of vehicle-based electro-optical sensors as an alternative means of automatically controlling (cooperative) UAVs in the vehicle's vicinity. In its most automated form, the external control of the UAV only requires a 3D nominal position, either relative to the vehicle or in absolute geocoordinates. The flight path to this position and its subsequent maintenance, including obstacle avoidance, are calculated on board the vehicle and continuously communicated to the UAV as control commands. We show first results of an implementation of this approach using 360° scanning LiDAR sensors mounted on a mobile sensor unit. The control loop of detection, tracking, and guidance of a cooperative UAV in the local environment is demonstrated in two experiments: the automatic LiDAR-controlled navigation of a UAV from a starting point A to a destination point B, with and without an obstacle between A and B. An obstacle in the direct path is detected, and an alternative flight route is calculated and flown.
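    A schematic of one cycle of the described control loop might look as follows: the vehicle-relative nominal position is converted into a world-frame goal, the direct path is checked against tracked obstacles, and a bounded velocity command is emitted. The crude sideways detour and all function names are invented stand-ins for the paper's path planning, not the actual implementation.

```python
import numpy as np

def guidance_step(uav_pos, vehicle_pos, vehicle_heading, rel_setpoint,
                  obstacles, clearance=5.0, v_max=2.0):
    """One cycle of the detect-track-guide loop: turn the vehicle-relative
    nominal position into a world-frame goal, detour around obstacles on
    the direct path, and return a capped velocity command for the UAV."""
    # World-frame goal: rotate the relative offset by the vehicle heading.
    c, s = np.cos(vehicle_heading), np.sin(vehicle_heading)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    goal = vehicle_pos + R @ rel_setpoint

    target = goal
    direct = goal - uav_pos
    dist = np.linalg.norm(direct)
    if dist > 1e-6:
        u = direct / dist
        for obs in obstacles:
            # Distance of the obstacle from the straight UAV-to-goal segment.
            t = np.clip(np.dot(obs - uav_pos, u), 0.0, dist)
            closest = uav_pos + t * u
            if np.linalg.norm(obs - closest) < clearance:
                # Crude detour: shift an intermediate waypoint away from
                # the obstacle, perpendicular to the direct path.
                away = closest - obs
                away /= max(np.linalg.norm(away), 1e-6)
                target = closest + away * clearance
                break

    # Proportional velocity command, capped at v_max; in the real system
    # this would be sent to the UAV over the command link each cycle.
    cmd = target - uav_pos
    n = np.linalg.norm(cmd)
    return cmd if n <= v_max else cmd / n * v_max

# Hypothetical cycle: hold a point 20 m ahead of and 10 m above the vehicle.
cmd = guidance_step(
    uav_pos=np.array([2.0, 1.0, 8.0]),
    vehicle_pos=np.zeros(3),
    vehicle_heading=0.0,
    rel_setpoint=np.array([20.0, 0.0, 10.0]),
    obstacles=[np.array([10.0, 0.5, 9.0])],
)
print(cmd)
```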
  • Publication
    A multi-sensorial approach for the protection of operational vehicles by detection and classification of small flying objects
    Due to the high availability and easy handling of small drones, the number of reported incidents caused by UAVs, both intentional and accidental, is increasing. To prevent such incidents in the future, it is essential to be able to detect UAVs. However, not every small flying object poses a potential threat, so the object not only has to be detected but also classified or identified. Typical 360° scanning LiDAR systems can be deployed to detect and track small objects in 3D sensor data at ranges of up to 50 m. Unfortunately, in most cases the verification and classification of the detected objects is not possible due to the low resolution of this type of sensor. In high-resolution 2D images, a differentiation of flying objects is more practical, and cameras in the visible spectrum in particular are well established and inexpensive. The major drawback of this type of sensor is its dependence on adequate illumination. Active illumination could be a solution to this problem, but it is usually impossible to illuminate the scene permanently. A more practical way is to select a sensor with a different spectral sensitivity, for example in the thermal IR. In this paper, we present an approach for a complete chain of detection, tracking, and classification of small flying objects such as micro UAVs or birds, using a mobile multi-sensor platform with two 360° LiDAR scanners and pan-and-tilt cameras in the visible and thermal IR spectrum. The flying objects are initially detected and tracked in 3D LiDAR data. After detection, the cameras (a grayscale camera in the visible spectrum and a bolometer sensitive in the wavelength range of 7.5 µm to 14 µm) are automatically pointed at the object's position, and each sensor records a 2D image. A convolutional neural network (CNN) identifies the region of interest (ROI) and performs the object classification (we consider classes of eight different types of UAVs and birds). In particular, we compare the classification results of the CNN for the two camera types, i.e., for the different wavelengths. The large amount of training data for the CNN, as well as the test data used for the experiments described in this paper, was recorded at a field trial of the NATO group SET-260 ("Assessment of EO/IR Technologies for Detection of Small UAVs in an Urban Environment") at CENZUB, Sissonne, France.
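    The cueing step that points the pan-and-tilt cameras at a LiDAR track reduces to computing pan and tilt angles from the tracked 3D position in the camera head's frame. A minimal sketch follows; the axis conventions (x east, y north, z up; pan from the x-axis, tilt from the horizontal plane) are assumptions, not the actual system geometry.

```python
import numpy as np

def pan_tilt_from_position(target, head_position):
    """Convert a tracked 3D position (world frame) into pan and tilt angles
    (degrees) for a camera head located at head_position."""
    d = np.asarray(target, float) - np.asarray(head_position, float)
    pan = np.arctan2(d[1], d[0])                    # azimuth in the horizontal plane
    tilt = np.arctan2(d[2], np.hypot(d[0], d[1]))   # elevation above the horizon
    return np.degrees(pan), np.degrees(tilt)

# Hypothetical track roughly 40 m out and 12 m above the sensor mast.
print(pan_tilt_from_position([30.0, 25.0, 14.0], [0.0, 0.0, 2.0]))
```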
  • Publication
    Image based classification of small flying objects detected in LiDAR point clouds
    The number of reported incidents caused by UAVs, intentional as well as accidental, is rising. To avoid such incidents in the future, it is essential to be able to detect UAVs. However, not every small flying object is a potential threat, so the object not only has to be detected but also classified or identified. Typical 360° scanning LiDAR sensors, like those developed for automotive applications, can be deployed for the detection and tracking of small objects at ranges of up to 50 m. Unfortunately, the verification and classification of the detected objects is not possible in most cases, due to the low resolution of this kind of sensor. In visual images, a differentiation of flying objects is more practical. In this paper, we present a method for distinguishing between UAVs and birds in multi-sensor data (LiDAR point clouds and visual images). The flying objects are initially detected and tracked in LiDAR data. After detection, a grayscale camera is automatically pointed at the object and an image is recorded. The differentiation between UAV and bird is then realized by a convolutional neural network (CNN). In addition, we investigate the potential of this approach for a more detailed classification of the type of UAV. The paper shows first results of this multi-sensor classification approach. The large amount of training data for the CNN, as well as the test data for the experiments, was recorded at a field trial of the NATO group SET-260 ("Assessment of EO/IR Technologies for Detection of Small UAVs in an Urban Environment").
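    As a rough illustration of the final classification step, a small CNN of the kind the abstract describes could map a grayscale image chip, cut out around the LiDAR-predicted position, to class scores. The architecture, input size, and class count below are illustrative assumptions sketched in PyTorch, not the network used in the paper.

```python
import torch
import torch.nn as nn

class RoiClassifier(nn.Module):
    """Small CNN mapping a grayscale ROI chip to class scores
    (here: UAV vs. bird); purely illustrative."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# One 64x64 grayscale chip cut out around the tracked object (random here).
chip = torch.randn(1, 1, 64, 64)
logits = RoiClassifier()(chip)
print(logits.softmax(dim=1))  # class probabilities: [UAV, bird]
```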
  • Publication
    UAV detection, tracking, and classification by sensor fusion of a 360° lidar system and an alignable classification sensor
    The number of reported incidents caused by UAVs, intentional as well as accidental, is rising. To avoid such incidents in the future, it is essential to be able to detect UAVs. However, not every UAV is a potential threat, so the UAV not only has to be detected but also classified or identified. 360° scanning LiDAR systems can be deployed for the detection and tracking of (micro) UAVs at ranges of up to 50 m. Unfortunately, the verification and classification of the detected objects is not possible in most cases, due to the low resolution of this kind of sensor. In this paper, we propose the automatic alignment of an additional sensor (mounted on a pan-tilt head) for the identification of the detected objects. The classification sensor is directed by the tracking results of the panoramic LiDAR sensor. If the alignable sensor is an RGB or infrared camera, the identification of the objects can be done with state-of-the-art image processing algorithms. If a higher-resolution LiDAR sensor is used for this task, suitable algorithms still have to be developed and implemented; for example, the classification could be realized by a 3D model matching method. After the handoff of the object position from the 360° LiDAR to the verification sensor, this second system can be used for continued tracking of the object, e.g., if the trajectory of the UAV leaves the field of view of the primary LiDAR system. The paper shows first results of this multi-sensor classification approach.
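    The handoff itself requires predicting where the secondary sensor should look at the moment it takes over, since the object keeps moving between the last LiDAR update and the camera alignment. A minimal sketch under a constant-velocity assumption follows; the track representation and all numbers are hypothetical.

```python
import numpy as np

def handoff_prediction(track, t_handoff):
    """Predict the look-at point for the alignable sensor at handoff time,
    from the last 360-degree-LiDAR track state (position, velocity,
    timestamp); constant-velocity extrapolation for illustration."""
    pos, vel, t0 = track
    return pos + vel * (t_handoff - t0)

# Hypothetical track state: position (m), velocity (m/s), last update time (s).
track = (np.array([20.0, 5.0, 12.0]), np.array([1.5, -0.5, 0.2]), 10.00)
print(handoff_prediction(track, 10.25))  # look-at point 250 ms later
```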
  • Publication
    Potential of lidar sensors for the detection of UAVs
    The number of reported incidents caused by UAVs, intentional as well as accidental, is rising. To avoid such incidents in the future, it is essential to be able to detect UAVs. LiDAR systems are well known to be adequate sensors for object detection and tracking. In contrast to the detection of pedestrians or cars in traffic scenarios, the challenges of UAV detection lie in the small size, the various shapes and materials, and the high speed and erratic nature of their movement. Due to the small size of the object and the limited sensor resolution, a UAV can hardly be detected in a single frame; rather, it has to be spotted by its motion in the scene. In this paper, we present a fast approach for the tracking and detection of small, low-flying objects such as commercial mini/micro UAVs. Unlike the typical "track-after-detect" sequence, we start by looking for clues, i.e., minor 3D details in the 360° LiDAR scans of the scene. If these clues are detectable in consecutive scans (possibly including a movement), the probability of an actual UAV detection rises. For the algorithm development and a performance analysis, we collected data during a field trial with several different UAV types and sensor types (acoustic, radar, EO/IR, LiDAR). The results show that UAVs can be detected by the proposed methods, as long as the movements of the UAVs correspond to the LiDAR sensor's capabilities in scanning performance, range, and resolution. Based on the data collected during the field trial, the paper shows first results of this analysis.
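    A toy version of the described clue accumulation: small 3D details are associated across consecutive scans, and a clue that persists over enough scans (with plausible per-scan motion) is promoted to a UAV detection. The association rule, thresholds, and data layout are invented for illustration, not taken from the paper.

```python
import numpy as np

def update_clues(clues, detections, max_jump=2.0, confirm_hits=3):
    """Associate small 3D details from the newest scan with existing clues;
    a clue re-observed in confirm_hits consecutive scans is reported as a
    UAV candidate. clues: list of dicts with 'pos' (3-vector) and 'hits'."""
    confirmed, updated = [], []
    detections = [np.asarray(d, float) for d in detections]
    for clue in clues:
        # Nearest new detection within a plausible per-scan displacement.
        dists = [np.linalg.norm(d - clue["pos"]) for d in detections]
        if dists and min(dists) < max_jump:
            i = int(np.argmin(dists))
            clue = {"pos": detections.pop(i), "hits": clue["hits"] + 1}
            if clue["hits"] >= confirm_hits:
                confirmed.append(clue)
            updated.append(clue)
        # Clues not re-observed in this scan are dropped.
    # Remaining unmatched detections start new clues.
    updated += [{"pos": d, "hits": 1} for d in detections]
    return updated, confirmed

# Three consecutive (hypothetical) scans with one persistent, moving clue.
scans = [[(10.0, 5.0, 8.0)], [(10.8, 5.1, 8.2)], [(11.6, 5.2, 8.4)]]
clues, confirmed = [], []
for detections in scans:
    clues, confirmed = update_clues(clues, detections)
print(confirmed)  # clue survives three scans -> reported as UAV candidate
```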