
A multi-sensorial approach for the protection of operational vehicles by detection and classification of small flying objects

Hammer, Marcus; Borgmann, Björn; Hebel, Marcus; Arens, Michael


Kamerman, G.W. ; Society of Photo-Optical Instrumentation Engineers -SPIE-, Bellingham/Wash.:
Electro-Optical Remote Sensing XIV : 21-25 September 2020, Online Only
Bellingham, WA: SPIE, 2020 (Proceedings of SPIE 11538)
Paper 1153807, 13 pp.
Conference "Electro-Optical Remote Sensing" <14, 2020, Online>
Fraunhofer IOSB
flying object classification; 3D object detection; scanline analysis; Faster R-CNN

Due to the high availability and easy handling of small drones, the number of reported incidents caused by UAVs, both intentionally and accidentally, is increasing. To be able to prevent such incidents in the future, it is essential to detect UAVs. However, not every small flying object poses a potential threat, and therefore the object not only has to be detected, but also classified or identified. Typical 360° scanning LiDAR systems can be deployed to detect and track small objects in the 3D sensor data at ranges of up to 50 m. Unfortunately, in most cases the verification and classification of the detected objects is not possible due to the low resolution of this type of sensor. In high-resolution 2D images, a differentiation of flying objects is more practical, and cameras in the visible spectrum, at least, are well established and inexpensive. The major drawback of this type of sensor is its dependence on adequate illumination. Active illumination could be a solution to this problem, but it is usually impossible to illuminate the scene permanently. A more practical way is to select a sensor with a different spectral sensitivity, for example in the thermal IR. In this paper, we present an approach for a complete chain of detection, tracking and classification of small flying objects such as micro UAVs or birds, using a mobile multi-sensor platform with two 360° LiDAR scanners and pan-and-tilt cameras in the visible and thermal IR spectrum. The flying objects are initially detected and tracked in the 3D LiDAR data. After detection, the cameras (a grayscale camera in the visible spectrum and a bolometer sensitive in the wavelength range of 7.5 µm to 14 µm) are automatically pointed at the object's position, and each sensor records a 2D image.
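The LiDAR-to-camera handover described in the abstract can be sketched as a simple geometric computation: from the 3D position of a tracked object, derive the pan (azimuth) and tilt (elevation) angles for the pan-and-tilt head. This is a minimal illustration, not the paper's implementation; the function name and the assumption that the LiDAR and the camera head share one coordinate frame (x forward, y left, z up) are ours.

```python
import math

def pan_tilt_from_point(target, origin=(0.0, 0.0, 0.0)):
    """Compute pan (azimuth) and tilt (elevation) angles in degrees
    needed to point a camera at a 3D target position.

    Assumes LiDAR and pan-and-tilt head share one coordinate frame;
    in a real system an extrinsic calibration between the sensors
    would be applied first.
    """
    dx = target[0] - origin[0]
    dy = target[1] - origin[1]
    dz = target[2] - origin[2]
    pan = math.degrees(math.atan2(dy, dx))             # azimuth in the horizontal plane
    ground_range = math.hypot(dx, dy)                  # horizontal distance to the object
    tilt = math.degrees(math.atan2(dz, ground_range))  # elevation above the horizon
    return pan, tilt
```

For example, an object 10 m ahead, 10 m to the side and 10 m above the platform yields a pan of 45° and a tilt of about 35.3°.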
A convolutional neural network (CNN) performs both the identification of the region of interest (ROI) and the object classification (we consider classes covering eight different types of UAVs as well as birds). In particular, we compare the classification results of the CNN for the two camera types, i.e. for the different wavelengths. Both the large set of training data for the CNN and the test data used for the experiments described in this paper were recorded at a field trial of the NATO group SET-260 ("Assessment of EO/IR Technologies for Detection of Small UAVs in an Urban Environment") at CENZUB, Sissonne, France.
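The per-wavelength comparison described above might be organized as follows: run the classifier on the visible-band and thermal-IR crops of the same tracked object and check whether the predicted classes agree. This is a hedged sketch only; the class labels and logit vectors are hypothetical placeholders (the abstract does not name the eight UAV types), and the actual classifier in the paper is a CNN, which is not reproduced here.

```python
import math

# Hypothetical label set: eight UAV types plus "bird"
# (the actual class names are not given in the abstract).
CLASSES = [f"uav_{i}" for i in range(1, 9)] + ["bird"]

def softmax(logits):
    """Convert raw classifier scores to probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def compare_band_predictions(vis_logits, ir_logits):
    """Compare the class predicted from the visible-band image with the
    one predicted from the thermal-IR image of the same tracked object."""
    p_vis = softmax(vis_logits)
    p_ir = softmax(ir_logits)
    vis_class = CLASSES[max(range(len(p_vis)), key=p_vis.__getitem__)]
    ir_class = CLASSES[max(range(len(p_ir)), key=p_ir.__getitem__)]
    return {"vis": vis_class, "ir": ir_class, "agree": vis_class == ir_class}
```

Aggregating such per-object agreement over a labeled test set would yield the kind of per-wavelength comparison the paper reports.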