An Introspective System for Adaptive Sensor Fusion
The operation of autonomous vehicles in open-context environments remains a challenge for safety-critical applications. Especially under adverse weather conditions, the underlying system is continuously challenged to perceive and observe its dynamic environment safely and reliably. For this reason, several types of sensors, such as cameras and lidars, are employed on the vehicles. Many state-of-the-art applications simply use one fixed fusion strategy for perception. However, a single static fusion architecture is often not sufficient to cope with environmental changes. We therefore introduce an introspective system that includes situational awareness to adapt its sensor fusion strategy. The idea is that each type of sensor configuration, each level of fusion, and each fusion technique has advantages under specific conditions. In this thesis, we outline a proof of concept in which our proposed introspective system predicts the optimal fusion strategy. The goal is to increase detection performance in terms of the mean intersection over union (mIoU). We focus on 2D object detection by combining camera and lidar sensor data. We use the LGSVL simulator to create environments with different and varying weather conditions, using simulated sensor data from both a lidar and a camera setup. We merge the camera image with the projected depth image of the lidar using multiple fusion strategies. As the lidar sensor is not affected by the weather conditions within the simulator, we apply data augmentation to simulate the effects of rain and fog on the 3D point cloud data. Based on a multimodal YOLOv2 fusion architecture, a one-stage detector, we implement and train seven detection models representing different fusion strategies. The detection performance of the individual models is used to train our proposed introspective system.
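The weather augmentation on the point cloud could, for example, take the form of distance-dependent point dropping, since fog and rain attenuate lidar returns more strongly at range. The following is a minimal sketch of this idea, not the thesis's exact augmentation pipeline; the function name, the linear drop model, and all parameter values are illustrative assumptions.

```python
import numpy as np

def augment_fog(points, max_range=50.0, drop_base=0.1, rng=None):
    """Drop lidar returns with distance-dependent probability to mimic fog.

    points: (N, 3) array of x, y, z coordinates in the sensor frame.
    Far points are more likely to be dropped, emulating attenuation.
    Note: this is an illustrative model, not the thesis's implementation.
    """
    rng = np.random.default_rng() if rng is None else rng
    dist = np.linalg.norm(points, axis=1)
    # Drop probability grows linearly with distance, capped at 1.0.
    p_drop = np.clip(drop_base + dist / max_range, 0.0, 1.0)
    keep = rng.random(len(points)) > p_drop
    return points[keep]
```

A heavier fog level would simply raise `drop_base` or lower `max_range`, so a single function can cover a range of simulated weather intensities.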
In our analysis, we show the potential of such an introspective system, which outperforms every baseline detection model. We observe drifting performance of the individual detection models under particular weather conditions. Our proposed system learns the relationship between the fusion strategy and the environmental conditions, leading to an overall average performance gain of 2.25% on the test set (20k samples). Dividing the test set into 70 single scenarios, we observe a performance boost of up to 3.50%. Our system fails in only 1 out of 35 scenarios.
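The headroom for such a system can be quantified by comparing a per-sample oracle, which always picks the best fusion model, against the best single fixed strategy. The sketch below illustrates this gap computation under the assumption that per-sample mIoU scores for each model are available; the function name and array layout are hypothetical, not taken from the thesis.

```python
import numpy as np

def introspection_gain(miou):
    """Upper bound on what adaptive fusion-strategy selection can gain.

    miou: (num_samples, num_models) array of per-sample mIoU scores,
    one column per fusion model.
    Returns oracle mean mIoU minus the best fixed strategy's mean mIoU.
    """
    best_fixed = miou.mean(axis=0).max()  # best static fusion strategy
    oracle = miou.max(axis=1).mean()      # ideal per-sample selection
    return oracle - best_fixed
```

The reported 2.25% average gain can be read as the fraction of this oracle headroom that the learned introspective selector actually realizes.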
TU München, Master's Thesis, 2020