Situation-Aware Refinement for Semantic Segmentation Models: Closing the Safety Gap

Habermayr, Lukas
Kuhn, Christopher; Zacchi, Joao-Vitor; Kurzidem, Iwo

München, 2021, IV, 97 pp.
München, TU, Master Thesis, 2021
Bayerisches Staatsministerium für Wirtschaft, Landesentwicklung und Energie StMWi

Master Thesis
Fraunhofer IKS
perception module; image segmentation; model refinement; safety critical; dynamic safety management; Safe Intelligence

Safety-critical applications such as autonomous driving depend heavily on reliable and safe perception. The quality of perception modules depends on various external environmental factors, and misdetections can lead to severe consequences. Today's safety standards are not sufficient to cope with changing external factors and the black-box behavior of perception modules based on convolutional neural networks. In this thesis, we introduce an approach to systematically detect and eliminate weaknesses of perception modules. Image segmentation serves as an exemplary safety-critical perception task; more specifically, we use a state-of-the-art DeepLabV3+ model. The idea is to systematically test the image segmentation model under different external factors and in changing scenarios. Based on an analysis of the test results, we refine the training data set of the image segmentation task in order to improve overall model performance. We leverage the CARLA simulator to generate training, test, and validation data. We evaluate and discuss the performance of differently refined image segmentation models using common performance metrics and visual inspection. On average, the systematically refined image segmentation model outperforms a randomly refined model by 8%. Furthermore, we use the data generated during the refinement process to train a simple decision tree classifier. For simplicity, we divide the data into two classes indicating the performance of the image segmentation model. The tree learns to predict the performance of the image segmentation model from external factors with an accuracy of 0.9. This proof of concept shows the feasibility of so-called "dynamic safety management", paving the way toward the safe use of perception modules in safety-critical applications.
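The decision-tree proof of concept described above can be sketched roughly as follows. This is a minimal illustration, not the thesis' actual pipeline: the feature names (rain, fog, sun angle), the synthetic mIoU model, and the binarization threshold are all assumptions chosen only to mimic the two-class setup of predicting segmentation performance from external factors.

```python
# Hypothetical sketch of the "predict performance from external factors"
# proof of concept. Features and labels are simulated; in the thesis,
# scenarios come from the CARLA simulator and labels from measured
# segmentation performance.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 1000

# Simulated external factors per scenario (illustrative choices).
rain = rng.uniform(0, 100, n)   # precipitation intensity
fog = rng.uniform(0, 100, n)    # fog density
sun = rng.uniform(0, 90, n)     # sun altitude angle in degrees
X = np.column_stack([rain, fog, sun])

# Simulated mIoU that degrades with rain and fog, binarized into
# "good" (1) vs. "poor" (0) performance, mirroring the two-class setup.
miou = 0.9 - 0.003 * rain - 0.004 * fog + rng.normal(0.0, 0.02, n)
y = (miou >= 0.6).astype(int)

# Train a shallow decision tree and measure its classification accuracy.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

A shallow tree suffices here because the simulated failure boundary is simple; its learned splits (e.g. on fog density) are directly interpretable, which is one reason a decision tree is attractive for a safety argument.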