Safe Traffic Sign Recognition through Data Augmentation for Autonomous Vehicle Software
Context: Since autonomous vehicles operate in an open context, their software components, including data-driven ones, have to reliably process inputs (e.g., camera images) in order to make safe decisions. A key challenge in providing reliable data-driven components is insufficient training data, which can lead to misinterpretation of the environment and thereby cause accidents. Aim: The goal of our research is to extend the available training data of data-driven components for safe autonomous vehicles, using traffic sign recognition as an example. Method: We developed an approach to create realistic image augmentations for various quality deficits and applied it to the German Traffic Sign Recognition Benchmark (GTSRB) dataset. Results: The approach produces images augmented with any combination of seven quality deficits affecting traffic sign recognition (rain, dirt on lens, steam on lens, darkness, motion blur, dirt on sign, backlight) and considers dependencies between combined quality deficits as well as influences from other contextual information. Conclusion: Our approach can be used to obtain more comprehensive datasets, in particular including samples with quality deficits that are difficult to gather in the field. Because the augmentation is structured into a set of basic components, the approach can be adapted to other application domains (e.g., person detection).
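To make the idea of composable augmentation components concrete, the following is a minimal sketch of how quality deficits such as darkness and motion blur could be implemented and combined. The function names, parameters, and the simple horizontal-blur kernel are illustrative assumptions, not the authors' actual implementation, which produces more realistic effects and models dependencies between deficits.

```python
import numpy as np

def darkness(img, factor=0.4):
    # Illustrative deficit: darken by scaling pixel intensities
    # (a crude stand-in for low-light conditions).
    return np.clip(img * factor, 0.0, 1.0)

def motion_blur(img, length=5):
    # Illustrative deficit: horizontal motion blur via a 1-D
    # averaging kernel applied row by row to each channel.
    kernel = np.ones(length) / length
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        for row in range(img.shape[0]):
            out[row, :, c] = np.convolve(img[row, :, c], kernel, mode="same")
    return np.clip(out, 0.0, 1.0)

def compose(*augs):
    # Chain basic components into one augmentation; the order matters
    # when combined deficits interact (e.g., blur after darkening).
    def apply(img):
        for aug in augs:
            img = aug(img)
        return img
    return apply

# Usage on a synthetic 32x32 RGB image with values in [0, 1]
# (hypothetical stand-in for a GTSRB traffic sign sample):
img = np.random.default_rng(0).random((32, 32, 3))
augment = compose(lambda x: darkness(x, 0.4), lambda x: motion_blur(x, 5))
augmented = augment(img)
```

Structuring each deficit as a small function with its own parameters is what makes the set of basic components reusable: new deficits or domains only require adding or swapping components in the composition.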