  • Publication
    SafeSens - Uncertainty Quantification of Complex Perception Systems
    Safety testing and validation of complex autonomous systems requires a comprehensive and reliable analysis of performance and uncertainty. Uncertainty quantification in particular plays a vital role in perception systems operating in open-context environments that are neither foreseeable nor deterministic. Safety assurance based on field tests or corner cases alone is therefore not feasible, as both the effort and the potential risks are high. Simulations offer a way out: they allow, for example, potentially hazardous situations to be simulated without any real danger by systematically and quickly computing a variety of different (input) parameters. To do so, simulations need accurate models that represent the complex system and, in particular, include uncertainty as an inherent property, so that the interdependence between system components and the environment is accurately reflected. We present an approach to creating perception architectures via suitable meta-models, enabling a holistic safety analysis that quantifies the uncertainties within the system. The models include aleatoric or epistemic uncertainty, depending on the nature of the approximated component. A showcase of the proposed method highlights how validation under uncertainty can be used for camera-based object detection.
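    The minimal sketch below (not taken from the publication) illustrates the general idea of propagating uncertainty through a simulated perception component: a toy detection surrogate with an assumed aleatoric noise term and an assumed epistemic parameter distribution, evaluated via Monte Carlo sampling. All distributions and the surrogate itself are illustrative assumptions, not the meta-models of the paper.

    ```python
    # Monte Carlo sketch: propagate aleatoric (sensor noise) and epistemic
    # (uncertain model parameter) uncertainty through a toy detection surrogate.
    import numpy as np

    rng = np.random.default_rng(seed=0)

    def detection_confidence(distance_m, noise_std, sensitivity):
        """Toy surrogate: confidence decays with distance and sensor noise."""
        noisy_distance = distance_m + rng.normal(0.0, noise_std)  # aleatoric
        return 1.0 / (1.0 + np.exp(sensitivity * (noisy_distance - 50.0)))

    samples = []
    for _ in range(10_000):
        sensitivity = rng.normal(0.1, 0.02)  # epistemic: uncertain parameter
        samples.append(detection_confidence(distance_m=40.0,
                                            noise_std=2.0,
                                            sensitivity=sensitivity))

    samples = np.asarray(samples)
    print(f"mean confidence: {samples.mean():.3f}, "
          f"95% interval: [{np.quantile(samples, 0.025):.3f}, "
          f"{np.quantile(samples, 0.975):.3f}]")
    ```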
  • Publication
    AI for Safety: How to use Explainable Machine Learning Approaches for Safety Analyses
    Current research in machine learning (ML) and safety focuses on the safety assurance of ML. We, however, show how to interpret the results of explainable ML approaches for safety. We investigate how the individual evaluation of data clusters in specific explainable, outside-model estimators can be used to identify insufficiencies at different levels, such as (1) the input features, (2) the data, or (3) the ML model itself. Additionally, we link our findings to required safety artifacts within the automotive domain, such as unknown unknowns from ISO 21448 or equivalence classes as mentioned in ISO/TR 4804. In our case study we analyze and evaluate the results from an explainable, outside-model estimator (i.e., a white-box model) by performance evaluation, decision tree visualization, data distribution, and input feature correlation. As explainability is key for safety analyses, the utilized model is a random forest, extended via boosting and multi-output regression. The model training is based on an introspective data set optimized for reliable safety estimation. Our results show that technical limitations can be identified via homogeneous data clusters and assigned to a corresponding equivalence class. For unknown unknowns, each level of insufficiency (input, data, and model) must be analyzed separately and systematically narrowed down by a process of elimination. In our case study we identify "Fog density" as an unknown-unknown input feature for the introspective model.
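    As an illustration of this kind of outside-model analysis, the hedged sketch below fits a boosted, multi-output estimator on a synthetic stand-in for an introspective data set and then inspects feature importances and input correlations. The feature names (including fog_density) and the two targets are assumptions for demonstration only, not the publication's data.

    ```python
    # Illustrative outside-model (white-box) estimator analysis with scikit-learn.
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.multioutput import MultiOutputRegressor

    rng = np.random.default_rng(1)
    n = 2_000
    X = pd.DataFrame({
        "object_distance": rng.uniform(5, 100, n),
        "illumination":    rng.uniform(0, 1, n),
        "fog_density":     rng.uniform(0, 1, n),  # candidate "unknown unknown"
    })
    # Two hypothetical performance targets of the black-box perception model.
    y = np.column_stack([
        1.0 - 0.01 * X["object_distance"] - 0.3 * X["fog_density"] + rng.normal(0, 0.05, n),
        0.02 * X["object_distance"] + rng.normal(0, 0.05, n),
    ])

    estimator = MultiOutputRegressor(GradientBoostingRegressor(random_state=1)).fit(X, y)

    # White-box inspection: per-target feature importances and input correlations.
    for target_idx, booster in enumerate(estimator.estimators_):
        print(f"target {target_idx}:",
              dict(zip(X.columns, booster.feature_importances_.round(3))))
    print(X.corr().round(2))
    ```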
  • Publication
    Safety Assessment: From Black-Box to White-Box
    Safety assurance for Machine-Learning (ML) based applications such as object detection is a challenging task due to the black-box nature of many ML methods and the associated uncertainties of their output. To increase evidence of the safe behavior of such ML algorithms, an explainable and/or interpretable introspective model can help to investigate the black-box prediction quality. For safety assessment, this explainable model should be of reduced complexity and humanly comprehensible, so that any decision regarding safety can be traced back to known and comprehensible factors. We present an approach to creating an explainable, introspective model (i.e., white-box) for a deep neural network (i.e., black-box) to determine how safety-relevant input features influence the prediction performance, in particular for confidence and Bounding Box (BBox) regression. For this, Random Forest (RF) models are trained to predict the output of a YOLOv5 object detector for specifically selected safety-relevant input features from the open-context environment. The RF predicts the reliability of the YOLOv5 output for three safety-related target variables, namely: softmax score, BBox center shift, and BBox size shift. The results indicate that the RF predictions for the softmax score are only reliable within certain constraints, while the RF predictions for BBox center/size shift are only reliable for small offsets.
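    The sketch below illustrates the white-box idea in scikit-learn: a Random Forest fitted on assumed safety-relevant scene features to predict three detector-quality targets. The feature names and the synthetic targets are placeholders; real YOLOv5 detections would be required to reproduce the publication's results.

    ```python
    # Hedged sketch: Random Forest as an introspective model predicting three
    # quality indicators of an object detector from scene-level input features.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(42)
    n = 5_000
    # Hypothetical safety-relevant features of the open-context environment.
    X = np.column_stack([
        rng.uniform(5, 120, n),  # object distance [m]
        rng.uniform(0, 1, n),    # occlusion ratio
        rng.uniform(0, 1, n),    # contrast
    ])
    # Synthetic stand-ins for: softmax score, BBox center shift, BBox size shift.
    y = np.column_stack([
        np.clip(1.0 - 0.006 * X[:, 0] - 0.4 * X[:, 1] + rng.normal(0, 0.05, n), 0, 1),
        0.1 * X[:, 1] + rng.normal(0, 0.02, n),
        0.05 * X[:, 0] / 120 + rng.normal(0, 0.02, n),
    ])

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
    rf = RandomForestRegressor(n_estimators=200, random_state=42).fit(X_train, y_train)

    # Per-target reliability check of the introspective model.
    for name, score in zip(["softmax score", "BBox center shift", "BBox size shift"],
                           r2_score(y_test, rf.predict(X_test), multioutput="raw_values")):
        print(f"{name}: R^2 = {score:.2f}")
    ```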
  • Publication
    Safety Assurance of Machine Learning for Chassis Control Functions
    (2021); Unterreiner, Michael; Graeber, Torben; Becker, Philipp
    This paper describes the application of machine learning techniques and an associated assurance case for a safety-relevant chassis control system. The method applied during the assurance process is described, including the sources of evidence and deviations from previous ISO 26262-based approaches. The paper highlights how the choice of machine learning approach supports the assurance case, especially regarding the inherent explainability of the algorithm and its robustness to minor input changes. In addition, the challenges that arise when applying more complex machine learning techniques, for example in the domain of automated driving, are also discussed. The main contribution of the paper is the demonstration of an assurance approach for machine learning for a comparatively simple function. This allowed the authors to develop a convincing assurance case while identifying pragmatic considerations in the application of machine learning for safety-relevant functions.
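    A minimal sketch of the kind of robustness check such an assurance argument can rely on: small input perturbations should only lead to bounded output changes. The linear surrogate model, the perturbation size, and the acceptance bound are placeholder assumptions, not values from the paper.

    ```python
    # Robustness-to-minor-input-change check on a simple, explainable surrogate.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(7)
    X = rng.uniform(-1, 1, size=(1_000, 4))  # e.g. normalized chassis signals
    y = X @ np.array([0.5, -0.2, 0.1, 0.3]) + rng.normal(0, 0.01, 1_000)

    model = Ridge(alpha=1.0).fit(X, y)  # inherently explainable, linear model

    eps, bound = 0.01, 0.05
    perturbed = X + rng.uniform(-eps, eps, size=X.shape)
    max_delta = np.max(np.abs(model.predict(perturbed) - model.predict(X)))
    print(f"max output change for +/-{eps} input perturbation: {max_delta:.4f} "
          f"({'OK' if max_delta <= bound else 'violation'})")
    ```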