  • Publication
    SafeSens - Uncertainty Quantification of Complex Perception Systems
    Safety testing and validation of complex autonomous systems requires a comprehensive and reliable analysis of performance and uncertainty. Uncertainty quantification in particular plays a vital part in perception systems operating in open-context environments that are neither foreseeable nor deterministic. Safety assurance based on field tests or corner cases alone is therefore not feasible, as effort and potential risks are high. Simulations offer a way out: they allow, for example, potentially hazardous situations to be simulated without any real danger by systematically and quickly computing a variety of different (input) parameters. To do so, simulations need accurate models that represent the complex system and, in particular, include uncertainty as an inherent property in order to accurately reflect the interdependence between system components and the environment. We present an approach to creating perception architectures via suitable meta-models to enable a holistic safety analysis that quantifies the uncertainties within the system. The models include aleatoric or epistemic uncertainty, depending on the nature of the approximated component. A showcase of the proposed method highlights how validation under uncertainty can be used for camera-based object detection.
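    As an illustration only (not taken from the publication), the sketch below propagates assumed aleatoric sensor noise and epistemic detector uncertainty through a toy meta-model of a camera-based detector via Monte Carlo sampling; all distributions and parameters are invented for the example.
```python
# Minimal sketch (not from the paper): Monte Carlo propagation of aleatoric and
# epistemic uncertainty through a simplified meta-model of a camera-based detector.
# All parameters and distributions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
N = 10_000                      # number of Monte Carlo samples

# Aleatoric uncertainty: sensor noise on the measured object distance (metres).
true_distance = 30.0
sensor_noise = rng.normal(0.0, 0.5, N)

# Epistemic uncertainty: imperfect knowledge of the detector's per-frame
# detection probability, modelled here as a Beta distribution.
detect_prob = rng.beta(a=50, b=2, size=N)

# Meta-model: detection succeeds with probability detect_prob; the reported
# distance is the true distance plus sensor noise.
detected = rng.random(N) < detect_prob
reported = np.where(detected, true_distance + sensor_noise, np.nan)

print(f"detection rate:      {detected.mean():.3f}")
print(f"distance error std:  {np.nanstd(reported - true_distance):.3f} m")
```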
  • Publication
    AI for Safety: How to use Explainable Machine Learning Approaches for Safety Analyses
    Current research in machine learning (ML) and safety focuses on safety assurance of ML. We, in contrast, show how to interpret the results of explainable ML approaches for safety. We investigate how individual evaluation of data clusters in specific explainable, outside-model estimators can be analyzed to identify insufficiencies at different levels, such as (1) the input features, (2) the data or (3) the ML model itself. Additionally, we link our findings to required safety artifacts within the automotive domain, such as unknown unknowns from ISO 21448 or equivalence classes as mentioned in ISO/TR 4804. In our case study we analyze and evaluate the results of an explainable, outside-model estimator (i.e., white-box model) by performance evaluation, decision tree visualization, data distribution and input feature correlation. As explainability is key for safety analyses, the utilized model is a random forest, extended via boosting and multi-output regression. The model is trained on an introspective data set optimized for reliable safety estimation. Our results show that technical limitations can be identified via homogeneous data clusters and assigned to a corresponding equivalence class. For unknown unknowns, each level of insufficiency (input, data and model) must be analyzed separately and systematically narrowed down by a process of elimination. In our case study we identify "Fog density" as an unknown unknown input feature for the introspective model.
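    For illustration only, the sketch below trains a white-box, outside-model estimator on a synthetic stand-in for an introspective data set and inspects feature importances and input-feature correlations; the feature names (including "fog_density") and the data are assumptions made for the example, not the authors' material.
```python
# Minimal sketch (illustrative, not the authors' pipeline): a white-box,
# outside-model estimator analysed via feature importance and input-feature
# correlation. The feature names and synthetic data are made up.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "fog_density":   rng.uniform(0, 1, 2000),
    "sun_elevation": rng.uniform(0, 90, 2000),
    "object_size":   rng.uniform(0.5, 3.0, 2000),
})
# Synthetic target: detector confidence degrades with fog and small objects.
y = 0.9 - 0.5 * X["fog_density"] + 0.1 * X["object_size"] + rng.normal(0, 0.05, 2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
estimator = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("R^2 on held-out data:", round(estimator.score(X_te, y_te), 3))
print("feature importances:", dict(zip(X.columns, estimator.feature_importances_.round(3))))
print("input-feature correlation:\n", X.corr().round(2))
```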
  • Publication
    Safety Assessment: From Black-Box to White-Box
    Safety assurance for Machine-Learning (ML) based applications such as object detection is a challenging task due to the black-box nature of many ML methods and the associated uncertainties of their outputs. To increase evidence of the safe behavior of such ML algorithms, an explainable and/or interpretable introspective model can help to investigate the black-box prediction quality. For safety assessment this explainable model should be of reduced complexity and humanly comprehensible, so that any decision regarding safety can be traced back to known and comprehensible factors. We present an approach to creating an explainable, introspective model (i.e., white-box) for a deep neural network (i.e., black-box) to determine how safety-relevant input features influence the prediction performance, in particular, confidence and Bounding Box (BBox) regression. For this, Random Forest (RF) models are trained to predict the output of a YOLOv5 object detector for specifically selected safety-relevant input features from the open-context environment. The RF predicts the reliability of the YOLOv5 output for three safety-related target variables: softmax score, BBox center shift and BBox size shift. The results indicate that the RF predictions for the softmax score are only reliable within certain constraints, while the RF predictions for BBox center/size shift are only reliable for small offsets.
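    A minimal sketch of the general idea, under assumed synthetic data: a multi-output Random Forest predicting the three safety-related targets (softmax score, BBox center shift, BBox size shift) from illustrative scene features. The feature names and the stand-in detector outputs are placeholders, not the paper's setup.
```python
# Minimal sketch (assumptions, not the paper's implementation): a Random Forest
# introspection model predicting three safety-related targets of an object
# detector from safety-relevant scene features. Data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n = 5000
# Assumed safety-relevant input features (illustrative names only).
X = np.column_stack([
    rng.uniform(0, 1, n),      # occlusion ratio
    rng.uniform(5, 80, n),     # object distance [m]
    rng.uniform(0, 1, n),      # fog density
])
# Synthetic targets standing in for the detector's behaviour.
softmax_score = np.clip(1.0 - 0.4 * X[:, 0] - 0.6 * X[:, 2] + rng.normal(0, 0.05, n), 0, 1)
center_shift  = 0.02 * X[:, 1] * X[:, 2] + rng.normal(0, 0.1, n)
size_shift    = 0.01 * X[:, 1] * X[:, 0] + rng.normal(0, 0.1, n)
Y = np.column_stack([softmax_score, center_shift, size_shift])

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, Y_tr)
for name, r2 in zip(["softmax score", "BBox center shift", "BBox size shift"],
                    r2_score(Y_te, rf.predict(X_te), multioutput="raw_values")):
    print(f"{name:18s} R^2 = {r2:.3f}")
```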
  • Publication
    Safety Assurance of Machine Learning for Chassis Control Functions
    (2021) Unterreiner, Michael; Graeber, Torben; Becker, Philipp
    This paper describes the application of machine learning techniques and an associated assurance case for a safety-relevant chassis control system. The method applied during the assurance process is described, including the sources of evidence and deviations from previous ISO 26262 based approaches. The paper highlights how the choice of machine learning approach supports the assurance case, especially regarding the inherent explainability of the algorithm and its robustness to minor input changes. In addition, the challenges that arise when applying more complex machine learning techniques, for example in the domain of automated driving, are also discussed. The main contribution of the paper is the demonstration of an assurance approach for machine learning for a comparatively simple function. This allowed the authors to develop a convincing assurance case, whilst identifying pragmatic considerations in the application of machine learning to safety-relevant functions.
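    As a hedged illustration of one evidence type mentioned in the abstract (robustness to minor input changes), the sketch below perturbs the inputs of a shallow, interpretable surrogate model and measures the resulting output deviation; the data, the model, and the tolerance values are assumptions, not the paper's assurance process.
```python
# Minimal sketch (illustrative assumptions, not the paper's method): checking an
# interpretable chassis-control surrogate for robustness to minor input changes.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
# Synthetic stand-in data: vehicle speed [m/s] and lateral acceleration [m/s^2]
# mapped to an assumed damper-control setpoint.
X = np.column_stack([rng.uniform(0, 40, 3000), rng.uniform(-8, 8, 3000)])
y = 0.3 * X[:, 0] + 0.5 * np.abs(X[:, 1]) + rng.normal(0, 0.1, 3000)
model = DecisionTreeRegressor(max_depth=4).fit(X, y)    # shallow tree stays explainable

# Robustness check: outputs should change little for small input perturbations.
eps = np.array([0.1, 0.05])                             # assumed tolerated input noise
delta = np.abs(model.predict(X + eps) - model.predict(X))
print(f"max output deviation under perturbation: {delta.max():.3f}")
```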
  • Publication
    Dynamic Risk Management for Safely Automating Connected Driving Maneuvers
    Autonomous vehicles (AVs) have the potential to significantly improve road safety by reducing the number of accidents caused by inattentive and unreliable human drivers. Allowing AVs to negotiate maneuvers and to exchange data can further increase traffic safety and efficiency. At the same time, these improvements lead to new classes of risk that need to be managed in order to guarantee safety. This is a challenging task, since such systems face various forms of uncertainty that current safety approaches only handle through static worst-case assumptions, leading to overly restrictive safety requirements and a decreased level of utility. This work provides a novel solution for dynamically quantifying the relationship between uncertainty and risk at run time in order to find the trade-off between the system's safety and the functionality achieved after the application of risk-mitigating measures. Our approach is evaluated on the example of a highway overtake maneuver under consideration of uncertainty stemming from wireless communication channels. Our results show improved utility while ensuring freedom from unacceptable risk, thus illustrating the potential of dynamic risk management.
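    The sketch below illustrates the general idea under purely assumed numbers: the risk that a negotiation message is lost or arrives too late is estimated at run time from the current channel conditions, and the overtake is only released if that risk stays below an acceptance threshold, otherwise a conservative fallback preserves as much utility as possible. Nothing here reproduces the paper's actual risk model.
```python
# Minimal sketch (assumed numbers, not the paper's model): dynamic risk
# quantification for a connected overtake manoeuvre under communication
# uncertainty. The manoeuvre is only permitted if the estimated risk of a
# deadline miss stays below an acceptance threshold.
import numpy as np

rng = np.random.default_rng(7)

def deadline_miss_risk(loss_rate: float, mean_latency_ms: float,
                       deadline_ms: float = 100.0, samples: int = 20_000) -> float:
    """Monte Carlo estimate of the probability that the message is lost or late."""
    lost = rng.random(samples) < loss_rate
    latency = rng.exponential(mean_latency_ms, samples)
    return float(np.mean(lost | (latency > deadline_ms)))

ACCEPTABLE_RISK = 1e-2          # assumed acceptance threshold
for loss_rate, latency in [(0.001, 20.0), (0.05, 60.0)]:
    risk = deadline_miss_risk(loss_rate, latency)
    action = "overtake" if risk < ACCEPTABLE_RISK else "fallback: keep lane, larger gap"
    print(f"loss={loss_rate:.3f}, latency={latency:.0f} ms -> risk={risk:.4f}, action={action}")
```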
  • Publication
    Trustworthy AI for Intelligent Traffic Systems (ITS)
    (Fraunhofer IKS, 2021) Bortoli, Stefano; Grossi, Margherita
    AI-enabled Intelligent Traffic Systems (ITS) offer the potential to greatly improve the efficiency of traffic flow in inner cities, resulting in shorter travel times, increased fuel efficiency and reduced harmful emissions. These systems make use of data collected in real time across different locations in order to adapt signaling infrastructure (such as traffic lights and lane signals) based on a set of optimized algorithms. Consequences of failures in such systems can range from increased congestion and the associated rise in traffic accidents to increased vehicle emissions over time. This white paper summarizes the results of consultations between safety, mobility and smart city experts to explore the consequences of applying AI methods in Intelligent Traffic Systems. The consultations were held as a roundtable event on 1 July 2021, hosted by Fraunhofer IKS, and addressed the following questions: How does the use of AI fundamentally change our understanding of safety and risk related to such systems? Which challenges are introduced when using AI for decision-making functions in Smart Cities and Intelligent Traffic Systems? How should these challenges be addressed in future? Based on these discussions, the white paper summarizes current and future challenges of introducing AI into Intelligent Traffic Systems in a trustworthy manner. Special focus is placed on the complex, heterogeneous, multi-disciplinary nature of ITS in Smart Cities. In doing so, we motivate a combined consideration of the emerging complexity and inherent uncertainty related to such systems, as well as the need for collaboration and communication between a broad range of disciplines.