  • Publication
    SafeSens - Uncertainty Quantification of Complex Perception Systems
    Safety testing and validation of complex autonomous systems requires a comprehensive and reliable analysis of performance and uncertainty. Uncertainty quantification in particular plays a vital role in perception systems operating in open-context environments that are neither foreseeable nor deterministic. Safety assurance based on field tests or corner cases alone is therefore not feasible, as both effort and potential risks are high. Simulations offer a way out: they allow, for example, potentially hazardous situations to be simulated without any real danger by systematically and quickly varying a range of (input) parameters. To do so, simulations need accurate models that represent the complex system and, in particular, include uncertainty as an inherent property in order to reflect the interdependence between system components and the environment. We present an approach to creating perception architectures via suitable meta-models to enable a holistic safety analysis that quantifies the uncertainties within the system. The models include aleatoric or epistemic uncertainty, depending on the nature of the approximated component. A showcase of the proposed method highlights how validation under uncertainty can be used for camera-based object detection.
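    A minimal sketch of the general idea (not taken from the publication itself; the component names, noise magnitudes and the dissimilarity measure are assumptions): a toy meta-model of a perception component that separates aleatoric noise (fresh for every observation) from epistemic bias (fixed per model instance), plus a chain that propagates an input state through consecutive components.
    ```python
    # Illustrative sketch only: toy meta-models with aleatoric/epistemic uncertainty.
    import numpy as np

    rng = np.random.default_rng(seed=0)

    class ComponentModel:
        """Toy meta-model: output = input + epistemic bias + aleatoric noise."""
        def __init__(self, aleatoric_std, epistemic_std):
            self.aleatoric_std = aleatoric_std
            # Epistemic uncertainty: fixed but unknown model error, sampled once.
            self.bias = rng.normal(0.0, epistemic_std)

        def __call__(self, x):
            # Aleatoric uncertainty: fresh noise for every observation.
            return x + self.bias + rng.normal(0.0, self.aleatoric_std, size=x.shape)

    def propagate(chain, x):
        """Push an input state through consecutive components."""
        for component in chain:
            x = component(x)
        return x

    # Hypothetical chain: a camera-like detector followed by a tracking stage.
    chain = [ComponentModel(aleatoric_std=0.5, epistemic_std=0.2),
             ComponentModel(aleatoric_std=0.1, epistemic_std=0.05)]
    x_true = np.linspace(0.0, 10.0, 100)                  # ground-truth input states
    x_out = propagate(chain, x_true)
    total_uncertainty = np.mean(np.abs(x_out - x_true))   # dissimilarity as uncertainty proxy
    print(f"mean input/output dissimilarity: {total_uncertainty:.3f}")
    ```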
  • Publication
    AI for Safety: How to use Explainable Machine Learning Approaches for Safety Analyses
    Current research in machine learning (ML) and safety focuses on safety assurance of ML. We, however, show how to interpret the results of explainable ML approaches for safety. We investigate how individual evaluation of data clusters in specific explainable, outside-model estimators can be analyzed to identify insufficiencies at different levels: (1) the input features, (2) the data, or (3) the ML model itself. Additionally, we link our findings to required safety artifacts within the automotive domain, such as unknown unknowns from ISO 21448 or equivalence classes as mentioned in ISO/TR 4804. In our case study we analyze and evaluate the results of an explainable, outside-model estimator (i.e., a white-box model) by performance evaluation, decision tree visualization, data distribution and input feature correlation. As explainability is key for safety analyses, the utilized model is a random forest, extended via boosting and multi-output regression. The model training is based on an introspective data set optimized for reliable safety estimation. Our results show that technical limitations can be identified via homogeneous data clusters and assigned to a corresponding equivalence class. For unknown unknowns, each level of insufficiency (input, data and model) must be analyzed separately and systematically narrowed down by a process of elimination. In our case study we identify "Fog density" as an unknown unknown input feature for the introspective model.
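    A hedged sketch, not the authors' pipeline: a multi-output random forest trained on synthetic data, followed by the kinds of outside-model analyses the abstract names (performance evaluation, input feature correlation, decision tree visualization). The feature names, including "fog_density", and all numbers are hypothetical.
    ```python
    # Illustrative sketch only: multi-output random forest plus explainability analyses.
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score
    from sklearn.tree import DecisionTreeRegressor, export_text

    rng = np.random.default_rng(1)
    n = 1000
    # Hypothetical introspective features; "fog_density" stands in for the
    # unknown-unknown input feature discussed in the abstract.
    X = pd.DataFrame({
        "object_distance": rng.uniform(5, 100, n),
        "relative_speed":  rng.uniform(-20, 20, n),
        "fog_density":     rng.uniform(0, 1, n),
    })
    # Two targets (multi-output): detection confidence and localization error.
    y = np.column_stack([
        1.0 - 0.5 * X["fog_density"] - 0.003 * X["object_distance"] + rng.normal(0, 0.05, n),
        0.1 + 0.4 * X["fog_density"] + rng.normal(0, 0.02, n),
    ])

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

    # Performance evaluation, feature importance and correlation analysis.
    print("R^2 per target:", r2_score(y_te, model.predict(X_te), multioutput="raw_values"))
    print("feature importances:", dict(zip(X.columns, model.feature_importances_)))
    print("input feature correlation:\n", X.corr())

    # Small single-target tree purely for rule visualization.
    viz_tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X_tr, y_tr[:, 0])
    print(export_text(viz_tree, feature_names=list(X.columns)))
    ```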
  • Publication
    Towards the Quantitative Verification of Deep Learning for Safe Perception
    Deep learning (DL) is seen as an inevitable building block for perceiving the environment with the detail and accuracy required by automated driving functions. Despite this, its black-box nature and the unpredictability intertwined with it still hinder its use in safety-critical systems. This work therefore addresses the problem of making this seemingly unpredictable behavior measurable by providing a risk-based verification strategy, as required by ISO 21448. In detail, a method is developed to break down acceptable risk into quantitative performance targets for the individual DL-based components along the perception architecture. To verify these targets, the DL input space is split into areas according to the dimensions of a fine-grained operational design domain (μODD). As full test coverage is not feasible, the strategy suggests distributing test effort across these areas according to the associated risk. Moreover, the testing approach provides answers as to how much test coverage and confidence in the test result are required and how these figures relate to safety integrity levels (SILs).
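    A rough sketch of the risk-proportional idea, with entirely assumed area names, risk weights, targets and test outcomes (the paper's actual breakdown and acceptance criteria are not reproduced here): distribute a fixed test budget across μODD areas in proportion to their risk, then compare observed failure rates against each area's quantitative performance target.
    ```python
    # Illustrative sketch only: risk-proportional test allocation and target checks.
    areas = {
        "highway_day_clear":  {"risk_weight": 0.2, "target_miss_rate": 1e-3},
        "highway_night_rain": {"risk_weight": 0.5, "target_miss_rate": 1e-4},
        "urban_day_fog":      {"risk_weight": 0.3, "target_miss_rate": 5e-4},
    }

    TEST_BUDGET = 100_000  # total labelled samples available for verification (assumed)

    total_risk = sum(a["risk_weight"] for a in areas.values())
    for name, area in areas.items():
        # Higher-risk areas receive proportionally more test samples.
        area["n_tests"] = round(TEST_BUDGET * area["risk_weight"] / total_risk)

    def verify(name, observed_misses):
        area = areas[name]
        observed_rate = observed_misses / area["n_tests"]
        verdict = "PASS" if observed_rate <= area["target_miss_rate"] else "FAIL"
        print(f"{name}: {area['n_tests']} tests, miss rate {observed_rate:.2e} "
              f"(target {area['target_miss_rate']:.0e}) -> {verdict}")

    verify("highway_night_rain", observed_misses=3)   # assumed test outcome
    verify("urban_day_fog", observed_misses=20)       # assumed test outcome
    ```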
  • Publication
    On Perceptual Uncertainty in Autonomous Driving under Consideration of Contextual Awareness
    (2022) Saad, Ahmad; Bangalore, Nischal; et al.
    Despite recent advances in automotive sensor technology and artificial intelligence that have led to breakthroughs in sensing capabilities, environment perception in the field of autonomous driving (AD) is still too unreliable for safe operation. Evaluating and managing uncertainty will aid autonomous vehicles (AVs) in recognizing perceptual limitations in order to react adequately in critical situations. In this work, we propose an uncertainty evaluation framework for AD based on Dempster-Shafer (DS) theory that takes context awareness into consideration, a factor that has so far been under-investigated. We formulate uncertainty as a function of context awareness and examine the effect of redundancy on uncertainty. We also present a modular simulation tool that enables assessing perception architectures in realistic traffic use cases. Our findings show that considering context awareness decreases uncertainty by at least one order of magnitude. We also show that uncertainty behaves exponentially as a function of sensor redundancy.
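    For reference, a minimal sketch of standard Dempster-Shafer fusion (the textbook combination rule, not the paper's specific context-awareness formulation): two sensors' basic belief assignments over the frame {pedestrian, vehicle} are combined, with mass on the full frame expressing ignorance. All mass values are assumed.
    ```python
    # Illustrative sketch only: Dempster's rule of combination for two sensors.
    from itertools import product

    THETA = frozenset({"pedestrian", "vehicle"})

    # Hypothetical basic belief assignments; mass on THETA expresses ignorance.
    m_camera = {frozenset({"pedestrian"}): 0.6, frozenset({"vehicle"}): 0.1, THETA: 0.3}
    m_radar  = {frozenset({"pedestrian"}): 0.4, frozenset({"vehicle"}): 0.3, THETA: 0.3}

    def combine(m1, m2):
        """Dempster's rule: normalized conjunctive combination of two BBAs."""
        combined, conflict = {}, 0.0
        for (a, ma), (b, mb) in product(m1.items(), m2.items()):
            intersection = a & b
            if intersection:
                combined[intersection] = combined.get(intersection, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass that would fall on the empty set
        return {s: v / (1.0 - conflict) for s, v in combined.items()}, conflict

    fused, conflict = combine(m_camera, m_radar)
    for focal_set, mass in fused.items():
        print(set(focal_set), round(mass, 3))
    print("conflict K =", round(conflict, 3))
    ```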
  • Publication
    Towards Continuous Safety Assurance for Autonomous Systems
    (2022) Carella, Francesco; et al.
    Ensuring the safety of autonomous systems over time and in light of unforeseeable changes is an unsolved task. This work outlines a continuous assurance strategy to ensure the safe ageing of such systems. Because it is difficult to quantify uncertainty in an empirically sound manner, or even to provide a complete list of uncertainties during system design, alternative run-time monitoring approaches are proposed that enable a system to self-identify its exposure to a yet unknown hazardous condition, trigger immediate safety reactions, and initiate a redesign and update process to ensure the future safety of the system. Moreover, this work unifies the inconsistently used terminology found in the literature regarding the automation of different aspects of safety assurance and provides a conceptual framework for understanding the difference between known unknowns and unknown unknowns.
  • Publication
    Safety Assurance of Machine Learning for Chassis Control Functions
    (2021) Unterreiner, Michael; Graeber, Torben; Becker, Philipp; et al.
    This paper describes the application of machine learning techniques and an associated assurance case for a safety-relevant chassis control system. The method applied during the assurance process is described, including the sources of evidence and deviations from previous ISO 26262-based approaches. The paper highlights how the choice of machine learning approach supports the assurance case, especially regarding the inherent explainability of the algorithm and its robustness to minor input changes. In addition, the challenges that arise when applying more complex machine learning techniques, for example in the domain of automated driving, are also discussed. The main contribution of the paper is the demonstration of an assurance approach for machine learning for a comparatively simple function. This allowed the authors to develop a convincing assurance case while identifying pragmatic considerations in the application of machine learning to safety-relevant functions.
  • Publication
    Dynamic Risk Management for Safely Automating Connected Driving Maneuvers
    Autonomous vehicles (AVs) have the potential to significantly improve road safety by reducing the number of accidents caused by inattentive and unreliable human drivers. Allowing AVs to negotiate maneuvers and exchange data can further increase traffic safety and efficiency. At the same time, these improvements lead to new classes of risk that need to be managed in order to guarantee safety. This is a challenging task, since such systems face various forms of uncertainty that current safety approaches only handle through static worst-case assumptions, leading to overly restrictive safety requirements and a decreased level of utility. This work provides a novel solution for dynamically quantifying the relationship between uncertainty and risk at run time in order to find the trade-off between the system's safety and the functionality retained after applying risk-mitigating measures. Our approach is evaluated on the example of a highway overtaking maneuver, considering uncertainty stemming from wireless communication channels. Our results show improved utility while ensuring freedom from unacceptable risk, thus illustrating the potential of dynamic risk management.
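    A toy sketch of the dynamic-versus-static idea, with an entirely assumed risk model, thresholds and maneuver options (none of these numbers come from the publication): at run time, pick the least restrictive mitigation level whose estimated risk stays acceptable given the current communication uncertainty, instead of always assuming the worst case.
    ```python
    # Illustrative sketch only: run-time selection of risk-mitigating measures.
    ACCEPTABLE_RISK = 1e-6  # assumed risk acceptance threshold per maneuver

    # Candidate options, ordered from highest utility to most restrictive.
    MITIGATIONS = [
        {"name": "cooperative overtake",      "utility": 1.0, "base_risk": 5e-7},
        {"name": "overtake with larger gap",  "utility": 0.7, "base_risk": 1e-7},
        {"name": "stay in lane",              "utility": 0.2, "base_risk": 1e-9},
    ]

    def estimated_risk(base_risk, packet_loss_prob):
        # Assumed model: risk grows with the chance that a negotiation message is
        # lost; a static worst case would always evaluate packet_loss_prob = 1.
        return base_risk * (1.0 + 99.0 * packet_loss_prob)

    def select_maneuver(packet_loss_prob):
        for option in MITIGATIONS:  # ordered by decreasing utility
            if estimated_risk(option["base_risk"], packet_loss_prob) <= ACCEPTABLE_RISK:
                return option["name"]
        return MITIGATIONS[-1]["name"]  # fall back to the most restrictive option

    print(select_maneuver(packet_loss_prob=0.01))  # good channel -> higher-utility maneuver
    print(select_maneuver(packet_loss_prob=0.6))   # poor channel -> restrictive fallback
    ```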
  • Publication
    Trustworthy AI for Intelligent Traffic Systems (ITS)
    (Fraunhofer IKS, 2021) Bortoli, Stefano; Grossi, Margherita; et al.
    AI-enabled Intelligent Traffic Systems (ITS) offer the potential to greatly improve the efficiency of traffic flow in inner cities, resulting in shorter travel times, increased fuel efficiency and reduced harmful emissions. These systems make use of data collected in real time across different locations in order to adapt signaling infrastructure (such as traffic lights and lane signals) based on a set of optimized algorithms. Consequences of failures in such systems can range from increased congestion and the associated rise in traffic accidents to increased vehicle emissions over time. This white paper summarizes the results of consultations between safety, mobility and smart city experts to explore the consequences of applying AI methods in Intelligent Traffic Systems. The consultations were held as a roundtable event on 1 July 2021, hosted by Fraunhofer IKS, and addressed the following questions: How does the use of AI fundamentally change our understanding of safety and risk related to such systems? Which challenges are introduced when using AI for decision-making functions in Smart Cities and Intelligent Traffic Systems? How should these challenges be addressed in the future? Based on these discussions, the white paper summarizes current and future challenges of introducing AI into Intelligent Traffic Systems in a trustworthy manner. Special focus is placed on the complex, heterogeneous, multi-disciplinary nature of ITS in Smart Cities. In doing so, we motivate a combined consideration of the emerging complexity and inherent uncertainty related to such systems and the need for collaboration and communication between a broad range of disciplines.
  • Publication
    A Systematic Approach to Analyzing Perception Architectures in Autonomous Vehicles
    (2020) Saad, Ahmad; et al.
    Simulations are commonly used to validate the design of autonomous systems. However, as these systems are increasingly deployed into safety-critical environments with aleatoric uncertainties, and as the number of components employing machine learning algorithms with epistemic uncertainties grows, validation methods that consider uncertainties are lacking. We present an approach that evaluates signal propagation in logical system architectures, in particular environment perception chains, focusing on the effects of uncertainty in order to determine functional limitations. The perception-based autonomous driving systems are represented by connected elements that constitute a certain functionality. The elements are based on (meta-)models describing technical components and their behavior. The surrounding environment in which the system is deployed is modeled by parameters derived from a quasi-static scene. All parameter variations completely define input states for the designed perception architecture. The input states are treated as random variables inside the component models to simulate aleatoric/epistemic uncertainty. The dissimilarity between model input and output serves as a measure of the total uncertainty present in the system. The uncertainties are propagated through consecutive components and calculated in the same manner. The final result consists of the input states that model uncertainty effects for the specified functionality and therefore highlight shortcomings of the designed architecture.
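    A rough sketch under assumed scene parameters and component behaviors (not the publication's models): enumerate input states from a quasi-static scene, push them through a two-component chain, and flag the states whose input/output dissimilarity exceeds a tolerated threshold as critical for the designed architecture.
    ```python
    # Illustrative sketch only: flagging critical input states via dissimilarity.
    import itertools
    import random

    random.seed(0)

    # Hypothetical scene parameters spanning the input-state space.
    distances = [10, 30, 60, 90]        # object distance in m
    rain_levels = [0.0, 0.5, 1.0]       # normalized rain intensity

    def sensor_model(true_distance, rain):
        # Assumed behavior: measurement noise grows with range and rain.
        noise = random.gauss(0.0, 0.02 * true_distance * (1.0 + 2.0 * rain))
        return true_distance + noise

    def tracker_model(measured_distance):
        # Assumed behavior: small additional smoothing error.
        return measured_distance + random.gauss(0.0, 0.3)

    THRESHOLD = 2.0  # max tolerated dissimilarity in m (assumed)

    for d, rain in itertools.product(distances, rain_levels):
        estimate = tracker_model(sensor_model(d, rain))
        dissimilarity = abs(estimate - d)
        if dissimilarity > THRESHOLD:
            print(f"critical input state: distance={d} m, rain={rain} "
                  f"(dissimilarity {dissimilarity:.2f} m)")
    ```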
  • Publication
    Systematische Analyse von Einflussfaktoren auf die Sensorik bei der Umfelderkennung zur Bestimmung kritischer Situationen
    The proposed systematic analysis is based on simulating signal propagation through a logical system architecture for a given scenario in order to identify sensor measurements with high uncertainty values. Sensor measurements with high uncertainty values that are relevant to the defined functionality represent critical situations. These critical situations require the investigation of possible (external) influencing factors.