Publication

Uncertainty-aware RSS

2023 , Carella, Francesco , Oleinichenko, Oleg , Schleiß, Philipp

In this preliminary work, the authors present a potential solution to the issue of real-time parameter estimation within a safety-critical application. When computing the frontal safety distance, each vehicle type requires, in principle, a different safety distance depending on its capability to brake at a greater or lower rate. To account for different braking capabilities, an object detection and recognition algorithm must be employed, which introduces some classification uncertainty into the system. We propose to employ such a solution to maximise the utility of the system by accounting for different vehicle types, while considering the uncertainty in order to preserve safety.
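As an illustration of this idea, the sketch below combines the standard RSS longitudinal safe-distance formula with a worst-case choice of braking capability over all sufficiently probable vehicle classes. The class names, braking rates, response time and probability threshold are illustrative assumptions, not values from the paper.

```python
# Hedged sketch (not the authors' implementation): an RSS-style frontal safety
# distance that accounts for classification uncertainty over the lead vehicle's
# type. Class names, braking rates and the eps threshold are illustrative.

# Assumed maximum braking decelerations per vehicle class [m/s^2]
BRAKE_MAX = {"truck": 6.0, "passenger_car": 8.0, "motorcycle": 9.5}

def rss_frontal_distance(v_ego, v_front, rho, a_accel_max, b_min_ego, b_max_front):
    """Classic RSS longitudinal safe distance (Shalev-Shwartz et al.)."""
    d = (v_ego * rho
         + 0.5 * a_accel_max * rho ** 2
         + (v_ego + rho * a_accel_max) ** 2 / (2.0 * b_min_ego)
         - v_front ** 2 / (2.0 * b_max_front))
    return max(0.0, d)

def uncertainty_aware_distance(v_ego, v_front, class_probs, rho=0.5,
                               a_accel_max=3.0, b_min_ego=4.0, eps=0.05):
    """Take the worst-case braking rate among all classes whose posterior
    probability exceeds eps, so safety is preserved under misclassification."""
    plausible = [c for c, p in class_probs.items() if p >= eps]
    b_front = max(BRAKE_MAX[c] for c in plausible) if plausible else max(BRAKE_MAX.values())
    return rss_frontal_distance(v_ego, v_front, rho, a_accel_max, b_min_ego, b_front)

# Example: the classifier is fairly sure the lead vehicle is a truck, but the
# residual passenger-car probability keeps the distance conservative.
print(uncertainty_aware_distance(25.0, 20.0, {"truck": 0.9, "passenger_car": 0.1}))
```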

Publication

AI for Safety: How to use Explainable Machine Learning Approaches for Safety Analyses

2023 , Kurzidem, Iwo , Burton, Simon , Schleiß, Philipp

Current research in machine learning (ML) and safety focuses on safety assurance of ML. We, however, show how to interpret the results of explainable ML approaches for safety. We investigate how individual data clusters in specific explainable, outside-model estimators can be evaluated to identify insufficiencies at different levels, such as (1) the input features, (2) the data, or (3) the ML model itself. Additionally, we link our findings to required safety artifacts within the automotive domain, such as unknown unknowns from ISO 21448 or equivalence classes as mentioned in ISO/TR 4804. In our case study we analyze and evaluate the results from an explainable, outside-model estimator (i.e., white-box model) by performance evaluation, decision tree visualization, data distribution and input feature correlation. As explainability is key for safety analyses, the utilized model is a random forest, with extensions via boosting and multi-output regression. The model training is based on an introspective data set, optimized for reliable safety estimation. Our results show that technical limitations can be identified via homogeneous data clusters and assigned to a corresponding equivalence class. For unknown unknowns, each level of insufficiency (input, data and model) must be analyzed separately and systematically narrowed down by a process of elimination. In our case study we identify "Fog density" as an unknown unknown input feature for the introspective model.
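A minimal sketch of the kind of analysis described above, assuming a synthetic introspective data set and invented feature names ("fog_density", "object_distance", "illumination"): a multi-output random forest is fitted as the outside-model estimator, and performance, feature importances and input correlations are inspected for hints at insufficiencies.

```python
# Hedged sketch (assumed data and feature names): fitting an explainable
# outside-model estimator (random forest, multi-output) on an introspective
# data set and inspecting feature importance / input correlation, in the
# spirit of the analysis described above.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Illustrative introspective data: environmental/input features vs. two
# ML-performance targets (e.g., detection confidence and localisation error).
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "fog_density": rng.uniform(0, 1, 2000),
    "object_distance": rng.uniform(5, 80, 2000),
    "illumination": rng.uniform(0, 1, 2000),
})
y = np.column_stack([
    1.0 - 0.6 * X["fog_density"] - 0.004 * X["object_distance"] + rng.normal(0, 0.05, 2000),
    0.02 * X["object_distance"] + 0.3 * X["fog_density"] + rng.normal(0, 0.05, 2000),
])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Performance evaluation, feature importance and input correlation: poor fit in
# a homogeneous data cluster hints at a data/model insufficiency; a feature the
# model never uses (low importance) may point to an unknown-unknown input.
print("R^2 per target:", r2_score(y_te, rf.predict(X_te), multioutput="raw_values"))
print("Feature importances:", dict(zip(X.columns, rf.feature_importances_)))
print("Input correlation:\n", X.corr())
```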

Publication

Towards the Quantitative Verification of Deep Learning for Safe Perception

2022 , Schleiß, Philipp , Hagiwara, Yuki , Kurzidem, Iwo , Carella, Francesco

Deep learning (DL) is seen as an inevitable building block for perceiving the environment with sufficient detail and accuracy, as required by automated driving functions. Despite this, its black-box nature and the unpredictability intertwined with it still hinder its use in safety-critical systems. As such, this work addresses the problem of making this seemingly unpredictable nature measurable by providing a risk-based verification strategy, as required by ISO 21448. In detail, a method is developed to break down acceptable risk into quantitative performance targets for individual DL-based components along the perception architecture. To verify these targets, the DL input space is split into areas according to the dimensions of a fine-grained operational design domain (μODD). As it is not feasible to reach full test coverage, the strategy suggests distributing test efforts across these areas according to the associated risk. Moreover, the testing approach provides answers as to how much test coverage and confidence in the test results are required and how these figures relate to safety integrity levels (SILs).
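The sketch below illustrates, under assumed numbers, the flavour of this strategy: test effort is distributed over μODD areas in proportion to their associated risk, and the standard zero-failure binomial bound gives the number of samples needed to demonstrate a per-area failure-rate target at a given confidence. It is not the authors' exact derivation.

```python
# Hedged sketch (assumed numbers): risk-proportional distribution of test effort
# over uODD areas plus the zero-failure binomial bound for the sample size
# needed to demonstrate a failure-rate target with a given confidence.
import math

# Illustrative uODD areas with risk shares and the component-level
# failure-rate target allocated to each area.
areas = {
    "highway_day_clear":  {"risk_share": 0.2, "target_failure_rate": 1e-4},
    "highway_night_rain": {"risk_share": 0.5, "target_failure_rate": 1e-5},
    "urban_dusk_fog":     {"risk_share": 0.3, "target_failure_rate": 1e-5},
}

def tests_needed(p_target, confidence=0.95):
    """Zero-failure binomial bound: smallest n with (1 - p)^n <= 1 - confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_target))

total_budget = 1_000_000
for name, a in areas.items():
    budget = int(total_budget * a["risk_share"])        # risk-proportional effort
    required = tests_needed(a["target_failure_rate"])   # statistically required
    print(f"{name}: budget={budget}, required for target={required}")
```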

Publication

Dynamic Risk Management for Safely Automating Connected Driving Maneuvers

2021 , Grobelna, Marta , Zacchi, Joao-Vitor , Schleiß, Philipp , Burton, Simon

Autonomous vehicles (AVs) have the potential to significantly improve road safety by reducing the number of accidents caused by inattentive and unreliable human drivers. Allowing AVs to negotiate maneuvers and exchange data can further increase traffic safety and efficiency. At the same time, these improvements lead to new classes of risk that need to be managed in order to guarantee safety. This is a challenging task, since such systems have to face various forms of uncertainty that current safety approaches only handle through static worst-case assumptions, leading to overly restrictive safety requirements and a decreased level of utility. This work provides a novel solution for dynamically quantifying the relationship between uncertainty and risk at run time in order to find the trade-off between the system's safety and the functionality achieved after the application of risk-mitigating measures. Our approach is evaluated on the example of a highway overtake maneuver under consideration of uncertainty stemming from wireless communication channels. Our results show improved utility while ensuring freedom from unacceptable risk, thus illustrating the potential of dynamic risk management.
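A minimal sketch of such a run-time trade-off, with invented risk numbers and a simple linear model for how packet loss on the communication channel inflates the risk of cooperative maneuver options: the highest-utility option whose estimated risk stays below the acceptable level is selected.

```python
# Hedged sketch (invented numbers): a minimal run-time trade-off between utility
# and risk for a connected overtake, where the probability of packet loss on the
# V2X channel inflates the risk of the more capable maneuver options.
ACCEPTABLE_RISK = 1e-4

def option_risk(base_risk, p_packet_loss, exposure):
    # Simple illustrative model: risk grows with communication uncertainty.
    return base_risk + exposure * p_packet_loss

def select_maneuver(p_packet_loss):
    options = [
        # (name, utility, base risk, sensitivity to lost cooperation messages)
        ("cooperative_overtake", 1.0, 2e-5, 4e-4),
        ("conservative_overtake", 0.6, 1e-5, 1e-4),
        ("stay_in_lane", 0.2, 5e-6, 0.0),
    ]
    safe = [(n, u) for n, u, r, e in options
            if option_risk(r, p_packet_loss, e) <= ACCEPTABLE_RISK]
    return max(safe, key=lambda x: x[1])[0]  # highest utility among safe options

print(select_maneuver(p_packet_loss=0.05))  # -> cooperative maneuver still safe
print(select_maneuver(p_packet_loss=0.5))   # -> falls back to a safer option
```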

Publication

Toward Safe Human Machine Interface and Computer-Aided Diagnostic Systems

2023 , Hagiwara, Yuki , Espinoza, Delfina , Schleiß, Philipp , Mata, Núria , Doan, Nguyen Anh Vu

Computer-Aided Diagnosis (CADx) systems are safety-critical systems that provide automated medical diagnoses based on their input data. They are Artificial Intelligence-based systems that use Machine Learning or Deep Learning techniques to differentiate between healthy and unhealthy medical images, as well as physiological signals acquired from patients. Although current CADx systems offer many advantages in diagnostics, validation is still a challenge, i.e., ensuring that no false negatives occur while limiting the occurrence of false positives. This is a major concern, since such safety-critical systems have to be verified before deployment in a clinical environment. For that reason, this paper aims to improve the reliability of CADx systems by adding a Human Machine Interface (HMI) component to enhance the data acquisition process and by providing a safety-related framework that covers the HMI/CADx system life cycle to bridge the identified gaps.

Publication

On Perceptual Uncertainty in Autonomous Driving under Consideration of Contextual Awareness

2022 , Saad, Ahmad , Bangalore, Nischal , Kurzidem, Iwo , Schleiß, Philipp

Despite recent advances in automotive sensor technology and artificial intelligence that have led to breakthroughs in sensing capabilities, environment perception in the field of autonomous driving (AD) is still too unreliable for safe operation. Evaluating and managing uncertainty will aid autonomous vehicles (AVs) in recognizing perceptual limitations in order to react adequately in critical situations. In this work, we propose an uncertainty evaluation framework for AD based on Dempster-Shafer (DS) theory that takes context awareness into consideration, a factor that has so far been under-investigated. We formulate uncertainty as a function of context awareness and examine the effect of redundancy on uncertainty. We also present a modular simulation tool that enables assessing perception architectures in realistic traffic use cases. Our findings show that considering context awareness decreases uncertainty by at least one order of magnitude. We also show that uncertainty behaves exponentially as a function of sensor redundancy.
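For illustration, the sketch below applies Dempster's rule of combination to two sensor mass functions over the frame {object, free}, treating the mass left on the full frame as residual uncertainty; fusing redundant readings visibly reduces it. The mass values are invented and the paper's context-dependent modelling is not reproduced.

```python
# Hedged sketch: Dempster's rule of combination for two sensor readings over the
# frame {object, free}, with the mass on the full frame taken as the residual
# uncertainty. Numbers are illustrative only.
from itertools import product

THETA = frozenset({"object", "free"})

def combine(m1, m2):
    """Dempster's rule: m12(A) is proportional to the sum of m1(B)*m2(C)
    over all B, C with B ∩ C = A (A non-empty), normalised by 1 - conflict."""
    combined, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# Two redundant sensors, each leaving 30% of its mass on the full frame
# (i.e., "don't know"); fusing them reduces the residual uncertainty.
m_cam   = {frozenset({"object"}): 0.6, frozenset({"free"}): 0.1, THETA: 0.3}
m_radar = {frozenset({"object"}): 0.5, frozenset({"free"}): 0.2, THETA: 0.3}
fused = combine(m_cam, m_radar)
print("uncertainty after fusion:", fused[THETA])  # well below 0.3
```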

Publication

Towards Continuous Safety Assurance for Autonomous Systems

2022 , Schleiß, Philipp , Carella, Francesco , Kurzidem, Iwo

Ensuring the safety of autonomous systems over time and in light of unforeseeable changes is an unsolved task. This work outlines a continuous assurance strategy to ensure the safe ageing of such systems. Since it is difficult to quantify uncertainty in an empirically sound manner, or even to provide a complete list of uncertainties during system design, alternative run-time monitoring approaches are proposed. These enable a system to self-identify its exposure to a yet unknown hazardous condition, to subsequently trigger immediate safety reactions, and to initiate a redesign and update process in order to ensure the future safety of the system. Moreover, this work unifies the inconsistently used terminology found in the literature regarding the automation of different aspects of safety assurance and provides a conceptual framework for understanding the difference between known unknowns and unknown unknowns.
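A minimal sketch of such a run-time monitor, under the assumption that the design-time operating envelope is summarised by simple per-feature statistics: inputs far outside that envelope are flagged, a safety reaction is requested, and the episode is recorded for a later redesign and update cycle.

```python
# Hedged sketch (assumed statistics and thresholds): a minimal run-time monitor
# that flags operating conditions outside the envelope seen at design time,
# triggers a safety reaction and records the episode for a later redesign cycle.
import numpy as np

class EnvelopeMonitor:
    def __init__(self, design_time_features: np.ndarray, z_threshold: float = 4.0):
        self.mean = design_time_features.mean(axis=0)
        self.std = design_time_features.std(axis=0) + 1e-9
        self.z_threshold = z_threshold
        self.flagged_episodes = []  # feeds the offline redesign/update process

    def check(self, features: np.ndarray) -> bool:
        z = np.abs((features - self.mean) / self.std)
        if np.any(z > self.z_threshold):
            self.flagged_episodes.append(features.copy())
            return False  # caller should trigger an immediate safety reaction
        return True

design_data = np.random.default_rng(1).normal(0.0, 1.0, size=(10_000, 3))
monitor = EnvelopeMonitor(design_data)
if not monitor.check(np.array([0.1, 7.5, -0.2])):   # far outside the envelope
    print("unknown condition detected -> degrade function, log for redesign")
```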

Publication

SafeSens - Uncertainty Quantification of Complex Perception Systems

2023 , Kurzidem, Iwo , Burton, Simon , Schleiß, Philipp

Safety testing and validation of complex autonomous systems requires a comprehensive and reliable analysis of performance and uncertainty. Uncertainty quantification in particular plays a vital part in perception systems operating in open-context environments that are neither foreseeable nor deterministic. Therefore, safety assurance based on field tests or corner cases alone is not a feasible option, as the effort and potential risks are high. Simulations offer a way out: they allow, for example, potentially hazardous situations to be simulated without any real danger by systematically and quickly varying a range of (input) parameters. To do so, simulations need accurate models that represent the complex system and, in particular, include uncertainty as an inherent property in order to accurately reflect the interdependence between system components and the environment. We present an approach to creating perception architectures via suitable meta-models to enable a holistic safety analysis that quantifies the uncertainties within the system. The models include aleatoric or epistemic uncertainty, depending on the nature of the approximated component. A showcase of the proposed method highlights how validation under uncertainty can be applied to camera-based object detection.
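The sketch below shows, with invented parameters, how such a meta-model might separate the two kinds of uncertainty for a camera-based detector: epistemic uncertainty by sampling an imperfectly known sensitivity parameter, aleatoric uncertainty by per-frame randomness, propagated by Monte Carlo to a distribution over the detection rate.

```python
# Hedged sketch (invented parameters): a tiny perception meta-model for a
# camera-based object detector in which aleatoric uncertainty is per-frame noise
# and epistemic uncertainty comes from sampling the detector's unknown
# sensitivity parameter. Monte Carlo propagation yields a distribution over the
# detection rate instead of a single number.
import numpy as np

rng = np.random.default_rng(42)

def detection_probability(distance_m, sensitivity):
    """Monotone illustrative model: detection degrades with distance."""
    return np.clip(sensitivity - 0.01 * distance_m, 0.0, 1.0)

def simulate(distance_m, n_epistemic=200, n_frames=500):
    rates = []
    for _ in range(n_epistemic):
        # Epistemic: imperfect knowledge of the component's true sensitivity.
        sensitivity = rng.normal(0.95, 0.03)
        p = detection_probability(distance_m, sensitivity)
        # Aleatoric: per-frame randomness of the detection outcome.
        detections = rng.random(n_frames) < p
        rates.append(detections.mean())
    return np.percentile(rates, [5, 50, 95])

print("detection rate at 40 m (5th/50th/95th percentile):", simulate(40.0))
```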

Publication

Safety Assessment: From Black-Box to White-Box

2022 , Kurzidem, Iwo , Misik, Adam , Schleiß, Philipp , Burton, Simon

Safety assurance for Machine-Learning (ML) based applications such as object detection is a challenging task due to the black-box nature of many ML methods and the associated uncertainties of their output. To increase evidence of the safe behavior of such ML algorithms, an explainable and/or interpretable introspective model can help to investigate the black-box prediction quality. For safety assessment, this explainable model should be of reduced complexity and humanly comprehensible, so that any decision regarding safety can be traced back to known and comprehensible factors. We present an approach to creating an explainable, introspective model (i.e., white-box) for a deep neural network (i.e., black-box) to determine how safety-relevant input features influence the prediction performance, in particular for confidence and Bounding Box (BBox) regression. For this, Random Forest (RF) models are trained to predict the output of a YOLOv5 object detector for specifically selected safety-relevant input features from the open-context environment. The RF predicts the YOLOv5 output reliability for three safety-related target variables, namely: softmax score, BBox center shift and BBox size shift. The results indicate that the RF predictions for the softmax score are only reliable within certain constraints, while the RF predictions for BBox center/size shift are only reliable for small offsets.
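A minimal sketch of this setup on synthetic stand-in data (in the real setup the targets come from recorded YOLOv5 outputs): a multi-output Random Forest maps assumed safety-relevant input features to the three reliability targets and is evaluated per target.

```python
# Hedged sketch (synthetic stand-in data): a multi-output Random Forest that
# maps assumed safety-relevant input features to three reliability targets of an
# object detector (softmax score, BBox center shift, BBox size shift), evaluated
# per target.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(3)
X = rng.uniform(size=(5000, 4))        # e.g., occlusion, distance, contrast, rain
softmax = 0.95 - 0.5 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(0, 0.05, 5000)
center_shift = 5.0 * X[:, 1] + rng.normal(0, 1.0, 5000)
size_shift = 3.0 * X[:, 0] * X[:, 2] + rng.normal(0, 1.0, 5000)
y = np.column_stack([softmax, center_shift, size_shift])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
errors = mean_absolute_error(y_te, rf.predict(X_te), multioutput="raw_values")
for name, err in zip(["softmax score", "BBox center shift", "BBox size shift"], errors):
    print(f"MAE for {name}: {err:.3f}")  # per-target reliability of the white-box
```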

Publication

Safety Assurance of Machine Learning for Chassis Control Functions

2021 , Burton, Simon , Kurzidem, Iwo , Schwaiger, Adrian , Schleiß, Philipp , Unterreiner, Michael , Graeber, Torben , Becker, Philipp

This paper describes the application of machine learning techniques and an associated assurance case for a safety-relevant chassis control system. The method applied during the assurance process is described, including the sources of evidence and deviations from previous ISO 26262-based approaches. The paper highlights how the choice of machine learning approach supports the assurance case, especially regarding the inherent explainability of the algorithm and its robustness to minor input changes. In addition, the challenges that arise when applying more complex machine learning techniques, for example in the domain of automated driving, are also discussed. The main contribution of the paper is the demonstration of an assurance approach for machine learning for a comparatively simple function. This allowed the authors to develop a convincing assurance case, whilst identifying pragmatic considerations in the application of machine learning for safety-relevant functions.