  • Publication
    Safety Assurance of Machine Learning for Chassis Control Functions
    (2021)
    Unterreiner, Michael; Graeber, Torben; Becker, Philipp
    This paper describes the application of machine learning techniques and an associated assurance case for a safety-relevant chassis control system. The method applied during the assurance process is described, including the sources of evidence and deviations from previous ISO 26262-based approaches. The paper highlights how the choice of machine learning approach supports the assurance case, especially regarding the inherent explainability of the algorithm and its robustness to minor input changes. In addition, the challenges that arise when applying more complex machine learning techniques, for example in the domain of automated driving, are also discussed. The main contribution of the paper is the demonstration of an assurance approach for machine learning for a comparatively simple function. This allowed the authors to develop a convincing assurance case while identifying pragmatic considerations in the application of machine learning for safety-relevant functions.
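    One property the abstract emphasizes is robustness to minor input changes. As a minimal illustrative sketch (not the paper's actual evidence-generation method), such a property could be checked empirically by perturbing the inputs and bounding the output change; the model, perturbation size, and tolerance below are hypothetical placeholders.

    ```python
    import numpy as np

    def local_robustness_check(model, x, eps=0.01, delta=0.05, n_samples=100, rng=None):
        """Empirically check that small input perturbations cause only small
        output changes (a proxy for robustness to minor input changes).

        model: callable mapping an input vector to an output vector (hypothetical).
        eps:   magnitude of the input perturbation (illustrative value).
        delta: maximum tolerated change in the output (illustrative value).
        """
        rng = rng or np.random.default_rng(0)
        y_ref = np.asarray(model(x))
        for _ in range(n_samples):
            noise = rng.uniform(-eps, eps, size=x.shape)  # minor input change
            y_pert = np.asarray(model(x + noise))
            if np.max(np.abs(y_pert - y_ref)) > delta:
                return False  # robustness violated for this sample
        return True
    ```

    For a chassis control function, x might be a vector of vehicle-state signals, with eps and delta derived from the safety requirements rather than the defaults used here.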
  • Publication
    Benchmarking Uncertainty Estimation Methods for Deep Learning with Safety-Related Metrics
    Deep neural networks generally produce accurate predictions, but they often fail to recognize when these predictions may be wrong. This lack of awareness regarding the reliability of their outputs is a major obstacle to deploying such models in safety-critical applications. Some approaches address this problem by designing models to provide more reliable estimates of their uncertainty. However, even though the performance of these models is compared in various ways, there is no thorough evaluation comparing them in a safety-critical context using metrics designed to describe trade-offs between performance and safe system behavior. In this paper we attempt to fill this gap by evaluating and comparing several state-of-the-art methods for estimating uncertainty in image classification with respect to safety-related requirements and metrics that are suitable for describing the models' performance in safety-critical domains. We show the relationship of the remaining error for predictions with high confidence and its impact on performance for three common datasets. In particular, Deep Ensembles and Learned Confidence show high potential to significantly reduce the remaining error with only moderate performance penalties.
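    The central metric here, the remaining error among high-confidence predictions, can be made concrete with a short sketch. This is a minimal illustration assuming NumPy arrays of confidences, predicted classes, and labels; the threshold value and the ensemble comment are assumptions, not taken from the paper.

    ```python
    import numpy as np

    def remaining_error_rate(confidences, predictions, labels, threshold=0.95):
        """Fraction of high-confidence predictions that are nevertheless wrong.

        confidences: per-sample confidence scores in [0, 1].
        predictions: predicted class indices.
        labels:      ground-truth class indices.
        threshold:   confidence level above which a prediction is "trusted"
                     (illustrative value).
        """
        confident = confidences >= threshold
        if not confident.any():
            return 0.0, 0.0
        errors = predictions[confident] != labels[confident]
        coverage = confident.mean()      # share of inputs the system acts on
        return errors.mean(), coverage   # trade-off: remaining error vs. coverage

    # For a deep ensemble, confidences could come from the averaged softmax, e.g.:
    # probs = np.mean([m.predict(x) for m in ensemble], axis=0)
    # confidences, predictions = probs.max(axis=1), probs.argmax(axis=1)
    ```

    Lowering the remaining error typically costs coverage: the stricter the threshold, the fewer inputs the system acts on, which is the performance/safety trade-off the paper's metrics are designed to capture.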
  • Publication
    Machine Learning in sicherheitskritischen Systemen
    The use of machine learning (ML), and deep learning in particular, is what makes many highly complex applications possible in the first place, for example in medical technology or in autonomous systems. At present, however, its use in such safety-critical systems still faces several challenges. Three of these problems, and ways in which they could be handled in the future, are presented below using autonomous driving as an example.
  • Publication
    Managing Uncertainty of AI-based Perception for Autonomous Systems
    (2019)
    Henne, Maximilian
    With the advent of autonomous systems, machine perception has become a decisive, safety-critical part of making such systems a reality. However, current AI-based perception does not meet the reliability required for use in real-world systems beyond prototypes, such as autonomous cars. In this work, we describe the challenge of reliable perception for autonomous systems. Furthermore, we identify methods and approaches to quantify the uncertainty of AI-based perception. Combined with dynamic safety management, we show a path for how uncertainty information can be utilized so that perception meets the high dependability demands of life-critical autonomous systems.
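    The idea of dynamic safety management, using uncertainty information at runtime, can be sketched as a mode switch: the higher the estimated perception uncertainty, the more conservative the selected system behavior. This is a toy illustration under assumed thresholds and mode names; it is not the mechanism proposed in the paper.

    ```python
    from enum import Enum

    class Mode(Enum):
        NOMINAL = 1     # full functionality
        DEGRADED = 2    # e.g., reduced speed, larger safety margins
        SAFE_STOP = 3   # minimal-risk maneuver

    def select_mode(uncertainty, warn=0.2, critical=0.5):
        """Map a perception-uncertainty estimate to an operating mode.

        uncertainty: scalar estimate in [0, 1] from the perception component.
        warn, critical: illustrative thresholds, not taken from the paper.
        """
        if uncertainty >= critical:
            return Mode.SAFE_STOP
        if uncertainty >= warn:
            return Mode.DEGRADED
        return Mode.NOMINAL
    ```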