  • Publication
    Towards the Quantitative Verification of Deep Learning for Safe Perception
    Deep learning (DL) is seen as an indispensable building block for perceiving the environment with the detail and accuracy required by automated driving functions. Despite this, its black-box nature and the unpredictability that comes with it still hinder its use in safety-critical systems. This work therefore addresses the problem of making this seemingly unpredictable behaviour measurable by providing a risk-based verification strategy, as required by ISO 21448. In detail, a method is developed to break down acceptable risk into quantitative performance targets for the individual DL-based components along the perception architecture. To verify these targets, the DL input space is split into areas according to the dimensions of a fine-grained operational design domain (μODD). Since full test coverage is not feasible, the strategy distributes test effort across these areas according to the associated risk. Moreover, the testing approach answers how much test coverage and how much confidence in the test result are required, and how these figures relate to safety integrity levels (SILs).
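    How much testing such a target implies can be made concrete with standard statistics. The following sketch is purely illustrative and not taken from the paper: it sizes a failure-free test campaign per μODD area with the classical success-run bound and splits a test budget across areas by risk; all area names, weights, and numbers are hypothetical.
    # Illustrative sketch (assumed statistics, not the paper's method):
    # sizing and allocating test effort per input-space area.
    import math

    def tests_needed(p_max: float, confidence: float) -> int:
        """Smallest N such that N failure-free tests demonstrate a failure
        probability <= p_max at the given confidence (success-run bound)."""
        return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_max))

    def allocate_budget(risk_weights: dict[str, float], budget: int) -> dict[str, int]:
        """Distribute a fixed test budget across ODD areas proportionally to risk."""
        total = sum(risk_weights.values())
        return {area: round(budget * w / total) for area, w in risk_weights.items()}

    # e.g. demonstrating a failure probability <= 1e-4 with 99% confidence:
    print(tests_needed(1e-4, 0.99))   # ~46050 failure-free tests
    print(allocate_budget({"urban/rain": 0.5, "highway/clear": 0.2,
                           "rural/night": 0.3}, budget=100_000))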
  • Publication
    Towards Continuous Safety Assurance for Autonomous Systems
    (2022); Carella, Francesco
    Ensuring the safety of autonomous systems over time and in light of unforeseeable changes is an unsolved task. This work outlines a continuous assurance strategy to ensure the safe ageing of such systems. Since uncertainty can hardly be quantified in an empirically sound manner at design time, and a complete list of uncertainties cannot be provided either, alternative run-time monitoring approaches are proposed. They enable a system to self-identify its exposure to a yet unknown hazardous condition, to trigger immediate safety reactions, and to initiate a redesign and update process that ensures the future safety of the system. Moreover, this work unifies the inconsistently used terminology found in the literature on automating different aspects of safety assurance and provides a conceptual framework for understanding the difference between known unknowns and unknown unknowns.
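    As a rough illustration of such a run-time monitor (an assumed realization, not the paper's design), the sketch below flags inputs that deviate strongly from the design-time data distribution, triggers an immediate safety reaction, and records the case as input for a later redesign; the feature model and threshold are hypothetical.
    # Assumed realization of a self-identifying run-time monitor.
    import numpy as np

    class NoveltyMonitor:
        def __init__(self, design_time_features: np.ndarray, threshold: float = 3.0):
            # Summarize the feature distribution seen during design time.
            self.mean = design_time_features.mean(axis=0)
            self.std = design_time_features.std(axis=0) + 1e-9
            self.threshold = threshold
            self.flagged = []  # collected evidence for the redesign/update process

        def check(self, features: np.ndarray) -> bool:
            """True if the current input looks like a yet unknown condition."""
            z = np.abs((features - self.mean) / self.std).max()
            if z > self.threshold:
                self.flagged.append(features)
                return True
            return False

    monitor = NoveltyMonitor(np.random.randn(1000, 4))  # stand-in design data
    if monitor.check(np.array([0.0, 8.0, 0.0, 0.0])):   # far outside that data
        print("unknown condition: degrade to safe state, log for redesign")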
  • Publication
    On Perceptual Uncertainty in Autonomous Driving under Consideration of Contextual Awareness
    (2022); Saad, Ahmad; Bangalore, Nischal
    Despite recent advances in automotive sensor technology and artificial intelligence that have led to breakthroughs in sensing capabilities, environment perception in the field of autonomous driving (AD) is still too unreliable for safe operation. Evaluating and managing uncertainty will aid autonomous vehicles (AVs) in recognizing their perceptual limitations so that they can react adequately in critical situations. In this work, we propose an uncertainty evaluation framework for AD based on Dempster-Shafer (DS) theory that takes context awareness into account, a factor that has so far been under-investigated. We formulate uncertainty as a function of context awareness and examine the effect of redundancy on uncertainty. We also present a modular simulation tool that enables assessing perception architectures in realistic traffic use cases. Our findings show that considering context awareness decreases uncertainty by at least one order of magnitude. We also show that uncertainty behaves exponentially as a function of sensor redundancy.
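    For readers unfamiliar with DS theory, the sketch below implements the classical Dempster rule of combination (the paper's context-aware formulation is not reproduced) and shows the redundancy effect qualitatively: the mass left on the whole frame of discernment, i.e. the "don't know" share, shrinks exponentially with each added sensor. The sensor mass values are made up.
    # Classical Dempster rule of combination; mass values are illustrative.
    from itertools import product

    def combine(m1: dict, m2: dict) -> dict:
        """Combine two mass functions whose keys are frozensets of hypotheses."""
        combined, conflict = {}, 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb   # mass falling on the empty set
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    OBST, FREE = frozenset({"obstacle"}), frozenset({"free"})
    THETA = OBST | FREE               # the full frame of discernment
    sensor = {OBST: 0.7, THETA: 0.3}  # one sensor: 70% "obstacle", 30% "don't know"

    fused = sensor
    for _ in range(3):                # fuse redundant, independent sensors
        fused = combine(fused, sensor)
        print(fused[THETA])           # residual uncertainty: 0.09, 0.027, 0.0081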
  • Publication
    Methoden zur Absicherung von KI-basierten Perzeptionsarchitekturen in autonomen Systemen (Methods for Safeguarding AI-Based Perception Architectures in Autonomous Systems)
    The demand for automation in complex environments, and the associated need for a correct perception of the environment, pushes current safety concepts to their limits. The use of AI to increase perception performance further intensifies this challenge by introducing additional kinds of uncertainty. This contribution therefore discusses how the hazard risk arising from functional insufficiencies can be quantified and points out the safeguarding measures that are additionally required. In this context, the use of simulation as a means of generating performance evidence for AI-based functions is also considered.
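    As a hedged illustration of how simulation runs could be turned into statistical performance evidence (an assumed workflow, not this contribution's method), the sketch below computes a one-sided Clopper-Pearson upper bound on the failure probability of an AI-based function; the run counts are hypothetical.
    # Assumed workflow: simulation outcomes -> statistical performance evidence.
    from scipy.stats import beta

    def failure_rate_upper_bound(failures: int, runs: int, confidence: float = 0.95) -> float:
        """One-sided Clopper-Pearson upper bound on the true failure probability."""
        if failures == runs:
            return 1.0
        return beta.ppf(confidence, failures + 1, runs - failures)

    # e.g. 2 hazardous misperceptions observed in 10,000 simulated scenarios:
    print(failure_rate_upper_bound(2, 10_000))   # ~6.3e-4 at 95% confidence
    # Evidence supports a quantitative target only if this bound stays below it.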