  • Publication
    Uncertainty-aware RSS
    (2023)
    Carella, Francesco
    In this preliminary work, the authors present a potential solution to the problem of real-time parameter estimation within a safety-critical application. When computing the frontal safety distance, each vehicle type requires, in principle, a different safety distance, depending on its capability to brake at a greater or lower rate. To account for these different braking capabilities, an object detection and recognition algorithm must be employed, which introduces some classification uncertainty into the system. We propose to employ such a solution in order to maximise the utility of the system by accounting for different vehicle types while considering the uncertainty, so as to preserve safety. (An illustrative sketch of this idea appears after the list below.)
  • Publication
    Towards the Quantitative Verification of Deep Learning for Safe Perception
    Deep learning (DL) is seen as an inevitable building block for perceiving the environment with the detail and accuracy required by automated driving functions. Despite this, its black-box nature and the unpredictability intertwined with it still hinder its use in safety-critical systems. This work therefore addresses the problem of making this seemingly unpredictable behaviour measurable by providing a risk-based verification strategy, as required by ISO 21448. In detail, a method is developed to break down acceptable risk into quantitative performance targets for the individual DL-based components along the perception architecture. To verify these targets, the DL input space is split into areas according to the dimensions of a fine-grained operational design domain (μODD). As full test coverage is not feasible, the strategy distributes test effort across these areas according to the associated risk. Moreover, the testing approach answers how much test coverage and confidence in the test results are required and how these figures relate to safety integrity levels (SILs). (A sketch of risk-proportional test allocation appears after the list below.)
  • Publication
    Towards Continuous Safety Assurance for Autonomous Systems
    (2022)
    Carella, Francesco
    Ensuring the safety of autonomous systems over time and in light of unforeseeable changes is an unsolved task. This work outlines a continuous assurance strategy to ensure the safe ageing of such systems. Because it is difficult to quantify uncertainty in an empirically sound manner, or even to provide a complete list of uncertainties during system design, alternative run-time monitoring approaches are proposed: they enable a system to self-identify its exposure to a yet unknown hazardous condition, trigger immediate safety reactions, and initiate a redesign and update process to ensure the future safety of the system. Moreover, this work unifies the inconsistently used terminology found in the literature regarding the automation of different aspects of safety assurance and provides a conceptual framework for understanding the difference between known unknowns and unknown unknowns. (A sketch of such a run-time monitor appears after the list below.)
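The abstract of "Uncertainty-aware RSS" does not spell out its formulation, so the following is a minimal sketch of one plausible reading: the standard RSS longitudinal safe-distance formula evaluated conservatively over every lead-vehicle class the classifier still considers plausible. The class set, braking decelerations, `p_min` threshold, and all numeric values are illustrative assumptions, not figures from the paper.

```python
# Hypothetical per-class maximum braking decelerations (m/s^2); illustrative
# assumptions, not values from the paper.
MAX_BRAKE_BY_CLASS = {"car": 8.0, "truck": 6.0, "motorcycle": 9.5}

def rss_longitudinal_distance(v_rear, v_front, rho, a_accel_max,
                              b_rear_min, b_front_max):
    """Standard RSS minimum longitudinal safe distance (Shalev-Shwartz et al.)."""
    v_rho = v_rear + rho * a_accel_max            # rear speed after reaction time
    d = (v_rear * rho
         + 0.5 * a_accel_max * rho ** 2
         + v_rho ** 2 / (2.0 * b_rear_min)        # rear worst-case stopping distance
         - v_front ** 2 / (2.0 * b_front_max))    # front best-case stopping distance
    return max(d, 0.0)

def uncertainty_aware_distance(class_probs, v_rear, v_front,
                               rho=0.5, a_accel_max=3.0, b_rear_min=4.0,
                               p_min=0.05):
    """Worst case over every front-vehicle class the classifier still
    considers plausible (probability >= p_min)."""
    plausible = [c for c, p in class_probs.items() if p >= p_min]
    return max(rss_longitudinal_distance(v_rear, v_front, rho, a_accel_max,
                                         b_rear_min, MAX_BRAKE_BY_CLASS[c])
               for c in plausible)

# Example: the detector is 80% sure the lead vehicle is a car; the motorcycle
# hypothesis (strongest braking, hence largest gap) dominates the distance.
print(uncertainty_aware_distance({"car": 0.80, "motorcycle": 0.15, "truck": 0.05},
                                 v_rear=20.0, v_front=20.0))
```

Note the asymmetry: a lead vehicle that can brake harder stops sooner, so the more agile plausible class, not the heaviest one, drives the required gap.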
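For "Towards the Quantitative Verification of Deep Learning for Safe Perception", a sketch of the two quantitative steps the abstract names: distributing a test budget across μODD areas proportionally to risk, and sizing a zero-failure (success-run) test for a target failure rate at a given confidence. The area names, risk weights, and budget are hypothetical.

```python
import math

# Illustrative uODD areas with assumed risk weights (e.g. exposure x severity).
AREAS = {"urban_day": 0.5, "urban_night": 0.3, "highway_rain": 0.2}

def allocate_tests(total_budget, risk_weights):
    """Distribute the test budget across ODD areas proportionally to risk."""
    total = sum(risk_weights.values())
    return {a: round(total_budget * w / total) for a, w in risk_weights.items()}

def required_samples(target_failure_rate, confidence=0.95):
    """Smallest n with (1 - p)^n <= 1 - confidence: observing no failure in
    n independent trials demonstrates a failure rate below the target at the
    given confidence."""
    return math.ceil(math.log(1.0 - confidence)
                     / math.log(1.0 - target_failure_rate))

print(allocate_tests(10_000, AREAS))
print(required_samples(1e-3))   # ~2995 failure-free tests for 10^-3 at 95%
```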
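For "Towards Continuous Safety Assurance for Autonomous Systems", a toy version of the run-time monitoring idea: a monitor that flags inputs outside the envelope seen during design, triggers a safety reaction, and records the evidence for the redesign loop. The paper proposes run-time monitoring in general; the per-feature z-score detector below is a stand-in, and all names and thresholds are assumptions.

```python
import numpy as np

class NoveltyMonitor:
    """Minimal runtime monitor: flags inputs outside the design-time envelope
    via a per-feature z-score (a deployed system would use a stronger
    novelty/out-of-distribution detector)."""

    def __init__(self, design_data, threshold=4.0):
        self.mean = design_data.mean(axis=0)
        self.std = design_data.std(axis=0) + 1e-9
        self.threshold = threshold
        self.flagged = []                     # evidence for the redesign loop

    def check(self, x):
        score = float(np.max(np.abs((x - self.mean) / self.std)))
        if score > self.threshold:
            self.flagged.append(x)            # feed back into the update process
            return "safety_reaction"          # e.g. degrade to a safe state
        return "nominal"

# Example: fit on in-envelope data, then observe an out-of-envelope input.
rng = np.random.default_rng(0)
monitor = NoveltyMonitor(rng.normal(0.0, 1.0, size=(1000, 4)))
print(monitor.check(np.zeros(4)))      # nominal
print(monitor.check(np.full(4, 8.0)))  # safety_reaction
```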