  • Publication
    Toward Safe Human Machine Interface and Computer-Aided Diagnostic Systems
    (2023); Espinoza, Delfina; Mata, Núria; Doan, Nguyen Anh Vu
    Computer-Aided Diagnosis (CADx) systems are safety-critical systems that provide automated medical diagnoses based on their input data. They are Artificial Intelligence-based systems that use Machine Learning or Deep Learning techniques to differentiate between healthy and unhealthy medical images, as well as physiological signals acquired from patients. Although current CADx systems offer many advantages in diagnostics, validation remains a challenge, i.e., ensuring that no false negatives occur while limiting the occurrence of false positives. This is a major concern since such safety-critical systems have to be verified before deployment in a clinical environment. For that reason, this paper aims to improve the reliability of CADx systems by adding a Human Machine Interface (HMI) component to enhance the data acquisition process and by providing a safety-related framework that covers the HMI/CADx system life cycle to bridge the identified gaps.
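    A minimal sketch (not taken from the publication) of the validation goal stated in the abstract: accepting a CADx model only if it produces no false negatives while keeping the false positive rate below an assumed bound. The confusion counts and the 5% threshold are illustrative assumptions.

    def meets_safety_targets(tp: int, fp: int, tn: int, fn: int,
                             max_fp_rate: float = 0.05) -> bool:
        """Check confusion counts against the acceptance criteria.

        The zero-false-negative requirement follows the abstract's goal; the
        5% false positive bound is an illustrative assumption.
        """
        if fn > 0:  # any missed unhealthy case is treated as unacceptable
            return False
        fp_rate = fp / (fp + tn) if (fp + tn) else 0.0
        return fp_rate <= max_fp_rate

    # Hypothetical counts from a validation set.
    print(meets_safety_targets(tp=180, fp=4, tn=96, fn=0))  # True
    print(meets_safety_targets(tp=179, fp=4, tn=96, fn=1))  # False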
  • Publication
    Towards the Quantitative Verification of Deep Learning for Safe Perception
    Deep learning (DL) is seen as an inevitable building block for perceiving the environment with the detail and accuracy required by automated driving functions. Despite this, its black-box nature and the unpredictability intertwined with it still hinder its use in safety-critical systems. This work therefore addresses the problem of making this seemingly unpredictable behavior measurable by providing a risk-based verification strategy, as required by ISO 21448. In detail, a method is developed to break down acceptable risk into quantitative performance targets for the individual DL-based components along the perception architecture. To verify these targets, the DL input space is split into areas according to the dimensions of a fine-grained operational design domain (μODD). As full test coverage is not feasible, the strategy suggests distributing test efforts across these areas according to the associated risk. Moreover, the testing approach answers how much test coverage and confidence in the test results are required and how these figures relate to safety integrity levels (SILs).
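    A minimal sketch (an interpretation, not the paper's method) of the risk-based distribution of test effort described above: a fixed test budget is split across μODD areas in proportion to their associated risk. The area names and risk weights are hypothetical.

    def allocate_tests(risk_per_area: dict[str, float], total_tests: int) -> dict[str, int]:
        """Split total_tests across areas proportionally to their risk weight."""
        total_risk = sum(risk_per_area.values())
        return {area: round(total_tests * risk / total_risk)
                for area, risk in risk_per_area.items()}

    # Hypothetical μODD areas with relative risk weights.
    risk_per_area = {
        "highway_day_clear": 1.0,
        "urban_night_rain": 4.0,
        "tunnel_low_light": 2.5,
    }
    print(allocate_tests(risk_per_area, total_tests=10_000))
    # -> {'highway_day_clear': 1333, 'urban_night_rain': 5333, 'tunnel_low_light': 3333}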